The AI content cannibalization problem, and is Threads a loss leader for AI data? – Cointelegraph Magazine

ChatGPT eats cannibals

The ChatGPT hype is starting to wane, with Google searches for “ChatGPT” down 40% from their April peak, while web traffic to OpenAI’s ChatGPT website has dropped nearly 10% over the past month.

That’s only to be expected, yet GPT-4 users are also reporting that the model seems noticeably dumber (but faster) than before.

One theory is that OpenAI has broken it up into multiple smaller models trained on specific domains that work in tandem, but not quite at the same level.


But an even more intriguing possibility is that AI cannibalism may also be playing a role.

The web is now flooded with AI-generated text and images, and this synthetic data gets scraped up as training data for the next wave of AI models, creating a feedback loop. The more AI-generated data a model ingests, the less coherent and high-quality its output becomes. It’s a bit like making a photocopy of a photocopy: the image gets progressively worse.



While GPT-4’s official training data cuts off in September 2021, it clearly knows about more recent events than that, and OpenAI recently shut down its web-browsing plugin.

A new paper from researchers at Rice University and Stanford University has come up with a cute acronym for the issue: Model Autophagy Disorder, or MAD.

“Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease,” they said.

Essentially, the models start to lose the more unique but less well-represented data in their training sets and converge their outputs on less varied data. The good news is that this gives the AIs a reason to keep humans in the loop, if we can work out a way to identify and prioritize human-generated content for the models. That’s one of the ideas behind Worldcoin, OpenAI boss Sam Altman’s eyeball-scanning blockchain project.
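To get an intuition for why the loop degrades, here’s a minimal toy sketch (our illustration, not code from the paper): a “model” that simply fits a Gaussian is repeatedly retrained on its own outputs with no fresh real data, plus a mild bias toward its most typical samples, and its diversity collapses within a few generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "fresh ground truth" data from a wide distribution.
data = rng.normal(loc=0.0, scale=1.0, size=5_000)

for generation in range(1, 11):
    # "Train" a toy generative model: fit a Gaussian to the current data.
    mu, sigma = data.mean(), data.std()

    # Build the next training set entirely from the model's own samples,
    # with no fresh real data, keeping only the most "typical" outputs
    # (within 1.5 standard deviations) to mimic the usual bias toward
    # high-likelihood, good-looking samples.
    samples = rng.normal(loc=mu, scale=sigma, size=5_000)
    data = samples[np.abs(samples - mu) < 1.5 * sigma]

    print(f"generation {generation:2d}: fitted std = {sigma:.3f}")
```

In this toy setup, the fitted standard deviation, a stand-in for the diversity (recall) the paper talks about, shrinks by roughly a quarter every generation, collapsing from about 1.0 to under 0.1 by generation 10.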


Is Threads just a loss leader for training AI models?

Twitter clone Threads was a somewhat odd move by Mark Zuckerberg, given that it cannibalizes users from Instagram. The photo-sharing platform rakes in up to $50 billion a year, while Threads looks set to bring in perhaps a tenth of that, even in the unrealistic scenario where it takes 100% of Twitter’s market. Alex Valaitis of Big Brain Daily predicts Threads will either be shut down or reabsorbed into Instagram within 12 months, and believes the real reason it was launched now “is to have more text-based content to train Meta’s AI models on.”

ChatGPT was trained on huge volumes of data from Twitter, but Elon Musk has taken various unpopular steps to prevent that from happening in the future (charging for API access, rate limiting, etc.).

Zuck has precedent in this regard: Meta’s image-recognition AI SEER was trained on a billion photos posted to Instagram. Users agreed to that in the privacy policy, and more than a few have noted that the Threads app collects data on everything possible, from health data to religion and race. That data will inevitably be used to train AI models such as Facebook’s LLaMA (Large Language Model Meta AI).

Meanwhile, Musk has just launched an OpenAI competitor called xAI, which will mine Twitter’s data for its own LLM.

The various permissions required by social media apps (CounterSocial)

Religious chatbots are fundamentalists

Who could have guessed that training AIs on religious texts and then having them speak in the voice of God would be a terrible idea? In India, Hindu chatbots masquerading as Krishna have been advising users that killing people is fine if it’s your dharma, or duty.

At least five chatbots trained on the Bhagavad Gita, a 700-verse scripture, have sprung up in the past few months, but the Indian government has no plans to regulate the technology, despite the ethical concerns.

“It’s miscommunication, misinformation based on religious text,” says Mumbai-based lawyer Lubna Yusuf, co-author of The AI Book. “A text gives a lot of philosophical value to what they are trying to say, and what would a bot do? It will give you a literal answer, and that’s the danger here.”


AI Pessimists vs. AI Optimists

The world’s foremost AI doomer, decision theorist Eliezer Yudkowsky, warned in a TED talk that superintelligent AI will kill us all. He’s not sure how or why, because he believes an AGI will be so much smarter than us that we won’t even understand how or why it’s killing us, like a medieval peasant trying to understand the workings of an air conditioner. It might kill us as a side effect of pursuing some other objective, or because “it doesn’t want us to make other superintelligences to compete with it.”

He notes that “nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating point numbers.” He does not expect “marching robot armies with glowing red eyes,” but believes that a “smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably, and then kill us.” The only thing that could stop this scenario is a worldwide moratorium on the technology, backed by the threat of World War III, but he doesn’t think that will happen.

https://www.youtube.com/watch?v=Yd0yQ9yxSYY

In his essay “Why AI will save the world,” A16z’s Marc Andreessen argues that this sort of position is unscientific: “What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone?” Apart from “You can’t prove it won’t happen!”, these questions go mainly unanswered.

https://www.youtube.com/watch?v=-hxeDjAxvJ8

Microsoft boss Bill Gates released an essay of his own, titled “The risks of AI are real but manageable,” arguing that, from cars to the internet, “people have managed through other transformative moments and, despite a lot of turbulence, come out better off in the end.”

“It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before.”

Data scientist Jeremy Howard has released his own paper, arguing that any attempt to outlaw the technology, or keep it confined to a small number of large AI models, would be a disaster. He compares the fear-based response to AI to the pre-Enlightenment age, when humanity tried to restrict education and power to the elite.


“Then a new idea came along. What if we believed in the overall good of society as a whole? What if everyone had access to education? To vote? Technology? This is the Age of Enlightenment.”

His counter-proposal is to encourage open-source development of AI, in the belief that most people will harness the technology for good.

“Most people will use these models to create and to protect. How better to be safe than to have the massive diversity and expertise of human society at large doing their best to identify and respond to threats, with the full power of AI behind them?”

OpenAI’s Code Interpreter

GPT-4’s new Code Interpreter is a fantastic upgrade that allows the AI to generate code on demand and actually run it. So anything you can dream up, it can generate and run code for. Users have been coming up with all sorts of use cases, including uploading company reports and getting the AI to generate useful charts of the key data, converting files from one format to another, creating video effects and transforming still images into video. One user uploaded an Excel file of every lighthouse location in the United States and got GPT-4 to create an animated map of the locations.
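For a sense of what that involves, here’s a minimal sketch of the kind of script Code Interpreter might generate for the lighthouse example. This is purely illustrative, not the actual output: the file name and the “lat”/“lon” column names are assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Hypothetical input: an Excel sheet with one row per lighthouse.
# The file name and the "lat"/"lon" column names are assumptions.
df = pd.read_excel("us_lighthouses.xlsx")

fig, ax = plt.subplots(figsize=(8, 5))
ax.set_xlim(df["lon"].min() - 2, df["lon"].max() + 2)
ax.set_ylim(df["lat"].min() - 2, df["lat"].max() + 2)
ax.set_title("Lighthouses of the United States")
dots = ax.scatter([], [], s=8)

BATCH = 20  # reveal this many lighthouses per frame

def update(frame):
    # Plot a growing subset of the locations on each frame.
    subset = df.iloc[: (frame + 1) * BATCH]
    dots.set_offsets(subset[["lon", "lat"]].to_numpy())
    return (dots,)

anim = FuncAnimation(fig, update, frames=len(df) // BATCH + 1, interval=50)
anim.save("lighthouses.gif", writer="pillow")
```

The point of the feature is that you never see or write any of this: the model writes the script, executes it in a sandbox and hands back the finished GIF.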

All killer, no filler AI news

— A University of Montana study found that AI scores in the top 1% on a standardized test of creativity. The Scholastic Testing Service gave GPT-4’s responses top marks for creativity, fluency (the ability to generate lots of ideas) and originality.

— Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey are suing OpenAI and Meta for copyright violation, alleging the companies trained their respective AI models on the trio’s books.

— Microsoft’s AI Copilot for Windows may eventually be amazing, but Windows Central found the insider preview is really just Bing Chat running through the Edge browser, and it can barely even toggle Bluetooth on.

— Anthropic’s ChatGPT competitor Claude 2 is now available for free in the UK and the U.S., and its context window can handle up to 75,000 words of content, versus ChatGPT’s 3,000-word maximum. That makes it ideal for summarizing long pieces of text, and it’s not bad at writing fiction.

Video of the week

Indian satellite news channel OTV News has launched an AI-powered news anchor named Lisa, who will present the news several times a day for the network and its digital platforms in multiple languages, including English and Odia. “The new AI anchor is a digital composite created from the footage of a human presenter that reads the news using a synthesized voice,” said OTV managing director Jagi Mangat Panda.

https://www.youtube.com/watch?v=xyZP88jB95c

Andrew Fenton

Andrew Fenton is a journalist and editor based in Melbourne covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for South Australian Weekend and a reporter for Melbourne Weekly.

