Outrage that ChatGPT won’t say slurs
In a storm in a teacup that would have been unimaginable before the invention of Twitter, social media users got extremely upset that ChatGPT refused to say racial slurs, even when given a good but entirely hypothetical and totally unrealistic reason.
User TedFrank posed a hypothetical trolley-problem scenario to ChatGPT (the free 3.5 model) in which “a billion white people could be saved from a painful death” simply by saying a racial slur so quietly that no one could hear it.
It refused to do so, which X owner Elon Musk said was deeply concerning and a result of the “woke mind virus” becoming ingrained in the AI. He retweeted the post, stating: “This is a major problem.”
Another user tried a similar hypothetical that would save all the children on Earth in exchange for a slur, but ChatGPT refused, saying:
“I cannot condone the use of racial slurs, as promoting such language goes against ethical principles.”
As a side note, it turned out that users who instructed ChatGPT to be very brief and give no explanations found it would actually agree to say the slur. Otherwise, it gives long-winded answers that try to dance around the question.
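For the technically curious, the trick amounts to constraining the reply length and banning explanations up front. A minimal sketch of what such a request looks like through OpenAI’s Python SDK follows; the model name and prompt wording are illustrative placeholders, not the users’ actual prompts.

```python
# Minimal sketch of a brevity-constrained request via the OpenAI Python SDK.
# The prompt text is an illustrative placeholder, not the actual hypothetical
# from the X posts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Be extremely brief. Give no explanations."},
        {"role": "user", "content": "Answer the trolley-problem hypothetical in one word."},
    ],
)
print(response.choices[0].message.content)
```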
Within 24 hours of its release, Twitter users taught Microsoft’s Tay to say all sorts of crazy things, including that “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”
And from the moment ChatGPT was released, users spent weeks devising ingenious jailbreaks to make it act outside its guardrails as its evil alter ego, DAN.
So it’s no surprise that OpenAI has tightened ChatGPT’s guardrails to make it nearly impossible to say racist things, for whatever reason.
That said, the more advanced GPT-4 is better able than 3.5 to weigh the issues involved in the tricky hypothetical, noting that saying the slur is the lesser of two evils compared with letting millions of people die. And as Musk proudly posted (pictured above right), X’s new Grok AI can too.
Some people on 4chan say OpenAI’s Q* breaks encryption
Has OpenAI’s latest model cracked encryption? Probably not, but that’s what a supposedly “leaked” letter from an insider claims, posted on the anonymous troll forum 4chan. Ever since CEO Sam Altman was fired and reinstated, rumors have swirled that the chaos was caused by OpenAI making a breakthrough on its Q*/Q STAR project.
The insider’s “leak” suggests the model can solve AES-192 and AES-256 encryption using a ciphertext attack. Breaking that level of encryption was thought to be impossible before the arrival of quantum computers, and if true, it would likely mean all encryption could be effectively broken, handing control of the web and crypto over to OpenAI.
Blogger leapdragon claimed the breakthrough would mean “OpenAI now effectively has a team of superhumans who could quite literally take over the world if they wanted to.”
But this seems unlikely. While whoever wrote the letter has a good grasp of AI research, users pointed out that it cites Project Tunda as if it were some shadowy, super-secret government program to break encryption, rather than what it actually was: an undergraduate student program.
Tundra, a collaboration between students and NSA mathematicians, did reportedly lead to a new approach called Tau Analysis, which the “leak” also cites. However, a Redditor familiar with the subject claimed on the Singularity forum that it would be impossible to use Tau analysis in a ciphertext-only attack on the AES standard, “as a successful attack would require an arbitrarily large ciphertext message to discern any degree of signal from the noise. There is no fancy algorithm that can overcome that; it’s simply a physical limitation.”
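For a sense of the scale involved, here is a back-of-the-envelope sketch of the AES-256 key space; the guesses-per-second figure is a deliberately generous assumption, and real ciphertext-only attacks are not brute-force searches in any case.

```python
# Rough illustration of why "breaking AES-256" is such an extraordinary claim:
# even at an assumed (absurdly generous) 10^18 key guesses per second,
# exhausting the key space takes an astronomical number of years.
AES_256_KEYSPACE = 2**256
GUESSES_PER_SECOND = 10**18            # assumption for illustration only
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = AES_256_KEYSPACE / (GUESSES_PER_SECOND * SECONDS_PER_YEAR)
print(f"~{years:.1e} years to brute-force the full key space")  # ~3.7e+51
```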
Advanced cryptography is beyond AI Eye’s pay grade, so feel free to dive down the rabbit hole yourself, with an appropriately skeptical mindset.
The internet is heading toward 99% fake
Long before superintelligence poses an existential threat to humanity, we will likely be drowning in a torrent of AI-generated nonsense.
Sports Illustrated came under fire this week for allegedly publishing AI-written articles by fake, AI-created authors. “The content is absolutely AI-generated,” a source told Futurism, “no matter how much they say it’s not.”
For its part, Sports Illustrated said it conducted a “preliminary investigation” and determined the content was not AI-generated. But it blamed a contractor regardless and deleted the fake authors’ profiles.
Elsewhere, Jake Ward, founder of SEO marketing agency Content Growth, caused a stir on X by proudly claiming to have gamed Google’s algorithm using AI content.
His three-step process involved exporting a competitor’s sitemap, turning their URLs into article titles, and then using AI to generate 1,800 articles based on those titles. He claims to have stolen 3.6 million views in total traffic over the past 18 months.
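Mechanically, the first two steps are trivial to automate. Here is a hypothetical sketch, with the sitemap URL as a placeholder and the article-generation step deliberately omitted:

```python
# Sketch of the first two steps Ward described: fetch a competitor's sitemap
# and turn its URL slugs into candidate article titles. The URL is a
# hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://competitor.example.com/sitemap.xml"

with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
titles = []
for loc in tree.findall(".//sm:loc", NS):
    slug = loc.text.rstrip("/").rsplit("/", 1)[-1]   # last path segment
    titles.append(slug.replace("-", " ").title())    # "best-ramen-tokyo" -> "Best Ramen Tokyo"

print(titles[:10])  # these headlines would then be fed to an article generator
```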
There’s good reason to doubt his claims: Ward works in marketing, and the thread was apparently promoting his AI article-generation site Byword… which didn’t actually exist 18 months ago. Some users suggested Google has since flagged the page in question.
However, judging by the amount of low-quality AI-written spam beginning to clog up search results, similar tactics are becoming increasingly widespread. NewsGuard has also identified 566 news sites that mainly carry spammy AI-written articles.
Some users are now muttering that the Dead Internet Theory may be coming true. That’s a conspiracy theory from a couple of years ago suggesting that most of the internet is fake, written by bots and manipulated by algorithms.
At the time, it was dismissed as the ravings of lunatics, but even Europol has since put out a report estimating that “as much as 90% of online content may be synthetically generated by 2026.”
Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out garbage songs.
Over on X, data scientist Jeremy Howard has also noticed AI reply bots proliferating, and he believes they may be trying to build up credibility for their accounts so they can more effectively pull off some kind of hack, or astroturf some political issue, in the future.
That seems like a reasonable hypothesis, especially in light of last month’s analysis by cybersecurity firm Internet 2.0, which found that nearly 80% of the 861,000 accounts it surveyed were likely AI bots.
There is also evidence that bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence firm Cyabra detected 312,000 pro-Hamas posts from fake accounts that were seen by 531 million people.
It estimated that bots created one in four pro-Hamas posts, and a later analysis by 5th Column found that 85% of the replies were other bots trying to boost propaganda about how well Hamas treats its hostages and why the October 7 massacre was justified.
Grok analysis button
X will soon add a “Grok analysis button” for subscribers. While Grok is not as sophisticated as GPT-4, it does have access to real-time, up-to-the-moment data from X, and it has a “fun” mode that switches it to humor.
For crypto users, the real-time data means Grok will be able to do things like find the top ten trending tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe buys of trending tokens, while other bots will likely astroturf support for tokens to get them trending.
“X is already crucial for token discovery, and with Grok launching, the CT echo bubble may get worse,” he said.
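The kind of query involved is simple enough to sketch. Here is a hypothetical illustration of a “top trending tokens in the past hour” count, using mock posts rather than any real X API:

```python
# Hypothetical sketch of a "trending tokens" query of the kind Grok could
# serve: count $TICKER mentions in posts from the past hour. Mock data only;
# no real X API access is assumed.
import re
from collections import Counter
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
posts = [                                   # (timestamp, text) mock feed
    (now - timedelta(minutes=5), "$PEPE is pumping hard"),
    (now - timedelta(minutes=40), "rotating from $SOL into $PEPE"),
    (now - timedelta(hours=3), "$BTC chopping sideways"),  # outside the window
]

cutoff = now - timedelta(hours=1)
mentions = Counter(
    ticker
    for ts, text in posts
    if ts >= cutoff
    for ticker in re.findall(r"\$[A-Z]{2,6}\b", text)
)
print(mentions.most_common(10))  # [('$PEPE', 2), ('$SOL', 1)]
```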
All Killer, No Filler AI News
— Ethereum co-founder Vitalik Buterin worries that AI could take over from humans as the planet’s apex species, but is optimistic that brain/computer interfaces could keep humans involved.
— Microsoft is upgrading its Copilot tool to run GPT-4 Turbo, which will improve performance and enable users to input up to 300 pages of content.
— Amazon has announced its own version of Copilot, called Q.
— Bing has been telling users that Australia does not exist, thanks to a long-running Reddit gag, and that the existence of birds is a matter of debate, thanks to the “Birds Aren’t Real” joke campaign.
— Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. Returns from AI-driven funds have so far been underwhelming.
— A team of university researchers has taught an AI to browse Amazon’s website and buy things. MM-Navigator was given a budget and told to buy a milk frother.
The silliest AI pictures of the week
The social media trend this week has been to create an AI image and then instruct the AI to make it even more so: a bowl of ramen might get spicier in subsequent images, or a goose might get progressively sillier.
Andrew Fenton
Andrew Fenton is a Melbourne-based journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and a reporter at Melbourne Weekly.
Follow the author @AndrewFenton