AI Use Rising in Online Misinformation Campaigns, but Impact Limited So Far, Researchers Say

Mandiant, the U.S. cybersecurity firm owned by Google, said on Thursday that the use of artificial intelligence (AI) to conduct manipulative information campaigns online has grown in recent years, although the technology’s use in other digital intrusions has so far been limited.

Researchers at the Virginia-based company found “multiple instances” of AI-generated content, such as fake profile pictures, being used in politically motivated online influence campaigns since 2019.

These include campaigns by groups allied to the governments of Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador and El Salvador, the report said.

Generative AI models such as ChatGPT have recently flourished, making it easier to create convincing fake videos, images, text, and computer code. Security officials have warned that cybercriminals may adopt such models.

Mandiant researchers say generative AI will enable groups with limited resources to produce higher-quality content for large-scale influence campaigns.

Sandra Joyce, vice president of Mandiant Intelligence, said that a pro-China messaging campaign known as Dragonbridge, which first targeted Hong Kong pro-democracy protesters in 2019, had since expanded “exponentially” to 30 social platforms and 10 different languages.

So far, however, the impact of such activity has been limited. “From an effectiveness standpoint, there aren’t many wins there,” she said. “They really haven’t changed the course of the threat landscape yet.”

China has denied past U.S. allegations of engaging in such influence activities.

Mandiant, which helps public and private organizations respond to digital breaches, said it has yet to see artificial intelligence play a key role in threats from Russia, Iran, China or North Korea. The use of AI for digital intrusion is expected to remain low in the short term, the researchers said.

“So far, we haven’t seen any incident response where artificial intelligence comes into play,” Joyce said. “They haven’t really been put to any practical use beyond what we’ve seen with common tools.”

But she added: “We are very confident that this is going to be a growing problem over time.”

© Thomson Reuters 2023

