A horrific video circulated on social media platforms last week showed a young girl being set on fire by a mob, with the person who shared it claiming it was the work of Hamas. But in fact, this video was filmed in Guatemala in 2015, long before Palestinian groups attacked Israel.
It was just one lie in a week in which misinformation about the violence repeatedly flooded social apps, sowing confusion and stoking outrage as the conflict unfolded. Platforms such as Elon Musk’s X, Telegram and TikTok have drawn the ire of regulators for failing to stem a flood of misleading information that has quickly spread into mainstream media and real-world politics.
In this new information battlefield, many widely shared posts, including Qatar’s viral threat to cut off gas exports, have proven false. But others fall into a gray area, where evidence of atrocities is harder to verify.
For example, horrifying accusations that Hamas “beheads babies” made the front pages of tabloids and even appeared in a speech by President Joe Biden. The White House later acknowledged that the president’s assertion had not been independently verified. Israel has released images of Hamas killing and burning babies, but there is no evidence of infant beheadings.
Jean-Claude Goldenstein, chief executive of CREOpoint, a business intelligence group focused on disinformation, said his research found that the number of viral rumors about the conflict between Israel and Hamas that fact-checkers identified as false had “exploded 100-fold since the weekend,” compared with the rest of 2023.
“The proliferation of falsehoods online has stoked strong emotions across multiple time zones, with huge global and social consequences,” he said. “The scale and speed of the spread are unprecedented.”
These lies not only affect public opinion, but may also affect the calculations of the war’s protagonists. A Hamas official who spoke to the Financial Times approvingly cited reports by Israeli television Channel 10 of mass desertions from the IDF. Not only was the report false, but Channel 10 has not been on air since 2019.
X (formerly Twitter) now faces an EU investigation into its handling of illegal content and misinformation. Chinese-owned TikTok and Mark Zuckerberg’s Meta have received warnings from Brussels.
In addition, officials have expressed concerns about the use of these platforms to encourage violence and threatening behavior. On Friday, New York State Attorney General Letitia James sent letters to Google, Meta and other platforms.
TikTok said on Sunday it would remove content that mocks victims of attacks or incites violence and add restrictions on its live broadcasts.
For years, social media platforms have debated how to deal with fake news and misleading information, which have surged in the wake of conflicts such as Russia’s full-scale invasion of Ukraine.
But researchers say information warfare has created a unique landscape in which horrific wartime images taken out of context or doctored go viral instantly. This is exacerbated by users’ desire for instant updates and the tense nature of the Israeli-Palestinian conflict.
Algorithms often promote the most provocative content. The lack of moderation guardrails and other changes on platforms like X and Telegram make it harder than ever for academics and analysts to collect data and track the flow of information.
“It’s a perfect storm,” said Gordon Pennycook, an associate professor of psychology at Cornell University who studies misinformation. He pointed to the “seriousness of the problem” and “vested interests” as contributing factors.
Due to growing distrust of mainstream media and social pressure to take a stand or show solidarity, some users have inadvertently shared misinformation. Pop star Justin Bieber posted a since-deleted photo of a destroyed city on Instagram that read: “Pray for Israel.” This photo actually shows Gaza. In other cases, footage of military battles is taken from completely different conflicts or even from video games.
The messaging app Telegram has become a local information hub and a key communication tool for Iran-backed militant groups such as Hezbollah in Lebanon. Arieh Kovler, a Jerusalem-based political analyst and independent researcher, said many Israelis follow Telegram channels with official-sounding names, which are quick to share videos without context and to spread speculation and rumors without checking their accuracy.
A report by the Atlantic Council’s Digital Forensic Research Laboratory found that Hamas relies on Telegram as its “primary means of communication” to disseminate statements to supporters. The Telegram channel of the Al-Qassam Brigades, the group’s military wing, has tripled in size from pre-war levels, with more than 619,000 subscribers, the report said. The brigade’s spokesman, Abu Obaida, has more than 400,000 subscribers on his channel.
Pro-Hamas accounts spread misinformation and stoke fear. Goldenstein said they circulated videos shortly after the attack began, falsely claiming to show Israeli troops withdrawing from bases near Gaza and Israeli generals being captured.
“There’s disinformation on all sides,” said Kathleen Carley, a researcher at Carnegie Mellon University’s CyLab Security and Privacy Institute. “There’s also a third-party agenda. In some ways, some countries in the Middle East are using it to promote their country or (criticize) their opponents.”
Andrew Borene, executive director of cybersecurity firm Flashpoint National Security Solutions, said he expects disinformation to “really escalate.” He said his analysts have tracked discussions among online groups and hacktivists on dark web forums, suggesting they plan to join the fight. He noted that while Iran, one of the largest cyber actors, was not directly linked to the attacks, it was expected to continue supporting Hamas.
Meta, which has been criticized for failing to adequately police its content, said on Friday that Hamas remained banned from its platform under its “dangerous organizations and individuals” policy, as did “praise or substantial support” for the group. It added that it had set up a special operations center and removed hundreds of thousands of pieces of content that violated its rules.
For platforms with free speech leanings, such as X and Telegram, those ideals are being tested as the threat of regulatory penalties looms. After the EU announced its investigation into X, whose moderation teams were slimmed down following Musk’s takeover, the company took action to remove content and suspend bad actors, including deleting “newly created Hamas-affiliated accounts.”
Kovler questioned whether Telegram would take action. He noted that after the January 6, 2021 riot at the US Capitol, the Dubai-based company eventually shut down channels used by the Islamic State terrorist group, as well as those of far-right extremists.
Telegram said in a statement that it was “evaluating the best approach and . . . seeking input from a wide range of third parties,” adding that it wanted to be “careful not to exacerbate an already serious situation with any hasty action.”
Some experts say that as technologies such as artificial intelligence make misinformation spread faster and easier, platforms need to invest in more moderation resources, including labeling, fact-checking and language capabilities.
Now, researchers at centers set up to track fact-checking and disinformation say their efforts are being hampered by platforms charging researchers higher fees or introducing other restrictions on access to their data.
“Last year we could have told you how much content the bots on X were spreading, but this year we can’t afford it,” Carley said. “Any NGO or think tank (in this area) is undermined.”
Additional reporting by Raya Jalabi in Beirut and Samer Al-Atrush in Dubai