Google will require election ads to ‘prominently disclose’ AI content

Google will require “prominent disclosure” from verified advertisers when campaign ads “inauthentically depict” people or events, the company said on Wednesday, in an effort to combat the spread of digitally altered images used for political gain.

Examples of ads that require disclosure, Google said, include those that “make it appear as if a person is saying or doing something they didn’t say or do” and those that alter footage to “depict scenes that didn’t actually take place.”

The tech company said the policy would take effect in mid-November, a year before the U.S. presidential and congressional elections.

The announcement comes a week before top tech executives, including Google CEO Sundar Pichai, Microsoft boss Satya Nadella and former Microsoft CEO Bill Gates, are due to attend an AI forum hosted by Senate Majority Leader Chuck Schumer in Washington that is likely to form the basis for AI legislation. Other attendees at the closed-door AI Insight Forum include Elon Musk and Mark Zuckerberg.

The rise of artificial intelligence has fueled concerns that content in the 2024 U.S. election will be doctored to deceive voters. In July, an advertisement from Never Back Down, a fundraising group supporting Florida Gov. Ron DeSantis, appeared to use artificial intelligence to recreate the voice of former President Donald Trump reading his social media posts.

The recent boom in generative AI models like ChatGPT and Midjourney means users can easily create convincing fake videos and images.

Mandiant, the cybersecurity firm owned by Google, said last month there had been an increase in the use of artificial intelligence for online manipulation of information, but added that the impact so far had been limited. Its report said it tracked the activities of groups linked to the governments of Russia, China and other countries.

Google has been under pressure for years to limit misinformation on its search engine, one of the most widely used sources of information, and on other platforms such as YouTube. In 2017, the company announced its first attempt to stop the spread of “fake news” on its search engine with a tool that allows users to report misleading content.

In June, the European Union ordered platforms such as Google and Meta to step up efforts to combat disinformation, including labeling content generated by artificial intelligence.

Facebook, one of the largest political advertising platforms, updated its policy on videos posted to its platform in 2020, banning synthetic “misleading and manipulated media,” including “deepfakes” in which a person is digitally altered to appear as someone else. It has no specific policy against AI-generated political ads.

X, formerly known as Twitter, last month reversed a policy that had banned all political advertising globally since 2019, raising concerns about misinformation ahead of the 2024 election.

The Federal Election Commission declined to comment on Google’s new policy.
