Google will require verified advertisers to “prominently disclose” when a campaign ad “inauthentically depicts” people or events, in a bid to combat the spread of digitally manipulated images for political gain, the company said on Wednesday.
Google listed examples of ads that would require the disclosure, including those that make “it appear as if a person is saying or doing something they didn’t say or do” and those that change footage to “depict scenes that did not actually take place”.
The tech company said the policy would take effect in mid-November, a year ahead of the US presidential and congressional elections.
The announcement comes just a week before top tech executives including Google chief executive Sundar Pichai, Microsoft boss Satya Nadella and Microsoft’s former chief Bill Gates are set to attend an artificial intelligence forum hosted by Senate majority leader Chuck Schumer in Washington that is expected to lay the groundwork for legislation on AI. Other attendees at the closed-door AI Insight Forum will include Elon Musk and Mark Zuckerberg.
The rise of AI has increased fears of altered content deceiving voters in the 2024 US elections. In July, an ad from Never Back Down, a fundraising group supporting Florida governor Ron DeSantis, appeared to use AI to re-create former president Donald Trump’s voice reading a message he posted on social media.
A recent boom in generative AI models, such as ChatGPT and Midjourney, means users can easily create convincing fake videos and images.
Mandiant, a cyber security firm owned by Google, said last month it had seen an increase in the use of AI to conduct manipulative information campaigns online, but added that the impact had so far been limited. Its report said it had tracked campaigns from groups linked to the governments of Russia, China and other nations.
Google has for years faced pressure to restrict misinformation on its search engine, one of the most widely used sources of information, and on other platforms such as YouTube. In 2017, it announced its first attempt to stop the circulation of “fake news” on its search engine with tools that allowed users to report misleading content.
In June, the EU ordered platforms such as Google and Meta to improve their efforts to fight against false information, including by adding labels to content generated by AI.
Facebook, one of the largest platforms for political ads, updated its video policy in 2020 to ban “misleading manipulative media” that had been “synthesised”, including “deepfakes”, in which a person is digitally altered to appear as someone else. It does not have a specific policy for AI-generated political ads.
X, formerly Twitter, last month reversed a policy that had banned all political ads globally since 2019, raising concerns about misinformation ahead of the 2024 election.
The Federal Election Commission declined to comment on Google’s new policy.