
The danger of deepfakes is not what you think

Last updated: 2024/06/20 at 9:13 AM
By News Room

One of our shoutiest moral panics these days is the fear that artificial intelligence-enabled deepfakes will degrade democracy. Half of the world’s population are voting in 70 countries this year. Some 1,500 experts polled by the World Economic Forum in late 2023 reckoned that misinformation and disinformation were the most severe global risk over the next two years. Even extreme weather risks and interstate armed conflict were seen as less threatening. 

But, type it gently, their concerns appear overblown. Not for the first time, the Davos consensus might be wrong.

Deception has been a feature of human nature since the Greeks dumped a wooden horse outside Troy’s walls. More recently, the Daily Mail’s publication of the Zinoviev letter — a forged document purportedly from the Soviet head of Comintern — had a big impact on the British general election of 1924.

Of course, that was before the internet age. The concern now is that the power of AI might industrialise such disinformation. The internet has cut the cost of content distribution to zero. Generative AI is slashing the cost of content generation to zero. The result may be an overwhelming volume of information that can, as the US political strategist Steve Bannon memorably put it, “flood the zone with shit”.

Deepfakes — realistic, AI-generated audio, image or video impersonations — pose a particular threat. The latest avatars generated by leading AI companies are so good that they are all but indistinguishable from the real thing. In such a world of “counterfeit people”, as the late philosopher Daniel Dennett called them, who can you trust online? The danger is not so much that voters will trust the untrustworthy but that they will distrust the trustworthy.

Yet, so far at least, deepfakes are not wreaking as much political damage as feared. Some generative AI start-ups argue that the problem is more about distribution than generation, passing the buck to the giant platform companies. At the Munich Security Conference in February, 20 of those big tech companies, including Google, Meta and TikTok, pledged to stifle deepfakes designed to mislead. How far the companies are living up to their promises is, as yet, hard to tell, but the relative lack of scandals is encouraging.

The open-source intelligence movement, which includes legions of cyber sleuths, has also been effective at debunking disinformation. US academics have created a Political Deepfakes Incidents Database to track and expose the phenomenon, recording 114 cases up to this January. And it could well be that the increasing use of AI tools by millions of users is itself deepening public understanding of the technology, inoculating people against deepfakes.

Tech-savvy India, which has just held the world’s biggest democratic election with 642mn people casting a vote, was an interesting test case. There was extensive use of AI tools to impersonate candidates and celebrities, generate endorsements from dead politicians and throw mud at opponents in the political maelstrom of Indian democracy. Yet the election did not appear to be disfigured by the digital manipulation.

Two Harvard Kennedy School experts, Vandinika Shukla and Bruce Schneier, who studied the use of AI in the campaign, concluded that the technology was mostly used constructively.

For example, some politicians used the official Bhashini platform and AI apps to dub their speeches into India’s 22 official languages, deepening connections with voters. “The technology’s ability to produce non-consensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible,” they wrote.

This does not mean the use of deepfakes is always benign. They have already been used to cause criminal damage and personal distress. Earlier this year, the British engineering company Arup was scammed out of $25mn in Hong Kong after fraudsters used a digitally cloned video of a senior manager to order a financial transfer. This month, explicit deepfake images of 50 girls from Bacchus Marsh Grammar school in Australia were circulated online. It appeared that the girls’ photos had been lifted from social media posts and manipulated to create the images.

Criminals are often among the earliest adopters of any new technology. It is their sinister use of deepfakes to target private individuals that should concern us most. Public uses of the technology for nefarious means are more likely to be rapidly exposed and countered. We should worry more about politicians spouting authentic nonsense than fake AI avatars generating inauthentic gibberish.
