
Generative AI models are skilled in the art of bullshit

By News Room
Last updated: May 23, 2025, 4:10 AM

Lies are not the greatest enemy of the truth, according to the philosopher Harry Frankfurt. Bullshit is worse. 

As he explained in his classic essay On Bullshit (1986), a liar and a truth teller are playing the same game, just on opposite sides. Each responds to facts as they understand them and either accepts or rejects the authority of truth. But a bullshitter ignores these demands altogether. “He does not reject the authority of truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.” Such a person wants to convince others, irrespective of the facts.

Sadly, Frankfurt died in 2023, just a few months after ChatGPT was released. But reading his essay in the age of generative artificial intelligence provokes a queasy familiarity. In several respects, it neatly describes the output of AI-enabled large language models. They are not concerned with truth because they have no conception of it. They operate by statistical correlation, not empirical observation.

“Their greatest strength, but also their greatest danger, is their ability to sound authoritative on nearly any topic irrespective of factual accuracy. In other words, their superpower is their superhuman ability to bullshit,” Carl Bergstrom and Jevin West have written. The two University of Washington professors run an online course — Modern-Day Oracles or Bullshit Machines? — scrutinising these models. Others have dubbed the machines’ output “botshit”.

One of the best-known and most unsettling, yet sometimes interestingly creative, features of LLMs is their “hallucination” of facts — or simply making stuff up. Some researchers argue this is an inherent feature of probabilistic models, not a bug that can be fixed. But AI companies are trying to solve this problem by improving the quality of the data, fine-tuning their models and building in verification and fact-checking systems.

They would appear to have some way to go, though, considering that a lawyer for Anthropic told a Californian court this month that their law firm had itself unintentionally submitted an incorrect citation hallucinated by the AI company’s Claude. As Google’s chatbot flags to users: “Gemini can make mistakes, including about people, so double-check it.” That did not stop Google from rolling out an “AI mode” to all its main services in the US this week.

The ways in which these companies are trying to improve their models, including reinforcement learning from human feedback, themselves risk introducing bias, distortion and undeclared value judgments. As the FT has shown, AI chatbots from OpenAI, Anthropic, Google, Meta, xAI and DeepSeek describe the qualities of their own companies’ chief executives and those of rivals very differently. Elon Musk’s Grok has also promoted memes about “white genocide” in South Africa in response to wholly unrelated prompts. xAI said it had fixed the glitch, which it blamed on an “unauthorised modification”.

Such models create a new, even worse category of potential harm — or “careless speech”, according to Sandra Wachter, Brent Mittelstadt and Chris Russell, in a paper from the Oxford Internet Institute. In their view, careless speech can cause intangible, long-term and cumulative harm. It’s like “invisible bullshit” that makes society dumber, Wachter tells me.

At least with a politician or salesperson we can normally understand their motivation. But chatbots have no intentionality and are optimised for plausibility and engagement, not truthfulness. They will invent facts for no purpose. They can pollute the knowledge base of humanity in unfathomable ways.

The intriguing question is whether AI models could be designed for higher truthfulness. Will there be a market demand for them? Or should model developers be forced to abide by higher truth standards, as apply to advertisers, lawyers and doctors, for example? Wachter suggests that developing more truthful models would take time, money and resources that the current iterations are designed to save. “It’s like wanting a car to be a plane. You can push a car off a cliff but it’s not going to defy gravity,” she says. 

All that said, generative AI models can still be useful and valuable. Many lucrative business — and political — careers have been built on bullshit. Appropriately used, generative AI can be deployed for myriad business use cases. But it is delusional, and dangerous, to mistake these models for truth machines. 

john.thornhill@ft.com
