
OpenAI still has a governance problem

By News Room
Last updated: 2025/05/08 at 8:55 AM


It can be hard to train a chatbot. Last month, OpenAI rolled back an update to ChatGPT because its “default personality” was too sycophantic. (Maybe the company’s training data was taken from transcripts of US President Donald Trump’s cabinet meetings . . .)

The artificial intelligence company had wanted to make its chatbot more intuitive but its responses to users’ enquiries skewed towards being overly supportive and disingenuous. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right,” the company said in a blog post.

Reprogramming sycophantic chatbots may not be the most crucial dilemma facing OpenAI but it chimes with its biggest challenge: creating a trustworthy personality for the company as a whole. This week, OpenAI was forced to roll back its latest planned corporate update designed to turn the company into a for-profit entity. Instead, it will transition to a public benefit corporation, remaining under the control of a non-profit board. 

That will not resolve the structural tensions at the core of OpenAI. Nor will it satisfy Elon Musk, one of the company’s co-founders, who is pursuing legal action against OpenAI for straying from its original purpose. Does the company accelerate AI product deployment to keep its financial backers happy? Or does it pursue a more deliberative scientific approach to remain true to its humanitarian intentions?

OpenAI was founded in 2015 as a non-profit research lab dedicated to developing artificial general intelligence for the benefit of humanity. But the company’s mission — as well as the definition of AGI — has since blurred.

Sam Altman, OpenAI’s chief executive, quickly realised that the company needed vast amounts of capital to pay for the research talent and computing power required to stay at the forefront of AI research. To that end, OpenAI created a for-profit subsidiary in 2019. Such was the breakout success of chatbot ChatGPT that investors have been happy to throw money at it, valuing OpenAI at $260bn during its latest fundraise. With 500mn weekly users, OpenAI has become an “accidental” consumer internet giant.

Altman, who was fired and rehired by the non-profit board in 2023, now says that he wants to build a “brain for the world” that might require hundreds of billions, if not trillions, of dollars of further investment. The only trouble with his wild-eyed ambition is that — as the tech blogger Ed Zitron rants about in increasingly salty terms — OpenAI has yet to develop a viable business model. Last year, the company spent $9bn and lost $5bn. Is its financial valuation based on a hallucination? There will be mounting pressure from investors on OpenAI to commercialise its technology rapidly.

Moreover, the definition of AGI keeps shifting. Traditionally, it has referred to the point at which machines surpass humans across a wide range of cognitive tasks. But in a recent interview with Stratechery’s Ben Thompson, Altman acknowledged that the term had been “almost completely devalued”. He did accept, however, a narrower definition of AGI as an autonomous coding agent that could write software as well as any human.

On that score, the big AI companies seem to think they are close to AGI. One giveaway is reflected in their own hiring practices. According to Zeki Data, the top 15 US AI companies had been frantically hiring software engineers at a rate of up to 3,000 a month, recruiting a total of 500,000 between 2011 and 2024. But lately their net monthly hiring rate has dropped to zero as these companies anticipate that AI agents can perform many of the same tasks.

A recent research paper from Google DeepMind, which also aspires to develop AGI, highlighted four main risks of increasingly autonomous AI models: misuse by bad actors; misalignment when an AI system does unintended things; mistakes which cause unintentional harm; and multi-agent risks when unpredictable interactions between AI systems produce bad outcomes. These are all mind-bending challenges that carry some potentially catastrophic risks and may require some collaborative solutions. The more potent AI models become, the more cautious developers should be in deploying them. 

How frontier AI companies are governed is therefore not just a matter for corporate boards and investors, but for all of us. OpenAI is still worryingly deficient in that regard, with conflicting impulses. Wrestling with sycophancy is going to be the least of its problems as we get closer to AGI, however you define it.

john.thornhill@ft.com

