News

OpenAI put ‘shiny products’ over safety, departing top researcher says

By News Room | Last updated: 2024/05/17 at 3:48 PM


OpenAI’s top safety leaders left the company this week after a disagreement over whether to prioritise “shiny products” or safety reached “breaking point”, one of the departing researchers says.

Jan Leike, who led OpenAI’s efforts to steer and control super-powerful AI tools, said he quit on Thursday after clashing with his bosses about the amount of time and resources the start-up is putting into those efforts.

“Over the past years, safety culture and processes have taken a back seat to shiny products,” wrote Leike in a post on social media site X on Friday.

OpenAI has been the frontrunner in a fierce race to build ever more powerful models, competing with rivals including Google, Meta and Anthropic to push the frontiers of AI technology.

The company has raised billions of dollars — including $13bn from Microsoft — to build AI models that can interpret text, speech and images and can demonstrate reasoning abilities. The pace of those advances has stoked concerns about everything from the spread of disinformation to the existential risk should AI tools “go rogue”.

Leike, one of OpenAI’s most highly regarded researchers, left alongside Ilya Sutskever, the company’s co-founder and co-lead of the safety-focused “superalignment team”, who announced his resignation earlier this week.

That in effect disbands the team at OpenAI most explicitly focused on ensuring its technology is developed safely. It has also exposed a growing tension at the heart of the company between capitalising on an early lead in AI and abiding by its core mission of ensuring super-powerful AI “benefits all humanity”.

“We urgently need to figure out how to steer and control AI systems much smarter than us,” Leike wrote. “I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Sam Altman, OpenAI’s chief executive, wrote on X that he was “very sad to see [Leike] leave. He’s right we have a lot more to do; we are committed to doing it.”

Concerns over safety were also a factor in November’s boardroom drama at OpenAI, during which Altman was ousted by directors — including Sutskever — only to return four days later. Before he was sacked, Altman had clashed with then-board member Helen Toner, who compared OpenAI’s approach to safety to that of rival Anthropic in a way Altman felt was unfavourable to his company.

OpenAI launched its superalignment team last year, saying it was designed to address concerns superintelligent machines “could lead to the disempowerment of humanity or even human extinction”. At the time, the company suggested AI could outsmart humans within the decade. In the months since, the start-up has been behind a number of major advances.

The company committed to allocate 20 per cent of its computing resources to support the team’s work ensuring AI would align with human interests even as it became exponentially more powerful.

But Leike said not enough attention had been given to the safety and societal impact of more powerful models: “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

The superalignment team struggled to access computing resources, which were being sucked into developing new models for consumers such as GPT-4o, OpenAI’s latest model released on Monday, Leike said.

“Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” he wrote.
