OpenAI’s top safety leaders left the company this week after a disagreement over whether to prioritise “shiny products” or safety reached “breaking point”, according to one of the departing researchers.
Jan Leike, who led OpenAI’s efforts to steer and control super-powerful AI tools, said he quit on Thursday after clashing with his bosses about the amount of time and resources the start-up is putting into those efforts.
“Over the past years, safety culture and processes have taken a back seat to shiny products,” wrote Leike in a post on social media site X on Friday.
OpenAI has been the frontrunner in a fierce race to build ever more powerful models, competing with rivals including Google, Meta and Anthropic to push the frontiers of AI technology.
The company has raised billions of dollars — including $13bn from Microsoft — to build AI models that can interpret text, speech and images and can demonstrate reasoning abilities. The pace of those advances has stoked concerns about everything from the spread of disinformation to the existential risk should AI tools “go rogue”.
Leike, one of OpenAI’s most highly regarded researchers, left alongside Ilya Sutskever, the company’s co-founder and co-lead of the safety-focused “superalignment team”, who announced his resignation earlier this week.
That in effect disbands the team at OpenAI most explicitly focused on ensuring its technology is developed safely. It has also exposed a growing tension at the heart of the company between capitalising on an early lead in AI and abiding by its core mission of ensuring super-powerful AI “benefits all humanity”.
“We urgently need to figure out how to steer and control AI systems much smarter than us,” Leike wrote. “I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Sam Altman, OpenAI’s chief executive, wrote on X that he was “very sad to see [Leike] leave. He’s right we have a lot more to do; we are committed to doing it.”
Concerns over safety were also a factor in November’s boardroom drama at OpenAI, during which Altman was ousted by directors — including Sutskever — only to return four days later. Before he was sacked, Altman had clashed with then-board member Helen Toner, who compared OpenAI’s approach to safety to that of rival Anthropic in a way Altman felt was unfavourable to his company.
OpenAI launched its superalignment team last year, saying it was designed to address concerns superintelligent machines “could lead to the disempowerment of humanity or even human extinction”. At the time, the company suggested AI could outsmart humans within the decade. In the months since, the start-up has been behind a number of major advances.
The company committed to allocate 20 per cent of its computing resources to support the team’s work ensuring AI would align to human interests even as it became exponentially more powerful.
But Leike said not enough attention had been given to the safety and societal impact of more powerful models: “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
The superalignment team struggled to access computing resources, which were being sucked into developing new models for consumers such as GPT-4o, OpenAI’s latest model released on Monday, Leike said.
“Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” he wrote.