OpenAI said it had begun training its next-generation artificial intelligence software, even as the start-up backtracked on earlier claims that it wanted to build “superintelligent” systems smarter than humans.
The San Francisco-based company said on Tuesday that it had started producing a new AI system “to bring us to the next level of capabilities” and that its development would be overseen by a new safety and security committee.
But while OpenAI is racing ahead with AI development, a senior executive appeared to walk back previous comments by chief executive Sam Altman that the company was ultimately aiming to build a “superintelligence” far more advanced than humans.
Anna Makanju, OpenAI’s vice-president of global affairs, told the Financial Times in an interview that its “mission” was to build artificial general intelligence capable of “cognitive tasks that are what a human could do today”.
“Our mission is to build AGI; I would not say our mission is to build superintelligence,” Makanju said. “Superintelligence is a technology that is going to be orders of magnitude more intelligent than human beings on Earth.”
Altman told the FT in November that he spent half of his time researching “how to build superintelligence”.
While fending off competition from Google’s Gemini and Elon Musk’s start-up xAI, OpenAI is attempting to reassure policymakers that it is prioritising responsible AI development after several senior safety researchers quit this month.
Its new committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo, and Nicole Seligman, and will report back to the remaining three members of the board.
The company did not say what the follow-up to GPT-4, which powers its ChatGPT app and received a major upgrade two weeks ago, could do or when it would launch.
Earlier this month, OpenAI disbanded its so-called superalignment team — tasked with focusing on the safety of potentially superintelligent systems — after Ilya Sutskever, the team’s leader and a co-founder of the company, quit.
Sutskever’s departure came months after he led a shock coup against Altman in November that ultimately proved unsuccessful.
The closure of the superalignment team has led to several employees leaving the company, including Jan Leike, another senior AI safety researcher.
Makanju emphasised that work on the “long-term possibilities” of AI was still being done “even if they are theoretical”.
“AGI does not yet exist,” Makanju added, saying such a technology would not be released until it was safe.
Training is the primary step in how an artificial intelligence model learns, drawing on the huge volume of data it is given. Once the model has digested that data and its performance has improved, it is validated and tested before being deployed into products or applications.
This lengthy and highly technical process means OpenAI’s new model may not become a tangible product for many months.
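To make the lifecycle described above concrete, here is a minimal, generic sketch of the train → validate → test → deploy sequence. This is not OpenAI’s pipeline; the model, dataset and split sizes are illustrative assumptions, using scikit-learn purely as an example.

```python
# A generic illustration of the training lifecycle, not OpenAI's method.
# The dataset and model here are stand-ins (assumptions for illustration).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for the "huge volume of data" a model is given.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Hold out validation and test sets before any training takes place.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# 1. Training: the model learns patterns from the training data.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# 2. Validation: performance is checked on unseen data; in practice this
#    guides tuning and further rounds of training.
val_acc = accuracy_score(y_val, model.predict(X_val))

# 3. Testing: a final check on held-out data before release.
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"validation accuracy: {val_acc:.3f}, test accuracy: {test_acc:.3f}")

# 4. Deployment: only after these checks is a model shipped into products.
```

For a frontier system the same stages apply, but each one runs at vastly greater scale, which is why the gap between the start of training and a shipped product can stretch to many months.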
Additional reporting by Madhumita Murgia in London