The plaintiff is conflicted. The legal arguments appear tenuous. And, in places, the 35-page lawsuit that Elon Musk filed last week with the Superior Court of California against OpenAI reads like a mash-up between a science fiction film script and a letter from a jilted lover. But Musk’s submission that OpenAI has breached its founding charter and endangers humanity by prioritising profit over safety may still turn into the most substantial move yet to scrutinise the company’s attempts to develop artificial general intelligence.
Since releasing its ChatGPT chatbot to slack-jawed astonishment in November 2022, OpenAI has emerged as the world’s hottest start-up with more than 100mn weekly users. The FT reported last month that OpenAI had topped $2bn in revenues on an annualised basis and had surged to a private market valuation of more than $80bn. The company has attracted $13bn of investment from Microsoft while other investors, including Singapore’s giant Temasek fund, are clamouring to jump on board.
Yet OpenAI started out as a far less racy outfit back in 2015. As Musk’s lawsuit spells out, OpenAI was founded as a non-profit research laboratory with a mission to develop AGI — a generalisable form of AI that surpasses human capabilities in most domains — for the public good. Alarmed by the dominance of Google in the field of AI and the possible existential risks of AGI, Musk teamed up with Sam Altman, then president of Y Combinator, to create a different kind of research organisation, “free from financial obligation”. “Our primary fiduciary duty is to humanity,” the company stated. To that end, it promised to share its designs, models and code.
Musk provided much of OpenAI’s early funding, contributing more than $44mn between 2016 and 2020, according to the lawsuit. But the non-profit entity found it hard to compete for talent with the deep-pocketed Google DeepMind, which was also intent on pursuing AGI. The extraordinary computing power needed to develop leading-edge generative AI models also drew OpenAI into the orbit of the cloud computing provider Microsoft.
That intense commercial pressure led OpenAI to establish a for-profit entity and later, according to the lawsuit, to set the “founding agreement aflame” in 2023 by accepting Microsoft’s massive investment. OpenAI was transformed into “a closed-source de facto subsidiary of the world’s largest technology company”. Its leading GPT-4 model was also incorporated into Microsoft’s services, primarily serving that company’s proprietary commercial interests. The failed attempt by OpenAI’s board to replace Altman as chief executive last year at least partly reflected the tensions between the company’s core founding purpose and its newfound moneymaking intent.
Naturally, OpenAI disputes Musk’s version of events and has moved to dismiss his legal claims. In a blog post, it argued that Musk had supported OpenAI’s move to create a for-profit business entity and had even wanted to fold the company into his car business Tesla. Musk has since launched his own AI company, xAI, to compete with OpenAI and has been trying to poach some of its researchers. “It’s possible that Musk is simply techwashing and creating chaos in the marketplace,” said the Center for AI Policy.
But Musk has a strong moral, if not a legal, case. If OpenAI could simply evolve from a sheltered non-profit enjoying charitable status into a for-profit enterprise, then every start-up would be structured that way. And, as the fiasco over Altman’s firing and rehiring showed, OpenAI’s board cannot be counted on to provide robust oversight on its own.
The time to create effective governance regimes for powerful AI companies is rapidly running out. This week, Anthropic, led by researchers who broke away from OpenAI in 2021, launched its Claude 3 model, which some users suggest surpasses GPT-4. “I think that AGI is already here,” Blaise Agüera y Arcas, a top Google AI researcher, told me last week. That achievement could generate great value but also pose significant risks, he argued in an essay co-written with Peter Norvig.
Regulators are currently investigating the competition implications of Microsoft’s tie-up with OpenAI. But the US administration’s promises to create an AI Safety Institute to monitor the leading companies appear to be going nowhere fast. Some may dismiss the row between Musk and Altman as a tiresome legal battle between billionaire tech bros. But, whatever his motives, Musk is performing a notable public service in forcing more transparency and accountability at OpenAI.