It may sound like a science fiction plot, but in the future artificial intelligence could conceivably reach the point of rapid self-improvement, evade human control, and unleash chaos through cyber attacks or even nuclear disasters. That is the concern of some scientists and developers, and it motivated an AI safety bill in California, home to 32 of the world's 50 leading AI companies. But on Sunday, state governor Gavin Newsom vetoed the legislation. Critics see the decision as a big win for Big Tech, a reckless blow to public safety, and a missed opportunity to set de facto AI safety standards nationally. It is not that simple.
Setting out rules to protect against the potential harms of a technology, particularly one still in development, is a tricky balancing act. Regulation that is too overbearing risks stifling innovation altogether, meaning society misses out on the technology's potential benefits too. And although California's bill was watered down following intense lobbying from Silicon Valley, uncertainties around its effect on AI development and deployment remained.
One broad aim of California's bill was to raise developers' accountability for the misuse of their models. However admirable that may be, it can have side effects. It is difficult for developers to know ex ante how their technology might be used, and they might respond by pulling back from research. AI experts also worried that the bill's safety protocols, which included a requirement for companies to build a "kill switch" into models above a certain threshold, could discourage the development and use of open-source models, where much innovation takes place.
Another worry was that the legislation did not specifically target AI systems used in high-risk settings, such as critical infrastructure, or those handling sensitive data. It applied stringent standards even to basic functions.
Given these concerns, Newsom's decision seems reasonable. That, however, does not mean tech companies should get a free run. As the AI race gains speed, there is a genuine concern that model builders could overlook weak spots. It would make sense for lawmakers now to rework the proposed rules and clarify the vague wording, to better balance safety against the impact on innovation today. Newsom announced a promising partnership with experts to develop "workable guardrails". It is also welcome that the governor has recently signed bills targeting clear and present AI risks, rather than hypothetical ones, including those around deepfakes and misinformation.
While California’s leadership on AI regulation is commendable, it would also be better if safety rules were hashed out and enacted at a federal level. That would provide protections across America, prevent a patchwork of varying state laws from emerging, and avoid putting the Golden State — the epicentre of American and global AI innovation — at a competitive disadvantage.
Indeed, though the allure of Silicon Valley's investor and talent pools remains strong, there is a risk that unilateral and overly stringent AI regulation could push model development elsewhere, weakening the state's AI ecosystem in the process. As it is, California has high taxes and is the most heavily regulated state in the US, and property is expensive too. Firms including US data analytics business Palantir and brokerage Charles Schwab have recently left the state, and some tech companies have cut office space.
Managing safety concerns around AI development is an art in preserving the good while insuring against the bad. Technological threats to our societies should not be taken lightly, but nor should the risk of stunting an innovation that could help diagnose diseases, accelerate scientific research, and boost productivity. It is worth making the effort to get it right.