We need a Food and Drug Administration for AI

By News Room
Last updated: September 25, 2024

The writer is executive director of the Aspen Strategy Group and a visiting fellow at Stanford University’s Hoover Institution

While millions of lives have been saved by medical drugs, many thousands died during the 19th century after ingesting unsafe medicines sold by charlatans. Across the US and Europe, this led to the gradual introduction of food and drug safety laws and institutions, including the US Food and Drug Administration, to ensure that a medicine’s benefits outweigh its harms.

The rise of large language models such as GPT-4 is turbocharging industries, making everything from scientific innovation to education to film-making easier and more efficient. But alongside these enormous benefits, the technology can also create severe national security risks.

We wouldn’t allow a new drug to be sold without thorough testing for safety and efficacy, so why should AI be any different? Creating a “Food and Drug Administration for AI” may be a blunt metaphor, as the AI Now Institute has written, but it is time for governments to mandate AI safety testing.

The UK government under the former prime minister Rishi Sunak deserves real credit here: within a year of Sunak taking office, the UK held the game-changing Bletchley Park AI Safety Summit, set up a relatively well-funded AI Safety Institute and screened five leading large language models.

The US and other countries such as Singapore, Canada and Japan are emulating the UK’s approach, but these efforts are still in their infancy. OpenAI and Anthropic are voluntarily allowing the US and UK to test their models, and should be commended for this. 

It is now time to go further. The most glaring gap in our current approach to AI safety is the lack of mandatory, independent and rigorous testing to prevent AI from doing harm. Such testing should apply only to the largest models, and should be required before they are unleashed on the public.

While drug testing can take years, the technical teams at the AI Safety Institute have been able to conduct narrowly focused tests in the span of a few weeks. Safety testing would therefore not meaningfully slow innovation.

Testing should focus specifically on the extent to which the model could cause tangible, physical harms, such as its ability to help create biological or chemical weapons and undermine cyber defences. It is also important to gauge whether the model is challenging for humans to control and capable of training itself to “jailbreak” out of the safety features designed to constrain it. Some of this has already happened — in February 2024 it was discovered that hackers working for China, Russia, North Korea and Iran had used OpenAI’s technology to carry out novel cyber attacks. 

While ethical AI and bias are critical issues as well, there is more disagreement within society about what constitutes such bias. Testing should thus initially focus on national security and physical harm to humans as the pre-eminent threats posed by AI. Imagine, for example, if a terrorist group were to use AI-powered, self-driving vehicles to target and set off explosives, a fear voiced by Nato.

Once they pass this initial testing, AI companies, much like those in the pharmaceutical industry, should be required to closely and consistently monitor possible abuse of their models, and to report misuse immediately. Such monitoring is standard practice for drugmakers, and ensures that potentially harmful drugs are withdrawn.

In exchange for such monitoring and testing, companies that co-operate should receive a “safe harbour” to shield them from some legal liability. Both the US and UK legal systems have existing laws that balance the danger and utility of products such as engines, cars, drugs and other technologies. For example, airlines that have otherwise complied with safety regulations are usually not liable for the consequences of unforeseeable natural disasters.

If those building the AI refuse to comply, they should face penalties, just as pharmaceutical companies do if they withhold data from regulators. 

California is paving the way forward here: last month, the state’s legislature passed a bill, currently awaiting approval from Governor Gavin Newsom, requiring AI developers to create safety protocols to mitigate “critical harms”. Provided it is not overly onerous, this is a move in the right direction.

For decades, robust reporting and testing requirements in the pharmaceutical sector have allowed for the responsible advancement of drugs that help, not harm, the human population. Similarly, while the AI Safety Institute in the UK and its counterparts elsewhere represent a crucial first step, to reap the full benefits of AI we need immediate, concrete action to create and enforce safety standards, before models cause real-world harm.
