
US and UK sign landmark agreement on testing safety of AI

By News Room
Last updated: 2024/04/01 at 10:58 PM

The US and UK have signed a landmark agreement on artificial intelligence, as the allies become the first countries to formally co-operate on how to test and assess risks from emerging AI models. 

The agreement, signed on Monday in Washington DC by UK science minister Michelle Donelan and US commerce secretary Gina Raimondo, lays out how the two governments will pool technical knowledge, information and talent on AI safety.

The deal represents the first bilateral arrangement on AI safety in the world and comes as governments push for greater regulation of the existential risks from new technology, such as its use in damaging cyber attacks or designing bioweapons.

“The next year is when we’ve really got to act quickly because the next generation of [AI] models are coming out, which could be complete game-changers, and we don’t know the full capabilities that they will offer yet,” Donelan told the Financial Times.  

The agreement will specifically enable the UK’s new AI Safety Institute (AISI), set up in November, and its US equivalent, which is yet to begin its work, to exchange expertise through secondments of researchers from both countries. The institutes will also work together on how to independently evaluate private AI models built by the likes of OpenAI and Google. 

The partnership is modelled on one between the UK’s Government Communications Headquarters (GCHQ) and the US National Security Agency, who work together closely on matters related to intelligence and security. 

“The fact that the United States, a great AI powerhouse, is signing this agreement with us, the United Kingdom, speaks volumes for how we are leading the way on AI safety,” Donelan said.

She added that since many of the most advanced AI companies were currently based in the US, the American government’s expertise was key to both understanding the risks of AI and to holding companies to their commitments. 

However, Donelan insisted that despite conducting research on AI safety and ensuring guardrails were in place, the UK did not plan to regulate the technology more broadly in the near term as it was evolving too rapidly.

The position stands in contrast to other nations and regions. The EU has passed its AI Act, considered the toughest regime on the use of AI in the world. US President Joe Biden has issued an executive order targeting AI models that may threaten national security. China has issued guidelines seeking to ensure the technology does not challenge its long-standing censorship regime.

Raimondo said AI was “the defining technology of our generation”.

“This partnership is going to accelerate both of our institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” she said.

“Our partnership makes clear that we aren’t running away from these concerns — we’re running at them. Because of our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

The UK government-backed AISI, which is chaired by tech investor and entrepreneur Ian Hogarth, has hired researchers such as Google DeepMind’s Geoffrey Irving and Chris Summerfield from the University of Oxford to start testing existing and unreleased AI models.

OpenAI, Google DeepMind, Microsoft and Meta are among the tech groups that signed voluntary commitments to open up their latest generative AI models for review by Britain’s AISI, which was established following the UK’s AI Safety Summit in Bletchley Park.

The institute is key to Prime Minister Rishi Sunak’s ambition for the UK to play a central role in overseeing the development of AI.

Testing has focused on the risks associated with the misuse of the technology, including cyber security, by leaning on expertise from the National Cyber Security Centre within GCHQ, according to a person with direct knowledge of the matter. 

Donelan said that she and Raimondo planned to discuss shared challenges, such as AI’s impact on upcoming elections this year. The science minister added that they would also discuss the need for computing infrastructure for AI, as well as “sharing our skillsets and how we can deepen our collaboration in general to get benefit for the public”.
