How to keep the lid on the Pandora’s box of open AI

By News Room
Last updated: 2023/11/16 at 8:56 AM

The writer is founder of Sifted, an FT-backed site about European start-ups

It is rapidly emerging as one of the most important technological, and increasingly ideological, divides of our times: should powerful generative artificial intelligence systems be open or closed? How that debate plays out will affect the productivity of our economies, the stability of our societies and the fortunes of some of the world’s richest companies.

Supporters of open-source models, such as Meta’s LLaMA 2 or Hugging Face’s Bloom, which enable users to customise powerful generative AI software themselves, say they broaden access to the technology, stimulate innovation and improve reliability by encouraging outside scrutiny. Far cheaper to develop and deploy, smaller open models also inject competition into a field dominated by big US companies such as Google, Microsoft and OpenAI. These companies have invested billions in developing massive, closed generative AI systems, which they closely control.

But detractors argue that open models risk lifting the lid on a Pandora’s box of troubles. Bad actors can exploit them to disseminate personalised disinformation on a global scale, while terrorists might use them to manufacture cyber or bioweapons. “The danger of open source is that it enables more crazies to do crazy things,” Geoffrey Hinton, one of the pioneers of modern AI, has warned.

The history of OpenAI, which developed the popular ChatGPT chatbot, is itself instructive. As its name suggests, the research company was founded in 2015 with a commitment to develop the technology as openly as possible. But it later abandoned that approach for both competitive and safety reasons. “Flat out, we were wrong,” Ilya Sutskever, OpenAI’s chief scientist, told The Verge. 

Once OpenAI realised that its generative AI models were going to be “unbelievably potent”, it made little sense to open source them, he said. “I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”

Supporters of open models hit back, ridiculing the idea that open generative AI models enable people to access information they could not otherwise find from the internet or a rogue scientist. They also highlight the competitive self-interest of the big tech companies in shouting about the dangers of open models. These companies’ sinister intent, critics suggest, is to capture regulators, imposing higher compliance costs on insurgents and thus entrenching their own market dominance.

But there is an ideological dimension to this debate, too. Yann LeCun, chief scientist of Meta, which has broken ranks with the other Silicon Valley giants by championing open models, has likened rival companies’ arguments for controlling the technology to medieval obscurantism: the belief that only a self-selecting priesthood of experts is wise enough to handle knowledge. 

In the future, he told me recently, all our interactions with the vast digital repository of human knowledge will be mediated through AI systems. We should not want a handful of Silicon Valley companies to control that access. Just as the internet flourished by resisting attempts to enclose it, so AI will thrive by remaining open, LeCun argues, “as long as governments around the world do not outlaw the whole idea of open source AI”.

Recent discussions at the Bletchley Park AI safety summit suggest at least some policymakers may now be moving in that direction. But other experts are proposing more lightweight interventions that would improve safety without killing off competition. 

Wendy Hall, regius professor of computer science at the University of Southampton and a member of the UN’s AI advisory body, says we do not want to live in a world where only the big companies run generative AI. Nor do we want to allow users to do anything they like with open models. “We have to find some compromise,” she suggests.

Her preferred solution, gaining traction elsewhere, is to regulate generative AI models in a similar way to the car industry. Regulators impose strict safety standards on car manufacturers before they release new models. But they also impose responsibilities on drivers and hold them accountable for their actions. “If you do something with open source that is irresponsible and that causes harm you should go to jail — just like if you kill someone when driving a car,” Hall says.

We should certainly resist the tyranny of the binary when it comes to thinking about AI models. Both open and closed models have their benefits and flaws. As the capabilities of these models evolve, we will constantly have to tweak the weightings between competition and control.
