Where is artificial general intelligence? My grandfather’s guess is as good as yours

By News Room | Last updated: April 13, 2024

The writer is a technology analyst

In 1946, my grandfather, writing as “Murray Leinster”, published a science fiction story called “A Logic Named Joe”. In it, everyone has a computer (a “logic”) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues — “Check your censorship circuits!” — until they work out what to unplug. 

For as long as we’ve thought about computers, we’ve thought about making “artificial intelligence”, and wondered what that would mean. There’s an old joke that AI is whatever doesn’t work yet, because once it works it’s just software. Calculators do superhuman maths and databases have superhuman memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes: superhuman, but still just software. People, though, do have something different, and so, on some scale, do dogs, chimpanzees, octopuses and many other creatures. AI researchers call this “general intelligence”.

If we could make artificial general intelligence, or AGI, it should be obvious that this would be as important as computing, or electricity or perhaps steam. Today we print microchips, but what if you could print digital brains at the level of a human, or more than the level of a human, and do it by the billion? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more: steam engines did not have opinions about people. 

Every few decades since 1946, there’s been a wave of excitement that this might be close (in 1970 the AI pioneer Marvin Minsky claimed that we would have human-level AGI in three to eight years). The large language models (LLMs) that took off 18 months ago have started another such wave. This week, OpenAI and Meta signalled they were near to releasing new models that might be capable of reasoning and planning. Serious AI scientists who previously thought AGI was decades away now suggest that it might be much closer. 

At the extreme, the so-called “doomers” argue there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity. They call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (“This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it”), but plenty of it is sincere.  

However, for every expert who thinks AGI might be close, there’s another who doesn’t. There are some who think LLMs might scale all the way to AGI, and others who think we still need an unknown number of unknown further breakthroughs. More importantly, they would all agree that we don’t actually know.

The problem is that we don’t have a coherent theoretical model of what general intelligence really is, nor why people are better at it than dogs. Equally, we don’t know why LLMs seem to work so well, and we don’t know how much they can improve. We have many theories for parts of these questions, but we don’t understand the whole system. We can’t plot people and ChatGPT on a chart and say when one will reach the other.

Indeed, AGI itself is a thought experiment: what kind of AGI would we actually get? It might scale to 100x more intelligent than a person, or it might be faster but no more clever. We might only produce AGI that’s no more intelligent than a dog. We don’t know. 

This is why all conversations about AGI turn to analogies: if you can compare this to nuclear fission then you know what to do. But again, we had a theory of fission, and we have no such theory of AGI. Hence, my preferred analogy is the Apollo programme. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn’t explode, why they went up, and how far they needed to go. We have no equivalents here. We don’t know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe! 

What, then, is your preferred attitude to real but unknown risks? Do you worry, or shrug? Which thought experiments do you prefer? Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last fiscal year and can’t meet demand), but on a decade’s view the models will get more efficient and the chips will be everywhere. In the end, you can’t ban mathematics. It will happen anyway.  

By default, though, this latest excitement will follow all the other waves of AI, and become “just” more software and more automation. Automation has always produced frictional pain, back to the Luddites. The UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer. 
