Pink Floyd strike a chord as scientists recreate song from brain activity

By News Room | Last updated: August 15, 2023, 10:23 PM

“All in all, it was just a brick in the wall” — the chorus of Pink Floyd’s classic song emerged from speakers in a neuroscience lab at the University of California, Berkeley, the rhythms and words sounding muddy but recognisable.

The track was not a recording by the rock band but had been generated using artificial intelligence techniques from the brainwaves of people listening to it, in the world’s first scientific experiment to reconstruct a recognisable song from neural signals.

The findings will be invaluable both for scientists seeking to understand how the brain responds to music and for neurotechnologists who want to help people with severe neurological damage to communicate through brain-computer interfaces in a way that sounds more natural, whether they are speaking or singing.

“Music has prosody [patterns of rhythm and sound] and emotional content,” said Robert Knight, UC Berkeley professor of psychology and neuroscience, who led the research and whose findings were published in the journal PLOS Biology on Tuesday.

“As the whole field of brain-machine interfaces progresses, this provides a way to add human tone and rhythm to future brain implants for people who need speech or voice outputs . . . that’s what we’ve really begun to crack the code on,” added Knight.

The electroencephalography (EEG) recordings used in the research were obtained around 2012, at a time when people with severe epilepsy often had large arrays of electrodes (typically 92 per patient) placed over the surface of the brain to pinpoint the location of intractable seizures.

The patients volunteered to help scientific research at the same time by allowing researchers to record their brainwaves while they listened to speech and music.

Previous studies based on these experiments gave scientists enough data to reconstruct individual words that people were hearing from recordings of their brain activity. But only now, a decade later, has AI become powerful enough to reconstruct passages of song.
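
The article does not spell out the decoding method, but work of this kind typically trains a regression model to map features of the recorded neural activity onto the time-frequency spectrogram of the audio, then resynthesises a waveform from the predicted spectrogram. The sketch below illustrates that general idea with ridge regression on simulated data; the lagged-feature construction, model choice and dimensions are assumptions for illustration, not the study's actual pipeline.

    # Minimal sketch (illustration only): decode an audio spectrogram from
    # simulated neural activity with ridge regression. Real studies use actual
    # intracranial recordings and more sophisticated models.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)

    n_times = 2000       # time bins
    n_electrodes = 92    # electrodes per patient, per the article
    n_freqs = 32         # spectrogram frequency bins (assumed)
    n_lags = 10          # neural time bins feeding each spectrogram prediction

    # Simulated "ground truth": a spectrogram and neural activity that partly reflects it.
    spectrogram = rng.standard_normal((n_times, n_freqs))
    mixing = rng.standard_normal((n_freqs, n_electrodes))
    neural = spectrogram @ mixing + 0.5 * rng.standard_normal((n_times, n_electrodes))

    def lagged_features(x, n_lags):
        # Stack time-lagged copies so each prediction sees a short window of activity.
        return np.concatenate([np.roll(x, lag, axis=0) for lag in range(n_lags)], axis=1)

    X = lagged_features(neural, n_lags)

    # Fit on the first 80% of time bins, reconstruct the held-out stretch of the song.
    split = int(0.8 * n_times)
    model = Ridge(alpha=10.0)
    model.fit(X[:split], spectrogram[:split])
    reconstructed = model.predict(X[split:])

    print("held-out R^2:", r2_score(spectrogram[split:], reconstructed))
    # A full pipeline would invert the predicted spectrogram back into audible audio.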

The Berkeley researchers analysed recordings from 29 patients who heard Pink Floyd’s “Another Brick in the Wall (Part 1)”, part of a trilogy of songs from the 1979 album The Wall. They pinpointed areas of the brain involved in detecting rhythm and found that some parts of the auditory cortex, located just behind and above the ear, responded at the onset of a voice or synthesiser while others responded to sustained vocals.

[Image: X-rays of patients taking part in the research, showing electrodes on their skulls © Peter Brunner]

The findings supported longstanding ideas about the roles played by the brain’s two hemispheres. Although they work closely together, language is processed predominantly on the left side, while “music is more distributed, with a bias towards [the] right”, said Knight.

His colleague Ludovic Bellier, who led the analysis, said that devices used to help people communicate when they cannot speak tend to vocalise words one by one. Sentences spoken by such machines have a robotic quality reminiscent of how the late Stephen Hawking sounded on his speech-generating device.

“We want to give more colour and expressive freedom to the vocalisation, even when people are not singing,” said Bellier.

The Berkeley researchers said brain-reading technology could be extended to the point where musical thoughts could be decoded from someone wearing an EEG cap on the scalp rather than requiring electrodes under the skull on the brain. It might then be possible to imagine or compose music, relay the musical information and hear it played on external speakers.

“Non-invasive techniques are just not accurate enough today,” said Bellier. “Let’s hope that in the future we could, just from electrodes placed outside on the skull, read activity from deeper regions of the brain with a good signal quality.”
