As the strains of Pink Floyd's "Another Brick in the Wall, Part 1" filled the hospital room, neuroscientists at Albany Medical Center carefully recorded the activity of electrodes placed on the brains of patients being prepared for epilepsy surgery.
The aim? To capture the electrical activity of brain regions tuned to attributes of the music (pitch, rhythm, harmony, and lyrics) and see whether the researchers could reconstruct what the patients were hearing.
More than a decade later, after a detailed analysis of data from 29 of those patients by neuroscientists at the University of California, Berkeley, the answer is clearly yes.
The phrase "All in all it was just a brick in the wall" comes through recognizably in the reconstructed song, its rhythms intact and the words muddy but decipherable. This is the first time researchers have reconstructed a recognizable song from brain recordings.
A short snippet of the Pink Floyd song "Another Brick in the Wall, Part 1" being played for the subjects.
Reconstruction of the song from recordings of electrical activity in the brain's auditory cortex.
The reconstruction demonstrates the feasibility of recording and translating brain waves to capture the musical elements of speech as well as its syllables. In humans, these musical elements, known as prosody (rhythm, stress, accent, and intonation), carry meaning that words alone cannot convey.
Because these intracranial electroencephalography (iEEG) recordings can be made only from the surface of the brain, as close as possible to the auditory centers, no one will be eavesdropping on the songs in your head anytime soon.
But for people who have trouble communicating, whether because of stroke or paralysis, such recordings from electrodes placed on the brain's surface could help restore the musicality of speech that is missing from today's robotic-sounding reconstructions.
"It's a wonderful result," said Robert Knight, a neuroscientist and professor of psychology at the Helen Wills Neuroscience Institute at the University of California, Berkeley, who conducted the study with postdoctoral fellow Ludovic Bellier. "For me, one of the things about music is that it has prosody and emotional content. As this whole field of brain-machine interfaces progresses, it offers a way to add musicality to future brain implants for people who need them, such as those with ALS or another disabling neurological or developmental disorder that affects speech. It gives you the ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think we've really started to crack the code."
As brain-recording techniques improve, it may one day be possible to make such recordings without opening the skull, perhaps with sensitive electrodes attached to the scalp. A scalp EEG can currently detect a single letter from a stream of letters, but it takes at least 20 seconds to identify each letter, making communication slow and laborious, Knight said.
"Noninvasive techniques are just not accurate enough today. Let's hope that, in the future, we could read activity from deeper regions of the brain with good signal quality using only electrodes placed outside the skull. But we are far from there," Bellier said.
Bellier, Knight and their colleagues reported the results today in the journal PLOS Biology, noting that they have added "another brick in the wall of our understanding of music processing in the human brain."
Reading your mind? Not yet.
The brain-machine interfaces used today to help people communicate when they are unable to speak can decode words, but the sentences they produce have a robotic quality, much like how the late Stephen Hawking sounded when he used a speech-generating device.
"Right now, the technology is more like a keyboard for the mind," Bellier said. "You can't read your thoughts from a keyboard. You have to push the buttons. And it makes a kind of robotic voice; for sure there's less of what I call expressive freedom."
Bellier should know. He has played music since childhood: drums, classical guitar and piano, and at one point he performed in a heavy metal band. When Knight asked him to work on the musicality of speech, Bellier said, "You can bet I was thrilled when I got the proposal."
In 2012, Knight, PhD student Brian Pasley and their colleagues were the first to reconstruct the words a person was hearing from recordings of brain activity alone.
More recently, other researchers have taken Knight's work much further. Eddie Chang, a neurosurgeon at the University of California, San Francisco, and a co-author of the 2012 paper, has recorded signals from the motor area of the brain associated with jaw, lip and tongue movements to reconstruct the speech intended by a paralyzed patient, with the words displayed on a computer screen.
That work, published in 2021, used AI to interpret brain recordings from a patient trying to vocalize sentences based on a set of 50 words.
While Chang's technique is proving successful, the new study suggests that recording from the auditory regions of the brain, where all aspects of sound are processed, can capture other aspects of speech that are important in human communication.
"Decoding from the auditory cortices, which are closer to the acoustics of the sounds, as opposed to the motor cortex, which is closer to the movements made to generate the acoustics of speech, is super promising," Bellier added. "It will give a little color to what's decoded."
For the new study, Bellier reanalyzed brain recordings obtained between 2008 and 2015 as patients were played a roughly 3-minute segment of the Pink Floyd song, from the 1979 album The Wall. He hoped to go beyond earlier studies, which had tested whether decoding models could identify different songs and musical genres, to actually reconstruct musical phrases using regression-based decoding models.
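The idea behind regression-based decoding is to learn a mapping from neural activity (e.g., band-power features across electrodes) to the audio's spectrogram, then apply that mapping to held-out recordings. The sketch below is a minimal toy illustration of this approach using ridge regression on synthetic data; it is not the authors' pipeline, and the array shapes, feature counts and `ridge_decode` helper are illustrative assumptions only.

```python
import numpy as np

def ridge_decode(X_train, Y_train, X_test, alpha=1.0):
    """Linear (ridge) decoding: predict spectrogram bins Y from neural features X.

    X: (time, n_electrodes) neural features; Y: (time, n_freq_bins) spectrogram.
    Closed-form ridge solution: W = (X^T X + alpha * I)^-1 X^T Y.
    """
    n_feat = X_train.shape[1]
    W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                        X_train.T @ Y_train)
    return X_test @ W

# Toy demo with synthetic data standing in for real iEEG features.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))                      # 32 "electrodes"
true_W = rng.standard_normal((32, 16))                  # hidden linear mapping
Y = X @ true_W + 0.1 * rng.standard_normal((500, 16))   # 16 "spectrogram bins"

# Fit on the first 400 time points, decode the remaining 100.
Y_hat = ridge_decode(X[:400], Y[:400], X[400:])
corr = np.corrcoef(Y_hat.ravel(), Y[400:].ravel())[0, 1]
print(f"held-out decoding correlation: {corr:.2f}")
```

In a real pipeline, the predicted spectrogram would then be inverted back to a waveform; decoding quality is typically reported, as here, by correlating predicted and actual spectrograms on held-out data.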
Bellier emphasized that the study, which used AI to decode brain activity and then encode a reproduction, did more than create a black box that synthesized sound. He and his colleagues were also able to pinpoint new areas of the brain involved in detecting rhythm, such as the thrumming of a guitar, and discovered that some portions of the auditory cortex (in the superior temporal gyrus, located just behind and above the ear) respond at the onset of a voice or a synthesizer, while others respond to sustained vocals.
Scientists have also confirmed that the right hemisphere of the brain is more sensitive to music than the left.
"Language is more left-brain. Music is more distributed, with a bias toward the right," Knight said.
"It wasn't clear it would be the same with musical stimuli," Bellier said. "So here we confirm that this is not just something speech-specific, but that it's more fundamental to the auditory system and the way it processes both speech and music."
Knight is embarking on new research to understand the brain circuitry that allows some people with aphasia caused by a stroke or brain injury to communicate through singing when they otherwise cannot find the words to express themselves.
Other co-authors of the paper are Helen Wills Neuroscience Institute postdoctoral fellows Anaïs Llorens and Déborah Marciano, Aysegul Gunduz of the University of Florida, and Gerwin Schalk and Peter Brunner of Albany Medical College in New York and Washington University, who captured the brain recordings. The research was funded by the National Institutes of Health and the BRAIN Initiative, a partnership between federal and private funders with the goal of accelerating the development of innovative neurotechnologies.
- Music can be reconstructed from human auditory cortex activity using nonlinear decoding models (PLOS Biology paper)
- Scientists decode brain waves to hear what we hear (January 31, 2012)