A Pink Floyd song reconstructed from the brain impulses of those who heard it

A landmark study reconstructs a Pink Floyd song from brain activity data

Music is a fundamental part of what it means to be human, but scientists still know surprisingly little about what happens in our brains when we hear our favorite songs. We know that the signals the brain receives from the ears are converted into electrical impulses in our neurons, and that similar patterns of activity are replayed when we remember a song.

So, in theory, it should be possible to connect electrodes to a person’s brain and, by analyzing the impulses, identify what music they’re listening to. A study has now shown that it is actually possible to reconstruct a song someone heard from their brain activity patterns alone. And if that sounds like science fiction, you can listen to the result yourself.


Besides a better understanding of how the brain perceives music, there is another side to this research. Brain-computer interfaces are evolving. For people who have lost the ability to speak due to brain injury or disease, there are devices that can help them communicate, such as the one used by the late Stephen Hawking.

Versions of these devices, sometimes called neuroprostheses, have been developed that allow paralyzed people to type text by imagining they are writing it by hand, or to spell out sentences using just their minds. But the rhythm and emotion behind spoken words, known as prosody, is notoriously difficult to capture. The best such devices can manage still sounds distinctly robotic.

“Right now, technology is more like a keyboard for the mind,” says lead author Ludovic Bellier in a statement. “You can’t read your thoughts from a keyboard. You have to push the buttons. And the voice it produces is robotic; for sure there is less of what I call expressive freedom.”

The team responsible for the new study turned to music, which naturally contains rhythmic and harmonic components, to try to build a model for decoding and reconstructing sound. Fortunately, there was a perfect data set waiting to be analyzed.

The Pink Floyd Experiment

More than a decade ago, 29 patients with treatment-resistant epilepsy took part in a study that used electrodes placed in their brains to record their brain activity while they listened to a three-minute segment of the Pink Floyd classic “Another Brick in the Wall, Part 1”.

Back in 2012, UC Berkeley professor Robert Knight was part of the team that became the first to reconstruct the words a person heard from brain activity alone. The field has advanced rapidly since then, and Knight co-led the new study on musical perception with Bellier.

Bellier reanalyzed the recordings and used artificial intelligence to create a model capable of decoding brain activity recorded in the auditory cortex and reconstructing it into a sound waveform that reproduces the music the subject was listening to.
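To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of how such a decoder can work: a nonlinear regression model learns to map short windows of electrode activity to spectrogram frames, and the predicted spectrogram is then inverted back into an audible waveform. The file names, shapes, features, and model choices below are illustrative assumptions, not the study’s actual code.

```python
# Hypothetical sketch: decode neural activity into a spectrogram with a
# nonlinear regression model, then invert the spectrogram to audio.
import numpy as np
import librosa
from sklearn.neural_network import MLPRegressor

SR = 16_000        # assumed audio sample rate
HOP = 512          # spectrogram hop length
N_MELS = 128       # mel-frequency bins to predict
LAG_FRAMES = 8     # frames of neural context per prediction

def make_features(neural, lags=LAG_FRAMES):
    """Stack lagged copies of the (frames x electrodes) neural matrix so
    each row carries a short window of preceding activity."""
    frames, _ = neural.shape
    rows = [neural[t - lags:t].ravel() for t in range(lags, frames)]
    return np.asarray(rows)

# Assumed inputs: high-gamma power per electrode, aligned frame-by-frame
# with the song the patient heard (the training target).
neural = np.load("neural_highgamma.npy")   # hypothetical (frames x electrodes)
audio = np.load("heard_song.npy")          # hypothetical waveform

# Target: log-mel spectrogram frames, assumed aligned with the neural frames.
mel = librosa.feature.melspectrogram(y=audio, sr=SR, hop_length=HOP, n_mels=N_MELS)
target = np.log1p(mel.T)[LAG_FRAMES:]

X = make_features(neural)
model = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
model.fit(X, target)   # learn the neural -> spectrogram mapping

# Decode: predict spectrogram frames (here from the training data for
# brevity; a real analysis would cross-validate), then invert them to a
# waveform via Griffin-Lim phase reconstruction.
pred_mel = np.expm1(model.predict(X)).T.clip(min=0)
reconstruction = librosa.feature.inverse.mel_to_audio(pred_mel, sr=SR, hop_length=HOP)
```

The key design point is that the model never sees the audio at decode time: the waveform is rebuilt purely from the spectrogram the model predicts out of neural activity, which is why the reconstruction sounds muffled but recognizable.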

Figure: Spectrogram of the original song (left), representative neural activity patterns shown as colored dots (middle), and the reconstructed spectrogram (right). Image credit: Ludovic Bellier, PhD (CC BY 4.0)

The left panel shows the spectrogram of the original song the patients heard and the middle panel shows a typical pattern of neural activity. The researchers used only these patterns to decode and reconstruct a spectrogram like the one shown on the right, which is recognizable as the original song.

For Bellier, a lifelong musician, the prospect was irresistible: “You can bet I was thrilled when I got the proposal.” In the reconstructed audio, the rhythm and melody are discernible, and even the words can be made out: “All in all it was just a brick in the wall.”

The research also allowed the team to identify new areas of the brain involved in rhythm recognition, in this case the thrumming of the guitar. The most important appears to be part of the right superior temporal gyrus, located in the auditory cortex just behind and above the ear.

They also found that speech perception tends to occur on the left side of the brain, but music perception tends to occur on the right. Bellier and Knight, along with their co-authors, are confident that the project could lead to improvements in brain-computer interface technology.

“As this whole area of brain-machine interfaces advances, it’s possible to bring musicality to future brain implants for people who need them,” Knight explains. “It gives you the opportunity to decode not only the linguistic content, but also part of the prosodic content of speech, part of the affect. I think that’s what we have really started to figure out.”

Being able to make these brain recordings non-invasively would be particularly useful, but Bellier explains that we are not there yet: “Non-invasive techniques are just not accurate enough today. Let’s hope, for patients’ sake, that in the future we will be able to read the activity of deeper brain regions with good signal quality using simple electrodes attached to the outside of the skull. But we are a long way from that.”

One day that might be possible. But listening to music decoded from brain activity alone still leaves us speechless. And as the authors conclude in their article, they have certainly added “another brick in the wall of our understanding” of how music is processed in the human brain.

REFERENCE

Bellier, L., et al. “Music can be reconstructed from human auditory cortex activity using nonlinear decoding models.” PLOS Biology (2023).
