
Scientists have developed a decoder that uses brain imaging and artificial intelligence to translate a person’s thoughts into words without them having to speak, according to a study published on Monday and reported by AFP.
The main purpose of this “speech decoder” is to help patients who have lost the ability to speak communicate their thoughts through a computer.
Although the new device is intended for medical use, it still raises questions about threats to “mental privacy”, the authors of the study, published in the journal Nature Neuroscience, acknowledge.
To address this concern, they point out that their tool only works after being trained on many hours of a person’s brain activity, recorded in an fMRI (functional magnetic resonance imaging) scanner.
Previous brain-machine interfaces designed to help people with severe disabilities regain their independence have already proven useful; one such interface could transcribe the sentences of a paralyzed person who could not speak or type.
But those devices require invasive surgery to implant electrodes in the brain, and they target only the motor areas of the brain that drive the mouth to form words.
“Our system works at the level of ideas, semantics, meaning,” Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of the study, said at a press conference, adding that it does so non-invasively.
During the experiment, three people each spent 16 hours in a functional magnetic resonance imaging (fMRI) machine. This technique records changes in blood flow in the brain, providing a real-time picture of the activity of brain areas during certain tasks (speech, movement, etc.).
The volunteers listened to narrative podcasts, which allowed the researchers to determine how words, sentences and their meanings stimulate different areas of the brain.
The researchers then fed this data into a language-processing artificial neural network built on GPT-1, an early predecessor of the model behind the ChatGPT chatbot.
The network was trained to predict how each brain would respond to the speech it heard. Each person then listened to a new story in the fMRI machine, to see whether the network could decode it correctly.
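To make the idea concrete, here is a minimal sketch in Python of how a decoder of this general kind can work in principle: fit an “encoding model” that predicts brain activity from text features, then pick, among candidate phrases, the one whose predicted activity best matches the actual recording. Everything below (the synthetic data, the ridge-regression model, the scoring function) is a hypothetical stand-in for illustration, not the study’s actual code, models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-ins (assumptions, not the study's data) ---
# In the real study, the features come from a language model's representation
# of the words a listener heard, and the responses are fMRI voxel recordings.
n_train, n_dim, n_vox = 500, 64, 200
train_feats = rng.standard_normal((n_train, n_dim))      # text features
true_weights = rng.standard_normal((n_dim, n_vox))       # unknown in reality
train_brain = train_feats @ true_weights + 0.5 * rng.standard_normal((n_train, n_vox))

# --- Step 1: fit a linear encoding model, text features -> voxel activity ---
# Ridge regression is a common choice for fMRI encoding models.
alpha = 1.0
w = np.linalg.solve(
    train_feats.T @ train_feats + alpha * np.eye(n_dim),
    train_feats.T @ train_brain,
)

# --- Step 2: decode by comparing predicted and observed activity ---
# Score each candidate phrase by how well the encoding model's prediction
# for that phrase correlates with the new brain recording.
def score(candidate_feat: np.ndarray, observed: np.ndarray) -> float:
    pred = candidate_feat @ w
    pred_c = pred - pred.mean()
    obs_c = observed - observed.mean()
    return float(pred_c @ obs_c / (np.linalg.norm(pred_c) * np.linalg.norm(obs_c)))

# Simulate one newly heard phrase, then pick the best-matching candidate.
heard_feat = rng.standard_normal(n_dim)
observed = heard_feat @ true_weights + 0.5 * rng.standard_normal(n_vox)
candidates = [heard_feat] + [rng.standard_normal(n_dim) for _ in range(9)]
best = max(range(len(candidates)), key=lambda i: score(candidates[i], observed))
print("best candidate index:", best)  # index 0 is the phrase actually heard
```

In this toy version the correct candidate wins because its predicted brain response matches the recording better than the alternatives; the study’s decoder applies the same match-and-select principle to real candidate word sequences rather than random vectors.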
Much deeper than language
The result: despite often paraphrasing or reordering words, the decoder was able to “reconstruct the meaning of what the person heard,” said Jerry Tang of the University of Texas at Austin, first author of the study.
For example, when a participant heard the phrase “I don’t have a driver’s license yet,” the model responded “he hasn’t even started learning to drive yet.”
The experiment went further: even when the participants imagined their own stories or watched a silent film, the decoder could capture the essence of their thoughts.
These results suggest that “we are decoding something deeper than language and then translating it into language,” Huth continued.
Source: Hot News
