Scientists are edging closer to restoring speech in people with paralysis or aphasia following a stroke. Using a combination of brain-signal readers and artificial intelligence, researchers at Radboud University and University Medical Center Utrecht have developed a means of communication that is accurate and close to natural speech. This could be a major boost to the quality of life of people with locked-in syndrome, and for their family members as well.
Locked-In
In the new study, the researchers aimed to build a brain-computer interface (BCI) that could produce accurate speech by reading brain signals. Neuroscientists have previously developed decoding methods to control prosthetic limbs, but comparable accuracy had not been achieved for speech. Because people with locked-in syndrome have muscle paralysis, speech restoration must rely on brain signals rather than muscle activity.
“Ultimately, we hope to make this technology available to patients in a locked-in state, who are paralyzed and unable to communicate,” said Julie Berezutskaya, the lead researcher. “These people lose the ability to move their muscles, and thus to speak. By developing a brain-computer interface, we can analyze brain activity and give them a voice again.”
Decoding Brain Signals
The researchers mapped the areas of the brain that produce speech signals. The participants, who were not paralyzed, had brain implants that recorded neuronal activity while they spoke certain words. Each participant read aloud a list of 12 specific words, multiple times, in random order. Their brain activity was then fed into a computer, and AI tools analyzed the brain waveforms to produce audible speech. The words generated by the computers were decoded with an accuracy of 92% to 100%.
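The decoding task described above, distinguishing which of 12 words a participant spoke from recorded brain activity, can be illustrated with a toy sketch. The numbers below (channel count, trial counts, noise level) and the simulated data are assumptions for illustration, not the study's actual pipeline, and a simple nearest-centroid rule stands in for the optimized deep learning models the researchers used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 12 words, each spoken 20 times; every trial yields a
# 64-channel feature vector (e.g., band power per implanted electrode).
N_WORDS, N_TRIALS, N_CHANNELS = 12, 20, 64

# Simulate a word-specific neural "template" per word, plus trial-to-trial noise.
templates = rng.normal(size=(N_WORDS, N_CHANNELS))
X = np.repeat(templates, N_TRIALS, axis=0) \
    + 0.3 * rng.normal(size=(N_WORDS * N_TRIALS, N_CHANNELS))
y = np.repeat(np.arange(N_WORDS), N_TRIALS)

# Hold out the last 5 trials of each word for testing.
train_mask = np.tile(np.arange(N_TRIALS) < 15, N_WORDS)
X_train, y_train = X[train_mask], y[train_mask]
X_test, y_test = X[~train_mask], y[~train_mask]

# Nearest-centroid decoding: average the training trials of each word,
# then label each test trial with the closest centroid.
centroids = np.stack([X_train[y_train == w].mean(axis=0) for w in range(N_WORDS)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.0%}")
```

On this synthetic data the classifier separates the 12 words almost perfectly; real neural recordings are far noisier, which is why the study's deep learning models and larger feature sets matter.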
The researchers also devised methods to clean the brain signals, and thereby the reconstructed speech, by filtering out noise.
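Noise removal of this kind is often done with band-pass filtering, keeping only the frequency band that carries speech-related activity. The sketch below is an assumption-laden illustration, not the study's actual preprocessing: the 70–170 Hz "high-gamma" band, the 1000 Hz sampling rate, and the simulated drift and line noise are all stand-ins chosen because they are typical of intracranial speech-decoding work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000  # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)

# Simulated electrode trace: a speech-related high-gamma component (100 Hz)
# buried in slow drift and 50 Hz power-line noise.
high_gamma = np.sin(2 * np.pi * 100 * t)
drift = 2.0 * np.sin(2 * np.pi * 1 * t)
line_noise = 1.5 * np.sin(2 * np.pi * 50 * t)
raw = high_gamma + drift + line_noise

# Band-pass to the assumed high-gamma band; filtfilt applies the filter
# forward and backward so the output has no phase delay.
b, a = butter(4, [70, 170], btype="bandpass", fs=fs)
cleaned = filtfilt(b, a, raw)
```

After filtering, the slow drift and line noise are strongly attenuated while the 100 Hz component survives nearly intact, which is exactly the property that makes the downstream decoding easier.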
“… we also used advanced artificial intelligence models to translate that brain activity directly into audible speech. That means we weren’t just able to guess what people were saying; we could immediately transform those words into intelligible, understandable sounds. In addition, the reconstructed speech even sounded like the original speaker in their tone of voice and manner of speaking,” added Berezutskaya.
A Game Changer in Speech Restoration?
With such high levels of accuracy, this study carries the promise of a dependable speech restoration approach.
However, Berezutskaya noted that the study was limited to a small vocabulary: the computers only had to distinguish the 12 words. In real-life situations, the machines will have to decode full sentences, complex phrases, and different languages.
As neuroscientists venture further into the world of speech prostheses, people who have lost the ability to speak may soon have a reliable way to recover one important aspect of their lives: effective communication.
References
Berezutskaya J, Freudenburg ZV, Vansteensel MJ, Aarnoutse EJ, Ramsey NF, van Gerven MAJ. Direct speech reconstruction from sensorimotor brain activity with optimized deep learning models. J Neural Eng. 2023;20(5):056010. Published Sep 20, 2023. doi:10.1088/1741-2552/ace8be
Brain signals transformed into speech through implants and AI. Radboud University. Accessed October 30, 2023. https://www.ru.nl/en/research/research-news/brain-signals-transformed-into-speech-through-implants-and-ai