
New Brain Chip Allows Paralyzed People to Speak Again

  • Writer: Lidi Garcia
  • 2 days ago
  • 4 min read

Scientists have created an innovative system that allows people with paralysis, such as amyotrophic lateral sclerosis (ALS), to communicate again using their own voice almost instantly. The system captures brain signals with microelectrodes and transforms them into words and even tones of voice, as in a natural conversation. This technology brings hope for those who have lost their speech to be able to talk more quickly and expressively with family and friends.


Brain-computer interfaces (BCIs) are technologies that have sparked great interest in science because they promise to restore the ability to communicate to people who have lost their speech, whether due to neurological diseases or injuries.


One of these conditions is amyotrophic lateral sclerosis (ALS), a serious disease that damages the nerves and causes loss of control of the muscles, including those we use to speak.


Recently, a group of scientists at the University of California, Davis, developed a revolutionary system: a brain-computer interface that can transform thoughts linked to speech into an audible and even expressive voice in real time. This technology offers hope that paralyzed patients will be able to not only communicate, but also express emotions and personality through their voice.

An investigational brain-computer interface (BCI) allows a study participant to communicate through a computer. Source: University of California, Davis


Traditionally, systems that attempt to convert brain signals into words or sentences have suffered from a major problem: delayed response. Imagine trying to have a conversation, but having to wait several seconds for your words to come out, making interactions slow and unnatural.


The new system changes that by creating a kind of “direct voice call,” in which the conversion of thought into speech occurs almost as soon as the patient tries to speak. The secret lies in an array of 256 microelectrodes implanted in the part of the brain that controls speech movements.

A model of a human brain showing an array of microelectrodes. The arrays are designed to record brain activity. Source: UC Davis and Interesting Engineering


These electrodes are tiny sensors that pick up the activity of hundreds of neurons as the patient attempts to form words. The signals are sent to a computer that, with the help of artificial intelligence, decodes them and translates what the person is trying to say into sound, with almost no delay: the response time is about one-fortieth of a second, roughly the time it takes to hear our own voice when speaking.
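The streaming behavior described above can be sketched in a few lines of Python. Everything here is illustrative: the real UC Davis system uses a trained neural-network decoder running on 256-channel recordings, while this toy stand-in only shows the idea of decoding each short window of activity as it arrives, rather than waiting for a whole sentence.

```python
# Illustrative sketch only: decode_window is a made-up stand-in for the
# trained decoder; the point is the low-latency loop, not the decoding.

WINDOW = 25  # samples per window; stands in for the ~1/40 s latency

def decode_window(neural_features):
    """Hypothetical decoder: maps one window of activity to one of
    four toy 'sound' classes."""
    avg = sum(neural_features) / len(neural_features)
    return int(avg) % 4

def stream_decode(recording, window=WINDOW):
    """Emit a decoded sound for every window as the recording streams in,
    instead of processing the whole recording at the end."""
    sounds = []
    for start in range(0, len(recording), window):
        sounds.append(decode_window(recording[start:start + window]))
    return sounds

# 100 fake samples -> 4 windows, each decoded independently and immediately.
print(stream_decode(list(range(100))))  # [0, 1, 2, 3]
```

The design point is that each window is decoded on its own, so audio can be produced while the person is still "speaking"; earlier systems effectively waited for the end of a sentence before responding.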


The study participant was a 45-year-old man with ALS and severe dysarthria (a serious speech impairment). To teach the system to “listen” to his brain, he was shown sentences on a screen and asked to try to say them aloud while the scientists recorded his brain activity.

The researchers collected data while the participant was asked to try to speak sentences that were shown to him on a computer screen.


With this, the system's algorithms learned to associate specific patterns of brain signals with speech sounds, including differences in intonation, such as when we transform a sentence into a question or a statement, and even tones used to sing short melodies.
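The training setup described above is, in essence, supervised learning: each recorded pattern of brain activity is paired with the sound the participant was prompted to produce. A minimal, purely illustrative sketch follows; the actual study trained a deep neural decoder, and the labels, feature vectors, and nearest-centroid rule here are assumptions for demonstration only.

```python
# Purely illustrative: a toy nearest-centroid model that 'learns' which
# invented neural patterns go with which invented vowel sounds.

def train_centroids(examples):
    """examples: list of (feature_vector, sound_label) pairs.
    Returns the mean feature vector per sound label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in sums[label]] for label in sums}

def predict(centroids, features):
    """Pick the sound whose learned centroid is closest to the pattern."""
    def squared_distance(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=squared_distance)

# Two made-up 'neural patterns' for "ah", one for "oo".
training = [([1.0, 0.0], "ah"), ([0.9, 0.1], "ah"), ([0.0, 1.0], "oo")]
model = train_centroids(training)
print(predict(model, [0.8, 0.2]))  # ah
```

A new pattern is classified by whichever learned average it most resembles; the real system does the analogous thing at far higher dimensionality, and also decodes paralinguistic features such as intonation.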


This means that, in addition to words, the patient was able to convey emotional and expressive nuances, such as joy, doubt or sadness, which makes communication more human and authentic.


The impact of this technology goes beyond just producing sounds. According to the researchers, it allows for real and dynamic conversations, in which the patient can, for example, interrupt someone, ask questions and respond immediately, something that was not possible with previous versions of these interfaces.

Maitreyee Wairagkar, the first author of the new study, is a project scientist in the Neuroprosthetics Laboratory at UC Davis.


The speech generated by the system was often clear enough for listeners to understand about 60 percent of the words correctly, a very promising result for a field that is still developing. In addition, the system was able to create a synthesized voice that resembled the one the participant had before he lost the ability to speak, reinforcing the personal and emotional aspect of communication.


Scientists believe that this technology could transform the lives of many people with paralysis, allowing them to regain an essential part of their identity: their voice. And while more testing and refinements are still needed, the study represents a major step toward the future of speech neuroprosthetics, combining advances in neuroscience, engineering, and artificial intelligence to restore speech to those who need it most.



READ MORE:


An instantaneous voice-synthesis neuroprosthesis

Maitreyee Wairagkar, Nicholas S. Card, Tyler Singer-Clark, Xianda Hou, Carrina Iacobacci, Lee M. Miller, Leigh R. Hochberg, David M. Brandman, and Sergey D. Stavisky 

Nature (2025)


Abstract:


Brain–computer interfaces (BCIs) have the potential to restore communication for people who have lost the ability to speak owing to a neurological disease or injury. BCIs have been used to translate the neural correlates of attempted speech into text. However, text communication fails to capture the nuances of human speech, such as prosody and immediately hearing one’s own voice. Here we demonstrate a brain-to-voice neuroprosthesis that instantaneously synthesizes voice with closed-loop audio feedback by decoding neural activity from 256 microelectrodes implanted into the ventral precentral gyrus of a man with amyotrophic lateral sclerosis and severe dysarthria. We overcame the challenge of lacking ground-truth speech for training the neural decoder and were able to accurately synthesize his voice. Along with phonemic content, we were also able to decode paralinguistic features from intracortical activity, enabling the participant to modulate his BCI-synthesized voice in real time to change intonation and sing short melodies. These results demonstrate the feasibility of enabling people with paralysis to speak intelligibly and expressively through a BCI.




© 2020-2025 by Lidiane Garcia
