
Communication is an essential part of human life, allowing us to express our thoughts, feelings, and needs. However, some neurological diseases can severely impair this ability.
For example, conditions such as amyotrophic lateral sclerosis (ALS) or a stroke that affects the brain stem can lead to a condition known as locked-in syndrome.
In this situation, the person is completely paralyzed, unable to speak or move, but remains conscious, with cognitive functions preserved. This can be extremely distressing, as the person understands everything around them but is unable to communicate.

Fortunately, advances in technology have offered new hope for these patients. Brain-machine interfaces (BMIs), also called brain-computer interfaces, are devices that allow people with severe paralysis to communicate using only brain activity.
These technologies have advanced significantly in recent years, especially those that attempt to translate brain signals directly into speech. In other words, these interfaces seek to decode what the person wants to say, whether by analyzing the intended speech sounds (phonemes), the movements the muscles would make to produce them, or higher-level representations of the planned utterance.
Most studies of speech brain-machine interfaces focus on the frontal lobe of the brain, more specifically on the regions that control the movements necessary for speech: the areas responsible for moving the mouth, tongue, and vocal cords.

However, there are indications that other brain regions, such as the parietal and temporal lobes, also play an important role in speech. These areas are traditionally associated with understanding and processing language, but some researchers believe that they may also be involved in speech intention, that is, the desire and preparation to initiate speech.
If this is true, these findings could have a very positive impact on the development of brain-machine interfaces. First, they could help make these devices more efficient and accurate.
In addition, they could expand the number of people who could benefit from this technology, including those who have suffered damage to the frontal lobe and are therefore unable to send signals from this region to a conventional brain-machine interface.
This could help, for example, patients who have developed aphasia (difficulty speaking and understanding language) after a stroke or brain injury. However, a major challenge is to differentiate the brain signals related to speech production from those related to perception and comprehension. This is because, in everyday life, the brain processes both our own speech and the speech of other people around us.
An efficient brain-machine interface needs to be able to capture only what the person wants to say, without misinterpreting internal thoughts or words spoken by others.
This distinction is also essential to ensure that the technology respects the user's privacy and autonomy, avoiding ethical dilemmas, such as decoding thoughts that the person does not want to express.
To investigate these issues, researchers at Northwestern University, USA, used intracranial recordings, a method that directly records the brain's electrical activity with high precision.
The study participants were patients who needed intracranial monitoring for treatment-resistant epilepsy (4 men and 4 women) or who underwent awake surgery to remove a brain tumor (1 man). None of them had speech difficulties.

For patients with epilepsy, electrodes were implanted as clinically necessary, using electrocorticography (ECoG) on the surface of the brain or depth electrodes in specific areas.
In patients with tumors, the electrodes were positioned over the temporal and parietal lobes, always maintaining a safe distance of at least two gyri (brain folds) from the tumor.
Participants who were being monitored for seizure localization received standard clinical electrodes, with 1 cm spacing between them and a diameter of 2.3 mm. Participants undergoing surgery received denser electrode arrays, with 5 mm spacing, arranged in an 8×8 grid, covering the temporal and/or parietal areas as brain exposure allowed.
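To get a feel for these array geometries, here is a small illustrative sketch in Python (not from the study; the clinical array's contact count is an assumption, since only its spacing and contact diameter are given above). It lays out electrode coordinates on a regular grid and computes how much cortex each array spans.

```python
import numpy as np

def grid_coordinates(rows, cols, spacing_mm):
    """Return an (rows*cols, 2) array of electrode centers on a regular grid."""
    xs, ys = np.meshgrid(np.arange(cols) * spacing_mm,
                         np.arange(rows) * spacing_mm)
    return np.column_stack([xs.ravel(), ys.ravel()])

# Higher-density array from the awake-surgery case: 8x8 contacts, 5 mm apart.
dense = grid_coordinates(rows=8, cols=8, spacing_mm=5)

# Standard clinical array: 1 cm (10 mm) spacing, 2.3 mm contact diameter.
# An 8x8 layout is assumed here purely for comparison.
clinical = grid_coordinates(rows=8, cols=8, spacing_mm=10)

# With 7 inter-contact gaps per side, the dense grid spans 35 mm x 35 mm,
# while the same contact count at clinical spacing spans 70 mm x 70 mm.
for name, grid in [("dense", dense), ("clinical", clinical)]:
    side = grid[:, 0].max() - grid[:, 0].min()
    print(f"{name}: {len(grid)} contacts over {side:.0f} x {side:.0f} mm")
```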

Figure: Synchronized recording of ECoG and acoustic data. Image: Peter Brunner, CC BY 4.0.
The exact location of the electrodes was determined using CT scans taken before and after the procedure, ensuring the accuracy of the analysis.
Depth-electrode contacts located in white matter, which was not the focus of the study, were excluded from the analysis.
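As a minimal sketch of what this kind of housekeeping looks like in analysis code, the snippet below filters a list of contacts by anatomical label and signal quality. The contact names, labels, and data layout are hypothetical, not the study's actual pipeline (contacts with extensive noise were also excluded, as the next figure caption notes).

```python
# Hypothetical per-contact metadata after CT-based localization.
contacts = [
    {"name": "LT1", "region": "superior temporal gyrus", "noisy": False},
    {"name": "LD3", "region": "white matter",            "noisy": False},
    {"name": "LP5", "region": "supramarginal gyrus",     "noisy": True},
    {"name": "LP6", "region": "angular gyrus",           "noisy": False},
]

# Exclude white-matter contacts (not the focus of the study) and contacts
# with extensive noise, keeping the rest for analysis.
kept = [c for c in contacts
        if c["region"] != "white matter" and not c["noisy"]]
print([c["name"] for c in kept])  # ['LT1', 'LP6']
```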

Figure: Electrode locations for each participant. Electrodes with extensive noise were excluded from the analysis.
The researchers analyzed whether, when, and where brain signals related to speech intent were present in the temporal and parietal lobes. In addition, they used causal information measures to distinguish speech-intent signals from brain activity related to other processes, such as language comprehension and working memory.
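To make the decoding idea concrete, here is a heavily simplified sketch (an assumed stand-in, not the study's causal-information method): it extracts high-gamma band power, a feature widely used in ECoG speech research, from windows cut before voice onset and trains a classifier to separate them from rest. All shapes, parameters, and the synthetic data are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 1000  # sampling rate in Hz (assumed)

def high_gamma_power(epochs, fs, low=70.0, high=150.0):
    """Mean high-gamma (70-150 Hz) power per channel for each epoch.

    epochs: (n_trials, n_channels, n_samples) raw ECoG voltage.
    Returns: (n_trials, n_channels) feature matrix.
    """
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return (filtered ** 2).mean(axis=-1)

# Synthetic data standing in for 0.5 s epochs cut before voice onset vs. rest.
rng = np.random.default_rng(0)
speech_intent = rng.normal(size=(60, 32, 500))  # 60 trials, 32 channels
rest = rng.normal(size=(60, 32, 500))

X = high_gamma_power(np.concatenate([speech_intent, rest]), fs)
y = np.array([1] * 60 + [0] * 60)

# On this random data accuracy hovers near 0.5; on real recordings,
# above-chance accuracy from pre-onset windows alone would indicate
# that speech-intent information is present.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```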
The results showed that the temporal and parietal lobes do indeed contain information related to speech intent, and this information can be identified even before the person starts speaking. This means that these brain regions are not only involved in language perception and comprehension, but also in speech production.
The researchers found speech intent signals distributed across several areas of these lobes, including the superior temporal, medial temporal, angular, and supramarginal gyri.
These findings are very significant because they indicate that speech brain-machine interfaces can go beyond the frontal lobe and tap into new brain areas to decode communication.
This could make the technology more accessible to patients who have suffered damage to the frontal region of the brain and expand the possibilities for treatment to a wider range of people. In the future, devices that use these new discoveries could offer a more efficient and natural way of communicating for those who have lost this ability due to neurological diseases.
READ MORE:
Decoding speech intent from non-frontal cortical areas
Prashanth Ravi Prakash, Tianhao Lei, Robert D Flint, Jason K Hsieh, Zachary Fitzgerald, Emily Mugler, Jessica Templer, Matthew A Goldrick, Matthew C Tate, Joshua Rosenow
Journal of Neural Engineering, Volume 22, Number 1. 13 February 2025
DOI 10.1088/1741-2552/adaa20
Abstract:
Objective. Brain machine interfaces (BMIs) that can restore speech have predominantly focused on decoding speech signals from the speech motor cortices. A few studies have shown some information outside the speech motor cortices, such as in parietal and temporal lobes, that also may be useful for BMIs. The ability to use information from outside the frontal lobe could be useful not only for people with locked-in syndrome, but also to people with frontal lobe damage, which can cause nonfluent aphasia or apraxia of speech. However, temporal and parietal lobes are predominantly involved in perceptive speech processing and comprehension. Therefore, to be able to use signals from these areas in a speech BMI, it is important to ascertain that they are related to production. Here, using intracranial recordings, we sought evidence for whether, when and where neural information related to speech intent could be found in the temporal and parietal cortices.
Approach. Using intracranial recordings, we examined neural activity across temporal and parietal cortices to identify signals associated with speech intent. We employed causal information to distinguish speech intent from resting states and other language-related processes, such as comprehension and working memory. Neural signals were analyzed for their spatial distribution and temporal dynamics to determine their relevance to speech production.
Main results. Causal information enabled us to distinguish speech intent from resting state and other processes involved in language processing or working memory. Information related to speech intent was distributed widely across the temporal and parietal lobes, including superior temporal, medial temporal, angular, and supramarginal gyri.
Significance. Loss of communication due to neurological diseases can be devastating. While speech BMIs have made strides in decoding speech from frontal lobe signals, our study reveals that the temporal and parietal cortices contain information about speech production intent that can be causally decoded prior to the onset of voice. This information is distributed across a large network. This information can be used to improve current speech BMIs and potentially expand the patient population for speech BMIs to include people with frontal lobe damage from stroke or traumatic brain injury.