Brain-computer interfaces help two women speak again

24 Aug 2023

Ann, the participant in the UCSF and UC Berkeley study. Image: Noah Berger

The studies claim their results are significant improvements over earlier systems, though the error rate was still high in both cases.

Two new studies show the potential of AI and brain-computer interfaces to restore communication to people who have lost the ability to speak due to paralysis.

The studies involved implants that can pick up the signals the brain sends to the muscles involved in speech. AI software then translated these signals into sentences.
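Conceptually, systems like these turn windows of multi-electrode activity into text. The Python sketch below is illustrative only: the nearest-template decoder, the tiny vocabulary and the signal shapes are all assumptions for demonstration, not either team’s actual method.

```python
import numpy as np

# Purely illustrative: decode a window of multi-electrode activity into a word
# by comparing it against per-word "template" patterns built from training data.
# The real systems use far more sophisticated neural-network decoders.

def build_templates(windows, labels):
    """Average the recorded windows for each word to form a template."""
    return {word: np.mean([w for w, l in zip(windows, labels) if l == word], axis=0)
            for word in set(labels)}

def decode(window, templates):
    """Pick the word whose template is closest to the new activity window."""
    return min(templates, key=lambda word: np.linalg.norm(window - templates[word]))

# Toy data: 3 electrodes x 4 time samples per window (Ann's array has 253 electrodes).
rng = np.random.default_rng(0)
vocab = ["hello", "water", "yes"]
train = [rng.normal(loc=i, size=(3, 4)) for i, _ in enumerate(vocab) for _ in range(5)]
labels = [word for word in vocab for _ in range(5)]
templates = build_templates(train, labels)

new_window = rng.normal(loc=1, size=(3, 4))   # activity resembling attempts at "water"
print(decode(new_window, templates))          # -> water
```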

One study tested this technology on Ann, a woman who was left paralysed and unable to speak after a brain stem stroke. A team of researchers at the University of California San Francisco (UCSF) and University of California Berkeley worked with Ann to showcase the potential of this technology.

Edward Chang, MD, chair of neurological surgery at UCSF, hopes that the results of their study will lead to an FDA-approved system in the near future.

“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others,” Chang said. “These advancements bring us much closer to making this a real solution for patients.”

For this study, researchers implanted a paper-thin rectangular array of 253 electrodes onto the surface of Ann’s brain, over areas the team had previously found to be critical for speech.

Illustration of the thin rectangular array of electrodes connected to Ann’s brain. Image: Ken Probst

Ann worked with the team to train the system’s AI algorithms to recognise her unique brain signals for speech. This involved repeating different phrases from a 1,024-word conversational vocabulary until the computer recognised the brain activity patterns.
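In machine-learning terms, each repetition pairs a recorded activity pattern with the word Ann was attempting to say, producing labelled training data for a supervised classifier. A minimal sketch of that idea using scikit-learn follows; the feature dimensions, synthetic data and choice of classifier are assumptions for illustration, not UCSF’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative training loop: repeated attempts at known words become
# (activity pattern, word) pairs for a supervised classifier.
rng = np.random.default_rng(1)
vocab = ["hello", "how", "are", "you"]   # stand-in for the 1,024-word vocabulary
n_repetitions, n_features = 40, 253      # e.g. one feature per electrode (assumption)

# Synthetic stand-in data: each word gets its own cluster of activity patterns.
X = np.vstack([rng.normal(loc=i, size=(n_repetitions, n_features))
               for i in range(len(vocab))])
y = np.repeat(vocab, n_repetitions)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```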

The team also paired the decoder with a speech-synthesis algorithm and a digital avatar, giving Ann an on-screen persona that can turn her decoded brain signals into audible words.

Ann working with the UCSF researchers in front of her digital avatar. Image: Noah Berger
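As a rough stand-in for the speech-output step, the sketch below speaks a decoded sentence aloud with the off-the-shelf pyttsx3 text-to-speech library. This is an assumption for illustration only; the UCSF system used its own synthesis and avatar-animation algorithms, not this library.

```python
import pyttsx3  # off-the-shelf text-to-speech, standing in for the study's own synthesiser

# The string below represents what a neural decoder produced; in the study,
# decoded text drove a personalised speech synthesiser and an animated avatar.
decoded_sentence = "It is great to talk with you"

engine = pyttsx3.init()
engine.say(decoded_sentence)
engine.runAndWait()
```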

The second study was conducted by researchers at Stanford University, who tested their brain-computer interface on Pat Bennett, a woman with amyotrophic lateral sclerosis (ALS) that has left her “unable to produce intelligible speech”.

The Stanford study was similar to the one at UCSF, as it also used a brain-computer interface that could detect neural activity when the participant attempted to speak.

An AI algorithm decoded the electrical signals coming from Bennett’s brain, learning the activity patterns associated with her attempts to form the 39 phonemes that compose spoken English.
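Decoding phonemes rather than whole words keeps the output space small (39 classes instead of thousands of words) and leaves a language model to assemble the words afterwards. The sketch below shows only the final collapse step of a CTC-style sequence decoder; the per-frame labels are invented for illustration, and Stanford’s actual decoder is a trained neural network paired with a language model.

```python
# Illustrative: a decoder emits one phoneme guess per time frame; merging
# consecutive repeats and dropping "blank" frames yields the phoneme sequence,
# which a language model would then map to words.

BLANK = "_"

def collapse(frames):
    """Merge consecutive duplicates, then drop blank frames."""
    out = []
    for p in frames:
        if not out or p != out[-1]:
            out.append(p)
    return [p for p in out if p != BLANK]

# Hypothetical per-frame output for the word "hello" (ARPABET: HH AH L OW).
frames = ["HH", "HH", "_", "AH", "AH", "_", "L", "L", "L", "OW", "OW"]
print(collapse(frames))  # -> ['HH', 'AH', 'L', 'OW']
```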

A step in the right direction

Both studies reported broadly similar performance: the Stanford interface decoded speech at 62 words per minute, while the UCSF system reached 78 words per minute.

While the results of both studies are promising, much work remains before these systems can be widely used, as both reported word error rates of more than 20pc. Despite this, the researchers and participants spoke positively about the results, describing them as improvements over previous studies.
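The figure quoted is word error rate: the minimum number of word insertions, deletions and substitutions needed to turn the decoded sentence into the intended one, divided by the length of the intended sentence. A short, self-contained sketch of the standard calculation:

```python
def word_error_rate(reference, hypothesis):
    """Standard WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance (Levenshtein) over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# Two substitutions in ten words -> 0.2, the rough error level both studies reported.
print(word_error_rate("i would like some water please today friends now here",
                      "i would like sum water please today friends now hear"))
```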

“These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak,” Bennett wrote. “For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”

Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com