Paralyzed Woman 'Speaks' with Brain Signals Turned into Talking Avatar in World First

The system decodes her brain signals into text at a rate of 80 words a minute, while an audio recording of her voice from her wedding day gives the avatar its sound.

In a world first, a paralyzed woman has spoken again after her brain signals were intercepted and turned into a talking avatar, complete with facial expressions and the sound of her real voice.

Ann, now 48, suffered a brainstem stroke at age 30 that left her paralyzed.

Scientists at the University of California, San Francisco (UCSF) implanted a paper-thin rectangle of 253 electrodes onto the surface of her brain, over an area critical for speech, and paired it with artificial-intelligence software to form a brain-computer interface (BCI).

The electrodes intercept the brain signals produced when she tries to talk and feed them, via a cable plugged into a port fixed to her head, into a bank of computers.

The computers decode the signals into text at a rate of 80 words a minute, while an audio recording of her voice from her wedding day, years before the stroke, was used to recreate how she sounds. That recreated voice is given to an on-screen avatar, which speaks it with matching facial expressions.
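
In rough outline, the system is a pipeline: neural recordings go in, and decoded text, a synthesized voice, and avatar animation come out. The Python sketch below is purely illustrative, with every function a hypothetical placeholder standing in for the study's actual components:

```python
from dataclasses import dataclass

@dataclass
class AvatarFrame:
    text: str        # decoded words
    audio: bytes     # synthesized speech in the user's own voice
    expression: str  # facial-animation cue

def decode_text(neural_window):
    # Placeholder: the real decoder maps electrode activity to phonemes
    # and then to words (see the phoneme example further down).
    return "hello"

def synthesize_voice(text):
    # Placeholder: in the study, a synthesizer was personalized with the
    # wedding-day recording of Ann's pre-stroke voice.
    return text.encode("utf-8")

def animate(text):
    # Placeholder: decoded articulator signals drive the avatar's face.
    return "neutral"

def pipeline(neural_window):
    text = decode_text(neural_window)
    return AvatarFrame(text, synthesize_voice(text), animate(text))

print(pipeline([0.0] * 253))  # one value per electrode in the implant
```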

The UCSF team says it is the first time that either speech or facial expressions have been synthesized from brain signals.

"Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others," said Dr. Edward Chang, chair of neurological surgery at UCSF. "These advancements bring us much closer to making this a real solution for patients."

For weeks, Ann worked with the team to train the system's artificial intelligence algorithms to recognize her unique brain signals for speech.

This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again, until the computer recognized the brain activity patterns associated with the sounds.
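
Conceptually, this training stage is ordinary supervised learning: each repetition pairs a short window of neural activity with the sound Ann was attempting, and the accumulated labeled pairs teach a classifier. Below is a minimal sketch using synthetic data and an off-the-shelf scikit-learn model; the team's actual features and architecture were far more sophisticated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for the training data: each row is a feature vector from a
# short window of activity across the 253 electrodes, and each label is
# the phoneme being attempted in that window.
phonemes = ["HH", "AH", "L", "OW"]
X = rng.normal(size=(400, 253))
y = rng.choice(phonemes, size=400)

# Repeating phrases over and over accumulates many such labeled windows;
# the classifier learns which activity patterns go with which sound.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))
```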

Rather than train the AI to recognize whole words, the researchers created a system that decodes words from phonemes. "Hello," for example, contains four phonemes: "HH," "AH," "L" and "OW."

Using this approach, the computer only needed to learn 39 phonemes to decipher any word in English. This both enhanced the system's accuracy and made it three times faster.
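
To see why phonemes cover the whole vocabulary, note that any English word can be looked up from its phoneme sequence in a pronunciation dictionary. The sketch below uses a made-up three-word dictionary and a simple greedy matcher, not the study's decoder:

```python
# Tiny pronunciation dictionary mapping phoneme sequences to words. With
# 39 phonemes, every word in the 1,024-word vocabulary (or in English)
# can be spelled out this way.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    ("Y", "EH", "S"): "yes",
}

def words_from_phonemes(phonemes, dictionary=PRONUNCIATIONS):
    """Greedily match a stream of decoded phonemes against the dictionary."""
    words, i = [], 0
    while i < len(phonemes):
        for seq, word in dictionary.items():
            if tuple(phonemes[i:i + len(seq)]) == seq:
                words.append(word)
                i += len(seq)
                break
        else:
            i += 1  # skip a phoneme that starts no known word
    return words

print(words_from_phonemes(["HH", "AH", "L", "OW", "Y", "EH", "S"]))
# -> ['hello', 'yes']
```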

"The accuracy, speed, and vocabulary are crucial," said Sean Metzger, who developed the text decoder in the joint Bioengineering Program at UC Berkeley and UCSF. "It's what gives a user the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations."

Using a customized machine-learning process that allowed a facial-animation company's software to mesh with the signals coming from her brain, the avatar was able to mimic Ann's movements: the jaw opening and closing, the lips protruding and pursing, and the tongue moving up and down, as well as the facial movements for happiness, sadness, and surprise.
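
One way to picture that step: the decoded signals become continuous control values for the avatar's face. The parameter names and mapping below are invented purely for illustration:

```python
# Hypothetical mapping from decoded articulator and emotion signals
# (each in [0, 1]) to avatar animation parameters. Names are made up.
def animate_face(signals):
    return {
        "jaw_open":     signals.get("jaw", 0.0),
        "lip_protrude": signals.get("lips", 0.0),
        "tongue_raise": signals.get("tongue", 0.0),
        "smile":        signals.get("happiness", 0.0),
        "brow_raise":   signals.get("surprise", 0.0),
    }

print(animate_face({"jaw": 0.8, "lips": 0.2, "happiness": 0.6}))
```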

The team is now working on a wireless version that will mean the user doesn't have to be connected to the computers.

The current study, published in the journal Nature, adds to previous research by Dr. Chang's team in which they decoded brain signals into text in a man who had also had a brainstem stroke many years earlier.

But now they can decode the signals into the richness of speech, along with the movements that animate a person's face during conversation.

