A speech prosthesis developed by a collaborative team of Duke neuroscientists, neurosurgeons and engineers can translate a person’s brain signals into what they are trying to say.
Published November 6 in the journal Nature Communications, the new technology may one day help people unable to speak due to neurological disorders regain the ability to communicate through a brain-computer interface.
“There are many patients who suffer from debilitating movement disorders, such as ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., professor of neurology at the Duke University School of Medicine and one of the principal investigators involved in the project. “But the currently available tools that allow them to communicate are generally too slow and cumbersome.”
Imagine listening to an audiobook at half speed. That is roughly the best speech decoding rate currently available: about 78 words per minute. Humans, however, speak around 150 words per minute.
The lag between spoken and decoded speech rates is partly due to the relatively few brain activity sensors that can be fused onto the paper-thin piece of material that sits on the surface of the brain. Fewer sensors provide less information for the decoder to work with.
To improve upon past limitations, Cogan collaborated with Duke Institute for Brain Sciences colleague Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultrathin, and flexible brain sensors.
For this project, Viventi and his team packed an impressive 256 tiny brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have very different activity patterns when coordinating speech, so it is essential to distinguish signals from neighboring brain cells to make accurate predictions about intended speech.
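For a rough sense of that density, the back-of-the-envelope sketch below computes the approximate spacing between contacts for 256 sensors laid out as a 16-by-16 grid. The array dimensions are assumed for illustration only; the article says only "postage stamp-sized."

```python
import math

# Assumed size for illustration: the article says "postage stamp-sized";
# an exact dimension is not given, so a 1.5 cm square is a guess.
array_width_mm = 15.0
n_channels = 256                          # sensor count from the article
grid_side = int(math.sqrt(n_channels))    # 16 x 16 layout (assumed square)

pitch_mm = array_width_mm / grid_side     # approximate center-to-center spacing
print(f"{grid_side}x{grid_side} grid, ~{pitch_mm:.2f} mm between contacts")
# -> 16x16 grid, ~0.94 mm between contacts: sub-millimeter spacing,
#    on the order of the grain-of-sand scale the article mentions
```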
After constructing the new implant, Cogan and Viventi collaborated with several Duke University Hospital neurosurgeons, including Derek Southwell, MD, Ph.D., Nandan Lad, MD, Ph.D., and Allan Friedman, MD, who helped recruit four patients for the implant trial. The experiment required researchers to temporarily implant the device in patients undergoing brain surgery for another condition, such as treating Parkinson’s disease or removing a tumor. Time was limited for Cogan and his team to test their device in the OR.
“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add extra time to the surgical procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and medical team said ‘Go!’ we sprang into action and the patient performed the task.”
The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, such as ‘ava’, ‘kug’ or ‘vip’, and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw and larynx.
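In code, a trial loop for that kind of task might look like the minimal sketch below. The stimuli are the ones named in the article, but the timing value and the helper functions (play_audio, record_neural_window) are hypothetical stand-ins, not the study's actual task software.

```python
import time

# Nonsense-word stimuli named in the article; the 1.5 s window below
# is a hypothetical value, not a parameter reported by the study.
STIMULI = ["ava", "kug", "vip"]

def play_audio(word: str) -> None:
    # Hypothetical stand-in: present the recorded token to the patient.
    print(f"[cue] {word}")

def record_neural_window(duration_s: float) -> list:
    # Hypothetical stand-in: capture a window of 256-channel activity
    # from the electrode array while the patient repeats the word.
    time.sleep(duration_s)
    return []  # placeholder for a (samples x 256 channels) array

trials = []
for word in STIMULI:
    play_audio(word)                    # patient hears the token
    neural = record_neural_window(1.5)  # patient repeats it aloud
    trials.append({"word": word, "neural": neural})
```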
Next, Suseendrakumar Duraivel, the first author of the new report and a graduate student in biomedical engineering at Duke, took the neural and speech data from the surgical suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based on the brain activity recordings alone.
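As a rough illustration of that decoding setup, the sketch below trains an off-the-shelf classifier to map neural feature vectors to phoneme labels and scores it on held-out trials. This is a generic stand-in, not the study's actual model, and the data here is simulated noise, so the score sits near chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated stand-in data: 300 trials x 256 channels of neural features,
# each labeled with the phoneme spoken. Real inputs would be features
# extracted from the implanted electrode array during the task.
phonemes = ["g", "a", "k", "v", "p", "b"]
X = rng.normal(size=(300, 256))
y = rng.choice(phonemes, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.0%}")
# With random features, accuracy hovers near chance (~1/6 for six
# labels); informative neural data is what lifts a decoder above that.
```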
For some sounds and participants, such as the /g/ in the word “gak,” the decoder got it right 84% of the time when it was the first sound in a string of three that constituted a given nonsense word.
However, accuracy dropped as the decoder parsed sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, such as /p/ and /b/.
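Confusions between similar sounds like /p/ and /b/ are typically read off a confusion matrix. Continuing the toy setup above, the snippet below shows how such a matrix is computed; the labels here are illustrative, not the study's results.

```python
from sklearn.metrics import confusion_matrix

# Illustrative true vs. predicted labels; in practice these come from
# the decoder's held-out predictions, as in the sketch above.
y_true = ["p", "p", "b", "b", "g", "g"]
y_pred = ["p", "b", "b", "p", "g", "g"]

labels = ["p", "b", "g"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(labels)
print(cm)
# Off-diagonal counts in the p/b block flag exactly the kind of
# similar-sound confusion the article describes.
```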
Overall, the decoder was accurate 40% of the time. That may seem like a humble score, but it is quite impressive given that similar brain-to-speech technical feats require hours or days’ worth of data to draw from. The speech decoding algorithm Duraivel used, however, worked with only 90 seconds of spoken data from the 15-minute test.
Duraivel and his mentors are excited about building a wireless version of the device with a recent $2.4 million grant from the National Institutes of Health.
“Now we’re developing the same kind of recorders, but without wires,” Cogan said. “You could move around and you wouldn’t have to be tied to an outlet, which is really exciting.”
While their work is encouraging, there is still a long way to go before Viventi and Cogan’s speech prosthetic is available to patients.
“We’re at the point where it’s still much slower than natural speech,” Viventi said in a recent Duke magazine article about the technology, “but you can see the trajectory of where you can get there.”
This work was supported by grants from the National Institutes of Health (R01DC019498, UL1TR002553), the Department of Defense (W81XWH-21-0538), the Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.