Scientists gave a man with ALS his voice back

This student story was published as part of the 2025 NASW Perlman Virtual Mentoring Program organized by the NASW Education Committee, providing science journalism experience for undergraduate and graduate students.

Story by Vivienne Meyerhofer
Mentored and edited by Rich Monastersky

Giving someone their voice back after they've lost the ability to speak sounds like something out of a science-fiction movie. But researchers at UC Davis have made it a reality for the first time. By harnessing artificial intelligence and a brain implant, neuroscientists say they have developed a uniquely personal method to restore vocal communication for individuals who can no longer speak effectively.

While prior technology had successfully translated neural activity into language, it was often limited to text displayed on a screen. The new method, according to the researchers, goes beyond that by returning the full range of speech to a man whose ability to speak had degraded after he developed amyotrophic lateral sclerosis (ALS).

Their program translates thoughts into words nearly instantly, and it allows the participant to speak in his own simulated voice, emphasize words, change his intonation, and even “sing” simple melodies, the researchers reported in Nature in June.

Study co-author and neuroscientist Maitreyee Wairagkar says that the lab’s “main goal was to restore naturalistic speech and expressive speech.” After all, she says, “it’s how we say something that matters.”

Breakthrough brain-computer interface

Wairagkar and her colleagues achieved this goal by developing an advanced brain-computer interface (BCI). These devices, which are implanted in a person's brain, measure neural activity, process the resulting signals, and allow users to perform actions based on them. The new study focused on a single participant.

The BCI's 256 microelectrodes were surgically placed in the parts of his brain that control the vocal tract. Whenever the participant tried to speak, the microelectrodes captured the related neural activity and relayed it for processing.


A brain-computer interface (BCI) enables the participant to interact and communicate using machine learning and a computer system. Credit: UC Davis Health

To process these signals, the research team trained a deep-learning algorithm to translate the recorded neural activity into the discrete sounds that make up words. As the participant attempted to read cued sentences aloud, the algorithm learned to associate specific patterns of neural activity with individual sounds. The researchers also asked the participant to stress certain words, teaching the program to reproduce intonation. By the end of the study, Wairagkar says, the participant “could say whatever he wanted.” He successfully asked questions, responded freely, interjected, and even completed a simple three-note singing task.

Neuroscientist Sergey Stavisky, a co-author on the paper, says the program works by interpreting “commands that would normally be driving muscle movement.” The participant, he says, is “someone with vocal tract problems, he knows exactly what he wants to say, those words are in his brain fully formed, those signals are just not reaching the muscles.”

His own voice

Researchers outfitted the BCI with one final personal touch: the participant’s own voice. By training a separate algorithm on recordings of the participant speaking before his ALS progressed, the team synthesized a voice that the participant said felt like his “real voice” and made him “feel happy,” the researchers report in the paper.

What’s most exciting is that this technology is still in its infancy, says Stavisky, who adds that the field is “at an inflection point.” The broader technological revolution of recent years has fueled a boom in neuroelectronic medicine and neurotechnology, he says. For people with speech impairments, these developments “provide a huge quality of life improvement.”

Top image: The microelectrodes in the participant’s brain collected data on his neural activity as he attempted to read aloud sentences shown on a computer.

Vivienne Meyerhofer recently graduated from Carnegie Mellon University, where she studied technical writing and cognitive science. She currently writes for the Defense Health Agency and enjoys stories about brains and healthcare. Reach her at meyerhofervivienne@gmail.com.

Rich Monastersky is chief features editor at Nature magazine, based in Washington, D.C.


The NASW Perlman Virtual Mentoring program is named for longtime science writer and past NASW President David Perlman. Dave, who died in 2020 at the age of 101, only three years after his retirement from the San Francisco Chronicle, was a mentor to countless members of the science writing community and always made time for kind and supportive words, especially for early-career writers.

You can contact the NASW Education Committee at education@nasw.org. Thank you to the many NASW member volunteers who lead our #SciWriStudent programming year after year.

Founded in 1934 with a mission to fight for the free flow of science news, NASW is an organization of ~2,600 professional journalists, authors, editors, producers, public information officers, students and people who write and produce material intended to inform the public about science, health, engineering, and technology. To learn more, visit www.nasw.org.
