Patients who have lost the ability to speak may regain real-time verbal communication if a new technology for turning mentally spoken words into audible speech reaches clinical use. Experiments with a new implant and a method for converting silently pronounced words into voice output have shown strong promise, literally restoring the voice of a patient who lost the ability to speak nearly 20 years ago.

Image source: Nature Neuroscience 2025

The new implant and the neural-network training technique for recognizing activity in the brain's speech centers were developed by scientists at the University of California, Berkeley (UC Berkeley) and the University of California, San Francisco (UCSF). The platform's key feature is that it decodes brain activity in 80 ms increments, eliminating the frustrating delay typical of such systems.

Conventional systems first record brain activity, then hand it to a trained model, which typically analyzes a large fragment of text before finally reproducing the mentally spoken speech. Live conversation under those conditions is difficult, since it is constantly interrupted by pauses. The California scientists proposed a solution that reproduces speech immediately, while the patient with the implant is still mentally pronouncing the words.
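To make the contrast concrete, here is a minimal Python sketch of the streaming idea, not the authors' actual model: the frame contents, channel count, toy decoder, and audio output below are all illustrative assumptions. Instead of buffering a whole sentence of neural data before decoding, a causal decoder consumes one 80 ms frame at a time and emits an audio chunk right away:

```python
import time
import numpy as np

FRAME_MS = 80  # decoding cadence reported for the new interface

def read_neural_frame(rng):
    """Stand-in for one 80 ms window of electrode features (illustrative)."""
    return rng.standard_normal(253)  # assumed channel count, not the real one

def play(audio_chunk):
    """Placeholder for the audio output stage."""
    print(f"playing {audio_chunk.size} samples")

class StreamingDecoder:
    """Toy causal decoder: sees only past frames, emits audio at every step."""
    def __init__(self):
        self.context = []

    def step(self, frame):
        self.context.append(frame)   # accumulate past context only
        return np.tanh(frame[:160])  # pretend output: ~80 ms of audio

def stream_speech(n_frames=25, seed=0):
    rng = np.random.default_rng(seed)
    decoder = StreamingDecoder()
    for _ in range(n_frames):
        frame = read_neural_frame(rng)
        audio = decoder.step(frame)  # speech comes out frame by frame,
        play(audio)                  # not after the whole sentence
        time.sleep(FRAME_MS / 1000)  # next frame arrives 80 ms later

if __name__ == "__main__":
    stream_speech()
```

The design point is that the decoder's state depends only on frames it has already seen, which is what allows audio to start playing before the sentence is finished.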

The new study also sidesteps a training requirement of earlier interfaces: the patient no longer has to audibly reproduce sounds and words to train the model. Not all people who have lost speech are capable of this, so eliminating this stage widens the circle of potential users of the technology.

The experiment was conducted with a 47-year-old patient who had lost the ability to speak due to illness at the age of 30. During training of the neural network, she mentally pronounced 100 unique sentences drawn from a vocabulary of just over 1,000 words. A supplementary communication set of 50 phrases, built from a smaller vocabulary, was also used.

Unlike previous methods, the new procedure did not require the participant to try to say words out loud; she simply spoke the sentences to herself. The system successfully decoded both modes of communication, and the average number of words rendered into speech per minute was nearly double that of previous methods. With the prediction method, thoughts were converted into speech on the fly eight times faster than with alternative approaches.

To achieve a more natural sound, old recordings of the patient's voice were used, allowing her to speak in her own voice again. Moreover, when the recognition process was run in autonomous mode without time limits, the system managed to render into words even brain activity that the interface had not been trained on.

The authors note that the method still needs to be refined before it can be considered clinically applicable.
