Brain-computer interface allows a person with ALS to talk through a computer

A team of scientists from the University of California, Davis has reported a breakthrough in the field of brain-computer interfaces (BCI). They created a device capable of translating brain signals into text with unprecedented accuracy: the error rate was less than 3%.

Image Source: UC Davis Health/YouTube

The study, published in the New England Journal of Medicine, involved 45-year-old American Casey Harrell, who suffers from amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. The illness, which began five years ago, left Harrell largely unable to communicate: while healthy people speak at an average rate of about 160 words per minute, Harrell could manage only about 6 words per minute. His speech was severely slowed by the damage the disease causes to motor neuron function.


Thanks to the new technology, however, Harrell was able to regain his ability to communicate. During the procedure he was implanted with 3.2 mm microelectrode arrays that feed a signal-processing system based on NeuroPort technology from Blackrock Neurotech. The system transmits brain signals to several computers running Backend for Realtime Asynchronous Neural Decoding (BRAND) software, which decodes the neural signals in real time and displays them as phrases and sentences on a monitor.
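The article does not detail the decoding pipeline, but conceptually it is a streaming loop: neural features arrive in short chunks, a trained decoder maps them to words, and the on-screen text is updated as the person attempts to speak. The minimal Python sketch below illustrates that idea only; the synthetic feature stream, the toy decode_chunk function, the tiny vocabulary, and the 80 ms chunk interval are all illustrative assumptions, not the actual BRAND software.

```python
# Illustrative sketch of a real-time neural-decoding loop.
# NOT the BRAND software: the feature stream, decoder, and vocabulary are fake.
import random
import time

VOCAB = ["hello", "i", "need", "water", "thank", "you"]  # stand-in for a 125,000-word vocabulary

def neural_feature_stream(n_chunks=10, n_channels=256):
    """Yield synthetic per-chunk feature vectors, standing in for the
    spike-band features streamed from the implanted microelectrode arrays."""
    for _ in range(n_chunks):
        yield [random.gauss(0.0, 1.0) for _ in range(n_channels)]

def decode_chunk(features):
    """Toy 'decoder' that just picks a random word. A real decoder would be a
    trained model mapping neural features to phoneme/word probabilities."""
    return random.choice(VOCAB)

def realtime_loop():
    sentence = []
    for chunk in neural_feature_stream():
        word = decode_chunk(chunk)   # decode as the data arrives
        sentence.append(word)
        print(" ".join(sentence))    # update the displayed text incrementally
        time.sleep(0.08)             # assumed ~80 ms chunk cadence

if __name__ == "__main__":
    realtime_loop()
```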


During the very first session, when Harrell attempted to speak sentences from a 50-word vocabulary, decoding accuracy was 99.6%. In the second session with the same vocabulary, every sentence was decoded without error. The vocabulary was subsequently expanded to more than 125,000 words, covering most of spoken English. After several hours of training, decoding accuracy reached 90.2%, and over the following months it exceeded 97.5%.
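Figures such as a sub-3% error rate or 97.5% accuracy are typically derived from a word error rate: the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the length of the intended sentence. The sketch below shows that standard calculation; the example sentences are invented, and the study's exact scoring pipeline may differ.

```python
# Hedged sketch of a standard word error rate (WER) calculation via
# word-level edit distance; not necessarily the study's exact metric.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Invented example: one wrong word out of six
wer = word_error_rate("i would like some water please", "i would like some water police")
print(f"WER: {wer:.1%}, accuracy: {1 - wer:.1%}")  # WER: 16.7%, accuracy: 83.3%
```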


The study was carried out by a group of scientists led by UC Davis neuroscientist Sergey Stavisky and neurosurgeon David Brandman. Although Harrell was the first to test the new neuroprosthesis and interface technology, the results offer great hope for restoring communication in people with disabilities.

"When we first tested the system, he [Harrell] cried with joy when the words he was trying to say appeared on the screen so quickly. We were all very moved," Stavisky noted.
