Scientists at Meta have developed a system that can interpret brain signals and determine which keys a person is pressing without direct observation. In an experiment involving 35 volunteers, the deep-neural-network-based algorithm achieved 80% accuracy in recognizing letters. For now, however, the technology remains strictly a laboratory setup. Despite these limitations, Meta sees the project as a strategic direction that could shed light on the mechanisms of human thought and contribute to the development of AI.

Image source: ai.meta.com

Back in 2017, Mark Zuckerberg announced that Facebook was working on a technology that would allow you to “type directly from your brain.” At the time, the company planned to create a compact device — for example, a hat or headband — that could read brain signals and convert them into text without the need for implants. However, the implementation of this idea faced serious technical limitations, and four years later, Facebook abandoned the development of a consumer version of the device.

Despite the commercial project being shut down, Meta continued to fund fundamental research in neuroscience. In the new work, the results of which are presented in two preprints and on the company’s blog, scientists used magnetoencephalography (MEG), a technology that records the weak magnetic fields created by neural activity. The resulting signals were processed by a deep neural network, which made it possible to analyze a person’s brain activity and match it to specific keystrokes.
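The article does not describe Brain2Qwerty’s actual architecture, but the general decoding setup — fixed-length windows of multi-channel signal, one keystroke class per window — can be illustrated with a deliberately simple toy classifier. Everything below (channel counts, the tiny alphabet, the nearest-class-mean decoder standing in for the deep network) is illustrative, not Meta’s method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each keystroke is assumed to produce a short multi-channel
# signal window with a key-specific spatial pattern. All names and
# dimensions here are illustrative, not taken from Meta's system.
n_channels, n_samples = 32, 50          # sensors x time points per window
keys = list("abcde")                    # tiny alphabet for the demo

# Simulate a class "template" per key; trials are the template plus noise.
templates = {k: rng.normal(size=(n_channels, n_samples)) for k in keys}

def make_trials(n_per_key=20, noise=1.0):
    X, y = [], []
    for k in keys:
        for _ in range(n_per_key):
            X.append(templates[k] + noise * rng.normal(size=(n_channels, n_samples)))
            y.append(k)
    return np.stack(X), y

X_train, y_train = make_trials()

# Nearest-class-mean decoder: a simple stand-in for the deep network
# described in the article.
means = {k: X_train[[i for i, yy in enumerate(y_train) if yy == k]].mean(axis=0)
         for k in keys}

def decode(window):
    # Predict the key whose mean training pattern is closest (L2 distance).
    return min(keys, key=lambda k: float(np.linalg.norm(window - means[k])))

X_test, y_test = make_trials(n_per_key=10)
acc = np.mean([decode(w) == t for w, t in zip(X_test, y_test)])
print(f"toy decoding accuracy: {acc:.2f}")
```

On this synthetic data the decoder is nearly perfect; the real problem is far harder because MEG signals are dominated by noise and vary between people and sessions.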

Jean-Rémi King, head of the Meta Brain & AI research group, emphasizes that the main goal of the project is not to create a final product, but to study the fundamental principles of intelligence. According to him, understanding the architecture and mechanisms of the human brain can open up new paths in the development of AI systems. During the experiment, the system recognized letters typed by an experienced user with up to 80% accuracy, analyzing only brain signals. This level of accuracy allowed the researchers to reconstruct entire sentences from the recorded neural signals. “Trying to understand the exact architecture or principles of the human brain may be the key to the development of machine intelligence. This is exactly the path we are exploring,” King says.

An experiment with 35 participants used EEG/MEG and the Brain2Qwerty model to decode text from human brain signals

But despite the impressive results, the technology remains far from practical use. The experiment relied on a bulky magnetoencephalography scanner that costs more than $2 million. Its operation requires a room with powerful magnetic shielding, since the Earth’s natural magnetic field is a trillion times stronger than brain signals, creating severe interference. In addition, the system is extremely sensitive to motion: the slightest head movement causes signal loss. King emphasizes that such limitations make the project unsuitable for commercialization.

The study was conducted at the Basque Center for Cognition, Brain and Language (BCBL) in Spain. It involved 35 volunteers, each of whom spent about 20 hours in a scanner typing in Spanish. Phrases included sentences such as “el procesador ejecuta la instrucción” (“the processor carries out the instruction”). Meta’s system, called Brain2Qwerty, analyzed the participants’ brain signals and matched them to the corresponding keystrokes.

In the first phase of training, the algorithm had to analyze thousands of typed characters before it could begin predicting letters from the recorded brain signals. The average character error rate was 32%, meaning roughly one letter in three was misidentified. Even so, Meta calls this the highest accuracy of any known non-invasive typing method that uses the full alphabet.
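The 32% figure is a character error rate (CER): the number of single-character edits needed to turn the decoded text into the true text, divided by the length of the true text. A minimal sketch of the computation (the example strings are hypothetical, loosely modeled on the Spanish phrases from the study):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance
    # (insertions, deletions, substitutions), row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def char_error_rate(reference: str, hypothesis: str) -> float:
    # CER = edit distance / length of the reference text.
    return levenshtein(reference, hypothesis) / len(reference)

ref = "el procesador ejecuta la instruccion"
hyp = "el procesadur ejecuta la instruzcion"  # two misdecoded letters
print(round(char_error_rate(ref, hyp), 3))
```

Because insertions and deletions also count, CER can exceed 100% in the worst case; a 32% CER still leaves enough structure for a language model to reconstruct many sentences, which is consistent with the researchers recovering full phrases.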

While Meta is focusing on non-invasive methods, the field of neural interfaces is actively developing invasive technologies based on implanted electrodes. In 2023, a patient with amyotrophic lateral sclerosis (ALS), who had lost the ability to speak, regained the ability to communicate thanks to a neural interface that transmits her thoughts to a speech synthesizer. Neuralink, founded by Elon Musk, is developing implantable devices that allow paralyzed patients to control a computer cursor. Although such technologies provide significantly more accurate signal reading, they require surgery and carry associated risks.

Meta does not develop medical devices, focusing instead on fundamental science. Unlike electrode interfaces, a magnetoencephalography scanner cannot record the activity of individual neurons, but it allows researchers to analyze the brain as a whole. This makes it possible to track complex processes spanning several brain areas at once, which is especially important for studying cognitive functions and linguistic thinking.

In a second study using the same data, the Meta researchers examined how the brain structures language information. They confirmed the hypothesis that the process is hierarchical: first a general thought is formed, then areas responsible for individual words are activated, then those for syllables, and only last does the brain generate the signals corresponding to specific letters. Although this concept is not new, Meta provided additional data on how these levels interact and how their dynamics unfold.

Although the developed system is far from practical application, its results may influence the development of neural interfaces and AI. Modern language models already use algorithms that imitate information processing in the human brain, but a deeper understanding of the cognitive processes associated with the formation of language may be the key to creating truly intelligent systems.
