
Decoding silence: new-generation interfaces will let paralyzed people communicate

Neuroscience stands on the verge of a revolutionary breakthrough. New brain-machine interfaces offer hope to people living with locked-in syndrome, giving them a chance to re-establish contact with the world. These innovative devices use computer algorithms to interpret and translate the brain signals associated with speech.

The most advanced systems can capture and convey words that a person pronounces silently in the mind, with no need to blink, move the eyes, or attempt to speak aloud, Inverse writes.

Locked-in syndrome is a medical condition in which a person remains fully conscious but cannot move or communicate verbally because of near-total paralysis of the voluntary muscles. It is usually caused by damage to the brain stem from strokes, tumors, traumatic injuries, infections, or neurodegenerative diseases such as amyotrophic lateral sclerosis.

Estimates of how common the syndrome is remain uncertain, because patients vary in their ability to communicate through eye movements or blinking. Unfortunately, some people lose all mobility, including of the eyes and eyelids, which makes diagnosis even harder. Patients spend an average of 79 days in this immobile state before the correct diagnosis is made.

Sarah Wandelt, a graduate student in computation and neural systems at Caltech, is enthusiastic about the technology's potential, especially for people who are completely locked in and cannot communicate at all. Recent studies, including Wandelt's own work, are promising: they offer preliminary evidence that brain-machine interfaces can decode internal speech.

Despite this progress, experts agree that further development is needed to make the interfaces accessible, practical, and cost-effective for patients. Building a brain-machine interface begins with deciding which part of the brain to target.

Contrary to the long-discredited idea that the shape of the skull reveals how the brain works, we now understand that cognitive abilities arise from complex interactions among many brain areas. This complexity is both a challenge and an opportunity for research: no single brain region is responsible for internal speech, so several different areas are potential targets.

Notably, Wandelt and her colleague David Bjånes found a connection between speech and the supramarginal gyrus (SMG) in the parietal lobe, an area usually associated with grasping objects. They made the discovery while observing a tetraplegic participant with a microelectrode array implanted in the SMG. The array recorded the activity of individual neurons, which was then processed by a computer.

Bjånes compares the brain to a stadium during a football match: the neurons are the spectators, and the electrodes are microphones lowered into the crowd to pick up important events. Implanted among the neurons, the device tracks the electrochemical signals generated each time a neuron fires, producing distinctive activity patterns associated with particular actions or intentions.
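The recording step described here, picking out individual neuron firings from an electrode's voltage trace, can be illustrated with a toy threshold-crossing detector. Every number below (sample count, spike amplitude, threshold rule) is invented for illustration and is not the Caltech team's actual processing pipeline:

```python
import math
import random

random.seed(1)

# Simulated extracellular voltage trace: baseline noise plus a few spikes.
n_samples = 1000
trace = [random.gauss(0, 1.0) for _ in range(n_samples)]
spike_times = [120, 450, 800]
for t in spike_times:
    trace[t] += 12.0  # a firing neuron produces a brief, large deflection

# Threshold-crossing detection: flag samples well above the noise floor.
mean = sum(trace) / n_samples
std = math.sqrt(sum((v - mean) ** 2 for v in trace) / n_samples)
threshold = mean + 5 * std
detected = [i for i, v in enumerate(trace) if v > threshold]

print(detected)  # the three simulated spike times are recovered
```

Real systems add spike sorting (attributing each detected event to a particular neuron), but the threshold step above is the conceptual core of "microphones in the crowd".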

The Caltech researchers successfully taught their brain-machine interface to distinguish the brain patterns produced when the participant silently pronounced six words and two pseudowords. The device reached more than 90% accuracy in recognizing the words after only 15 minutes of training. This successful test was a first step toward the ultimate goal of a larger vocabulary for more meaningful communication.
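The experiment can be sketched as a nearest-template classifier over simulated firing-rate patterns. The word list, neuron count, and noise level below are placeholders (the article does not name the study's actual stimuli), and the study's real decoding model is not described here:

```python
import random

random.seed(0)

# Hypothetical stimuli: six "words" and two pseudowords, stand-ins only.
WORDS = ["river", "candle", "window", "garden", "rocket", "pillow", "fablo", "dritz"]
N_NEURONS = 20  # illustrative channel count, not the real array size

# Assume each word evokes a characteristic mean firing-rate pattern.
true_patterns = {w: [random.uniform(5, 50) for _ in range(N_NEURONS)] for w in WORDS}

def record_trial(word, noise=3.0):
    """Simulate one noisy recording of the response to a silently spoken word."""
    return [rate + random.gauss(0, noise) for rate in true_patterns[word]]

# "Training": average a handful of trials per word to build a template.
templates = {
    w: [sum(col) / 5 for col in zip(*(record_trial(w) for _ in range(5)))]
    for w in WORDS
}

def decode(trial):
    """Nearest-template decoding: pick the word whose template is closest."""
    def dist(w):
        return sum((a - b) ** 2 for a, b in zip(trial, templates[w]))
    return min(WORDS, key=dist)

# Evaluate on fresh simulated trials.
trials = [(w, record_trial(w)) for w in WORDS for _ in range(20)]
accuracy = sum(decode(t) == w for w, t in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.0%}")
```

The short 15-minute training time reported in the study is plausible under this framing: with distinctive patterns, only a few trials per word are needed to estimate usable templates.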

Another innovative approach aims to build a brain-machine interface that recognizes individual letters rather than whole words. The concept was tested by Sean Metzger, a graduate student at the University of California, San Francisco and the University of California, Berkeley. His approach uses machine-learning algorithms to decode silently spelled sentences, reaching 92% accuracy in most cases.
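A key advantage of letter-level decoding is that a language prior can correct noisy per-letter classifications. The sketch below shows the idea with a toy four-word vocabulary and hand-made letter probabilities; it is an illustration of the general technique, not Metzger's actual system:

```python
import math

# Tiny vocabulary acting as a language prior (illustrative only).
VOCAB = ["help", "held", "hemp", "heap"]
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def letter_dist(best, p_best, second=None, p_second=0.0):
    """Build a per-position letter distribution: mass on the classifier's
    top guesses, the remainder spread over the rest of the alphabet."""
    rest = (1.0 - p_best - p_second) / (len(ALPHABET) - (2 if second else 1))
    probs = {c: rest for c in ALPHABET}
    probs[best] = p_best
    if second:
        probs[second] = p_second
    return probs

# Simulated classifier output for a silently spelled word.
# At position 3 the top guess is wrong ('o' instead of 'p').
letter_probs = [
    letter_dist("h", 0.7),
    letter_dist("e", 0.7),
    letter_dist("l", 0.7),
    letter_dist("o", 0.5, second="p", p_second=0.3),
]

greedy = "".join(max(p, key=p.get) for p in letter_probs)  # "helo", not a word

def decode_word(letter_probs):
    """Score each vocabulary word by total log-likelihood, pick the best."""
    candidates = [w for w in VOCAB if len(w) == len(letter_probs)]
    return max(candidates,
               key=lambda w: sum(math.log(p[c]) for p, c in zip(letter_probs, w)))

print(greedy, "->", decode_word(letter_probs))  # the prior recovers "help"
```

Greedy per-letter readout yields the non-word "helo", but scoring whole vocabulary words recovers "help". Production systems use far larger vocabularies and stronger language models, yet the principle is the same.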

Jun Wang, a specialist in computing and speech at the University of Texas at Austin, cautions that despite recent progress, speech-restoration devices are still at an early stage of development. He argues for improvements in both hardware and software to make the devices less bulky, more accurate, and faster.

Researchers are also exploring non-invasive brain-machine interfaces and advanced imaging methods that would convert the magnetic fields generated by the brain's electrical currents into text. Restoring speech to patients who cannot speak is a uniquely difficult task, because internal speech is encoded differently from person to person.