AI-Powered Brain-Computer Interface Co-pilot Offers New Autonomy for People with Paralysis

Scientists at the University of California – Los Angeles (UCLA) have developed an AI-powered “co-pilot” to dramatically improve assistive devices for people with paralysis. The research, conducted in the Neural Engineering and Computation Lab led by Professor Jonathan Kao with student developer Sangjoon Lee, tackles a major issue with non-invasive, wearable brain-computer interfaces (BCIs): “noisy” signals. This means the specific brain command (the “signal”) is very faint and gets drowned out by all the other electrical brain activity (the “noise”), much like trying to hear a whisper in a loud, crowded room. This low signal-to-noise ratio has made it difficult for users to control devices with precision.
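To make the "whisper in a crowded room" analogy concrete, here is a rough, illustrative sketch (not data from the study): an assumed faint 10 Hz "intent" component is swamped by much larger background activity, producing a strongly negative signal-to-noise ratio. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration of a low signal-to-noise recording from a
# non-invasive, EEG-style sensor: a faint "intent" signal buried in much
# larger ongoing brain activity.
fs = 250                                        # sample rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)                   # two seconds of data
intent = 0.5 * np.sin(2 * np.pi * 10 * t)       # weak 10 Hz "command" component
background = 5.0 * rng.standard_normal(t.size)  # much larger background activity

recording = intent + background

# Signal-to-noise ratio in decibels: power of the command vs. power of the noise.
snr_db = 10 * np.log10(np.mean(intent**2) / np.mean(background**2))
print(f"SNR = {snr_db:.1f} dB")  # strongly negative: the command is drowned out
```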

The team’s breakthrough solution is a concept called shared autonomy. Instead of only trying to decipher the user’s “noisy” brain signals, the AI co-pilot also acts as an intelligent partner by analyzing the environment, drawing on data such as a video feed of the robotic arm and its surroundings. By combining the user’s likely intent with this real-world context, the system can make a highly accurate prediction of the desired action. This allows the AI to help complete the movement, effectively filtering through the background noise that limited older systems.
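The sketch below illustrates the general shared-autonomy idea only; the blending function, parameter names, and numbers are hypothetical and should not be read as the UCLA team's actual algorithm. A common formulation simply mixes the noisy command decoded from brain signals with the command an AI agent proposes after inferring the user's likely goal from context such as a camera view.

```python
import numpy as np

# Minimal shared-autonomy sketch (illustrative assumption, not the study's method):
# blend the noisy command decoded from brain signals with the AI copilot's own
# suggestion, derived from context such as a camera view of the scene.

def blend_commands(decoded_velocity, agent_velocity, assistance=0.5):
    """Weighted blend of the user's decoded command and the AI copilot's suggestion.

    decoded_velocity : noisy 2D velocity inferred from brain signals
    agent_velocity   : velocity the AI agent proposes (e.g., toward the object it
                       believes the user wants, inferred with computer vision)
    assistance       : float in [0, 1]; 0 = pure BCI control, 1 = pure AI control
    """
    return (1 - assistance) * decoded_velocity + assistance * agent_velocity

# Example: the decoder outputs a jittery command, while the AI agent, having
# spotted a likely target, proposes a clean velocity straight toward it.
decoded = np.array([0.9, -0.4])   # noisy, roughly rightward-and-down
agent = np.array([1.0, 0.0])      # confident push toward the inferred target
print(blend_commands(decoded, agent, assistance=0.6))
```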

[Figure: Side-by-side diagram contrasting two approaches to BCI control. In prior studies, a BMI decoder translates neural signals directly into robotic-arm commands, with visual feedback to the user. In this study's AI-BMI setup, an AI agent's policy, informed by computer vision of the arm and task, task priors and information, and historical movements, is combined with the decoder's output to drive the arm.]

The results of this new approach are remarkable. In lab tests, participants using the AI co-pilot to control a computer cursor and a robotic arm saw their performance improve by nearly fourfold. This leap forward could give individuals with paralysis a new level of independence. By making wearable BCI technology far more reliable and intuitive, it could empower users to perform complex daily tasks on their own, reducing their reliance on caregivers.

Source: University of California, Los Angeles (UCLA)
