Brain-Computer Interface Breakthrough: Mind-to-Word Technology Achieves Record Word-Decoding Rates, Paving the Way for Enhanced Communication
In a groundbreaking development, two separate research teams from Stanford University and the University of California, San Francisco, have unveiled remarkable advances in brain-computer interface (BCI) technology that decodes words from a person’s thoughts, specifically for individuals who have lost the ability to speak. These BCIs represent a significant leap forward in the field of assistive technology.
The Stanford University team, led by Jaimie Henderson, implanted electrode arrays into the cortex of an ALS patient referred to as “T12” to record neural activity. They then employed a deep-learning model to translate this activity into words, achieving an impressive word decoding rate of 62 words per minute (wpm). This rate is over three times faster than the previous record of 18 wpm, set by the same Stanford research group in their efforts to decode handwriting from neural activity.
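To make that pipeline concrete, below is a minimal sketch of the kind of deep-learning decoder described: a recurrent network that maps binned multi-electrode neural features to per-timestep phoneme probabilities, trained with a CTC loss so that unaligned phoneme labels can supervise the output. The feature counts, layer sizes, and the GRU-plus-CTC design are illustrative assumptions, not the Stanford team's published architecture.

```python
# Sketch of a neural-activity-to-phoneme decoder, loosely in the spirit of the
# pipeline described above. All shapes, layer sizes, and the GRU + CTC design
# are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

N_FEATURES = 256   # assumed neural features per time bin (hypothetical)
N_PHONEMES = 41    # e.g. 39 phonemes + silence + CTC blank (assumption)

class SpeechDecoder(nn.Module):
    def __init__(self, n_inputs=N_FEATURES, hidden=512, n_outputs=N_PHONEMES):
        super().__init__()
        # Recurrent network maps a time series of binned neural features
        # to per-timestep phoneme logits.
        self.rnn = nn.GRU(n_inputs, hidden, num_layers=3, batch_first=True)
        self.readout = nn.Linear(hidden, n_outputs)

    def forward(self, x):                 # x: (batch, time, features)
        h, _ = self.rnn(x)
        return self.readout(h)            # (batch, time, phoneme logits)

# Training step with CTC loss, which lets unaligned phoneme labels supervise
# the per-timestep outputs.
model = SpeechDecoder()
ctc = nn.CTCLoss(blank=N_PHONEMES - 1)
features = torch.randn(4, 200, N_FEATURES)            # 4 sentences, 200 bins each
targets = torch.randint(0, N_PHONEMES - 1, (4, 30))   # placeholder phoneme labels
log_probs = model(features).log_softmax(-1).transpose(0, 1)  # (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200),
           target_lengths=torch.full((4,), 30))
loss.backward()
```

In a complete system, the decoded phoneme sequence would then be rescored by a language model and assembled into words; this sketch stops at the phoneme level.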
Meanwhile, the UC San Francisco team, under the guidance of Edward Chang, took a different approach. They used electrocorticography (ECoG) electrodes placed on the brain’s surface to capture neural signals related to vocal tract movements, such as lip, tongue, and jaw movements. Their BCI system achieved an even faster word decoding rate of 78 wpm, surpassing the Stanford team’s device and setting a new record more than four times faster than previous efforts. Additionally, UC San Francisco’s system can produce both text and audio reconstructions of the user’s intended speech, enhancing the overall communication experience.
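A notable feature of the UCSF system is that a single stream of ECoG features drives both a text output and an audio reconstruction. The sketch below illustrates that two-headed idea with a shared encoder feeding separate text and spectrogram heads; the channel count, vocabulary size, and specific layers are assumptions for illustration, not the published architecture, and the vocoder that would turn spectrogram frames into audible speech is omitted.

```python
# Sketch of a decoder with two output heads, illustrating the idea of producing
# both text and an audio reconstruction from the same ECoG features. Layer
# sizes, feature dimensions, and the bidirectional-GRU choice are assumptions.
import torch
import torch.nn as nn

N_ECOG_CHANNELS = 253   # assumed high-density ECoG channel count
N_TEXT_TOKENS = 1000    # hypothetical sub-word vocabulary size
N_MEL_BINS = 80         # mel-spectrogram bins for the audio head (assumption)

class ECoGSpeechDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder over the ECoG feature time series.
        self.encoder = nn.GRU(N_ECOG_CHANNELS, 256, num_layers=2,
                              batch_first=True, bidirectional=True)
        # Head 1: per-timestep text-token logits (decoded downstream by
        # beam search plus a language model, not shown).
        self.text_head = nn.Linear(512, N_TEXT_TOKENS)
        # Head 2: mel-spectrogram frames, later rendered to audio by a vocoder.
        self.audio_head = nn.Linear(512, N_MEL_BINS)

    def forward(self, ecog):                      # ecog: (batch, time, channels)
        h, _ = self.encoder(ecog)
        return self.text_head(h), self.audio_head(h)

decoder = ECoGSpeechDecoder()
ecog = torch.randn(2, 500, N_ECOG_CHANNELS)       # 2 utterances, 500 time bins
text_logits, mel_frames = decoder(ecog)
print(text_logits.shape, mel_frames.shape)        # (2, 500, 1000) (2, 500, 80)
```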
While both BCIs fall short of the average speaking rate of approximately 160 wpm, these breakthroughs hold immense promise for assisting individuals with speech disabilities and offer a glimpse of the potential of advanced brain-computer interfaces. Both research teams are actively working to further improve performance and accuracy by exploring higher electrode counts and better hardware, with the goal of significantly expanding communication options for those in need.
Brain Decoding for Communication
Ann, the UCSF study participant featured in the accompanying video, developed locked-in syndrome after suffering a stroke at the age of 30. The cause of the stroke was never determined, and she has lived with the condition for 18 years. Her story highlights the need for technology that can empower individuals with disabilities: Ann envisions using the system not only for personal communication but also to become a counselor and work with others, and the personalized avatar and synthesized speech open up new possibilities for that kind of work. The UC San Francisco team’s goal is to develop technologies that better support individuals with disabilities, enabling their inclusion in the workforce. They hope their work will be a stepping stone toward realizing the full potential of those who have lost the ability to communicate.
Source: IEEE