The interface uses the Kinect and works in stages: it first detects the locations of the two eyes, uses those to estimate the position of the nose tip, then estimates the mouth region below it, and finally tracks the tongue's actual movement within that region.
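The cascade of estimates can be sketched roughly as below. This is only an illustrative Python sketch: the offset factors, function names, and the simple left/right classification are all assumptions for demonstration, not the researchers' actual model.

```python
# Hypothetical sketch of the cascaded estimation: eyes -> nose -> mouth -> tongue.
# All geometry (offset factors, dead zone) is assumed for illustration only.
from dataclasses import dataclass


@dataclass
class Point:
    x: float
    y: float


def estimate_nose(left_eye: Point, right_eye: Point) -> Point:
    """Place the nose tip below the eye midpoint, offset by a fraction
    (assumed 0.8) of the inter-eye distance."""
    mid_x = (left_eye.x + right_eye.x) / 2
    mid_y = (left_eye.y + right_eye.y) / 2
    eye_dist = abs(right_eye.x - left_eye.x)
    return Point(mid_x, mid_y + 0.8 * eye_dist)


def estimate_mouth(nose: Point, left_eye: Point, right_eye: Point) -> Point:
    """Place the mouth centre below the nose tip (assumed 0.5x eye distance)."""
    eye_dist = abs(right_eye.x - left_eye.x)
    return Point(nose.x, nose.y + 0.5 * eye_dist)


def tongue_direction(tongue_tip: Point, mouth: Point, dead_zone: float = 2.0) -> str:
    """Classify tongue movement relative to the mouth centre, with a small
    dead zone so tiny jitters read as 'centre'."""
    dx = tongue_tip.x - mouth.x
    if dx < -dead_zone:
        return "left"
    if dx > dead_zone:
        return "right"
    return "centre"
```

For example, with eyes at (40, 50) and (60, 50), the nose lands at (50, 66), the mouth at (50, 76), and a tongue tip at (45, 80) classifies as "left".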
Using the interface effectively requires some tongue training; one way to practise is simply moving the tongue left and right. The researchers have in fact created a game that is controlled by this left-to-right tongue movement.
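One plausible way a game could consume the tracker's output is to debounce the stream of detected directions into discrete commands, so that a tongue held to one side fires a single input rather than repeating every frame. This is purely a hypothetical sketch; the article does not describe how the researchers' game handles input.

```python
# Hypothetical sketch: turn a per-frame stream of tongue directions
# ("left", "right", "centre") into one-shot game commands, firing only
# when the direction changes away from centre.
def to_commands(directions):
    commands = []
    prev = "centre"
    for d in directions:
        if d != prev and d != "centre":
            commands.append(d)
        prev = d
    return commands
```

A held "left" thus produces one "left" command until the tongue returns to centre.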
The interface is still a work in progress, quite raw and not very robust. The researchers are working on detecting tongue movements more precisely, and they plan to add lip-movement detection alongside tongue tracking in the future. Even so, this is another fine example of what technology and innovation can do to make lives better for everyone.
Watch the video to see this interface (and the game) in action.