In preliminary experiments, NASA scientists found that small, button-sized sensors, stuck under the chin and on either side of the “Adam’s apple,” could gather nerve signals and send them to a processor, and then to a computer program that translates them into words. Eventually, such “subvocal speech” systems could be used in spacesuits, in noisy places like airport towers to capture air-traffic controllers’ commands, or even in traditional voice-recognition programs to increase accuracy, according to NASA scientists.
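The article gives no algorithmic detail, so the following is only a toy Python sketch of the kind of pipeline it describes: fixed-length sensor readings are reduced to simple energy features, and an unknown reading is matched against per-word templates. Everything here is an assumption for illustration, including the vocabulary, the window size, the synthetic data, and the template-matching classifier itself; NASA’s actual processing is not described in the article.

```python
# Illustrative sketch only; all names and parameters are hypothetical,
# not NASA's actual subvocal-recognition method.
import numpy as np

VOCABULARY = ["stop", "go", "left", "right"]   # hypothetical word set
WINDOW = 50                                    # samples per analysis frame (assumed)

def extract_features(signal):
    """Root-mean-square energy per frame, a simple measure of signal strength."""
    frames = signal[: len(signal) // WINDOW * WINDOW].reshape(-1, WINDOW)
    return np.sqrt((frames ** 2).mean(axis=1))

def train_templates(examples):
    """Average the feature vectors of each word's training recordings."""
    return {word: np.mean([extract_features(s) for s in sigs], axis=0)
            for word, sigs in examples.items()}

def recognize(signal, templates):
    """Label the signal with whichever word template is nearest in feature space."""
    feats = extract_features(signal)
    return min(templates, key=lambda w: np.linalg.norm(templates[w] - feats))

# Synthetic demo: noise of increasing amplitude stands in for real sensor data.
rng = np.random.default_rng(0)
examples = {w: [rng.normal(scale=i + 1, size=500) for _ in range(5)]
            for i, w in enumerate(VOCABULARY)}
templates = train_templates(examples)
unknown = rng.normal(scale=2.0, size=500)      # amplitude matches the "go" examples
print(recognize(unknown, templates))           # prints "go"
```

A real system would face far harder problems than this sketch suggests, such as noisy electrode contact and speaker-to-speaker variation, which is presumably why the work described is still at the preliminary-experiment stage.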
“What is analyzed is silent, or subauditory, speech, such as when a person silently reads or talks to himself,” said Chuck Jorgensen, a scientist whose team is developing silent, subvocal speech recognition at NASA’s Ames Research Center, Moffett Field, Calif. “Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement,” Jorgensen explained.
“A person using the subvocal system thinks of phrases and talks to himself so quietly it cannot be heard, but the tongue and vocal cords do receive speech signals from the brain,” Jorgensen said.
(Image: sensor inputs, via NASA)