In a landmark breakthrough, AI can now turn brain signals into speech. Scientists have demonstrated a computer system that translates brain signals into intelligible speech. The experiment is a proof of concept that could pave the way for a wide variety of brain-controlled communication devices in the future.
For the first time, scientists have translated brain signals directly into understandable speech. It may sound like science fiction, but this research could genuinely help people with speech impairments.
Advanced AI & Brain Signals
Translating brainwaves into words has long been a massive challenge for researchers, but with the aid of machine learning algorithms, remarkable advances have been made in recent years. The latest leap forward, from a team of American neuro-engineers, is a computer algorithm that can decode signals recorded from the human auditory cortex and translate them into intelligible speech.
When a human speaks, their brain lights up, sending electrical signals zipping around the old thought box. If scientists can decode those signals and understand how they relate to forming or hearing words, we get one step closer to translating them into speech. With enough understanding and sufficient processing power, that could lead to a device that directly translates thinking into speaking.
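The core idea of decoding, in simplified form, is learning a mapping from recorded neural activity to features of the sound being heard or spoken. The sketch below is purely illustrative and uses synthetic data, not the study's actual recordings or model: it fits a ridge regression from simulated multi-channel "neural" features to spectrogram-like audio features and measures how well the audio can be reconstructed.

```python
import numpy as np

# Illustrative sketch only (not the study's model): learn a linear map from
# simulated neural features to audio spectrogram features, the basic idea
# behind decoding speech from brain activity.

rng = np.random.default_rng(0)

# Synthetic data: 200 time windows of 64-channel "neural" activity, each
# linearly related to 32 spectrogram bins plus measurement noise.
n_samples, n_channels, n_bins = 200, 64, 32
true_map = rng.normal(size=(n_channels, n_bins))
neural = rng.normal(size=(n_samples, n_channels))
spectrogram = neural @ true_map + 0.1 * rng.normal(size=(n_samples, n_bins))

# Ridge regression in closed form: W = (X^T X + lambda*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_channels),
                    neural.T @ spectrogram)

# Reconstruct the audio features and score the fit by correlation.
pred = neural @ W
corr = np.corrcoef(pred.ravel(), spectrogram.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

In practice the real systems use deep neural networks and far richer audio representations, but the train-a-decoder-then-reconstruct loop is the same shape.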
The Experiment
The researchers first gathered data from five patients while they were undergoing neurosurgery for epilepsy. The patients had a variety of electrodes implanted in their brains, allowing the researchers to record comprehensive electrocorticography (ECoG) measurements. Meanwhile, the patients listened to short continuous stories spoken by four different speakers. Because of the invasive nature of gathering this data during brain surgery, only around 30 minutes of recordings could be collected.
To test the effectiveness of the algorithm, the system was asked to decode voices counting from zero to nine that were not included in the original training data. As the speakers recited the digits, the brain signals of the patients were recorded and run through the vocoder. A neural network then analyzed and cleaned up the output produced by the vocoder.
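The evaluation idea above can be sketched in miniature: train a decoder on some trials, then ask it to identify which digit (zero through nine) was heard on trials held out of training. Everything below is hypothetical and uses synthetic features standing in for brain recordings; the real study worked with ECoG signals and a vocoder, not a toy nearest-centroid classifier.

```python
import numpy as np

# Hypothetical sketch of held-out evaluation: decode which digit (0-9) a
# listener heard from synthetic "neural" features, testing only on trials
# the decoder never saw during training.

rng = np.random.default_rng(1)
n_digits, trials_per_digit, n_features = 10, 20, 40

# Assume each digit evokes a characteristic neural pattern plus trial noise.
digit_patterns = rng.normal(size=(n_digits, n_features))
X, y = [], []
for digit in range(n_digits):
    for _ in range(trials_per_digit):
        X.append(digit_patterns[digit] + 0.5 * rng.normal(size=n_features))
        y.append(digit)
X, y = np.array(X), np.array(y)

# Split: first 15 trials of each digit for training, last 5 held out.
train = np.array([i % trials_per_digit < 15 for i in range(len(y))])
centroids = np.array([X[train & (y == d)].mean(axis=0)
                      for d in range(n_digits)])

# Nearest-centroid decoding of the held-out trials.
test_X, test_y = X[~train], y[~train]
dists = ((test_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == test_y).mean()
print(f"held-out decoding accuracy: {accuracy:.2f}")
```

Testing on data excluded from training, as the study did with the counting voices, is what shows the decoder has learned something general rather than memorizing its training recordings.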