By monitoring someone’s brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain.

Previous fMRI research has shown that when people speak (or even imagine speaking), telltale patterns of activity appear in the brain. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or when we imagine listening. We previously published an article about similar technology being used in China, but Columbia's tech goes much further.

After early attempts to translate brain activity into recognizable speech failed, the research team turned to a computer algorithm called a vocoder that can synthesize speech after being trained on recordings of people speaking.
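The vocoder used in the study is a learned neural model trained on speech recordings, and its details are not given here. Purely as a loose illustration of the general idea behind any vocoder — resynthesizing audio from a compact per-frame description rather than storing the raw waveform — here is a toy sinusoidal sketch. All names (`synthesize`, the frame format) are hypothetical and not from the study:

```python
import math

def synthesize(frames, frame_len=160, sample_rate=8000):
    """Toy vocoder-style resynthesis (illustrative only, NOT the
    study's method): each frame is a list of (frequency_hz, amplitude)
    pairs, and the output waveform is built by summing sinusoids with
    continuous phase across frames."""
    out = []
    phase = {}  # running phase per oscillator frequency
    for frame in frames:
        for _ in range(frame_len):
            # sum the active oscillators for this sample
            sample = sum(amp * math.sin(phase.get(freq, 0.0))
                         for freq, amp in frame)
            # advance each oscillator's phase by one sample step
            for freq, _amp in frame:
                phase[freq] = (phase.get(freq, 0.0)
                               + 2 * math.pi * freq / sample_rate)
            out.append(sample)
    return out

# Two frames: a quiet 440 Hz tone, then a louder 220 Hz tone.
wave = synthesize([[(440.0, 0.3)], [(220.0, 0.8)]])
```

A neural vocoder replaces the hand-built frame description with features predicted by a trained network — in the Columbia work, features decoded from brain activity.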

“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Nima Mesgarani, PhD, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute.


“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech. This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”