The general process is this: researchers develop software that takes readouts from people’s brains and matches them to words or pictures. Once those mappings exist, future readouts can be read, interpreted and used for various kinds of mind-revealing or mind-control applications.

For example, MIT geniuses have invented a face-mounted device, plus a machine-learning application, that performs real-time speech-to-text conversion — but without the speech part.

I’d better explain that.

Electrodes on the device intercept neuromuscular signals sent by the brain to the face, and the machine-learning application transcribes them into text. It replaces vocalization with “subvocalization,” or “silent speech.”

Researchers use a neural network to match specific neuromuscular signals with specific words. Because each person’s physiology is different, the system must be calibrated for each user; the researchers achieved 92% accuracy after 15 minutes of customization and training.
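To make the idea concrete, here is a minimal sketch of what per-user calibration could look like, assuming the device reduces each subvocalized word to a fixed-length feature vector. The words, feature values and nearest-centroid approach are illustrative assumptions; the actual MIT system uses a neural network over raw neuromuscular signals.

```python
import math

def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {word: [feature_vector, ...]} -> {word: centroid}."""
    return {word: centroid(vecs) for word, vecs in samples.items()}

def classify(model, vec):
    """Return the word whose centroid is nearest (Euclidean) to vec."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda w: dist(model[w], vec))

# Stand-in for the 15-minute calibration session: a few labeled
# readouts per word (all numbers invented for illustration).
calibration = {
    "yes": [[0.9, 0.1], [1.0, 0.2]],
    "no":  [[0.1, 0.8], [0.2, 0.9]],
}
model = train(calibration)
print(classify(model, [0.95, 0.15]))  # -> yes
```

The point of the sketch is the workflow, not the algorithm: collect a small set of labeled signals per user, fit a model to them, then map new signals back to words.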

The device also provides bone conduction output. That means you could make requests of a virtual assistant and get results audible only to you, all without the knowledge of people sitting right in front of you.

This is a surprising use for mind-reading technology, because it doesn’t “read” thoughts in general, only the “instructions” the brain sends to the face to speak (even when no audible or visible speech occurs).

Also, it merely takes an existing behavior — spoken and audible interaction with a virtual assistant — and makes it silent and invisible, thereby increasing the range of situations where one could use a virtual assistant.

Of course, the device itself looks ridiculous. Nobody’s going to wear this in public. What’s important about this research is its proof that subvocalization can be a computer interface.

Scientists at the University of California, San Francisco, have created a mind-reading device that also turns mental activity into text with better than 90% accuracy. But instead of decoding the words a person is subvocalizing, it detects what that person is hearing, from brain activity alone.

The science was a bit gruesome.

The researchers took advantage of a kind of epilepsy treatment whereby electrodes are implanted directly on the surface of the brain. Scientists used those electrodes for a second purpose, which was to monitor brain waves in the auditory cortex. They took that data and used algorithms to decode the specific speech sounds as they were being heard by the subject.

It’s the first step toward creating an externally worn gadget that can be used to convert thoughts to text — either “perceived” or “produced” speech.

Carnegie Mellon University research has found ways to read “complex thoughts” based on brain scans and output text accordingly. The university’s study demonstrated that its A.I. could even predict the next “sentence” in a subject’s thought process.

Even Facebook has a mind-reading project in the works. The social networking company’s secretive Building 8 division is working on a way for users to send Facebook Messenger messages using thoughts alone.

Microsoft, too, has explored using brainwave data to control applications, according to its patent filings. One example is to turn down the volume of music based on the mental activity of being irritated by loud noise. It could be used for any number of Microsoft-related products, from enhancing the accuracy of a mouse to enabling next-level applications in the company’s HoloLens mixed-reality system.

Mind-reading research is also making gains in reading visuals, not just words.

In University of Toronto research, thirteen subjects were shown 140 faces. Electroencephalogram (EEG) readouts were processed by an A.I. algorithm developed by the scientists and produced blurry but recognizable copies of what the subjects were shown.

Researchers are certain they’ll soon be able to re-create faces from memory alone, a feat that has obvious law-enforcement applications.

Researchers at Kyoto University in Japan are working on a neural network system that performs in a similar way to the University of Toronto research. Subjects are shown pictures; then functional magnetic resonance imaging (fMRI) scans plus A.I. estimate what those pictures looked like, based on blood flow in the brain.

Researchers at Purdue University are also reading minds using A.I. and fMRI machines. They showed subjects videos and used A.I. to train their software to predict brain activity in the visual cortex. Over time, they could figure out what the person was looking at based on brain activity alone.
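One plausible way to read the Purdue approach is as decoding by model inversion: an “encoding model,” trained on the videos, predicts visual-cortex activity for each candidate stimulus, and the decoder picks the candidate whose predicted activity best matches what was actually observed. The sketch below illustrates that idea under invented numbers; the weights, stimuli and linear model are assumptions, not the researchers’ actual pipeline.

```python
def predict_activity(stimulus_features, weights):
    """Linear encoding model: predicted response of each voxel."""
    return [sum(w * f for w, f in zip(row, stimulus_features))
            for row in weights]

def decode(observed, candidates, weights):
    """Return the candidate stimulus whose predicted activity
    is closest (squared error) to the observed activity."""
    def err(stim):
        pred = predict_activity(candidates[stim], weights)
        return sum((p - o) ** 2 for p, o in zip(pred, observed))
    return min(candidates, key=err)

weights = [[1.0, 0.0], [0.0, 1.0]]             # 2 voxels x 2 features
candidates = {"face": [1.0, 0.0], "house": [0.0, 1.0]}
print(decode([0.9, 0.1], candidates, weights))  # -> face
```

Note the direction of the trick: the model is trained to go from stimulus to brain activity, and “mind reading” falls out of running that mapping in reverse over a set of candidates.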

A Boston-based startup called Neurable, meanwhile, has demonstrated a VR game controlled through an EEG-equipped headset. As with the MIT technology, Neurable’s game doesn’t read “thoughts,” but instead uses neural activity as commands or instructions.

Looxid Labs, a participant in HTC’s Vive X accelerator program, is building a mobile VR headset with built-in emotion-detection technology that uses both eye tracking and brainwave monitoring.

The company has also developed attachments for the HTC Vive that do the same thing. Developer kits are scheduled for release this summer.

On a more practical level, car giant Nissan revealed its IMx KURO concept car, complete with an EEG headset, at the 2018 Geneva Motor Show.

The system uses monitored brainwaves to speed up the car’s reactions. For example, when it detects that the driver intends to brake, it begins braking even before the driver’s foot reaches the pedal. Nissan claims reaction times can be improved by as much as half a second.
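The decision logic behind such a system can be sketched in a few lines, assuming an upstream EEG classifier that reports a braking-intent confidence. The function name and threshold here are invented for illustration, not Nissan’s implementation.

```python
INTENT_THRESHOLD = 0.8  # assumed confidence cutoff for pre-braking

def should_prebrake(intent_confidence, pedal_pressed):
    """Pre-charge the brakes on a confident intent signal;
    always brake once the pedal actually moves."""
    return pedal_pressed or intent_confidence >= INTENT_THRESHOLD

print(should_prebrake(0.9, False))  # intent detected before the pedal moves
print(should_prebrake(0.3, False))  # weak signal: do nothing
print(should_prebrake(0.1, True))   # pedal pressed: brake regardless
```

The safety-critical design choice is that the brain signal can only add braking earlier, never suppress a real pedal press.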

The best application is for everyday business use

What’s now becoming clear is that the best use of mind-reading A.I. is not stand-alone applications for mind control or telepathy. Instead, it will almost certainly be the enhancement and perfection of existing business applications.

Autocorrect and speech recognition could be improved to something close to 100% accuracy by combining existing technologies with mind-reading applications to understand intent or thought processes.
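One hedged way to picture that combination: blend the text model’s score for each autocorrect candidate with a hypothetical per-word “intent” score from a neural signal. All words, scores and the blending weight below are invented for illustration.

```python
def rerank(candidates, intent_scores, alpha=0.5):
    """candidates: {word: text_model_score}.
    Return the word with the best blend of text-model score
    and (hypothetical) neural-intent score."""
    def blended(word):
        return ((1 - alpha) * candidates[word]
                + alpha * intent_scores.get(word, 0.0))
    return max(candidates, key=blended)

candidates = {"their": 0.6, "there": 0.4}   # text model prefers "their"
intent = {"there": 0.9, "their": 0.1}       # brain signal says "there"
print(rerank(candidates, intent))  # -> there
```

With alpha set to 0, the intent signal is ignored and the text model wins; raising alpha lets the user’s intent override a statistically likelier but wrong correction.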

Instead of the sci-fi scenario of some cyborg brainiac bringing finger to temple and mentally commanding armies of robots, mind-reading applications may simply make everything happen more accurately and automatically.

Lights and sounds may turn up or down based on mental preference at the moment. User interfaces could always do exactly what we want them to.

And our own thoughts might be reflected back at us without manually writing them down and reading what we wrote.

Now that A.I. makes real mind reading possible, it’s time to consider that such applications might be practical, productivity-enhancing and actually enjoyable to use.

Copyright 2018 IDG Communications. All rights reserved.