This is what happened when we tried controlling Google Glass with our mind

Take photos with a bit of brain power


We're thinking about building blocks. One on top of the other, higher and higher and higher. We're thinking about it really hard now. We've just got to make the line in front of us hit the top, then we can relax. Wait a minute - it's doing it, it's getting higher, we're doing it. Wait, no, it's dropped again. Sh*t. We stopped thinking about the blocks.

TechRadar's attempting telekinesis. We've got Google Glass and what appears to be some sort of wireless headset strapped to our face. Right now we look like RoboCop working the phones at his night job.

The application binding the two together is called MindRDR and was developed by London-based company This Place. The bit on our head that isn't Google Glass is a Neurosky EEG sensor, which you can buy for less than a tenth of the cost of Glass.

And the software? That's free to download right now. You'll need a bit of tech know-how to get it working correctly, but if you get to this stage, that probably isn't a concern.

Even Obi Wan had to start somewhere

The device works by detecting brain activity through a sensor that sits in the middle of your forehead. Right now it's only able to detect a binary value: whether you're concentrating hard on something or not.

For the first stage, MindRDR will detect your level of concentration, and when it's high enough it will take a photo - hence the building blocks - while successfully completing a second concentration test will socialise the photo to Twitter.
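The article doesn't detail MindRDR's internals, but the behaviour it describes - poll a single attention value, fire an action when it stays above a threshold - can be sketched roughly like this. The `read_attention` function and the threshold of 80 are our assumptions for illustration, not This Place's actual code (NeuroSky headsets do report an "attention" value on a 0-100 scale):

```python
import time

ATTENTION_THRESHOLD = 80  # NeuroSky reports attention on a 0-100 scale

def read_attention():
    """Hypothetical stand-in for a real EEG read, returning 0-100.

    A real implementation would parse the headset's Bluetooth data
    stream; here we simply simulate a signal that climbs as the
    wearer 'concentrates'.
    """
    read_attention.level = min(100, read_attention.level + 10)
    return read_attention.level

read_attention.level = 0  # simulated starting state

def wait_for_concentration(threshold=ATTENTION_THRESHOLD, timeout=30.0):
    """Poll the attention value until it crosses the threshold or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_attention() >= threshold:
            return True
        time.sleep(0.1)
    return False

# Stage one: concentrate to take a photo.
# Stage two: concentrate again to share it.
if wait_for_concentration():
    print("photo taken")            # a real app would call the Glass camera here
    if wait_for_concentration():
        print("shared to Twitter")  # ...and post the photo here
```

The two-stage flow mirrors what we experienced: raise the bar once to shoot, raise it again to share.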

"We managed to pair [Glass and the sensor] together via Bluetooth and then make the signals that the EEG device reads able to control an action on [Google Glass]," This Place MD Ben Aldred tells TechRadar. "And then built the app on [Glass] to be able to take a photo and then socialise it."

"It's very easy to do," he reassures us at first. "The learning curve is only a few minutes. And what tends to happen is that people use it easily the first time, when they're not sure what to expect. And then everyone finds their own way of concentrating and relaxing."

The blocks don't quite do it for us (nor does thinking about tying our shoelaces, as it turns out) so we try mentally performing a tennis serve. Ding. Success. The bar climbs to the top and a picture is taken. Something's going on upstairs after all.

For the second test we try just focusing on the line and raising it like a Jedi mind trick. It works. And here's the proof.

It's because Google's putting the future of user experience in the hands of people like This Place that apps like MindRDR can exist. The mind-reading on Glass might be rudimentary right now, but this is just stage one.

With more advanced EEG devices, which cost anything up to a thousand pounds, you could map around 18 readings on the brain. "That allows a much wider set of commands, which would involve a bit more of a learning curve. But essentially you could have up, down, left, right, and in the future you could start mapping it to keyboards."
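How those extra readings might become commands isn't specified, but one plausible approach is to record a "template" reading for each thought during a calibration phase, then match live readings against the nearest template. Everything below - the 18-channel templates, the Euclidean nearest-match - is purely illustrative, not This Place's method:

```python
# Hypothetical sketch: map an 18-channel EEG reading to a directional
# command by finding the closest calibration template.

CHANNELS = 18  # the article mentions ~18 readings on higher-end headsets

# Templates a user would record during calibration (made-up values).
templates = {
    "up":    [1.0] * 9 + [0.0] * 9,
    "down":  [0.0] * 9 + [1.0] * 9,
    "left":  [1.0, 0.0] * 9,
    "right": [0.0, 1.0] * 9,
}

def classify(reading):
    """Return the command whose template is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda cmd: dist(templates[cmd], reading))

print(classify([0.9] * 9 + [0.1] * 9))  # a noisy 'up'-like reading
```

Mapping further templates to keys is how you'd get from four directions towards a keyboard, which is the leap Aldred is gesturing at.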

"Firstly, we want to empower other people to get involved with it and use the code and start developing with us," Aldred says. "We're going to continue with it and I think one of the areas in which we're interested is firstly looking at better EEG sensors so we can start taking it a step further. But also we can't help but be excited about making digital open to everybody."

It's a lot of fun, but it's also about making it easier for people to use technology to augment their lives. Glass still has its obstacles, including what Ben refers to as "Glass elbow" - something Google Glass users experience from continually lifting their arm up to swipe the device's touch pad.

Imagine what you could do with 17 more of those sensors

"We thought 'That's not ideal for anybody', but that also takes out a huge proportion of society that may have problems with dexterity or may have some sort of disability."

Using basic mind-reading functionality to take photos or play Pong is a lot of fun, but more significant possibilities for this technology are beginning to emerge.

"[Technology is still] not inclusive of everybody. They all require a high level of dexterity to use, or a high level of speech to use. What we want to do is make technology accessible to everyone. For example, sufferers of locked-in syndrome or Alzheimer's, things like that - could this be a tool that's useful for them? Could they actually start to interact with the power of digital with devices like this?"

"And how are we going to teach people to pick 18 different thoughts in their mind in order to control these? This is why it's so exciting."
