Tag: interface

This is pretty cool I guess. The idea is that your partner “helps you” to play a video game by letting you snog them in different ways (while you’re looking at a computer screen and therefore not really paying attention).

It’s a bit gross, but it’s still a novel idea, so have a look:

What’s the mechanic here?

The Kiss Controller interface has two components: a customized headset that functions as a sensor receiver and a magnet that provides sensor input. The user affixes a magnet to his/her tongue with Fixodent. Magnetic field sensors are attached to the end of the headset and positioned in front of the mouth. As the user moves his/her tongue, the magnetic field varies, and these variations are used to control games.

We demonstrate the Kiss Controller bowling game. One person has a magnet on his/her tongue and the other person wears the headset. While they kiss, the person who has the magnet on his/her tongue controls the direction and speed of the bowling ball for 20 seconds. The goals of this game are to guide the ball so that it maintains an average position in the center of the alley and to increase the speed of the ball by moving the tongue faster while kissing.
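To make the mechanic concrete, here's a minimal sketch of how magnetometer readings might be turned into a (direction, speed) pair for the ball. This is purely illustrative: the function name, axis conventions, and the 50 µT full-deflection threshold are my assumptions, not anything from the actual Kiss Controller.

```python
# Hypothetical sketch: mapping magnetic-field samples to game input.
# Axes, units, and thresholds are assumptions, not from the project.

def ball_control(field_samples, dt):
    """Turn a window of field samples (x, y, z in microtesla) into
    (direction, speed) for the bowling ball.

    direction: -1.0 (full left) .. +1.0 (full right), from the lateral
               field component of the latest sample.
    speed:     average rate of change of field magnitude, so a faster
               tongue means a faster ball.
    """
    # Lateral steering from the most recent x-axis reading,
    # clamped to [-1, 1] (50 uT assumed as full deflection).
    direction = max(-1.0, min(1.0, field_samples[-1][0] / 50.0))

    # Speed from how quickly the field magnitude changes per second.
    def magnitude(s):
        return (s[0]**2 + s[1]**2 + s[2]**2) ** 0.5

    deltas = [abs(magnitude(b) - magnitude(a))
              for a, b in zip(field_samples, field_samples[1:])]
    speed = sum(deltas) / (len(deltas) * dt) if deltas else 0.0
    return direction, speed
```

A tongue held still produces constant readings and therefore zero speed; wagging it changes the field quickly, and the ball speeds up, which matches the game's stated goal.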

And what’s the point?

I literally do not know. If I were the developers I'd have focused on highlighting their innovative use of the tongue as an input device: it's the most dexterous muscle in the body, and its use is often one of the few remaining faculties among paralytics.

Can’t this be a remote control for wheelchairs or similar, rather than a Wii Sports ripoff? Come on guys…

Kurzweil states that there will come a point in human history where computer power matches our own thinking ability, allowing us to interface with machines on a one-to-one basis without the need for a medium.

We’re talking simulations of your mind running on a machine, and the potential to upload information from computers into your mind.
Hot shit, right? Very Johnny Mnemonic.

Anyway, the news just in is that Ray has mapped out the future as follows:

A team from the University of Tokyo have conceived of several new applications for lasers, some of which are interesting to say the least, and others potentially groundbreaking. These applications arise from their Smart Laser Scanner (markerless laser tracking) technology:

Essentially, it is a smart rangefinder scanner that instead of continuously scanning over the full field of view, restricts its scanning area to a very narrow window precisely the size of the target. (From the Ishikawa Komura Laboratory.)

What this means for us: we could pretty soon have a low-cost, low-apparatus way to interface with a wearable computer, with multitouch, and without the need for any markers.
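The narrow-window idea quoted above is the heart of the trick, and it can be sketched in a few lines. The following is my own illustrative toy, not the lab's code: instead of rastering the laser across the whole field of view, you scan only a small window around the target's last known position, then re-center the window wherever the target turns up.

```python
# Hypothetical sketch of narrow-window tracking: scan only a small
# window around the last known target position instead of the full
# field of view. Window size and behavior on a miss are assumptions.

def track(measure, start, window=5, steps=50):
    """measure(x) -> True if the rangefinder sees the target at
    scan position x. Returns the trail of estimated positions."""
    center, trail = start, []
    for _ in range(steps):
        # Probe only positions within `window` of the current
        # estimate, nearest first, rather than the whole view.
        hits = [x for d in range(window + 1)
                for x in (center + d, center - d)
                if measure(x)]
        if hits:
            center = hits[0]  # re-center the window on the target
        # On a miss, keep the window in place; a real system would
        # progressively widen the search to reacquire the target.
        trail.append(center)
    return trail
```

Because each frame touches only a handful of positions near the target, the scanner can update far faster than a full-field raster, which is what makes markerless tracking of a bare fingertip feasible.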

Potential forms of laser input and output

The project website features videos for all of their experiments, including:

Simple 3D tracking

Air writing

Multiple point tracking

Alphanumeric feedback

Video editing

Map navigation

Multiple users

I urge you to read more on the project website right here, but before you go, I’d like to feature one of the coolest applications I found for the Smart Laser Scanner. It’s called Sticky Light, and it’s an experiment in light interaction:

The question I want to ask is, wouldn’t this be the ultimate executive toy if productized in time for Christmas? I know I want one.