Touch screens are great, but what if you want to use any surface as an input? Then you grab the simplest of today’s standard inputs: a computer mouse. But take that one step further and think of the possibilities of using the mouse as a graphic input device in addition to a positional sensor. This concept allows Magic Finger to distinguish between many different materials. It knows the difference between your desk and a piece of paper. Furthermore, it opens the door to data transfer through a code scheme they call a micro matrix. It’s like a super small QR code which is read by the camera in the device.
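To get a rough feel for how a camera could tell a desk from a piece of paper, here's a minimal sketch. This is a toy approach of ours (simple texture statistics plus a nearest-neighbor match), not the actual algorithm from the Magic Finger paper:

```python
import numpy as np

def texture_features(patch):
    """Summarize a grayscale patch with a few simple statistics:
    brightness, contrast, and horizontal/vertical gradient energy."""
    gx = np.diff(patch.astype(float), axis=1)
    gy = np.diff(patch.astype(float), axis=0)
    return np.array([patch.mean(), patch.std(), (gx ** 2).mean(), (gy ** 2).mean()])

def classify(patch, references):
    """Nearest-neighbor match against labeled reference textures."""
    feats = texture_features(patch)
    return min(references, key=lambda name: np.linalg.norm(feats - references[name]))

# Toy textures: a smooth "desk" and a grainy "paper"
rng = np.random.default_rng(0)
desk = np.full((32, 32), 120) + rng.integers(-2, 3, (32, 32))
paper = np.full((32, 32), 200) + rng.integers(-30, 31, (32, 32))
refs = {"desk": texture_features(desk), "paper": texture_features(paper)}

sample = np.full((32, 32), 205) + rng.integers(-30, 31, (32, 32))
print(classify(sample, refs))  # expect "paper"
```

A real system would use richer texture descriptors and a trained classifier, but the pipeline is the same shape: reduce the camera patch to features, then match against known surfaces.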

The concept video found after the break shows off a lot of cool tricks used by the device. Our favorite is the tablet PC controlled by moving your finger on the back side of the device, instead of interrupting your line of sight and leaving fingerprints by touching the screen.

What if you linked the camera image to the display image? I.e. if you put your finger on the screen, the software will match the camera image to what it knows is on the screen and hence determine where on the screen your finger is. Combine that with the tactile sensor and you could turn any screen into a touch screen.
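That matching idea can be sketched with brute-force template matching. This is purely illustrative and far too slow for a full-resolution screen, where you'd want hashing or feature descriptors instead:

```python
import numpy as np

def locate(patch, screen):
    """Slide the camera patch over the screen image and return the
    (row, col) offset with the smallest sum of squared differences."""
    ph, pw = patch.shape
    sh, sw = screen.shape
    best, best_pos = None, None
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            ssd = np.sum((screen[r:r + ph, c:c + pw].astype(float) - patch) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy "screen": random grayscale content; the camera sees an 8x8 crop of it
rng = np.random.default_rng(1)
screen = rng.integers(0, 256, (40, 60))
patch = screen[10:18, 14:22].astype(float)
print(locate(patch, screen))  # (10, 14)
```

In practice the camera image would also be rotated, scaled, and blurred relative to the framebuffer, so the real matching problem is much harder than this exact-crop toy.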

This is a pretty cool device, though their presentation wasn't all that great.
If they can make this thing wireless and small, that would be the key.
This mixed with Google Glass would make for an absolutely awesome combo.

Interesting… the video was a little awkward but probably less so than if I had been the subject. It seems like they took some of the inner workings of an optical mouse and tethered the sensors to the guts…

In reality it is not a bad idea, but at this infant stage it seems as though they have a little way to go before it is more usable.

A device like this could, in essence, make any display a “touch” display, and could be the bridge from desktops to tablets for UIs like Windows 8. I like the concept, but am concerned about the ergonomics.

Is it just me or is the prototype primarily just the guts of an optical mouse strapped on with velcro? I see quite a bit of clever filming to hide the wires on his forearm and the USB cables to the external devices.

I totally get this is a proof of concept type prototype and I think the addition of the camera is brilliant, but why hack a corded mouse when wireless mice are so cheap?

We got our NanEye camera directly from Awaiba. It is indeed used in the video; I guess it's easy to miss because it's so small! We wouldn't be able to recognize the textures or Data Matrix codes with just the optical mouse motion sensor; the resolution is too low.

We were aware of previous mousecam projects. Combining the low-resolution, high-speed mouse sensor with the high-resolution, lower-speed NanEye camera gave us the best of both worlds and opens up new possibilities. If you're interested you can read the scientific research paper; the PDF is here:
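The fast-sensor/slow-camera split they describe could look something like the loop below. The stub sensors and all the names here are made up for illustration; this just shows the fusion pattern of integrating motion every tick while refreshing the surface label only occasionally:

```python
def track(motion_reader, camera_reader, ticks, camera_every=10):
    """Fuse a fast, low-res motion sensor with a slow, high-res camera:
    accumulate (dx, dy) every tick, but only run the expensive camera
    step (texture recognition / Data Matrix decode) every few ticks."""
    x = y = 0
    surface = None
    events = []
    for t in range(ticks):
        dx, dy = motion_reader(t)       # cheap, every tick
        x, y = x + dx, y + dy
        if t % camera_every == 0:
            surface = camera_reader(t)  # expensive, occasional
        events.append((t, x, y, surface))
    return events

# Stub sensors standing in for the real hardware
events = track(lambda t: (1, 0), lambda t: "desk", ticks=25)
print(events[-1])  # (24, 25, 0, 'desk')
```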

Not quite a leather glove – but a wrist band with multiple “fingertips” (plug-ins, from 1 to 5). Ribbon wiring goes across the back of the fingers.
If the wrist band is the cuff of a shirt or jacket, you are well on the way to a wearable computer.

Damn freaky tiny camera…
If I remember right, the optical mouse sensor is a very low-resolution camera (something like 16×16 px) that detects small changes in the surface pattern.

Also, this tiny camera mouse thingy seems a bit of overkill to me. Just imagine how much data the computer has to process from the camera in order to move your pointer around. At 256×256 (64k) RGB colour pixels, an image should occupy around 192KB in memory (raw data), so 30FPS (an average frame rate for smooth moving pictures) gives almost 6MB per second of imagery to be processed, and that is just to move the pointer around. Add the positional sensor mentioned here and you've got even more data. I don't know much about positional sensors, but I'd make a rough guess at 7MB/s for the whole thing.
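Checking that arithmetic (assuming 3 bytes per RGB pixel and no compression):

```python
# Back-of-the-envelope data rate for raw camera frames
width, height, bytes_per_pixel, fps = 256, 256, 3, 30

frame_bytes = width * height * bytes_per_pixel  # 196,608 bytes = 192 KiB
rate = frame_bytes * fps                        # ~5.6 MiB per second

print(f"{frame_bytes / 1024:.0f} KiB per frame")
print(f"{rate / 1024 ** 2:.1f} MiB/s at {fps} fps")
```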

I’m not saying that it’s not an impressive and cool hack, because it is. But thanks, I’m going to stick with my ol’n’trusty (and not spy) mouse.

Optical mouse sensors do all the processing on-chip, and they operate WAY higher than 30 fps. For example, the very old UIC1001 sensor scans at 20K frames per second.
Dumping raw pixel data is just a side effect.
The trick to successful vision apps is to not store video data at all :)
look at http://chipsight.com/
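The on-chip trick (compute a displacement, then throw the frames away) can be sketched with a toy block-matching search. This is not any particular sensor's actual algorithm, just the general idea:

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Mouse-sensor-style motion estimate: find the (dy, dx) shift that
    best aligns two consecutive low-res frames. Only this pair of numbers
    ever needs to leave the chip; the raw pixels are discarded."""
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.sum((shifted.astype(float) - curr) ** 2)
            if best is None or err < best:
                best, best_shift = err, (dy, dx)
    return best_shift

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (16, 16))                  # 16x16, like a mouse sensor
moved = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)  # simulated surface motion
print(estimate_shift(frame, moved))  # (2, -1)
```

Real sensors do this correlation in silicon thousands of times per second, which is exactly why they can run so fast without ever storing video.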