Microsoft shows off their transparent 3D desktop prototype

We think most would agree that the Microsoft Kinect is a miraculous piece of hardware. The affordable availability of a high-quality depth camera sparked a myriad of hacks, and now it seems that same kind of data is making an intriguing 3D display possible.

What you see above is a 3D monitor concept that Microsoft developed. It starts off looking much like a tablet PC, but the screen can be lifted up toward the user, whose arms reach around it to get at the keyboard underneath. There is a depth camera that can see the hands and fingers of the user to allow manipulation of the virtual environment. But that’s only part of the problem: you also need some way to align the user’s eyes with what’s on the screen. They seem to have solved that problem too, using a second depth camera to track the location of the user’s head. This means that you can lean from one side to the other and the perspective of the virtual 3D desktop will change to preserve the apparent distance of each object.
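The head-tracking trick described above is commonly done with an off-axis (asymmetric) projection frustum: as the tracked head moves, the frustum skews so the screen acts like a window into the virtual scene. Here is a minimal sketch of that idea; the screen dimensions, units, and function name are illustrative assumptions, not anything from Microsoft's prototype.

```python
# Sketch: derive an off-axis near-plane frustum from a tracked head
# position. The screen lies in the z = 0 plane, centered at the origin,
# and the head sits at z > 0 in front of it. All values are in meters
# and are assumptions for illustration.

def off_axis_frustum(head, screen_w, screen_h, near):
    """Return (left, right, bottom, top) of the near-plane frustum for a
    head at position (x, y, z) relative to the screen center."""
    x, y, z = head
    scale = near / z  # similar triangles: map screen edges onto the near plane
    left   = (-screen_w / 2 - x) * scale
    right  = ( screen_w / 2 - x) * scale
    bottom = (-screen_h / 2 - y) * scale
    top    = ( screen_h / 2 - y) * scale
    return left, right, bottom, top

# Head centered, 0.5 m from a 0.4 m x 0.3 m screen: symmetric frustum.
print(off_axis_frustum((0.0, 0.0, 0.5), 0.4, 0.3, 0.1))
# Head leaning 0.1 m to the right: the frustum skews to the left,
# which is exactly the perspective shift the article describes.
print(off_axis_frustum((0.1, 0.0, 0.5), 0.4, 0.3, 0.1))
```

The four returned values are the same parameters a `glFrustum`-style projection call expects, so a tracked head position can drive the render loop directly.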

Don’t miss the show-and-tell video after the break. As long as there’s only one viewer this looks like a perfect non-glasses alternative to current 3D hardware offerings.

So, why hasn’t anyone made an LCD monitor that, in addition to the LCD panel and backlight, contains a backplate made from a “1-pixel LCD panel”? One that could be entirely black for a normal, opaque screen, totally clear for a transparent screen, and anything in between at the touch of a button.

Head tracking cannot and will not be able to replace stereography, as the two serve entirely separate purposes. Stereography gets a separate image to each eye, because most people (those who aren’t depth-blind) get a lot of depth information from triangulating what both of their eyes see. Head tracking solves the other half of the problem, which is parallax-based depth sensing, where the brain determines distance based on the relative speed of objects as they move. Head tracking also allows the viewer to decide how to view the scene in a natural way, just by moving their head. Basically, you either need both head tracking and stereography, or you would need to use a hologram-like display, where the display’s light has a different value depending on the angle it is seen from (and fine enough that each eye sees the correct view).
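The commenter's distinction can be made concrete with a little geometry: a virtual point lying exactly in the screen plane projects to the same spot for both eyes (zero disparity), while a point "behind" the screen projects to two different spots, and no amount of head tracking alone can supply that difference. A minimal sketch, assuming an average eye spacing and a screen in the z = 0 plane (all values illustrative):

```python
# Sketch: binocular disparity of a virtual point, seen from two eye
# positions split from one tracked head position. The IPD value and
# scene coordinates are assumptions for illustration.

IPD = 0.063  # assumed interpupillary distance, meters

def project_to_screen(eye, point):
    """Intersect the ray from `eye` through `point` with the screen
    plane z = 0; return the (x, y) hit position."""
    ex, ey, ez = eye
    px, py, pz = point
    t = ez / (ez - pz)  # ray parameter where z crosses 0
    return ex + t * (px - ex), ey + t * (py - ey)

head = (0.0, 0.0, 0.5)                        # tracked head, 0.5 m out
left_eye  = (head[0] - IPD / 2, head[1], head[2])
right_eye = (head[0] + IPD / 2, head[1], head[2])

on_screen = (0.05, 0.0, 0.0)    # point lying in the screen plane
behind    = (0.05, 0.0, -0.25)  # point 25 cm "inside" the display

# Identical for both eyes: zero disparity at the screen plane.
print(project_to_screen(left_eye, on_screen), project_to_screen(right_eye, on_screen))
# Different for each eye: the disparity a stereo display must render.
print(project_to_screen(left_eye, behind), project_to_screen(right_eye, behind))
```

Head tracking moves `head` (and with it both eyes together), giving motion parallax; stereography renders the two per-eye views separately, giving disparity. That is the sense in which the two are complementary rather than interchangeable.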

My girlfriend is a vision scientist, and the marketplace’s lack of fundamental understanding of human vision and cognition when it comes to display/interaction technology makes her crazy.

OK, I’ll bite (ignoring that Faraday cages exist). If 3″ of tinfoil is needed to block WiFi (“Internet-driven mind control signals”), how much lead, or more importantly how much U-238, would I need?

Anyhow, seriously, I’d be more worried about hackers taking control of more common devices (PC webcams, Xbox Kinect, PS3 PlayStation Eye). Once the legion of bots is talking, use image recognition software (ideally at the data source) to scan millions of homes for shopping lists of items and generate an auto-route for the most efficient pickup.

With all the data leaked to the internet over the last several years, tying a person to a physical address (from credit card/delivery details) has become much, much easier. When gangland meets cyberland, scary things COULD happen.

I still want my DAMN heads up display in my eye glasses, without some stupid monstrosity attached to them.

OLED technology works, but nobody has this technology in the US. You can get it in tiny little war-torn Israel, though.

The demand for the technology is there, but nobody is making it. I want to walk into any sports bar and watch whatever TV channel I want, right in front of me. I want to have a map and compass overlay on my paintball mask.

Amen to that!
I remember the contact lens display on here a while back; the biggest challenge is putting a microlens on the LED to focus it directly onto the retina. Heck, the eye can even handle lots of little blocked spots in its field of vision and still see clearly.