If you combine that with speech or gesture recognition, it leads to a technological approach that could be safer and more ubiquitous than what’s been done before. Naturally, there are some people who think that these displays are risky in certain circumstances.

Even as access to networking and computing permeates more of our business and personal lives, the display has been one dimension holding back adoption in many domains. I can easily see a mechanic, or anyone else whose hands are typically busy with work, using techniques like this to reference manuals… and facilitate decisions. If these techniques can be applied in a transparent and effective way, they could lead to the one display that is used by all the devices around us.

It makes me wonder how applications would change if this were available. What new business solutions would be possible?

I recently saw a story about the Omni, a new virtual reality gaming device that launched a funding campaign on Kickstarter.

It is a platform with a low-friction, grooved base that allows users to walk or run in place. That movement translates directly into any keyboard-compatible game, allowing for an even more natural interface. It can be used with head-mounted displays like the Oculus Rift and motion sensors like the Xbox Kinect to bring a very high level of realism to virtual reality.
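The "keyboard-compatible" design is what makes the device broadly useful: if the platform emits ordinary key events, existing games work unmodified. Here is a minimal sketch of how detected foot steps could drive a "walk forward" key – the logic and names are hypothetical illustrations, not how the Omni actually works:

```python
def steps_to_key_events(step_times_ms, release_after_ms=500):
    """Turn detected foot-step timestamps (milliseconds) into
    press/release events for a 'walk forward' key. Hypothetical
    logic: hold the key down while steps keep arriving, and
    release it once stepping pauses for release_after_ms."""
    events = []
    key_down = False
    last = None
    for t in step_times_ms:
        if key_down and t - last > release_after_ms:
            # The player stopped walking: release the key at the timeout.
            events.append(("release", last + release_after_ms))
            key_down = False
        if not key_down:
            events.append(("press", t))
            key_down = True
        last = t
    if key_down:
        events.append(("release", last + release_after_ms))
    return events

# Four steps: a short burst of walking, a pause, then one more step.
print(steps_to_key_events([0, 300, 600, 2000]))
# [('press', 0), ('release', 1100), ('press', 2000), ('release', 2500)]
```

Because the output is plain key events, the game on the other end needs no special driver – which is exactly why a device like this can claim compatibility with any keyboard-controlled title.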

There are also health benefits: you’re not just sitting while you play – you actually need to move large muscle groups to play the game.

As I looked at the capabilities, I couldn’t help but wonder about its application in the business environment. Maybe not for the knowledge worker (although thinking about that may be innovative), but for training and orientation. Let’s say you are a telecom worker who goes into the field and makes adjustments at a communications center – it may help to know what it will look like when you get there.

A research group was able to trick people who were eating a plain cookie into thinking the cookie was whatever flavor they selected. The group exploits the fact that taste is affected by what we see, hear, and smell, as well as by the texture of the food, among other things.

"We are using the influences of these sensory modalities to create a pseudo-gustatory display," says Takuji Narumi, an assistant professor at the University of Tokyo. "The aim is to have subjects experience different tastes through augmented reality by only changing the visual and olfactory stimuli they receive."

I can’t think of any business applications right now, but it does make me wonder about other uses.

Both articles show how consumerization is entering the retail space in a big way, with the Microsoft Kinect spurring the imagination far outside the gaming community. There is a great example from FaceCake, which virtualizes the “dressing room” experience in a way that speeds up the shopping process. What I found interesting is that most of the technology shown in the video is actually more likely to be found at home than at a retailer – possibly changing the definition of what shopping means.

In a similar vein, there was a demonstration by Tissot at Harrods that used virtual reality techniques to supplement even the window-shopping experience.

One of the technologies I was able to embrace last week when I was at HP Labs was a wall-size display – the one I saw was probably 15 feet long, but there are installations much larger. It was running at multiples of high-definition resolution. This comes from the team researching the mobile and immersive experience of the future.

This technology was applied at CES earlier this year to present a full-size 3D display of Earth, Wind & Fire. I had a chance to see that video in the lab, and it was strange to walk right up and almost step into a life-size 3D display.

It is clear that 3D sensing and display technologies can change retail going forward.

One of the problems with virtual reality is that it is so, well, virtual...but maybe not for long. Researchers at the Computer Vision Lab at ETH Zurich have developed a method to produce virtual copies of real objects that can be touched and sent via the Internet. This article talks about the efforts to create virtual reality you can touch.

To accomplish this, they’ve used a 3D scanner to record the image and dimensions of the object. Next, a probe with force, acceleration, and slip sensors collects information about the object’s shape and solidity, and a model is created on the computer. The model can then be displayed remotely, allowing a user to sense the object through a haptic (touch) device while viewing it with 3D glasses.
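A common way to render the “solidity” part of such a model is penalty-based force feedback: when the haptic pen penetrates the virtual surface, the system pushes back with a spring force scaled by the stiffness the probe recorded. Here is a minimal sketch under that assumption – the data layout and names are hypothetical illustrations, not the ETH Zurich group’s actual code:

```python
from dataclasses import dataclass

@dataclass
class SurfaceSample:
    # Material properties the sensing probe might record at one point
    stiffness: float  # N/m, derived from the force sensor
    friction: float   # coefficient, derived from the slip sensor

def feedback_force_newtons(penetration_m: float, sample: SurfaceSample) -> float:
    """Penalty-based haptic rendering: the deeper the haptic device
    sits inside the virtual surface, the harder the surface pushes
    back (a simple Hooke's-law spring)."""
    if penetration_m <= 0:
        return 0.0  # device is outside the object: no contact force
    return sample.stiffness * penetration_m

# A fairly stiff surface with the pen pressed 2 mm into it:
wood = SurfaceSample(stiffness=800.0, friction=0.4)
print(feedback_force_newtons(0.002, wood))
```

The appeal of this kind of model is that it is just data – a mesh plus per-point material properties – which is why the researchers can send a touchable object over the Internet rather than shipping the object itself.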

Not sure what the business implications will be, but it does make for some interesting remote-collaboration possibilities. Taking these approaches from a haptic pen to gloves or other more intuitive interfaces would definitely make the experience more user-friendly.

Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.