Microsoft reports via its own blog that scientists from its Natural Interaction Research Group recently gave the first public demonstration of a rather incredible haptic monitor they'd developed at the TechFest 2013 event. Mounted on a robot arm and working interactively with multitouch technology, the monitor is anything but conventional. Yet it may demonstrate one way we'll interact with our hardware in the future.

The monitor, in combination with subtle movements of the robot arm that supports it, taps into our human kinesthetic haptic sense rather than just simple tactile sensation: it can push back on the user's hands dynamically as they interact with the touchscreen. In the demonstration, for example, Microsoft showed how a user could push "into" the display's virtual space until their finger encountered an object. The display then pushed back on the user's finger in a manner commensurate with the object's material: a stone box feels smooth and heavy, and so on.
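Microsoft hasn't published its control code, but push-back that varies with a virtual material is commonly modeled in haptics as a spring-damper: resistance grows with how far the finger has pressed past the object's surface, scaled by per-material stiffness. A minimal sketch of that idea; the material names and constants below are illustrative assumptions, not values from the demo:

```python
# Hypothetical material-dependent force feedback using a spring-damper
# model (a standard technique in haptics); constants are illustrative.

MATERIALS = {
    # material: (stiffness N/m, damping N*s/m)
    "sponge": (50.0, 2.0),
    "wood":   (800.0, 10.0),
    "stone":  (3000.0, 25.0),
}

def feedback_force(material, penetration_m, velocity_m_s):
    """Force (N) the robot arm should push back with when the finger
    has pressed `penetration_m` metres past a virtual surface."""
    if penetration_m <= 0:
        return 0.0  # finger hasn't reached the object yet
    stiffness, damping = MATERIALS[material]
    force = stiffness * penetration_m + damping * velocity_m_s
    return max(force, 0.0)  # never pull the user inward
```

Pressing one centimetre into "stone" then resists sixty times harder than the same press into "sponge", which is roughly the kind of material contrast the demo describes.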

Most interestingly, the display's robot arm moves along only a single axis, toward and away from the user, yet it still delivers a compelling sensation of interacting with 3-D objects, even though the monitor itself is an ordinary 2-D display. The effect could have important uses in fields like medical imaging, design, and gaming.
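One way a single axis of travel can convey 3-D shape is to treat the on-screen scene as a depth map: as the finger slides across the screen, the arm repositions the whole display to match the depth of whatever surface lies under the touch point. This is a speculative sketch of that mapping, with assumed names and a made-up travel range, not Microsoft's actual implementation:

```python
# Hypothetical single-axis rendering of 3-D shape from a 2-D touch point.
# The scene is stored as a depth map (0.0 = far plane, 1.0 = nearest);
# nearer surfaces push the screen out toward the user.

def arm_target_offset(depth_map, x, y, max_travel_m=0.10):
    """Return how far (metres) the arm should hold the screen from the
    user-facing end of its travel for a touch at pixel (x, y)."""
    depth = depth_map[y][x]                # rows are y, columns are x
    return max_travel_m * (1.0 - depth)    # near surface -> small offset
```

Sliding a finger over a bump in the depth map would then make the screen rise and fall under the hand, which the brain reads as surface relief despite the flat image.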

Is this the future of every computer display? Not necessarily. Haptics is a growing field, and more sophisticated force-feedback devices are continually being integrated into smartphones and game controllers, for example. But what Microsoft's research suggests is that the coding burden for some truly impressive haptic interactions may prove smaller than expected: the human brain can do much of the processing on behalf of the developer.