A lengthy post on the Valve Blog written by software engineer Michael Abrash discusses his work on creating a viable virtual reality headset for gaming. He goes into detail on the issues involved in reducing the latency in such devices to practical levels, saying: "If you ever thought that AR/VR was just a simple matter of showing an image on the inside of glasses or goggles, I hope that by this point in the blog it's become clear just how complex and subtle it is to present convincing virtual images..." He discusses some of the breakthroughs they have made so far, but admits "we've only scratched the surface," saying they are now in need of "a true Kobayashi Maru moment" to break through the final physical barriers preventing practical VR. Thanks Polygon/VG247.


HorrorScope wrote on Jan 2, 2013, 18:01: 3D will most likely just come along for the ride, since it makes a lot of sense tacking it on with two separate LCDs for perfect convergence.

I don't know a lot about it so I'm curious: how do VR glasses fix the convergence problem? My understanding is that convergence is where your eyes are pointed, and that naturally changes based on the distance to the object you're focusing on (more or less cross-eyed). But even with VR glasses, the screens are still a fixed distance from your eyes.

Would they need eye tracking to allow your eyes to converge on any object in the scene?
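For readers unfamiliar with the geometry behind this question, a rough sketch may help: the vergence angle between the two eyes' lines of sight depends on interpupillary distance (IPD) and the distance to the fixated object, so near objects demand a much larger angle than far ones. This is a simplified, symmetric-case illustration (the object straight ahead), not anything from the Valve post itself; the function name and the 63 mm IPD figure are assumptions for the example.

```python
import math

def vergence_angle_deg(ipd_m, distance_m):
    # Angle between the two eyes' lines of sight when both fixate
    # a point straight ahead at distance_m (simple symmetric case).
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

IPD = 0.063  # a typical adult interpupillary distance, in meters

near = vergence_angle_deg(IPD, 0.5)   # object half a meter away
far = vergence_angle_deg(IPD, 10.0)   # object ten meters away
print(f"near: {near:.2f} deg, far: {far:.2f} deg")
```

The several-degree swing between near and far fixation is what a fixed pair of screens cannot reproduce on its own, which is why eye tracking (or adjustable optics) comes up in this discussion.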