How fast does “virtual reality” have to be to look like “actual reality”?

Low latency is important to an effective VR display but might not be everything.

For decades now, virtual reality has been a pipe dream concept, well ahead of the technology needed to realize it. Generating a convincing 3D world that precisely and instantly matches the head-tracked position of a player's gaze was well beyond the headsets that proliferated in research centers and on the market up through the '90s. Only recently have products like Sony's prototype gaming headset and the upcoming Oculus Rift made serious attempts at believable virtual reality using modern head-tracking and display technology.

But there are some who think the technology in these systems still hasn't been developed far enough to create a truly believable, head-tracked virtual reality. Valve's Michael Abrash laid out this case in a detailed blog post last weekend, suggesting that VR headsets need a "Kobayashi Maru moment" to solve the inherent problem of display latency that plagues current and upcoming headsets.

Current non-VR games usually bottom out at about 50 milliseconds (ms) of latency between a controller input and the time the pixels actually update. That's more than fine when viewing an image on a stationary screen, Abrash says, but VR systems need much better latency in order to trick the brain into thinking it's looking at a virtual world that completely surrounds the player wherever he or she looks. "The key to this is that virtual objects have to stay in very nearly the same perceived real-world locations as you move; that is, they have to register as being in almost exactly the right position all the time," Abrash writes. "Being right 99 percent of the time is no good, because the occasional mis-registration is precisely the sort of thing your visual system is designed to detect, and will stick out like a sore thumb."

To be nearly indistinguishable from reality, Abrash says a VR system should ideally have a delay of 15ms or even 7ms between the time a player moves their head and the time the player sees a new, corrected view of the scene. The Oculus Rift can achieve latency of about 30 or 40 milliseconds under perfectly optimized conditions, according to creator Palmer Luckey (this doesn't take into account the added delay inherent in the physical display itself; more on that later). While Luckey acknowledges that this is slower than the "real world" modeling ideal, he says he thinks the Rift is more than capable of creating a convincing virtual world.
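To see how those milliseconds add up, here is a rough back-of-the-envelope sketch in Python. Every component figure in it is an illustrative assumption, not a measurement of the Rift or any other headset.

    # Rough motion-to-photon latency budget for a head-mounted display.
    # Every figure below is an illustrative assumption, not a measurement.
    budget_ms = {
        "tracker sampling + sensor fusion": 4.0,
        "transport to host (e.g. USB)": 2.0,
        "simulation + rendering (one 60 fps frame)": 16.7,
        "display scan-out (one 60 Hz refresh)": 15.0,
        "pixel switching (assumed LCD response)": 5.0,
    }

    for stage, ms in budget_ms.items():
        print(f"{stage:44s} {ms:5.1f} ms")
    print(f"{'total motion-to-photon':44s} {sum(budget_ms.values()):5.1f} ms")
    # Roughly 40+ ms in total, which is why hitting 7-15 ms means changing
    # the hardware rules rather than just optimizing software.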

"The Rift developer kit has received a lot of positive feedback from those who’ve tried it, but there’s no denying we’re still a ways away from perfect VR," Luckey told Ars. "It's a difficult question, because 'convincing virtual reality' is very subjective... You can be very convincing without necessarily being indistinguishable from reality."

That certainly describes my experience with a prototype of the Oculus Rift at the Penny Arcade Expo in September. To me, the delay between my head movements and the on-screen response was practically unnoticeable, and the tracking was much smoother than on any other VR headset I had tried before. I could tell I was looking at a screen, obviously, but it wasn't the kind of jarring, "which way am I facing" experience of some other VR systems. I did get a little nauseous during the experience, but that was more from using the controller to turn my view without moving my head than from any delay in the virtual world I was tilting my head in.

Physical limits

Luckey says he and his team have been doing everything they can to get that latency number down, including creating their own head tracker that works more quickly than prepackaged solutions. But even if the Rift software could generate and transmit a perfectly aligned 3D perspective instantaneously, there's a significant bit of "motion-to-photons" latency introduced by the refresh rate of the display.

The standard 60Hz refresh rate of most phone-sized LCD panels (like the ones used on the Rift) is perfectly fine for your iPhone, but it introduces about 15ms of extra delay as the image is drawn pixel by pixel in front of your eye. "It's actually more complicated though, because the image is drawn line by line, meaning pixels on the bottom of the display begin switching before the entire image is drawn on to the screen," Luckey says. "On top of that, a pixel does not have to completely switch for motion to be perceived; you can see motion even if the pixels are in the process of switching."
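To put rough numbers on that scan-out delay, here is a small illustrative calculation; the 800-line panel height is an assumption, and only the refresh-rate arithmetic matters.

    # A line-by-line display finishes drawing row r roughly
    # (r / total_rows) * frame_time after scan-out begins.
    def scanout_delay_ms(row, rows=800, refresh_hz=60):
        frame_time_ms = 1000.0 / refresh_hz
        return (row / rows) * frame_time_ms

    for hz in (60, 120, 240):
        print(f"{hz:3d} Hz: bottom-of-screen delay ~{scanout_delay_ms(800, 800, hz):4.1f} ms")
    # 60 Hz -> ~16.7 ms, 120 Hz -> ~8.3 ms, 240 Hz -> ~4.2 ms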

Abrash suggests that using 120Hz or even 240Hz displays would help a VR system get down to that holy grail of a 7ms delay, and increased scan-out speeds could help even further. Luckey agrees that a better refresh rate on standard, low-cost displays would help matters, but he says that features like higher resolution and better positional tracking would help even more. "Luckily for the VR community, the massive mobile phone market continues to help us solve many of these challenges," he said.

One way to short-circuit that kind of inherent hardware delay is predictive head tracking. By guessing which way a player is going to move and pre-rendering the correct display for that view, the apparent latency could be cut down drastically.

Luckey says the Rift team has looked into this potential solution, but he says it's "no silver bullet." While it's often relatively easy to guess which way a goal-oriented gamer will want to look next, things "[become] especially tricky when trying to predict when the player will stop moving." In some cases, using predictive tracking can "actually be worse than no prediction at all," he added.
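To see why stopping is the hard case, consider a minimal constant-velocity predictor. This is an illustrative sketch, not the Rift's actual tracking code, and the numbers are made up.

    # Extrapolate head yaw assuming the current angular rate continues.
    def predict_yaw(yaw_deg, yaw_rate_dps, lookahead_s):
        return yaw_deg + yaw_rate_dps * lookahead_s

    LOOKAHEAD = 0.040  # assume ~40 ms of latency to hide

    # Mid-turn at 100 deg/s, the prediction lands almost exactly on target.
    predicted = predict_yaw(30.0, 100.0, LOOKAHEAD)  # 34.0 deg, matches reality

    # But if the head stops dead at 30 deg while the rate estimate is still
    # a stale 100 deg/s, the same call predicts 34.0 deg: a 4-degree
    # overshoot that visibly snaps back a frame later -- the "worse than
    # no prediction at all" case Luckey describes.
    print(predict_yaw(30.0, 100.0, LOOKAHEAD) - 30.0)  # 4.0 deg of error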

Valve has long been rumored to be working on its own virtual or augmented reality headset, and Abrash didn't respond to a request for comment on the development of that potential project. But his blog post suggests that he sees the journey to the perfect VR headset as just beginning.

"It's my hope that if the VR market takes off in the wake of the Rift's launch, the day when display latency comes down will be near at hand," Abrash wrote.

Kyle Orland
Kyle is the Senior Gaming Editor at Ars Technica, specializing in video game hardware and software. He has journalism and computer science degrees from the University of Maryland. He is based in the Washington, DC area. Email: kyle.orland@arstechnica.com / Twitter: @KyleOrl

The immersion issues with current VR headsets such as the Oculus are, as Chris Roberts of Star Citizen has hinted, not just a matter of convincing visuals but also of physical interactions, which depend a lot on the game.

The Oculus Rift supports Doom 3 BFG Edition, but there is still a layer of immersion lacking. You don't walk around the way the person in an FPS game does, even if you can somehow find a realistic gun-like controller mod.

Chris Roberts was quoted on the Oculus blog:

Quote:

As cool as Doom 3 was, it's not going to compare to Star Citizen / Squadron 42 as we're pretty much the perfect kind of game for the Oculus. Sitting in a chair looking around is exactly what you do in a cockpit, so the Rift is just going to feel natural.

There remain hard-to-solve ultimate goals in force feedback, the sense of touch, smell, and so on. But are these issues important? The human mind has a fascinating ability to fill in the gaps: when you read a book, you can use your imagination to live in its world, and the missing senses don't seem to bother many bookworms.

On the topic of Oculus Rift, I wonder if stereoscopic use of it is still prone to inducing migraines:

The Oculus performs head tracking, not eye tracking. When I went to see the movie Avatar, I got a headache because the focus and depth of field of objects were prescribed by the movie's director. When I looked to the side of the table where the movie character wasn't looking, it was all blurry and out of focus, and my eyes frantically tried to adjust their lenses and compensate, but of course they couldn't; it was a baked-in effect. I was forced to look at whatever the movie character was looking at, where the objects are "in focus." That was the primary problem for me.

I can see how this problem will still exist for the Oculus in 3D games. It can be mitigated somewhat with head tracking, if the game changes the depth-of-field effect according to whatever object you've centered your head on, but you still have to keep your eyes aligned with your head; that is, you can't glance elsewhere without possibly seeing migraine-inducing out-of-focus objects as your eyes try and fail to compensate with their own biological lenses.

I take it the author has never seen Star Trek? A "Kobayashi Maru moment" would be one involving cheating.

Except that Kirk didn't cheat. He changed the rules. That is one of the best things any commander can do when in a combat situation.

Indeed. Abrash meant it in the sense of "changing the rules" of VR.

As an engineer, I can tell you I love to "cheat." If I can find a solution to an engineering problem that works, I don't really care whether that solution follows what I thought were the rules. In that respect, pre-computing is something of a cheat.

I have to agree with some of the others that the Star Trek reference felt like it came out of left field **within the context of this article**. It seems like sort of a misguided "geek cred" reference on the part of this article's author.

After referring to the blog post, I can see the relevance. I think that the article should have either omitted it or expanded on it with rather more than a "click the link, do a text search and then realize I'm not making stuff up, and that it actually fits."

Seems like "VR" headsets are getting close to tipping - it's like all those smartphones with big displays pre-iPhone: neat, useful even, but too kludgy for mass adoption. I think once the features talked about (faster displays / readouts) happen at a cheap cost & low weight we'll see quite a boom in wearable displays.

But ... that's the kind of obvious statement I should be making hundreds of thousands of dollars a year for as a "tech analyst."

I wonder if it would be possible to have the device take in a larger-FOV image in advance and then calculate the appropriate subimage in hardware? It would only reduce the latency of head movement, but I think that's probably the most important factor (since other actions use traditional controls at the moment, I'd imagine we can tolerate more delay there).
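That idea can be sketched as a simple planar crop: render a wider field of view than the headset shows, then pick the sub-image matching the newest head yaw just before display. The flat 2D crop and all the numbers here are simplifying assumptions; a real system would re-project in 3D.

    # Pick the visible window out of an over-rendered frame based on how
    # far the head has turned since the frame was generated.
    def crop_for_yaw(frame_px, fov_deg, visible_fov_deg, yaw_offset_deg):
        px_per_deg = frame_px / fov_deg
        center = frame_px / 2 + yaw_offset_deg * px_per_deg
        half = visible_fov_deg * px_per_deg / 2
        return int(center - half), int(center + half)

    # 2560-px-wide frame covering 130 degrees; headset shows 90 degrees;
    # head has turned 5 degrees since render:
    print(crop_for_yaw(2560, 130.0, 90.0, 5.0))  # -> (492, 2264)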

I didn't quite understand the premise of the article in relation to its subtitle. Nothing in the article addresses the human physics involved or tackles the latency question directly. I should have prefaced this: human physics, in this context, is a relatively new specialty field that deals primarily with human kinematics, perceptual optics, and haptics.

Michael Abrash's assertion of 7-15 ms of latency as the ultimate goal for VR makes sense from a human-physics standpoint, given how our eyes (the fovea in particular) and mind interact. Microsoft actually has some interesting research here, and it is why the sampling rate for digitizer pens on most Windows XP-7 tablet devices is 140 Hz (~7 ms per sample). Use a higher-latency capacitive digitizer (ignoring the resolution issues) and the lag makes inking annoying at best, infuriating at worst. The same perceptual issues would certainly apply in a VR context.

The idea of predictive rendering is a rather obvious choice; in fact, to be truly effective, sufficient GPU horsepower should be leveraged to produce an almost rainbow-table-like set of the next several possible frames. With enough pre-rendered frames in VRAM, it becomes a simple matter of shoving the right one, based on movement input, into the display buffer (I'm making this sound computationally much easier than it is).
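A toy version of that lookup, with yaw-only candidate frames as a simplifying assumption, might look like this:

    # Pre-render candidate views around the current pose, then ship
    # whichever is closest to the pose measured at the last moment.
    candidate_frames = {yaw: f"frame_at_{yaw}deg" for yaw in range(-10, 11, 2)}

    def pick_frame(measured_yaw_deg):
        nearest = min(candidate_frames, key=lambda y: abs(y - measured_yaw_deg))
        return candidate_frames[nearest]

    print(pick_frame(3.2))  # -> 'frame_at_4deg'
    # The part glossed over, as the commenter notes, is rendering all the
    # candidates within a single frame budget in the first place.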

The real game changer would be a display technology with true depth, at least such that the human eye can change its depth of focus and believe the distance at which an object sits from the viewer. We gauge distance to a target both by the stereoscopic alignment of an object and by a feedback loop of lens focus and relative object positions. This is why you do not completely lose depth perception with one eye closed (though it is severely impaired).

A display technology similar to the inverse of the way the Lytro camera works would make more sense for immersion in VR. In principle, the technology would need to beam light fields onto the retina rather than use static planar displays. The nanotechnology isn't there yet; the closest I can think of would be something DLP-like, with adjustable-focus micromirrors instead of just a flutter frequency to meter the volume of photons projected.

On the topic of Oculus Rift, I wonder if stereoscopic use of it is still prone to inducing migraines:

The Oculus performs head tracking, not eye tracking. When I went to see the movie Avatar, I got a headache because the focus and depth of field of objects were prescribed by the movie's director. When I looked to the side of the table where the movie character wasn't looking, it was all blurry and out of focus, and my eyes frantically tried to adjust their lenses and compensate, but of course they couldn't; it was a baked-in effect. I was forced to look at whatever the movie character was looking at, where the objects are "in focus." That was the primary problem for me.

I can see how this problem will still exist for the Oculus in 3D games. It can be mitigated somewhat with head tracking, if the game changes the depth-of-field effect according to whatever object you've centered your head on, but you still have to keep your eyes aligned with your head; that is, you can't glance elsewhere without possibly seeing migraine-inducing out-of-focus objects as your eyes try and fail to compensate with their own biological lenses.

I'm curious, and I apologize if my logic is flawed. Since the Oculus has such a large field of view, and most 3D games can be rendered sharply everywhere (unlike films, which are bound by lens focus), couldn't we ignore this to some extent and just let our eyes naturally blur whatever we aren't looking at? I think I can already see some flaws in my logic, though I don't know how to express them.

I seem to be somewhat unique in that I see VR headsets less as a tool for gaming, especially FPSes and other first-person games, than for other uses.

Regarding first-person gaming, "real" VR (to me) would require an environment other than standing at my desk (or sitting in a chair at it). You'd need some kind of mechanism to "run" without actually changing your location, you'd need game-appropriate controllers, and so on. Now, I think I remember seeing some guys working on a weird 2D treadmill thing years ago, and there's always the idea of a hamster ball, but both were 3-5 years out, probably 3-5 years ago.

For various non-first-person games, I actually see it being rather easier to implement decently. Imagine StarCraft if you could look around the map just by moving your head, with a static HUD.

Which brings me to a use I can actually see being emphasized first: virtual displays. Assuming no focal issues causing headaches, I can see sitting at a desk, putting on a set of VR glasses, and then placing windows in virtual 3D space. You like having multiple displays? Imagine what amounts to infinite displays. Pin some to your FOV (like email, with a "systray" icon that lurks like a game HUD), pin some to virtual locations (look up and to the left to check network status), and so on.
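For what it's worth, the difference between the two pinning modes described above boils down to a single transform. Here is a yaw-only sketch with made-up numbers:

    # A head-locked panel rides along with your gaze; a world-locked panel
    # counter-rotates by the head yaw so it stays put in the room.
    def panel_screen_yaw(panel_yaw_deg, head_yaw_deg, head_locked):
        if head_locked:
            return panel_yaw_deg
        return panel_yaw_deg - head_yaw_deg

    head = 40.0  # you've turned your head 40 degrees
    print(panel_screen_yaw(10.0, head, head_locked=True))   # 10.0: email icon still in view
    print(panel_screen_yaw(45.0, head, head_locked=False))  #  5.0: network panel now near center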

- A keyboard + mouse offers speed and precision, but if you want to turn and look over your shoulder while still using the controls, it'd obviously be very awkward, and some rotations might be hard while sitting.
- Using a standard console controller lets you stand and easily move the controller around, but it's obviously very slow and imprecise for things like aiming compared to a mouse. You could end up with a weird disparity between a very intuitive and fast system to look around, and a very unintuitive and clunky system to aim and interact with the world.

I have to agree with some of the others that the Star Trek reference felt like it came out of left field **within the context of this article**. It seems like sort of a misguided "geek cred" reference on the part of this article's author.

After referring to the blog post, I can see the relevance. I think that the article should have either omitted it or expanded on it with rather more than a "click the link, do a text search and then realize I'm not making stuff up, and that it actually fits."

Yeah, I can certainly see the reference as well in the original blog post. I'm not going to speculate on either author's motives for using it, but the reference was not placed in context well enough in the Ars article (as is evident from the number of "huh?" moments in the comments). Here, Kyle frames the discussion as being about further development of the technology. However, Michael Abrash's blog post wasn't so much about advances in technology as about taking the existing tech and doing something clever to beat the system. He essentially means thinking outside the box.

Here's one of the direct references:

Quote:

The other interesting aspect is that everyone knew that there was a speed-of-light limit on 256-color performance on the VGA – and then Mode X made it possible to go faster than that limit by changing the hardware rules. You might think of Mode X as a Kobayashi Maru mode.

Which brings us, neat as a pin, to today’s topic: when it comes to latency, virtual reality (VR) and augmented reality (AR) are in need of some hardware Kobayashi Maru moments of their own.

'Realistic looking' is a pretty hazy concept. Some people consider the objectively better The Hobbit running at 48 fps to look more fake than when running at 24 fps (and despite the loss of motion blur) because 'it doesn't feel like film.' One would think that more data would always be better, but that doesn't seem to be the case. Higher update rates in VR are obviously a good thing in terms of responsiveness, but I don't think they're key to 'realism' except in that very specific area.

I have to agree with some of the others that the Star Trek reference felt like it came out of left field **within the context of this article**. It seems like sort of a misguided "geek cred" reference on the part of this article's author.

After referring to the blog post, I can see the relevance. I think that the article should have either omitted it or expanded on it with rather more than a "click the link, do a text search and then realize I'm not making stuff up, and that it actually fits."

That's fair... I should have been clearer what the reference meant when quoting it in the article. Live and learn.

I didn't quite understand the premise of the article in relation to its subtitle. Nothing in the article addresses the human physics involved or tackles the latency question directly. I should have prefaced this: human physics, in this context, is a relatively new specialty field that deals primarily with human kinematics, perceptual optics, and haptics.

Michael Abrash's assertion of 7-15 ms of latency as the ultimate goal for VR makes sense from a human-physics standpoint, given how our eyes (the fovea in particular) and mind interact. Microsoft actually has some interesting research here, and it is why the sampling rate for digitizer pens on most Windows XP-7 tablet devices is 140 Hz (~7 ms per sample). Use a higher-latency capacitive digitizer (ignoring the resolution issues) and the lag makes inking annoying at best, infuriating at worst. The same perceptual issues would certainly apply in a VR context.

The idea of predictive rendering is a rather obvious choice; in fact, to be truly effective, sufficient GPU horsepower should be leveraged to produce an almost rainbow-table-like set of the next several possible frames. With enough pre-rendered frames in VRAM, it becomes a simple matter of shoving the right one, based on movement input, into the display buffer (I'm making this sound computationally much easier than it is).

The real game changer would be a display technology with true depth, at least such that the human eye can change its depth of focus and believe the distance at which an object sits from the viewer. We gauge distance to a target both by the stereoscopic alignment of an object and by a feedback loop of lens focus and relative object positions. This is why you do not completely lose depth perception with one eye closed (though it is severely impaired).

A display technology similar to the inverse of the way the Lytro camera works would make more sense for immersion in VR. In principle, the technology would need to beam light fields onto the retina rather than use static planar displays. The nanotechnology isn't there yet; the closest I can think of would be something DLP-like, with adjustable-focus micromirrors instead of just a flutter frequency to meter the volume of photons projected.

Regarding the display tech with true depth, would it be possible(you seem knowledgeable on the subject so I thought Id ask) to use multiple semi-transparent LCD or AMOLED screens each maybe an inch or 2 apart, and have different display elements rendered on different displays depending on what depth they are to be viewed at?

Some people consider the objectively better The Hobbit running at 48 fps to look more fake than when running at 24 fps (and despite the loss of motion blur) because 'it doesn't feel like film.'

I agree; very high refresh rates make everything look fake on my TV. I had to turn off the fine-motion gizmo to enjoy my TV again.

That's not a high refresh rate, at least not a true one. Those TVs simply insert "in-between" frames among the ones already being processed, and the added frames tend not to look as "real" as the actual frames being sent to the display. The fine-motion gizmo you speak of is only meant to lessen the judder that plagues LCD screens during fast motion and action sequences; it does nothing to add to the actual quality of the video.

Pretty much all LCDs are limited to a 60Hz refresh rate, meaning they are only capable of displaying 60 frames per second. That's it. It doesn't matter if your video card and 3DMark say you are pushing 100 frames per second; your LCD display, unless it's one of the VERY few "true" 120Hz LCDs, is still only showing a max of 60fps, period. Higher fps from your video card is still desirable for other reasons related to the smoothness of video game graphics, but ultimately only 60 of those frames are actually drawn to the screen in any given second.

Some newer TVs claim to be 120Hz, but they don't actually have a 120Hz processor and can only receive a 60fps signal. From there, tech like your "fine-motion gizmo" just interpolates extra frames and inserts them between the frames of the 60fps signal it is receiving. 3D TVs are different, but you'll notice most of those are very expensive plasma screens, not LCDs. A true 120Hz signal should be much smoother when displaying fast motion, but again, it will probably not have any effect on the clarity or resolution of the picture and would probably not make it look more "real," just a bit smoother.
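The crudest possible "in-between frame" is a straight blend of two real frames. Actual TVs use motion-compensated interpolation rather than this, but the sketch shows why inserted frames are synthetic guesses rather than new information.

    # Naive interpolated frame: a 50/50 blend of two real frames,
    # represented here as small grayscale pixel grids.
    def midpoint_frame(frame_a, frame_b):
        return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
                for row_a, row_b in zip(frame_a, frame_b)]

    frame1 = [[0, 0], [100, 100]]
    frame2 = [[50, 50], [200, 200]]
    print(midpoint_frame(frame1, frame2))  # [[25, 25], [150, 150]]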

Some people consider the objectively better The Hobbit running at 48 fps to look more fake than when running at 24 fps (and despite the loss of motion blur) because 'it doesn't feel like film.'

I agree; very high refresh rates make everything look fake on my TV. I had to turn off the fine-motion gizmo to enjoy my TV again.

That's not a high refresh rate, at least not a true one. Those TVs simply insert "in-between" frames among the ones already being processed, and the added frames tend not to look as "real" as the actual frames being sent to the display. The fine-motion gizmo you speak of is only meant to lessen the judder that plagues LCD screens during fast motion and action sequences; it does nothing to add to the actual quality of the video.

Pretty much all LCDs are limited to a 60Hz refresh rate, meaning they are only capable of displaying 60 frames per second. That's it. It doesn't matter if your video card and 3DMark say you are pushing 100 frames per second; your LCD display, unless it's one of the VERY few "true" 120Hz LCDs, is still only showing a max of 60fps, period. Higher fps from your video card is still desirable for other reasons related to the smoothness of video game graphics, but ultimately only 60 of those frames are actually drawn to the screen in any given second.

Some newer TVs claim to be 120Hz, but they don't actually have a 120Hz processor and can only receive a 60fps signal. From there, tech like your "fine-motion gizmo" just interpolates extra frames and inserts them between the frames of the 60fps signal it is receiving. 3D TVs are different, but you'll notice most of those are very expensive plasma screens, not LCDs. A true 120Hz signal should be much smoother when displaying fast motion, but again, it will probably not have any effect on the clarity or resolution of the picture and would probably not make it look more "real," just a bit smoother.

I know that Fine-Motion inserts artificially produced in-between frames as a post-processing technique. I didn't care to explain the mumbo-jumbo behind it when it wasn't relevant to my agreeing that it looks fake, but now I feel as if I came across as uninformed, and I just wanted to remedy that for my own egotistical purposes.

Some people consider the objectively better The Hobbit running at 48 fps to look more fake than when running at 24 fps (and despite the loss of motion blur) because 'it doesn't feel like film.'

I agree; very high refresh rates make everything look fake on my TV. I had to turn off the fine-motion gizmo to enjoy my TV again.

That's not a high refresh rate, at least not a true one. Those TVs simply insert "in-between" frames among the ones already being processed, and the added frames tend not to look as "real" as the actual frames being sent to the display. The fine-motion gizmo you speak of is only meant to lessen the judder that plagues LCD screens during fast motion and action sequences; it does nothing to add to the actual quality of the video.

Pretty much all LCDs are limited to a 60Hz refresh rate, meaning they are only capable of displaying 60 frames per second. That's it. It doesn't matter if your video card and 3DMark say you are pushing 100 frames per second; your LCD display, unless it's one of the VERY few "true" 120Hz LCDs, is still only showing a max of 60fps, period. Higher fps from your video card is still desirable for other reasons related to the smoothness of video game graphics, but ultimately only 60 of those frames are actually drawn to the screen in any given second.

Some newer TVs claim to be 120Hz, but they don't actually have a 120Hz processor and can only receive a 60fps signal. From there, tech like your "fine-motion gizmo" just interpolates extra frames and inserts them between the frames of the 60fps signal it is receiving. 3D TVs are different, but you'll notice most of those are very expensive plasma screens, not LCDs. A true 120Hz signal should be much smoother when displaying fast motion, but again, it will probably not have any effect on the clarity or resolution of the picture and would probably not make it look more "real," just a bit smoother.

I know that Fine-Motion inserts artificially produced in-between frames as a post-processing technique. I didn't care to explain the mumbo-jumbo behind it when it wasn't relevant to my agreeing that it looks fake, but now I feel as if I came across as uninformed, and I just wanted to remedy that for my own egotistical purposes.

Lol, sorry, that was my fault. I like to type and love telling other people what I know; I've been told by my sister-in-law that I should be a teacher. I tend to assume others are less informed more often than I should, and I've been called on it before ("what, do you think I'm stupid or something?"). It comes from the frequent blank stares I get from non-geeky friends when I try to explain certain things to them. I sometimes forget the Ars readership is much better informed than most of my friends, and when I saw you talking about high refresh rates and then mention the fine-motion thing, I figured you thought that "fine-motion, 120Hz display" or whatever BS marketing speak was written on the TV box meant an actual higher fps. I ran into the same thing the last time I was looking for a TV bigger than 32" that could receive a 120Hz signal from my PC: literally every single TV that said 120Hz was not actually capable of receiving a 120Hz signal. I even got into an argument with one of my co-workers (I work in IT) about LCD refresh rates. He seemed to think each pixel was refreshed individually, and that although each pixel could only change 60 times a second, the display as a whole was actually showing 100fps when 3DMark said his video card was pushing that. He ended up looking it up and sending me a text later that night admitting I was right. So, sorry if I came off as condescending or like I was trying to make you look uninformed; it wasn't intentional.

I'm not sure of the model as I'm at work. It's a 50" Samsung single-core 3D Smart TV (LED/LCD), and I bought it new about 2-3 months ago, if that's of any help. I think it was around $2,000 AUD; it was a present from my wife, so I'm not entirely sure of the cost.


So to get VR helmets that work perfectly, they'll have to figure out where the person is going to look next. To do this, they'll need to include brain probes to read the person's thinking. And if we're going with brain probes, why not make the VR a "brain implant" similar to Total Recall's trick?

Of course, Total Recall relied on memory implants, and we'd want something real-time, but surely that's no biggie for the brain surgeons of the 21st-and-1/8th century.

The real game changer would be a display technology with true depth, at least such that the human eye can change its depth of focus and believe the distance at which an object sits from the viewer. We gauge distance to a target both by the stereoscopic alignment of an object and by a feedback loop of lens focus and relative object positions. This is why you do not completely lose depth perception with one eye closed (though it is severely impaired).

A display technology similar to the inverse of the way the Lytro camera works would make more sense for immersion in VR. In principle, the technology would need to beam light fields onto the retina rather than use static planar displays. The nanotechnology isn't there yet; the closest I can think of would be something DLP-like, with adjustable-focus micromirrors instead of just a flutter frequency to meter the volume of photons projected.

I see the focus vs. convergence issue brought up a lot, but I don't think it is all that important right now. The aperture of the human eye is quite small, so beyond about 10 feet, there is very little difference in focus. With the low angular resolution of the Oculus Rift, I doubt this will be an issue for any simulated object beyond arm's reach.
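A back-of-the-envelope thin-lens check supports this; the 4mm pupil and the Rift's roughly 0.14 degrees per pixel are assumed round numbers, not specifications.

    import math

    PUPIL_M = 0.004  # assumed 4 mm pupil diameter

    def blur_deg(object_m, focus_m=float("inf")):
        # Defocus in diopters is |1/object - 1/focus|; angular blur is
        # roughly pupil diameter times the defocus. (1/inf is 0.0 in Python.)
        defocus_diopters = abs(1.0 / object_m - 1.0 / focus_m)
        return math.degrees(PUPIL_M * defocus_diopters)

    print(blur_deg(3.0))  # ~0.08 deg of blur for an object ~10 feet away
    # versus an assumed ~0.14 deg per pixel on the Rift's panel: the missing
    # focus cue is smaller than a pixel, so the display can't show it anyway.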

I see the focus vs. convergence issue brought up a lot, but I don't think it is all that important right now. The aperture of the human eye is quite small, so beyond about 10 feet, there is very little difference in focus.

Err... yeah, but isn't that first 10 feet by far the most important?

How can "virtual reality" have any hope of being immersive if you have to be 10 feet away from everything for it to have any hope of looking real?

This isn't for FPS games. How could you turn 360 degrees? A swivel chair? I can't imagine playing deathmatch or CS with this. It'd be fine for RPGs, though.

Flight simulators! This would be perfect for flight sims. I'd love to try VR in Falcon 4.

And racing sims. It's funny that all the latest VR news seems to involve FPSes, when it would be so much easier to apply VR to games where you are supposed to be seated rather than running around. I'm sure we'll get to a good FPS control scheme at some point, but that may take some evolution. In the meantime, everything is already set up for flying and racing sims to use this.

Abrash was obviously suggesting that everyone needs to try to solve the problems with 'Virtual Reality' so that they can realise it is doomed to fail and that we should just learn to accept that fact.

One day an enterprising young geek will realise that there's no point even trying to develop a new way to solve the VR problem and will just claim that all of our everyday actions have inherently solved the VR lag/graphical fidelity/looking-like-a-douche problem, since no one can prove that he isn't living in a Matrix-like computer simulation.

Not surprisingly, there is a fair volume of academic (and military) research on this very issue. The first well-documented head-mounted display was made in 1968 by Ivan Sutherland. In the 45 years since, lots of people have given thought to the various issues of HMD and VR displays. Look and you will find answers.

Predictive tracking is a necessary element; however, like weather prediction, it only works well for predicting a small interval ahead (less than, say, 16 ms). Therefore, you'd like your whole end-to-end latency to be below that amount. This is a difficult proposition, but certainly not impossible. The latency of each component (tracking system, processing, rendering, display) must be appropriately small.
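A quick sketch of why the prediction window is short: under constant-velocity extrapolation, the worst-case error grows with the square of the lookahead whenever the head accelerates. The 500 deg/s^2 figure below is an assumed, plausible head acceleration, not a measured one.

    # Worst-case angular error of a constant-velocity predictor when the
    # head accelerates at `accel_dps2` during the lookahead interval.
    def worst_case_error_deg(lookahead_ms, accel_dps2=500.0):
        t = lookahead_ms / 1000.0
        return 0.5 * accel_dps2 * t * t

    for ms in (8, 16, 32, 64):
        print(f"{ms:3d} ms lookahead -> up to {worst_case_error_deg(ms):5.2f} deg of error")
    # 16 ms: ~0.06 deg (imperceptible); 64 ms: ~1.02 deg (a visible wobble).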

Of course, depending upon what you are doing, you can get away with more latency. The slower you are moving, the less you notice the lag (and vice versa).

There are also some "cheats" you can do, such as adjusting the display at the last moment to match the latest tracking data. That way, you can decouple the (processing, rendering) part somewhat from the (tracking, display) part.
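A minimal sketch of that last-moment adjustment, under the simplifying assumption of a pure 2D shift (a real system would re-project in 3D, and the pixels-per-degree figure is made up):

    # After the frame is rendered, translate it by however far the head
    # turned while we were busy rendering.
    def late_shift_px(render_yaw_deg, latest_yaw_deg, px_per_deg=20.0):
        return (latest_yaw_deg - render_yaw_deg) * px_per_deg

    # Frame was rendered for yaw 10.0 deg; by scan-out the head is at 10.6:
    print(round(late_shift_px(10.0, 10.6)))  # shift the finished image 12 px
    # The tracking-to-display path is now decoupled from the full render loop.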