A blog by Michael Abrash

Last time, we started to look at the ways in which the interaction of a head-mounted display with the eye and the brain leads to perceptual artifacts that are unique to HMDs and that can greatly affect VR/AR experiences. We looked closely at one of those artifacts, whereby use of a color-sequential display in an HMD leads to color fringing. I chose to start the discussion of perceptual artifacts with color fringing not because it was the most problematic artifact, but rather because the temporal separation of the color components makes it easy to visualize the effects of relative motion between the eye and the display. In point of fact, color fringing can easily be eliminated by using a display, such as LCD, OLED, color-filter LCOS, or scanning laser, that illuminates all three color components simultaneously. (I hope HMD manufacturers are reading this, because many of them are still using color-sequential LCOS.) However, the next artifact we’re going to look at, judder, is not so easily fixed.

Judder, as it relates to displayed images, has no single clear definition; it’s used by cinematographers in a variety of ways. I’m going to use the term here to refer to a combination of smearing and strobing that’s especially pronounced on VR/AR HMDs; why that’s so is the topic of today’s post.

The place to start with judder is with the same rule we started with last time: visual perception is a function of when and where photons land on the retina. When it comes to HMDs, this rule is much less straightforward than it seems, due to eye motion relative to the display in conjunction with the temporal and spatial quantization performed by displays; we saw two examples of that last time, and judder will be yet a third example. By “temporal and spatial quantization,” I mean that any given pixel is illuminated for some period of time over the course of each frame, and during that time its color remains constant within the pixel bounds; that’s a simplification, but it’s close enough for our purposes.

When we looked at color fringing, the key was that each color component of any given pixel was illuminated at a different time, so when the eye was moving relative to the display, each color component landed in a different place on the retina. With judder, the key is that the illuminated area of each pixel sweeps a constant color across the retina for however long it’s lit (the persistence time), resulting in a smear; this is then followed by a jump that causes strobing – that is, the perception of multiple simultaneous copies of the image. (It’s not intuitively obvious why this would cause strobing, but it should be clear by the end of this post.) The net result is loss of detail, and quite likely eye fatigue or even increased motion sickness. Let’s look at how this happens in more detail.

If you haven’t done so already, I strongly recommend you read the last post before continuing on.

Why judder happens

In this post, we’re going to look at many of the same mechanisms as last time, but with a different artifact in mind. I’ll repeat some of the discussion from last time to lay the groundwork, but we’ll end up in quite a different place (although everything I’ll talk about was implicit in the last post’s color-fringing diagrams).

Once again, let’s look at a few space-time diagrams. These diagrams plot x position relative to the eye on the horizontal axis, and time advancing down the vertical axis.

First, here’s a real-world object staying in the same position relative to the eye. (This should be familiar, because it’s repeated from the last post).

I’ll emphasize, because it’s important for understanding later diagrams, that the x axis is horizontal position relative to the eye, not horizontal position in the real world. With respect to perception it’s eye-relative position that matters, because that’s what affects how photons land on the retina. So the figure above could represent a situation in which both the eye and the object are not moving, but it could just as well represent a situation in which the object is moving and the eye is tracking it.

The figure would look the same for the case where both a virtual, rather than real, object and the eye are not moving relative to one another, unless the color of the object was changing. In that case, a real-world object could change color smoothly, while a virtual object could only change color once per frame. However, the figure would not look the same for the case where a virtual object is moving and the eye is tracking it; in fact, that case goes to the heart of what this post is about, and we’ll discuss it shortly.

Next, let’s look at a case where the object is moving relative to the eye. (Again, this is repeated from the last post.) Here a real-world object is moving from left to right at a constant velocity relative to the eye. The most common case of this would be where the eye is fixated on something else, while the object moves through space from left to right.

In contrast, here’s the case where a virtual object is moving from left to right relative to the eye. Throughout today’s post, I’m going to assume the display is one that displays all three color components simultaneously; that means that in contrast to the similar diagram from the last post, the pixel color is constant throughout each frame, rather than consisting of sequential red, green, and blue.

Because each pixel can update only once a frame and remains lit for the persistence time, the image is quantized to pixel locations spatially and to persistence times temporally, resulting in stepped rather than continuous motion. In the case shown above, that wouldn’t produce judder, although it would generally produce strobing at normal refresh rates if the virtual object contained high spatial frequencies.
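The stepped motion just described is easy to sketch numerically. Here's a minimal illustration (the velocity and refresh numbers are made up for the example, not taken from any particular display): the displayed position is the ideal position sampled once per frame and held, so the object falls progressively behind its true position over the course of each frame, then catches up at the next update.

```python
def displayed_position(t, velocity_px_s, frame_time_s):
    """Position actually shown: the ideal position, sampled once per frame
    and then held constant for the rest of the frame."""
    frame_index = int(t / frame_time_s)
    return velocity_px_s * frame_index * frame_time_s

def ideal_position(t, velocity_px_s):
    """Where the object would appear with an infinite refresh rate."""
    return velocity_px_s * t

# An object moving at 600 px/s on a 60 Hz display falls as much as
# ~10 px behind its ideal position by the end of each frame.
frame_time = 1.0 / 60.0
max_error = max(
    ideal_position(ms / 1000.0, 600.0)
    - displayed_position(ms / 1000.0, 600.0, frame_time)
    for ms in range(100)
)
```

The staircase this traces out is exactly the stepped pattern in the space-time diagram above.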

Note that in these figures, unless otherwise noted, persistence time – the time each pixel remains lit – is the same as the frame time – that is, these are full-persistence displays.

So far, so good, but neither of the above cases involves motion of the eye relative to the display, and it’s specifically that motion that causes judder. As explained last time, the eye can move relative to the display, while still being able to see clearly, either because it’s tracking a moving virtual object or because it’s fixated on a static virtual or real object via VOR while the head turns. (I say “see clearly” because the eye can also move relative to the display by saccading, but in that case it can’t see clearly, although, contrary to popular belief, it does still acquire and use visual information.) The VOR case is particularly interesting, because, as discussed in the last post, it can involve very high relative velocities (hundreds of degrees per second) between the eye and the display, and consequently very long smears.

Here’s the relative-motion case.

Once again, remember that the x axis is horizontal motion relative to the eye. If the display had an infinite refresh rate, the plot would be a vertical line, just like the first space-time diagram above. Given actual refresh rates, however, what happens is that a given virtual object lights up the correct pixels for its virtual position at the start of the frame (assuming either no latency or perfect prediction), and then, because those pixels remain unchanged both in color and in position on the display over the full persistence time and because the eye is moving relative to the display, the pixels slide over the retina for the duration of the frame, falling behind the correct location for the moving virtual object. At the start of the next frame, the virtual object is again redrawn at the proper location for that time, lighting up a different set of pixels on the screen, so the image snaps back to the right position in virtual space, and the pixels then immediately start to slide again.

It’s hard to film judder of exactly the sort defined above, but this video shows a very similar mechanism in slow motion. Judder as I’ve discussed it involves relative motion between the eye and the display. In the video, in contrast, the camera is rigidly attached to the display, and they pan together across a wall that contains several markers used for optical tracking. The display pose is tracked, and a virtual image is superimposed on each marker; the real-world markers are dimly visible as patterns of black-and-white squares through the virtual images. The video was shot through an HMD at 300 frames per second, and is played back at one-fifth speed, making it easy to see the relationship between the virtual and real images. You can see that because the virtual images are only updated once per displayed frame, they slide relative to the markers – they move ahead of the markers, because they stay in the same place on the display, and the display is moving – for a full displayed frame time (five camera frames), then jump back to the correct position.

This phenomenon is not exactly what happens with the HMD judder I’ve been talking about – the images are moving relative to the camera, rather than having the camera tracking them – but it does clearly illustrate how the temporal quantization of displayed pixels causes images to slide from the correct position over the course of a frame. I strongly recommend that you play a little of the video one frame at a time, so you can see that what actually happens is that the virtual image stays in the same position on the screen for five camera frames, while the physical marker moves across the screen continuously due to motion of the HMD/camera. If you substituted your eye for the camera and looked straight ahead, as the camera did, you would only see strobing of the virtual images, not smearing, as the virtual images jumped from one displayed frame to the next. However, if instead you moved the HMD as in the video but at the same time moved your eye to keep it fixated on either the physical or virtual marker, you would in fact see exactly the form of judder shown in the last diagram, and you should be able to map that scenario directly onto that diagram. In particular, the images would smear.

You might reasonably wonder how bad the smear can be, given that frame times are measured in milliseconds. The answer is: worse than you probably think.

When you turn your head at a leisurely speed, that’s in the neighborhood of 100 degrees per second. Suppose you turn your head at 120 degrees per second, while wearing a 60 Hz HMD; that’s two degrees per displayed frame. Two degrees doesn’t sound like much, but on an Oculus Rift development kit it’s about 14 pixels, and if an HMD existed that had a resolution approximating the resolving capability of the human eye, a two-degree arc across it would cross hundreds of pixels. So the smear part of judder is very noticeable. Since I have no way to show it to you directly, let’s look at a simulation of it.
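The arithmetic above is simple enough to check directly. The pixels-per-degree figures below are approximations I'm assuming for illustration: roughly 7 pixels/degree for the Rift development kit, and roughly 60 pixels/degree for a hypothetical eye-limited display.

```python
def smear_degrees(head_speed_deg_s, refresh_hz):
    """Angular distance the image slides across the retina per frame on a
    full-persistence display (persistence time == frame time)."""
    return head_speed_deg_s / refresh_hz

def smear_pixels(head_speed_deg_s, refresh_hz, pixels_per_degree):
    return smear_degrees(head_speed_deg_s, refresh_hz) * pixels_per_degree

two_deg = smear_degrees(120.0, 60.0)              # 2 degrees per frame
px_dev_kit = smear_pixels(120.0, 60.0, 7.0)       # ~14 px at ~7 px/degree
px_eye_limited = smear_pixels(120.0, 60.0, 60.0)  # over 100 px at ~60 px/degree
```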

Here’s a rendered scene:

And here’s what it looks like after the image is smeared across two degrees:

Clearly, smearing can have a huge impact on detail and sharpness.
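One way to approximate that smear in software, assuming full persistence and constant eye velocity, is a horizontal box blur over the smear length: each retinal sample averages the pixel intensities swept across it during the frame. This is only a framebuffer-level stand-in for what actually happens on the retina, but it captures the loss of detail.

```python
def smear_row(row, smear_px):
    """Box-blur one row of intensities over smear_px samples, simulating
    a constant-velocity smear of that length."""
    out = []
    for x in range(len(row)):
        window = row[max(0, x - smear_px + 1): x + 1]
        out.append(sum(window) / len(window))
    return out

# A sharp edge (0 -> 1) becomes a 14-pixel ramp after a 14-pixel smear,
# which is why fine detail disappears.
edge = [0.0] * 20 + [1.0] * 20
smeared = smear_row(edge, 14)
```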

In contrast, this video shows how smooth the visuals are when a high-speed camera is panned across a monitor. (The video quality is not very good, but it’s good enough so that you can see how stable the displayed images are compared to the shifting and jumping in the first video.) The difference is that in the first video, tracking was used to try to keep a virtual image on a see-through HMD in the right place relative to the real world as the camera moved, with the pixels on the HMD moving relative to the real world over the course of each frame; in the second video, the image was physically displayed on a real-world object (a monitor), so each pixel remained in a fixed position in the real world at all times. This neatly illustrates the underlying reason VR/AR HMDs differ markedly from other types of displays – virtual images on HMDs have to be drawn to register correctly with the real world, rather than simply being drawn in a fixed location in the real world.

Besides smear, the other effect you can see in the first video is that the images snap back to the right location at the start of each frame, as shown in the last space-time diagram. Again, the location and timing of photons on the retina is key. If an image moves more than about five or ten arc-minutes between successive updates, it can start to strobe; that is, you may see multiple simultaneous copies of the image. At a high enough head-turn speed, the image will move farther than this threshold when it snaps back to the correct location at the start of each frame (and even a very slow 10 degrees per second head turn can be enough for images containing high frequencies), so judder can feature strobing in addition to smearing.
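The strobing threshold works out as follows; the 5 arc-minute figure is the low end of the approximate range given above.

```python
ARCMIN_PER_DEGREE = 60.0

def jump_arcmin(head_speed_deg_s, refresh_hz):
    """Angular jump between successive frame updates, in arc-minutes."""
    return head_speed_deg_s / refresh_hz * ARCMIN_PER_DEGREE

def strobing_speed_limit(threshold_arcmin, refresh_hz):
    """Head-turn speed (deg/s) at which the per-frame jump reaches the
    strobing threshold."""
    return threshold_arcmin / ARCMIN_PER_DEGREE * refresh_hz

# At 60 Hz, a 5 arc-minute threshold is reached at only 5 deg/s --
# far below a leisurely ~100 deg/s head turn.
limit = strobing_speed_limit(5.0, 60.0)
```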

It’s worth noting that this effect is reduced because intensity lessens toward both ends of the smear for features that are more than one pixel wide. The reason is very straightforward: the edges of such smears are covered by the generating feature for only part of the persistence time. However, that’s a mixed blessing; the eye perceives flicker more readily at lower intensities, so the edges of such objects may flicker (an on/off effect), rather than strobe (a multiple-replicas effect).

Also, you might wonder why juddering virtual objects would strobe, rather than appearing as stable smeared images. One key factor is that any variation in latency, error in prediction, or inaccuracy in tracking will result in edges landing at slightly varying locations on the retina, which can produce strobing. Another reason may be that the eye’s temporal summation period doesn’t exactly match the persistence time. For illustrative purposes only, suppose that the persistence time is 10 ms, and the eye’s temporal integration period is 5 ms (a number I just made up for this example). Then the eye will detect a virtual edge not once but twice per frame, and if the eye is moving rapidly relative to the display, those two detections will be far enough apart so that two images will be perceived; in other words, the edge will strobe. (In actuality, the eye’s integration window depends on a number of factors, and does not take a discrete snapshot.) Note, however, that this is only a theory at this point. In any case, the fact is that the eye does perceive strobing as part of judder.
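The made-up numbers in the paragraph above work through like this (remember, the 5 ms integration window is purely illustrative, and the real eye does not take discrete snapshots):

```python
def detections_per_frame(persistence_ms, integration_ms):
    """How many integration windows fit in one persistence interval."""
    return persistence_ms // integration_ms

def detection_separation_deg(eye_speed_deg_s, integration_ms):
    """Retinal distance between successive detections of the same edge."""
    return eye_speed_deg_s * integration_ms / 1000.0

count = detections_per_frame(10, 5)        # the edge is detected twice
sep = detection_separation_deg(100.0, 5)   # half a degree apart at 100 deg/s
```

Half a degree is well past the five-to-ten arc-minute threshold, so under these assumptions the two detections would be seen as separate copies.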

The net effect of smearing and strobing combined is much like a choppy motion blur. At a minimum, image quality is reduced due to the loss of detail from smearing. Strobing tends not to be very visible on full-persistence displays – smearing mostly hides it, and it’s less prominent for images that don’t have high spatial frequencies – but it’s possible that both strobing and smearing contribute to eye fatigue and/or motion sickness, because both seem likely to interfere with the eye’s motion detection mechanisms. The latter point is speculative at this juncture, and involves deep perceptual mechanisms, but I’ll discuss it down the road if it turns out to be valid.

Slow LCD switching times, like those in the Rift development kit HMDs, result in per-frame pixel updates that are quite different from the near-instantaneous modification of the pixel state that you’d see with OLEDs or scanning lasers; with LCD panels, pixel updates follow a ramped curve. This produces blurring that exaggerates smearing, making it longer and smoother, and masks strobing. While that does mostly solve the strobing problem, it is not exactly a win, because the loss of detail is even greater than what would result from full-persistence, rapid-pixel-switching judder alone.
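A crude way to model that ramped update is a simple exponential response; real LCD response curves vary with panel, drive scheme, and gray levels, and the 5 ms time constant here is purely an assumption for illustration.

```python
import math

def lcd_response(target, previous, t_ms, tau_ms):
    """Pixel value t_ms after a new frame begins, for an exponential
    transition from the previous value toward the target."""
    return target + (previous - target) * math.exp(-t_ms / tau_ms)

# With a hypothetical 5 ms time constant, a black-to-white pixel is only
# about 96% of the way to white by the end of a 16.7 ms frame, so the
# tail of the transition bleeds into the next frame and lengthens the smear.
value_at_frame_end = lcd_response(1.0, 0.0, 16.7, 5.0)
```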

Why isn’t judder a big problem for movies, TV, and computer displays?

I mentioned in the last post that HMDs are very different from other types of displays, and one aspect of that is that judder is a more serious problem for HMDs. Why isn’t judder a major problem for movies, TVs, and computer displays?

Actually, judder is a significant problem for TV and movies, or at least it would be except that cinematographers go to great lengths to avoid it. For example, you will rarely see a rapid pan in a movie, and when you do, you won’t be able to see much of anything other than blur indicating the direction of motion. Dramatic TV filming follows much the same rules as movies. Sports on TV can show judder, and that’s a motivating factor behind higher refresh rates for TVs. And you can see judder on a computer simply by grabbing a window and tracking an edge carefully while dragging it rapidly back and forth (although your results will vary depending on the operating system, graphics hardware, and whether the desktop manager waits for vsync or not). It’s even easier to see judder by going to the contacts list on your phone and tracking your finger as you scroll the list up and down; the text will become blurry and choppy. Better yet, hold your finger on the list and move the phone up and down while your finger stays fixed in space. The list will become very blurry indeed – try to read it. And you can see judder in video games when you track a rapidly moving object, but such moments tend to come in the heat of battle, when you have a lot of other things to think about. (Interestingly, judder in video games is much worse now, with LCD monitors, than it was on CRTs; the key reason for this is persistence time, although slow LCD switching times don’t help.)

However, while judder is potentially an issue for all displays, there are two important differences that make it worse for HMDs, as I mentioned last time: first, the FOV in an HMD is much wider, so objects can be tracked for longer, and second, you can turn your head much more rapidly than you can normally track moving objects without saccading, yet still see clearly, thanks to the counter-rotation VOR provides. These two factors make judder much more evident on an HMD. A third reason is that virtual images on a monitor appear to be on a surface in the world, in contrast to virtual images on an HMD, which appear to be directly in the world; this causes the perceptual system to have higher expectations for HMD images and to more readily detect deviations from what we’re used to when looking at the real world.

Next time: the tradeoffs involved in reducing judder

Judder isn’t a showstopper, but it does degrade VR/AR visual quality considerably. I’ve looked through a prototype HMD that has no judder, and the image stayed astonishingly sharp and clear as I moved my head. Moreover, increased pixel density is highly desirable for VR/AR, but the effects of judder get worse the higher the pixel density is, because the smears get longer relative to pixel size, causing more detail to be lost. So is there a way to reduce or eliminate judder?

As it happens, there is – in fact, there are two of them: higher refresh rate and low persistence. However, you will not be surprised to learn that there are complications, and it will take some time to explain them, so the next part of the discussion will have to wait until the next post.

By this point, you should be developing a strong sense of why it’s so hard to convince the eye and brain that virtual images are real. Next time we’ll see that the perceptual rabbit hole goes deeper still.

16 Responses to Why virtual isn’t real to your brain: judder

It certainly seems a shame that technologies like “LightBoost”ed (strobing backlight) 120Hz LCDs are only commercially released for 20″+ 3D gaming screens. Moving to an 8ms frame interval with low persistence (LightBoost screens seem to have a configurable persistence as this acts as their brightness dial but 1.4-2.4ms seems the typical range) certainly seems to tie in well with the desired judder reduction methods.

I look forward to reading why even tech like that is an imperfect solution in the next part.

LightBoost-type technology is definitely in the right direction, although yes, next time we will see an unexpected issue with low persistence. A specific problem is that LCDs don’t transition very quickly and frame data takes a frame time to download, so it’s hard to get a clean single-frame image with that short pulse. I also think that LightBoost may strobe each frame more than once, although I’m not sure about that.

I told palmer all those years ago, I thought DOME Wide FOV gaming would be the better stepping stone to the future, where we all have home domes. Anyways Jaron Lanier just gave an interview to PBS about steven spielberg and the head of universal studios lou wasserman talking about throwing up in virtual reality would be a GOOD THING! LOL! Relating to Jaws 3D I suppose.

Now Abrash, are you going to get lost in the technical details, or like Wasserman tried to show Lanier, human beings are creatures that you need to play with sensationalism and showmanship HAHA! Vomiting is a feature, not a bug

Jaron Lanier: I wouldn’t say it’s mystical; I would say it’s uncovering some treasures in our own biology that have just been hidden for a long time. So, when I was very young and playing with virtual reality for the first time, I had the experience of my arm suddenly becoming very large, because of a glitch in the software, and yet still being able to pick things up, even though my body was different. And that sensation of being able to alter your body is different from anything else. I mean, it’s almost like a whole new theater of human experience opens up.

Einstein talked about sometimes imagining his body experiencing these alternate spaces, in order to think about alternate visions of space and time, and I think when we try to stretch what we’re able to think about, we have to stretch who we are. And virtual reality, by its very nature, stretches who you are. It allows you to experience yourself in the world through an entirely different loop, through an entirely different pattern than you’re used to in natural reality. And I think it can’t help but open up new vistas of ways to think, ways to feel.

I couldn’t do a search on oculusvr forums for “depersonalization”, it was too many characters for their search engine, LOL!

Another prediction I tried to stress to Palmer, I have a fat head like Jaron, hot sweaty HMD doesn’t fit my head so well. I would rather sit in front of a dome to get wide FOV gaming than wear an HMD.

Paul Solman: Is this an urban legend, or not, that your head is so big that you can’t actually experience virtual reality yourself?

Jaron Lanier: I can experience virtual reality and do, but some of the head pieces don’t fit on my head. It is true I have an extra large skull, and then on top of it I grew all this stuff, so it has to be a headset that’s a little elastic, and some of them will not fit on me, but many do, so it’s only half true as an urban legend. It’s a suburban legend.

LOL! You tell em Jaron! Furthermore, I think Jaron makes good points about how the future of VR will cause an even greater divide between those in our society who can use VR and those who cannot. I think the whole PTSD issue is a bad one; my own father suffered from it from the killings he faced in Vietnam, and exposure therapy was the worst thing that could have been given to him (after he retired he would never walk on a military base ever again, couldn’t take it). Thank you again Abrash for another excellent post, and for showing how the free sharing of information is good for all of us. I wish Palmer had taken that deep to heart with the OPEN SOURCE HMD.

It talks about how PALANTIR is now one of the new funders of Oculus, yah that same EVIL Palantir that the CIA backs, that spies on all of us, for our own security. People are saying they maybe don’t like Oculus anymore for taking money from that group in that reddit link above; it could be a public relations nightmare!

You once said the free sharing of information was a good thing Abrash! Benjamin Franklin also said the same thing, we should freely share with our fellow humans, and feel good about doing it, it is a blessing. Lets see what your counterpart a few hundred years ago said about folks like Palantir that now funds Oculus! (OMG)

Sell not virtue to purchase wealth, nor Liberty to purchase power. Alex Howlett told me he was very positive about the future that was coming; am I the only critical person who thinks dark forces may be working their way into the VR future? A YouTube video on Palantir, and it is not flattering: http://www.youtube.com/watch?v=tEuXez4-Qxo

Having played through all of Half-Life 2, Episode 1, Lost Coast, and most of Episode 2 with the Oculus Rift (~30 hours of play time), I would say the technology is already near the “good enough” limit. Judder is definitely secondary to increasing resolution, head position tracking, and a better interface (hand tracking, controllers, etc).

However, I appreciate that you are swinging for the fences with tricking the brain at an ever deeper level.

I understand why you’d think that, but I’d say wait until you see a judder-free display and then see what you say. I’d personally take no-judder over more resolution. I really want more resolution, but when you move your head on a juddering display, it doesn’t matter how much resolution you have, the judder makes it irrelevant.

I’m very curious about how a real judder-free display would work, now. I suppose a laser-scanned display would only deliver photons at exactly one correct time, with no long period of phosphorescence. But that kind of technology seems unlikely to be ready for affordable consumer gear, no?

What if we put a layer of MEMS movable mirrors between the pixel display and the eye, and move them so that each pixel’s projected position over the frame is adjusted to match the eye’s motion?

Despite being unfeasible (can we track the eye and update the mirrors that fast? Doesn’t matter because no one manufactures such a thing), it seems like this setup conceptually would cancel out the judder.

It’s not impossible that a laser-scanned display could be usable in a consumer-priced HMD if there was enough volume, but there are a lot of other issues with existing scanning lasers, like frame rate and FOV.

Movable mirrors is a clever idea, but unfortunately it doesn’t work that well. It works perfectly for the virtual objects the eye is tracking, because they stay in the same place on the retina. However, all virtual objects that the eye isn’t tracking now move with the eye over the course of a frame, then jump at the start of the next frame, in a pattern that may not match their motion at all well. If your eye is moving left to right and an object is moving upward, it will slew to the right over the course of a frame at exactly the same speed as the object the eye is tracking, then jump back to the left at the start of the next frame, giving it a choppy zigzag path. Overall it still might be an improvement – whatever your fovea was perceiving would look great – but having the rest of the scene jumping around intra-frame is definitely not ideal. By the way, it’s actually not a hypothetical question, because you could make a system that panned electronically rather than mechanically over the course of a frame based on eyetracking.
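The zigzag path described in that reply can be sketched as follows (all numbers illustrative): the eye moves horizontally at a constant speed, the display counter-pans to match it, and an untracked object whose true motion is purely vertical gets dragged sideways within each frame.

```python
def object_path(eye_speed, frame_time, steps_per_frame, frames):
    """Apparent (x, y) path of an object whose true motion is purely
    vertical, viewed on a display that counter-pans with the eye."""
    path = []
    for f in range(frames):
        for s in range(steps_per_frame):
            t = f * frame_time + s * frame_time / steps_per_frame
            # x is correct only at the frame start, then slews with the
            # eye for the rest of the frame:
            x_drift = eye_speed * (t - f * frame_time)
            y = t  # true motion: straight up at unit speed
            path.append((x_drift, y))
    return path

path = object_path(eye_speed=100.0, frame_time=1.0 / 60.0,
                   steps_per_frame=4, frames=2)
# Within each frame, x drifts steadily; at each frame start it snaps
# back to zero -- the choppy zigzag described above.
```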

I know you are working within the current limitations of small and cheap phone displays, but I’m curious if the issues you raise, from judder to fringing to latency (and the latency based leaning) can all be fixed with displays that currently exist, just not at the cost and form factor that makes it sensible to your project at the moment? Wouldn’t a ~240hz OLED display, given enough graphical hardware, fix all of these issues?

Also, I’m curious if you (perhaps collectively you, not you personally) have been working on any kind of recommended cinematography for use with HMDs, for things like directing people’s attention and avoiding immersion breaking types of camera movements or improper use of depth of field. While games take a lot of cues from Hollywood at the moment, I would think proper VR environments would have to start looking to Disney park design to get people to go the right direction and see what the designer wants them to see. Have you done any work in this area?

240 Hz OLEDs would be better, but that’s still 4 ms of persistence, which is definitely enough to produce judder. Plus now you have to render at 240 Hz in stereo. Not to mention that you have to get 240 Hz OLEDs; they’re certainly technically possible, but I don’t know of any way to buy them off the shelf.
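The 4 ms figure, checked quickly (the ~7 pixels/degree used at the end is an assumed, approximate dev-kit-class density):

```python
persistence_ms = 1000.0 / 240.0   # ~4.2 ms at 240 Hz, full persistence
smear_deg = 120.0 / 240.0         # 0.5 degrees/frame at a 120 deg/s head turn
smear_px = smear_deg * 7.0        # still ~3.5 px at a modest ~7 px/degree
```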

Good thoughts on recommended cinematography. That’s only in its infancy, and it’s going to be interesting to see it evolve.

What about a very bright OLED display that’s only lit for a fraction of a ms, giving very low persistence?

Also, has anyone looked at developing a display protocol that transmits frames faster than they are displayed? I.e. the gfx card / cable transmits the entire frame in 1 ms, regardless of display refresh rate, and the cable sits idle until the next frame is ready? This could reduce the added latency of waiting for a full frame to be downloaded before displaying it.

I read the entire blog today. Excellent points that often get overlooked! Thank you also for labeling the nasty phenomenon I discovered in my first ever programming assignment a few months ago: judder. My assignment used OpenGL to drive around a virtual world. I was using GLUT keyboard callbacks for rotation and translation, and when you did both simultaneously it resulted in juddering. I should mention that I’m a device engineer by trade but took two programming classes at the very end of my Master’s degree. Those two classes, graphics and visual interfaces, sparked in me a desire to pursue VR/AR. I didn’t even know AR existed until a few months ago.

It seems from your posts that you are open minded on optimizing the pipeline for VR/AR but stuck on the idea of better displays. Most of the ideas I saw are also locked into the current generation of hardware, because that is what is cheap and available, I know. I work in manufacturing, and the fact is the established volume manufacturers are going to chase the large markets; you’ve already mentioned that. You also seem to be skeptical about trying to modify the hardware side of the pipeline, probably for the same reasons.

One longer-term approach might be to cooperate with proprietors of emerging display technologies. They want to prove their tech and may be more willing to work with you through the development stages and be able to provide novel solutions for AR/VR. I’ve arrived at essentially the same conclusions as you in my two months of literature search in the field. I have some ideas that just can’t be accomplished with the current display hardware. One promising technology I discovered this week is laser diffraction projection, specifically as demonstrated by Light Blue Optics.

Basically, the rendered image is transformed into a phase-only inverse Fourier field and decomposed into noisy subframes. The subframes are rendered color-sequentially at an accelerated framerate (LCOS) fast enough that the eye will merge the frames into a high quality image. You could really leverage this in your favor to reduce latency, judder, and other related issues you’ve presented. It’s the idea of quickly displaying many low resolution images to composite a higher quality whole that is appealing.

Maybe one way to leverage this would be to render the image at a lower resolution more frequently and use pixel tiling to maintain detail through a series of frames. A smaller frame also reduces the time required to transform and decompose each frame. This allows the scene to be updated at a much faster rate, decreases the time between render and display, and spreads out the contribution made by each frame. I believe that would smooth out fast transitions in a way that is more natural to how the eye works.

Maybe it’s not feasible with the tech off the shelf, but I think it’s worth pursuing, especially for the long term.

Makers of non-mainstream/emerging display technologies are interested in working with us – but that doesn’t change the way costs work. Unless they know they will be selling millions (better yet, tens of millions) of displays, they have to spread the fixed costs across too few units, and the costs become too high. The only alternative is for them to sell at a loss. In the long term that’s solvable, but someone has to put up a lot of money to get to that long-term result. And no one has stepped up to do it yet.

Also, a lot of potentially promising stuff isn’t even emerging, in the sense of anyone making it in the right form for VR/AR even as a prototype right now. The cost of getting that to market is extremely high.

With laser diffraction, wouldn’t the subframes come apart spatially with head motion, since they’re displayed sequentially, much as the color components do on color-sequential LCOS?

Regarding subframe fringing: if rendered and displayed in the traditional fashion, then yes, this will always be an issue. Right now most displays use high resolutions and low frame rates, which magnify the issues you’ve discussed thus far. I’m not clear on the genesis of this approach (hardware- or software-driven – I’m guessing film media), but it has worked fine for most purposes. It works because of the way the human visual system integrates input and sends it to the brain.

But it seems there is another way to cheat the brain. Instead of high-resolution frames shown “just fast enough,” you could use low-resolution frames shown at a very high rate to fill in the detail. In this technology I see the possibility of pushing toward maximum frame rates and minimum persistence. Rather than take one full-resolution image and break it into subframes, you render more frequently at lower resolution and display the results faster. So I’d stop calling them subframes and start calling them something else, like liteframes. I’m not familiar enough with the technology to know, but perhaps you can even decouple color from the render: three RGB liteframes, each one different, each one step further in time. The visual system will do the integration. Maybe it won’t really work that way; maybe you’ll get a very low-latency blurred image. But from the way the technology is described, it may be feasible. Who knows until it’s tried?
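The color-decoupled liteframe idea, one channel per frame and each frame a step further in time, could look something like this toy sketch. The scene and names are invented for illustration; whether the eye would fuse the result cleanly is exactly the open question:

```python
import numpy as np

def render_scene(t, width=16):
    """Toy renderer: a bright vertical bar moving one pixel per time step."""
    img = np.zeros((8, width, 3))
    img[:, int(t) % width, :] = 1.0
    return img

def rgb_liteframes(t0, dt=1):
    """Three liteframes, one per color channel, each one step further in
    time; a display would show them back to back and rely on the eye to
    fuse them into a single moving color image."""
    frames = []
    for i, ch in enumerate((0, 1, 2)):  # R, G, B
        full = render_scene(t0 + i * dt)
        lite = np.zeros_like(full)
        lite[..., ch] = full[..., ch]   # keep only this frame's channel
        frames.append(lite)
    return frames
```

Note that each channel samples the scene at a different moment, so a moving edge lands in a different column in each liteframe; that spatial spread is where a low-latency blur (or fringing) could come from.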

Regarding high-volume sales: I’ve been thinking for the past several months about possible roadmaps that could provide the necessary foothold. The glasses approach is by far the most popular because it integrates into the current pipeline and has mass-market potential. But I don’t think it will solve the display issues for AR/VR, because the mass market has no impetus to change the current rendering method; developers will work around the existing limitations. I’m a fan of the VR path, but the existing applications will have a hard time getting mass appeal – even the future of 3D movies seems precarious. I think the Oculus Rift will help, but I still wonder if there is anything else.

I recall going to the fair in the ’80s and seeing huge lines for the VR booths. And I always loved hearing stories from friends who went on trips (Vegas) and talked about various VR experiences. I think people will support the tech if cost could drop and availability go up. I wonder about the potential of a business that specializes in VR experiences to provide content, income, and user feedback for hardware development. Could you be successful with single-room AR experiences, like a bunch of booths? Or perhaps multi-room with tracking, to give a VR laser-tag-like experience? I’d personally enjoy attending and running such a business. That’s actually what brought me to the forum: I’m doing my feasibility and literature search, learning about the tech, the people, and the industry. I think if the hardware can get “just good enough,” it will provide a place to start. I think it could become a permanent entertainment avenue like arcades, movies, and laser tag.

I haven’t tried it, but I’m dubious. You’d have to have very high frame rates to eliminate subframe fringing. Also, high frame rate and low resolution might result in smooth, judder-free displays, but it would not result in high perceived detail. The eye would see whatever detail was actually there (not much) very clearly, thanks to the high frame rate.

VR booths could be interesting. Possibly there could be a return to the arcade days of the late 70’s and early 80’s, which could be cool. However, they are kind of limited from my perspective; I’d like to get inexpensive VR into the hands of millions of people, rather than have thousands of people come to a location to experience expensive VR. Nonetheless, location based VR is an interesting possibility for someone to try.

I’m not sure what your last question refers to. I’ve discussed in past posts that VR displays with higher refresh rates, faster frame download, and so on are doable and would make a big difference, but there’s currently no incentive for panel manufacturers to go in that direction.


Michael Abrash is the author of several books, including Zen of Code Optimization and Michael Abrash's Graphics Programming Black Book, and has written columns on graphics and performance programming for several magazines, including Dr. Dobb's Journal and PC Techniques. He was the GDI programming lead for the original version of Windows NT, coauthored Quake at Id Software with John Carmack, and worked on the first two versions of Xbox. He is currently working on R&D projects, including wearable computing, at Valve. He can be reached here.