A blog by Michael Abrash

Thursday I gave a 25-minute talk at Game Developers Conference about virtual reality; you can download the slides here. Afterward, I got to meet some regular readers of this blog, which was a blast – there were lots of good questions and observations.

Much of the ground I covered will be familiar to those of you who have followed these posts over the last year, but I did discuss some areas, particularly color fringing and judder, that I haven’t talked about here yet, although I do plan to post about them soon in more detail than I could go into during a short talk.

Putting together the talk made me realize how many challenging problems have to be solved in order to get VR and AR to work well, and how long it’ll take to get all of those areas truly right; it’s going to be an interesting decade or two. At least I don’t have to worry about running out of stuff to talk about here for a long time!

Update: Here’s a PDF version of the slides. Unfortunately, I don’t know of any way to get the videos and animations to work in this version, so if you want to see those, you’ll have to use the PowerPoint viewer.

26 Responses to Slides from my Game Developers Conference talk

Thanks a lot for posting that, Michael. I really enjoy reading about the challenges of VR. Is there any chance we will be able to see Joe Ludwig’s slides? It sounds like they might be very interesting also.

I agree that cost and resolution are not the only barriers to adoption of consumer VR. To me, the ability to perform more meaningful interaction with the content beyond mouse and keyboard is key, as is the ability to integrate various sensors (motion, camera, position, biometric) into understanding the context of the user. See my post here for some thoughts.

Also, what do you think about variable-resolution goggles like the old Fakespace Wide5, which had higher resolution in the center of the visual field and lower at the edges? Would that be useful, in your opinion, or too much of a programming hassle?

Higher resolution in the center is a good approach, and not a problem in terms of programming (it’s just an undistort pass), and in fact that’s true of the Rift, due to the lens distortion. However, remember the fovea can move 25-30 degrees in each direction, so the higher-res area has to be pretty big.
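To make the undistort point concrete, here is a minimal sketch of a simple polynomial barrel-distortion model. The coefficient and the one-term polynomial are my own illustrative assumptions, not the Rift’s actual calibration; the derivative of the mapping is what shows that the warp spends proportionally more display pixels near the center of the view.

```python
# Hypothetical distortion coefficient; a real HMD calibrates this per lens.
K1 = 0.22

def distort(r):
    """Barrel distortion: map an undistorted radius r (0 at the view
    center) to its distorted radius, using a one-term polynomial model."""
    return r * (1.0 + K1 * r * r)

def magnification(r):
    """Local magnification d(distort)/dr. It grows with r, meaning each
    display pixel at the edge covers more of the scene than one at the
    center, so the center of the view gets relatively more detail."""
    return 1.0 + 3.0 * K1 * r * r

print(magnification(0.0))  # center of view
print(magnification(1.0))  # edge of view
```

Under this toy model the edge magnification is 1.66x the center’s, i.e. the same warp that corrects the lens also concentrates resolution where the eye usually looks.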

Everything you mention can affect the VR experience, but some of it will only matter after the basic stuff (like HMDs) works well enough. That’s why I said that it’ll take decades to make VR great.

My first comment here, as I’m taking my first steps into VR/AR, though your talk did give me a head start on where we currently are. My understanding might be completely wrong here, though.

You spoke about anomalies that occur when dealing with human perception. Considering that we currently don’t possess the technology to bring pure human perception into the world of VR/AR, can’t we use those anomalies/bugs to our advantage in some way?

By that I mean that during the good ol’ days of PC game dev, many bugs/restrictions were turned to advantage, by spawning new tiles/players/opponents, or just covering the crack with paper, so to speak, so that the core bugs weren’t really discovered unless someone beta tested a game very, very thoroughly.

How far is that possible when you port a 3D game to VR/AR? Is it possible at all, or would that cause such a perceptual abnormality that it just won’t be good enough for the human brain to accept its existence?

It’s an interesting thought, but I don’t know if there’s a way to apply it to VR. Of course, you could and would design games so that the art, animation, movement, etc., worked well with the limitations of the display, but I don’t think that’s what you mean. The thing is, the kinds of anomalies I’m talking about are core to the way we perceive the world, and perceiving the world as real is core to VR, so I’m not sure it’s possible to paper over them. But I’d be interested to hear any ideas people have!

Hello Michael,
Another great insight! I’ve just recently found VR as my newfound obsession, and while I’m not much of a coder or hardware guy, I find your research and progress to be absolutely fascinating and informative. Not only are you very open about your findings and such, but I’m really blown away by the fact that you are open to discussion and are so proactively involved with the community. I like the way you work.

Great and thoroughly fascinating presentation. I tried the Oculus Rift at GDC and then promptly ordered a dev kit – it’s an amazing experience.

I wondered what your take on depth of field in relation to VR is. I noticed when trying the Hawken demo that when I would look at the cockpit overhanging my head, it looked a little flatter than it should as the background was not blurred in my view like it would be to the naked eye. Do you think that retinal tracking devices would need to be implemented in the VR device to fix this?

Depth of field would be great – but is highly non-trivial. It seems to me that it would require both eyetracking and display devices that can actually display depth of field. Kurt Akeley did that by blending multiple screens, but that’s not a promising solution for an HMD. Depth of field isn’t a requirement, but I think it is one of the hard problems for making AR/VR great, and will take some time to solve.
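As a rough illustration of why eyetracking is part of the problem: a gaze-driven depth-of-field pass would need to know where the eye is focused before it can size the blur for everything else. Here is a sketch of the standard thin-lens circle-of-confusion formula; the function name, the eye-like focal length, and the pupil diameter are my own assumptions for illustration.

```python
def circle_of_confusion(focus_dist, obj_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter (all inputs in meters).

    focus_dist is where the (hypothetically tracked) gaze says the eye
    is focused; objects at that distance render sharp, and the blur
    circle grows with distance from it.
    """
    return (aperture * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))

# With the eye focused at 2 m (focal length ~17 mm, pupil ~4 mm),
# an object at 10 m gets a larger blur circle than one at 2.5 m.
near = circle_of_confusion(2.0, 2.5, focal_len=0.017, aperture=0.004)
far = circle_of_confusion(2.0, 10.0, focal_len=0.017, aperture=0.004)
```

The formula itself is cheap; the hard parts the reply points at are measuring `focus_dist` in real time and having a display that can actually present the resulting blur convincingly.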

What about having the screen move with the eyes instead of with the head? That is, have the screen(s) track the eyes instead of being statically mounted to the face. If the physical screen pixels moved along with the eyes, then judder, color fringing, low pixel density, etc. wouldn’t be such issues, although it would add somewhat more complexity to tracking and introduce new latency problems (and probably new cost and weight problems as well).

In the meantime, Microsoft and the University of Illinois have published their research paper on IllumiRoom. I still think this is a far more sensible approach to the whole subject, at least in terms of VR. It works now, today, with far fewer problems, and could well show up in the next Xbox platform.
