Every year at Quakecon, id Software technology magician John Carmack gives the keynote address. As I did last year, I’m going to watch the whole speech and make a few notes and observations along the way. Unlike last year, the speech is over three hours long, so this might take a while. As of this writing, I haven’t listened to the whole thing yet.

It’s interesting to hear him talk about the game so openly. Most companies do not do this. Most companies tell their programmers to keep quiet, and send out shills to say things like, “We’re very happy with how Shoot Guy III is performing and have full faith in the development team.” Then they lay off half the team, cancel Shoot Guy IV: Shoot Harder, and assign the remaining personnel to making a crappy movie tie-in. Honesty is antithetical to people who want to steer the conversation, and it’s only because Carmack has so much pull that he can make this happen.

I’d love to have a more open, honest dialog between the people who create the product and the people who consume it. I understand why this doesn’t happen, but I think the benefits would outweigh the downsides. Our current culture of spin, deliberate “leaks”, and managing access for puff-piece exclusives is bad for everyone. Changing this would require changing the culture that underpins the business, so I’m not holding my breath.

I’m just saying: From an engineering standpoint there’s incredible value in being able to look at a mistake and say “We screwed up.”

Carmack apologizes for the PC problems at the launch of Rage. Half of all PC players couldn’t play the game? I didn’t realize it was that bad. This might have been the first time I didn’t get an id title at or near launch, so I missed out on these problems. As I understand it, id was doing their internal testing on graphics drivers that their customers didn’t have, and Carmack goes on to explain how they made that mistake.

Having played the game well after launch, I didn’t see most of these problems. I will say that while it’s tough to get consumers to understand why “runs smoothly at 60fps” should be a selling point, it really does make a difference to gameplay. I don’t think I can reliably tell the difference between 30fps and 60fps, but I can feel uneven framerate and hitching.

He’s talking about the fact that Doom 3 weighs heavier on your graphics card than Rage does, even though Rage is seven years newer, which puts it more than an entire graphics generation ahead.

Most people think of graphics cards as a single spectrum of power: more expensive card = more power. But depending on how you divide things up, there are three different aspects to the power of your card. Memory is a measure of how much texture data you can use, which lets you have higher resolutions and more detailed textures. Speed is obviously a measure of how fast the thing can draw pixels, which directly impacts the framerate. Throughput is how fast the card can talk to the rest of the computer, which is either irrelevant or critical, depending on the type of rendering being done. If the scene has a lot of dynamic, changing stuff in it (like many animated figures) then it might need a ton of throughput. If it’s mostly a static scene (a Minecraft world with monsters turned off) then it might need almost nothing. Throughput is generally driven by bus size, which you might recognize as a bunch of inscrutable and unexplained acronyms if you’ve ever shopped for a graphics card: PCI, AGP, PCIe, etc. Drivers also impact just how quickly the data can be moved from main memory to the graphics card.

On the PC, different cards have different strengths in these three areas. On the consoles, throughput isn’t as much of an issue since the GPU isn’t a graphics “card”, but just another processor that shares a common pool of memory with everything else. It’s the difference between a couple of office workers collaborating through the internal company mail system versus those same two people being in the same cubicle farm. They might be slow employees, but they don’t have to worry about waiting for things to go through the mail. Again, this is either critical or irrelevant, depending on what you’re trying to do.

When Carmack talks about “passes” he’s talking about rendering passes. Doom 3 does a lot of rendering passes, which is why it burns more GPU power than Rage. In a single-pass environment, you send a polygon off to the graphics card to be drawn. Then it’s done, and you move on to the next one.

In a multi-pass environment, you need to draw the same polygon many times. In the original Quake, the game would draw all of the walls at full brightness. Then it drew them again, but instead of rendering the textures (riveted metal, blood-stained stonework, etc.) it drew the patterns of light and darkness that formed the shadows. By blending these two passes together, you wind up with walls with light and shadow.
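That blend boils down to per-pixel multiplication of the two passes. Here’s a toy sketch (not Quake’s actual renderer; the function name and values are invented) assuming colors are RGB tuples in the 0.0–1.0 range:

```python
def modulate(texture_pass, light_pass):
    """Blend a full-brightness texture pass with a lighting pass.

    Multiplicative blending: a light value of 1.0 leaves the texture
    untouched, 0.0 drives it to black. Both inputs are per-pixel RGB
    tuples with components in the 0.0-1.0 range.
    """
    return tuple(t * l for t, l in zip(texture_pass, light_pass))

# A blood-stained stone texel in half shadow:
texel = (0.8, 0.2, 0.2)   # reddish texture color
light = (0.5, 0.5, 0.5)   # 50% brightness from the shadow pass
print(modulate(texel, light))  # (0.4, 0.1, 0.1)
```

The hardware does this blend for free as each pass is drawn; the point is that every polygon had to be submitted twice to get there.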

It gets more complicated. There are “texture passes”, where you send a polygon once and the GPU blends several textures together at render time. Think of this as a painter making a single brush-stroke on canvas. Imagine if they could somehow load their brush in such a way that they could paint with two distinct colors at the same time, in the same stroke. This is faster than making multiple strokes, but there are usually strict, hardware-dependent limits to how many texture lookups you can do at one time. (How many different colors of paint you can stack on the brush.)

If you run into this limit, then you end up needing to do multi-pass rendering. That’s where you send the exact same polygon off for rendering a second time but with different texture values. That would be like lowering your brush, cleaning it off, and then repeating the exact same stroke with different paints.

Maybe one graphics card lets you use eight different paints in a single stroke. Another only allows four at a time. Your project requires seven. For best performance on all systems you need to have two different rendering pipelines: One that does the whole thing in a single pass, and another that does it all in two.
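The arithmetic behind that pipeline split is just a ceiling division. A toy sketch using the hypothetical numbers from the paragraph above:

```python
import math

def passes_needed(textures_required, units_per_pass):
    """How many times the same polygon must be submitted, given the
    hardware limit on simultaneous texture lookups per pass."""
    return math.ceil(textures_required / units_per_pass)

print(passes_needed(7, 8))  # 1 -- the eight-paint card does it in one stroke
print(passes_needed(7, 4))  # 2 -- the four-paint card needs two passes
```

In practice you don’t compute this at runtime per-polygon; you ship two hand-tuned rendering paths and pick one based on the hardware, which is exactly the maintenance headache being described.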

He says that Rage draws everything in one single pass, with a pre-pass.

I don’t know what a “pre-pass” is, but I imagine it’s lighter than just a regular rendering pass. So Rage draws the whole world in a single rendering pass, or perhaps one and a half, if we want to acknowledge the pre-pass thing with a hand-wave.

Doom 3 does a pass “per light”. Back when I was fiddling with the Doom 3 editor, the docs clearly warned against having too many lights hitting the same surface at the same time. Play through the game yourself and you’ll see how many surfaces are lit with just a single spotlight. The whole game was speckled with spotlights that rarely touched one another. This is why. You need to do a separate rendering pass for each light hitting a single polygon. If a nutjob level designer aims four lights at a wall, then the game will need to render that wall four times.

This is very different from lighting systems of old, where it didn’t matter how much the lights interacted. They were pre-calculated, so all lighting arrangements had the same render cost. The drawback was that the lights were fixed and you couldn’t move them around as the game was running.

Okay, so now we’re 1,300 words in and we’re at the eight minute mark of his talk. Let’s hope I don’t need to cover the next three and a half hours at the same level of detail.

> Let's hope I need to cover the next three and a half hours at the same level of detail.

FTFY :P

Although there’s a big chunk coming up (maybe an hour or so) where he talks about 3D vision and VR. It’s all interesting, but I imagine you’ll be able to summarize it a lot more, as well as the stuff about gaming in the cloud.

I am totally okay with your covering the entire talk at this level of detail!

By coincidence, I started playing Rage earlier this week. It’s a weird beast, but so far I think I got my on-sale money’s worth. Yeah, the plot and world are nonsense pretty much from minute one. And as you said in your earlier post, I really don’t give a rat’s ass about anyone. But I just wanted to run and drive around shooting things in an interesting and varied world, and it delivers. It runs smoothly at 1920×1080 on a four-year-old gaming PC. The level design is bright, varied, and interesting. Something about the graphics feels particularly “real,” but I’m having a hard time putting my finger on it. I think it’s the sheer organic complexity of the world; the artists were skilled and simultaneously either had almost no constraints in what they could do, or were so skilled that I never notice their constraints.

Have you seen Shamus’s video on Megatexturing (or last year’s keynote, but Shamus’s video is more directly relevant)? Megatexturing was the big new technology of RAGE, which is the reason the art seems so unconstrained. (From what I remember, it turns out to just be a different, somewhat stranger set of constraints.)

I’ve been paying attention to megatexturing since Carmack started talking about it. I think it might be part of the cause, but I’m not sure it’s the entirety. Or if it is, the benefit isn’t that artists can have absolutely unique texturing everywhere, it’s that artists can effectively ignore the simultaneous-visible-texture budget, meaning they can fill the space with far more unique junk knowing they won’t blow the budget when they get to texturing that junk. Whatever it is, it’s not immediately clear, and I think that exceptional level artistry is a big part of the answer. (Another element might be that there are huge chunks of the levels full of interesting stuff that I’m prohibited from getting near. Compared to most FPS I’m playing these days, this is a very constrained corridor shooter that just happens to be mostly outdoors.)

The variety of the gameworld is to be commended, but those textures are rough; even the ‘detail textures’ setting just appears to add high-frequency noise. I’ve always been a texture freak, even to the point of preferring the nasty Jedi Knight 1 engine over Quake 1 because the textures seemed nicer (even if it was an inferior engine in every way). These rough textures make me a very sad panda.

That being said, he’s right about the shooting being more fun in Rage than Doom 3; I’m having a lot of fun having bought in that 50% sale this week. The driving physics are awful, though, and now that both Valve and id have shown that vehicle physics and FPS don’t mix, can we all just leave that idea alone?

Rage is a beautiful game, just don’t get within 5 feet of anything. :-) Turns out, I’m okay with that. I’m occasionally bothered by the low resolutions, but not often, and the benefit of how everything looks at more than five feet is worth it.

As for the driving, it’s not good, but I’m getting used to it. The core gameplay is sound and fun, it’s just that a keyboard is a crude interface for an analog steering wheel. And bonus suck points for not letting me use the mouse to look around. I keep meaning to plug my gamepad in for the driving sequences as a workaround.

This may have been patched since then. What I’m seeing with an untweaked install on an old gaming rig looks very close to the 8k screenshot, not the auto screenshot. If the game looked like the auto screenshot, I would have been extremely disappointed.

Things get blurry when you get close, and the screenshots in question don’t have anything close except the hands/gun, which are equally nice and sharp on my system.

Other than MegaTexturing getting rid of tiling completely, Rage had a lot of work done by Carmack to get the lighting model physically correct: ensuring lights have inverse-square drop-off and computing lighting in linear (as opposed to gamma-corrected) color space. Which is a load of math geekery nonsense meaning lighting looks a lot more “correct” than usual for a game. Carmack’s tweets at the time were mentioning the artists complaining about having to put so much more effort into lighting :)
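A rough sketch of what “inverse-square drop-off, computed in linear space” means. The plain 2.2 power used here is a simplification of the real sRGB transfer curve, and all the names and values are invented for illustration:

```python
GAMMA = 2.2  # rough approximation of the sRGB transfer curve

def to_linear(c):
    """Convert a gamma-encoded (display) value to linear light."""
    return c ** GAMMA

def to_gamma(c):
    """Convert linear light back to a gamma-encoded display value."""
    return c ** (1.0 / GAMMA)

def lit_intensity(albedo_srgb, light_intensity, distance):
    """Light a surface 'physically': work in linear space, attenuate
    the light by 1/d^2, then convert back for display."""
    albedo = to_linear(albedo_srgb)
    attenuated = light_intensity / (distance ** 2)
    return to_gamma(albedo * attenuated)

# Doubling the distance quarters the received light in linear space,
# but gamma encoding compresses that 4x ratio on screen:
near = lit_intensity(0.5, 4.0, distance=1.0)
far = lit_intensity(0.5, 4.0, distance=2.0)
print(round(near / far, 3))  # 1.878, i.e. 4 ** (1 / 2.2)
```

Doing the math in gamma space instead (the common shortcut) makes lights fall off and add together incorrectly, which is exactly the “wrongness” this work was removing.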

Of course, as Carmack off-hand references later in this, all that effort is then immediately ruined by the horrifically bad color toning that’s done everywhere: in particular there are several areas that change the black-point (unlit is not black!?), which is insane. Other engines throw bloom over the top of everything, which is fine (in fact, required for correct results) if you’re working from a HDR framebuffer, but nobody is :(. Unreal Engine 4 looks like they’re working from a physically correct lighting model, though they haven’t said so explicitly, so I’m looking forward to that….

It could also be something to do with depth (I, for instance, have tried employing a depth pre-pass to eliminate overdraw on partially transparent/shaded textures), or something to do with deferred rendering (highly unlikely, considering the probable baked nature of the lightmaps).

Well, from what I know about Virtual Texturing/Megatextures(*), you always have to do some sort of “visibility determination”: You have to know which parts from your Megatexture are needed for a certain view, and at which level of detail (these parts are called tiles). Some of them might already be loaded onto the video card, some might have to be loaded from main memory or disk. In any case, you always need to know which tiles must be loaded next.

A common approach to do this is using a separate rendering pass, where you render your whole scene using a special pixel shader. It does not produce color values, but some kind of identification number – so let’s say you have a polygon which requires tile 1 at LOD 5, then every pixel generated from this polygon will produce something like (1,5) – encoded in some way. These values are rendered into a texture, which is read back into main memory and analyzed. After doing that, you know which tiles are required to render the current scene with best quality. So I’m pretty certain that Carmack’s “pre-pass” is something like this.
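The encode/read-back step might be sketched like this. The bit layout, names, and tiny “feedback buffer” are all invented for illustration; a real implementation packs these values into a render-target color and reads that back from the GPU:

```python
def encode_feedback(tile_id, lod):
    """Pack a tile id and LOD level into one value, the way a feedback
    pre-pass shader might write them into a render target."""
    return (tile_id << 4) | (lod & 0xF)  # low 4 bits reserved for LOD

def decode_feedback(value):
    """Split a packed feedback value back into (tile_id, lod)."""
    return value >> 4, value & 0xF

def tiles_needed(feedback_buffer):
    """CPU-side analysis: read the pre-pass output back and collect
    the set of (tile, lod) pairs needed for the current view."""
    return {decode_feedback(v) for v in feedback_buffer}

# A tiny 2x2 "feedback buffer": one polygon needs tile 1 at LOD 5,
# another needs tile 7 at LOD 2.
buffer = [encode_feedback(1, 5), encode_feedback(1, 5),
          encode_feedback(7, 2), encode_feedback(7, 2)]
print(sorted(tiles_needed(buffer)))  # [(1, 5), (7, 2)]
```

The resulting set is then diffed against what’s already resident on the card to decide which tiles to stream in next.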

In fact, this pass is often not much overhead: For one, the pre-pass is often done at half or even quarter resolution, to reduce the bandwidth needed for the transfer back into main memory. And it has a very simple pixel shader (just output some values calculated from texture coordinates) which is applied uniformly. So we can definitely say “one and a half passes”.

@Christopher M.: Maybe it also serves this purpose, although this would require rendering at the same resolution as the main pass. Which usually degrades performance, since you have to read the whole thing back into main memory every frame.

(*): My bachelor’s thesis has a lot to do with it, so I’d say I do know a bit about it ;)

In V-Ray the pre-passes are used to generate the irradiance map, using the lightmap (generated before the pre-passes). The lightmap is a simple lighting solution while the irradiance map is much more detailed.
Perhaps the pre-pass in Rage is the same as in V-Ray, a lighting calculation.
P.S. Also, V-Ray, one of the best 3D renderers in the world, if not the best, is made in my country.

I really hope to hear even one good reason why I should be interested in the device itself. The problems with old VR glasses, like weight and low resolution, are rather irrelevant compared to the big problems of “can it justify its price” and “how many people can actually use it without eye-strain or some kind of pain after a couple of minutes”.

I have a Vuzix 920 Wrap (which I plan on talking about in a post down the page), and what Carmack says is pretty much spot-on. I haven’t had any problems with eye strain and they weigh pretty much nothing, but they’re not really designed to fit on a person’s head.

On Kickstarter, the Oculus Rift is currently going for about 75% of what I paid for my Vuzix headset (including the tracker) and half what they would usually cost. Even with all the shortcomings, I consider the Wrap 920 to have been worth what I paid for it, and if the final price of Oculus Rift is in the same ballpark, it’ll be a fantastic deal.

[blunt comment]Sentiments like yours are WHY VR technology hasn’t advanced any in the past decade and a half. I’m about as upset about that as John Carmack..[/blunt comment]

Actually, I didn’t think it was blunt in any way. I just thought it was dumb.

Look, if you have a new idea and you simply can’t answer simple questions like that, then the sensible thing is to go back to the drawing board and think things through again. It’s certainly not to go around complaining that everyone isn’t a starry-eyed dreamer like you.

Absolutely right. What does VR technology bring to the world of Game Design? What kind of interface problem is it solving? What mechanics become available with the rise of VR technology?

And don’t you dare say nothing ’bout no immersion. A good game will immerse you through the decision-making process, pacing, atmosphere, etc. Doom I had me weaving and ducking in my chair as I ducked and weaved past baddies, and very few “photorealistic” games have come close to that level of immersion.

You won’t need a mouse to aim. That frees your right hand for additional tasks. Augmented reality games. Depth perception. Better FoV options (optimal FoV depends on the distance between you and the monitor). That’s the stuff I thought of in one minute.

“You won't need mouse to aim. That frees your right hand for additional tasks.” – I suppose that can be used to great effect. I fear though that PC developers (HA! HA hahahaha ha. Ha.) will use it as an excuse to cram additional controls into their game. Restriction encourages elegance and creativity in problem solving. (Pfft, can’t believe I said that with a straight face.)

“Augmented reality games” – ugh, you mean I have to get up to play my game? No thanks. (The idea of AR games is stupid to me. Games are at their best when they manage to convey the experience by gamifying the necessary elements of the experience and removing the irrelevant parts. The “reality” part of AR kind of takes that away, because a “realistic” mechanic is not a “gamified” mechanic.)

“Depth perception” – Carmack talked about it in one of his other videos. There is a lot that goes into depth perception, not just having two eyes: size and overlap, perspective, etc., all the stuff that painters have developed over the years to trick our eyes. Also, I have never had problems with depth perception in a game, and if you do, you might need to Learn Kung-Fu.

“Better FoV” Having a wider screen doesn’t really simulate FOV properly (we have almost 120 degree vision, but we only see detail from 90 or something). We’d need eye-tracking and even bigger screens to solve it, although to be honest I see it as a brute-force method. To me games were always about doing the most with the least amount of resources (I imagine that’s what draws Carmack to the field).

I’m surprised no one has done it yet, but how about a trick to simulate side vision? You devote 10-20 pixels off each side to be side-vision. Anything within the 120 arc but not in your actual FOV leaves a colored shadow in your edge side-vision. So if you’re in a cave, your edges are dark. If there is an opening to your left, the left side of the screen turns lighter. If you’re in the open, the bottom is dark and everything around you is lighter. If something flies above you, you see a portion of your top edge get darker. If you are standing at a ledge, all your edges are light.
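That edge-tinting idea could be sketched as a simple mapping from how “open” the world is in each off-screen direction to a brightness for that screen edge. Everything here (the function, the 0.1 floor, the sample values) is invented purely to illustrate the suggestion:

```python
def edge_brightness(openness):
    """Map how 'open' the world is in an off-screen direction (0.0 =
    solid cave wall, 1.0 = open sky) to a tint for that screen edge.
    The 0.1 floor keeps edges from going fully black."""
    return round(0.1 + 0.9 * openness, 2)

# Standing in a cave with an opening off to the left:
directions = {"left": 0.8, "right": 0.0, "top": 0.1, "bottom": 0.0}
tints = {edge: edge_brightness(v) for edge, v in directions.items()}
print(tints)  # the left edge lights up, the rest stay dark
```

The hard part in a real engine would be computing that “openness” value cheaply per direction; something like a handful of occlusion queries or a very low-resolution render of each off-screen wedge might do it.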

Here’s my take: the primary way I play video games is while multi-tasking with some projects, or relaxing with the wife, or while watching a movie or television show. In no situation do I want to be 100% immersed in some 3D nonsense with something I have to wear on my head. If there’s a market for decent VR, then it will succeed. However, you can’t blame those of us who have no interest in the technology for somehow hindering it.

Do you, or have you had a tendency for, feeling eye-strain when using a monitor in a dark* room? Or when using a CRT set to 75Hz (over 40 minutes) or 60Hz (over ten minutes)?

* Not particularly dark, just dark enough that a lot of people would turn on lights to read a book or something.

Because some of us do, and I feel completely comfortable thinking claims that “something doesn’t strain eyes” are made in ignorance, unless something is backing it up.

The same applies to something being light, which I would like to note was something I specifically mentioned wasn’t one of the big issues. The pain I was referencing is pain coming from having a light source right next to your eyes or from wearing a VR set with the “wrong” type of glasses. Of note here is that not everyone can use contacts and some have really expensive lenses, increasing the total price of ownership pretty high.

Even if these new VR glasses don’t have problems, the basic question of “what good are they for” still stands. Dasick covers it pretty well.

About the price: VR glasses have a hard time selling themselves to me at a price of 300 USD. That’s not exactly pocket change, and just because there are more expensive models around doesn’t change the fact that it’s still 300 USD.

They have to bring something special to the table and have no faults compared to other display types. Well, outside of the obvious “only you can see what is there without a separate monitor”.

Sentiments like mine are why people haven’t been wasting more money on VR technology. If Oculus Rift actually ends up being any good, I can guarantee it depends on technology that is recent or recently started working properly or recently got to a price that makes it possible for the device to be at all profitable.

Fair enough. I’m generally not prone to eye strain, except for when I turned my CRT monitor from its 85Hz refresh rate back to 60Hz. I never played games above 800×600 again, because nothing I had at the time was able to do so at 85Hz.

Though I did say that I hadn’t had any problems, not that nobody does. I’m also not prone to motion sickness and would probably shrug off the disorientation-inducing effects that Carmack mentions with Sony’s HMZ-T1 where you turn the camera one way and your head the other, but one of my roommates claims he gets motion-sick just watching me play Descent.

I’ll quote my original comment, to which you had responded. I’ve added some emphasis:

“how many people can actually use it without eye-strain or some kind of pain after a couple of minutes”

The emphasised part should imply that I’m fully aware that there will be some people who have no problems with eye strain with this device. Since I thought this was obvious, a response of “I’m not having any trouble” reads to me as saying “you’re wrong, no-one will have any trouble”.

“I’m not having any trouble” is my personal shorthand for “I don’t have a large enough sample size to make an accurate prediction, and I recognize that my experience deviates significantly from that of the average person.” Further, I gave an example of someone I know who is on the entire opposite end of the spectrum from me to demonstrate the known range of potential experiences I am familiar with. In short, my answer to “how many people will have trouble…” is “I don’t know, but here’s a range of experiences I’m familiar with, maybe you can extrapolate from that.” You get all uppity, then I try to clarify exactly how much of an idea I don’t have, and you jump down my throat.

What I can tell you is that wearing an HMD is absolutely nothing like squinting at something right next to your eye and I kind of wish people would quit trying to make that comparison because it’s misleading. The experience itself is almost exactly like sitting in a dark home theater. The focal length is set to something much less strain-inducing, and is adjustable to within some sort of definition of normal, which unfortunately isn’t sufficient to allow any glasses-wearers I know to use them without corrective lenses.

So you asked a question you already knew the answer to about a product you don’t intend to use in an effort to sabotage interest in that same project. Then I attempted to answer your question earnestly, and you take offense. I point out what you’ve done, and you deny everything.

Yeah, that’s pretty much textbook troll behavior. A very skilled troll, but a troll nonetheless. I’ll be dismissing all of your opinions from now on.

As for price, it will go down. For one, it theoretically offers a much much wider FOV than even triple-monitor setups, which themselves cost way more than any VR headset we can expect. Another, it offers a complete isolation from external visual noise – people are known to pay a lot more for headphones with active isolation technology (even when it interferes with actual audio quality), so the same can be the case here, with visual noise as well.

As for eyestrain and/or pain.. Well, yes, I completely agree with that. That’s the one big thing that has to be solved. If this turns out just like the modern “3d” movies – where you’ve got to have perfect eyesight and just the right distance between eye focal points, or it won’t work / will induce migraines – then it just won’t catch on.

Carmack goes into *several* good reasons why you might want to have a VR display! Even for regular office work VR may work out to be far superior to the current 24″ monitor – though figuring out input is a big question I have at the moment. I like the idea of Kinect-style motion-tracking your hands replacing mice, which we’ll probably be able to pull off well enough in a head-set in 5-10 years. For text entry though, I don’t see anything that could replace keyboards until brain-interface, at least unless someone comes up with a dramatically different software UI than keystrokes. So you’d want some sort of pass-through vision to see a keyboard, though virtual keyboards might work well enough, even if they don’t have the ergonomics. THE FUTURE!

‘might ever be’ is completely wrong to put there. A mouse involves waving your hand around too, and if the technology is good enough, we’re looking at the difference between the speed of an electrical signal and the speed of light over a couple of centimetres, plus a tiny bit of processing.

A mouse has all sorts of problems, like roughness of the surface, the awkwardness of trying to trace a clean circle with your hand on a 2D plane, a very slow learning curve, precision problems in some areas (when reviewing Windows 8, people were complaining about how hard it is to move your mouse to a small target in the corner of the screen when you’ve got a split-screen monitor), etc.

Sure, we design around it, and they’re pretty small problems, but they’re problems we could possibly get around even more easily if we didn’t use a mouse. Heck, if we insist that moving your hands in the air is imprecise and difficult, you can just make the input rolling your hand over a desk and pretend there’s a mouse there.

I’m talking about Kinect-style hand waving here. Light might travel faster than electricity, but when I move a mouse, I send an input. When I wave a hand in front of a Kinect, it needs to process the image, recognise that yes, that is a hand and that yes, I am waving it around and only then does it send the signal.

If you have a mouse with fibre optics, the speed of the signal will be almost as fast, and it will waste no time on figuring out what you wanted to do.

Additionally, you can’t use your hand the way you use a mouse. If I make a clicking motion, I have to make sure the machine can pick it up. If I lift my hand to move the “mouse” without moving the pointer, I have to make sure the machine can see it.

I am not watching a three hour presentation just so I’ll have to watch it again in pieces when Shamus covers it in order to remind myself what Carmack said. All in due time.

But I’ll note something: “Even for regular office work VR may work out to be far superior to the current…”

That’s an expensive “may” you’ve got there. I’m looking at it on a personal level, and 300 USD is a lot to throw at something that may be better. I’m willing to put money into ergonomic devices, but those are far closer to “mandatory” than “possibly nicer than what I’ve got”.

A good thing nobody is asking you to, then…? The current $300 price point is for a developer kit for an experimental design that was, up until he got 5,000 backers on Kickstarter, probably going to be all hand-made by him, absolutely not a commercial product that he’s telling you to buy! People are talking about what the Rift is going to do for VR, not that this is the awesome VR tech you’ve been waiting for, available now!

I would guess proper commercial products would be not that much cheaper for a good while (I expect cheap and nasty ones for $100, to high end monsters for $1000 to be the final range, no idea on what the “average” model would be though), but on the other hand that’s not too expensive in comparison to the monitor it’s most closely comparable to – after all I’m typing this on a $1000 monitor (Dell U2711, for the curious). If they go with pass-through or overlay (AR “augmented reality”, not VR) and it’s low impact enough, it would probably be a replacement for your phone, too.

Exactly this. Why should anyone be hyped about a device that might come out 10 years from now and might solve some kind of problem that no-one can state outright?

Simon:
How can you be so certain that $1000 VR goggles will actually be as good as a $1000 monitor? And since the goggles will be something you not only handle daily, but put on and move around in, they’re much more prone to being accidentally broken, so they’re likely more expensive in the long run than the asking price anyway.

If you’re making a comparison that “it will be the premium mode like this premium monitor” you’re missing the problem. If all the models that are sub-$300 are essentially crap, you can call them “budget models” all you want, they’re still not usable and might as well not exist. Therefore the actual price is $300 or over.

“Replacement for a phone” – Sure, I love walking into lampposts. Pass-through would be a whole other problem, by the way.
If you use cameras, for example, they’d be heavier, more expensive, and tricky to get used to, as your “eyes” are in a different place. This is assuming you don’t lose too much peripheral vision.
If you make the display see-through, you’ll lose some of the image quality and likely make it expensive. Great replacement for a monitor to be sure.

My understanding of the situation is that Rage was tested with an older version of AMD drivers, yet shipped with a newer, buggy, version and required it to be installed in order to run.

This is pretty bad, but what soured me was when I ran into a comment from Id, where they stated that it was all AMD’s fault because they had made buggy drivers. Neatly ignoring the fact that Id hadn’t bothered to test the drivers at all before shipping them alongside their game.

If either had done their job properly, the whole thing could’ve been avoided. Meaning that in my eyes both were at fault, and Id was being a dick for denying responsibility and pushing all the blame on AMD.

The part where Carmack talks about this gives me a feeling this isn’t the full story, but it’s a positive sign that they’re accepting, at least partial, responsibility.

“Neatly ignoring the fact that Id hadn't bothered to test the drivers at all before shipping them alongside their game.”
Erm, they didn’t? Seriously?
Now, I haven’t paid much attention to it, but it is my understanding that it was essentially a packaging issue: basically, the wrong, outdated drivers got shipped instead of the right ones.
As for whose fault that is, well, that’s up to where and how those wrong drivers got introduced. I have no clue about that.

My impression, when I looked into this at the time, was that Id were testing with pre-release drivers that didn’t show the issues, and ATI had basically promised that the fixes would go into the next driver release – before Rage’s release. Of course, they didn’t.

Special drivers are apparently common enough that Battlefield 3 had its own pre-release drivers available to the public at the same time as Rage’s, until they figured out how to get both sets of fixes into the release drivers – so it’s not *quite* like Id were being totally naive here either. On the other hand, half of this is from Carmack interviews, and the other half is from what I saw of when ATI released various drivers, so this is hardly rock-solid journalism :). (Since I played Rage with an ATI card at launch, I was following this pretty closely.)

If all this is accurate, I’m not quite sure what Carmack was apologizing for here – it sounds like he thinks they should have waited for the fixed drivers to come out before releasing Rage, and I’m sure that would have gone over well with investors. Personally, I think he should have added or switched to Direct3D, as he tweeted was a possibility a while before release (the actual GL calls are a tiny fraction of the codebase). The reason Rage had so much trouble with drivers was probably a toss-up between Rage being one of the only commercial OpenGL games and megatextures being a strange and demanding new use of the API.

On the other hand, if you were OK with running the pre-release drivers and copy-pasta’ing .inis, Rage worked well enough on ATI cards at release, and it’s pretty much solid now – though you’re still going to see a few frames of pop-in here and there, just as a consequence of how virtual texturing works.
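Those few frames of pop-in fall directly out of how on-demand texture streaming works. Here is a toy sketch of the general idea (names, structure, and the 3-frame load latency are all illustrative assumptions, not Rage’s actual implementation): tiles are paged in asynchronously when first sampled, so the renderer shows a low-res fallback until the load completes.

```python
# Toy model of virtual-texture (megatexture-style) streaming.
# The tile names and the fixed 3-frame latency are hypothetical.
LOAD_LATENCY_FRAMES = 3

class TileCache:
    def __init__(self):
        self.resident = set()   # tiles available at full detail
        self.pending = {}       # tile -> frames until its load finishes

    def sample(self, tile):
        """Return the detail level available for this tile right now."""
        if tile in self.resident:
            return "full-res"
        # Not resident: start (or continue) an async load, and
        # render the low-res fallback in the meantime.
        self.pending.setdefault(tile, LOAD_LATENCY_FRAMES)
        return "low-res"

    def end_frame(self):
        """Advance all pending loads by one frame."""
        for tile in list(self.pending):
            self.pending[tile] -= 1
            if self.pending[tile] == 0:
                del self.pending[tile]
                self.resident.add(tile)

cache = TileCache()
for frame in range(5):
    print(f"frame {frame}: {cache.sample('rock_wall_17')}")
    cache.end_frame()
# The tile only reaches full resolution after a few frames --
# which is the visible "pop-in".
```

The point is that the low-res frames aren’t a bug, just the unavoidable window between a tile being needed and its data arriving.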

so this is hardly rock-solid journalism
You still did better than regular game journalism, mate.

I'm not quite sure what Carmack was apologizing for here
Possibly throwing fans a bone. Even if the fault isn’t yours, people like it when you apologize.
That, and Carmack might feel responsible anyway, because whatever the issue might be and whoever is ultimately at fault, in the end his game didn’t work.
I, for one, would blame myself if I were in his place – at the very least for trusting that people outside of my control would do what they promised.

So they didn’t test the driver at all before release, but AMD promised they were fine, so it’s not in any way, even partially, Id’s fault?

Never mind that if they had tested and knew it didn’t work, they could’ve warned people before or at release that RAGE wouldn’t work for AMD users. Instead, from what I understood, they kept quiet for a couple of days after release before saying anything.

I’m sorry you had a bad experience – I wish I hadn’t had to go through that crap either! But I wish you wouldn’t attack a straw man here: of course they could have done better with communication, both with their customers and with ATI, to get the problem fixed in the first place! Of course they could have held the game back (though only on PC, most likely, and to the fans’ complaints). Of course they could have taken less technical risk to avoid hitting driver problems in the first place! I didn’t challenge any of that!

I only wanted to clarify that it’s quite plausible ATI holds the majority of the blame for the state of Rage at launch. Id is certainly not fault-free here, but putting all the blame on them is unfair – if only because otherwise ATI doesn’t have any reason to do better on its drivers.

I did not own or play RAGE then, nor do I now. While it seems hard for people to believe, I care about other people’s problems as well.

Apologising doesn’t mean saying that you were the only one who did wrong, it means acknowledging that something you did was wrong. Id didn’t test it, they did something wrong, that’s a good reason to apologise.

Shamus, I really like the way you explain the little details. I hope you can be critical and analytical like this with all the details of the keynote. I almost missed these details in my viewing, and I like your way of translating them into bite-sized nuggets of wisdom.

Also, in tribute to Mumbles… “Hey, this John Carmichael guy looks like Rutskarn!”… She’s ruined it for me; every time I see John Carmack, I’m thinking “that Carmichael guy is awesome”.

As of writing this post, I’ve seen about half of the keynote. Looking forward to the other half tomorrow.

I always enjoy watching John Carmack talk, in part because he reminds me of me. It gives me hope that I may get to his level some day. The VR segment was particularly interesting to me, and I’m about *this close* (pinches fingers together) to asking Shamus to let me write about it (and if you offer anyway, I totally accept!)

I actually own one of those headsets Carmack talked about. In fact, I’d be willing to bet money that the first one he treated himself to was either a Vuzix Wrap 920 VR bundle or a Vuzix Wrap 1200 VR bundle. I have the former, and while it’s fantastic and I love it, pretty much everything he says is spot-on. It has a pair of 640×480 LCD screens, which at least really are 640×480 RGB pixels, not 640×480 individual pixel elements. It accepts resolutions up to 1024×768 – 1.6 times the native size in each dimension – so small text can get blurry. They’re better for video or gaming than for everyday use. The upshot is that, since they accept almost as many columns as the two screens have between them in side-by-side stereo mode (not interleaved; that’s the now-discontinued VR920), you don’t really notice the missing columns. Lenswork has been used to push the virtual focal distance out to about five feet (I estimate), so I’ve never had any trouble with eye strain, but the arms do dig into the sides of my head. I think I may replace them with an elastic band. They make a really nice second screen, though, and an even better private screen. (I probably haven’t used them for whatever it is you’re imagining. Probably.)
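Running the numbers from that comment as a quick sanity check (the split-and-stretch behavior is my assumption about how side-by-side stereo mode typically works; the resolutions are the ones stated above):

```python
# Specs as stated in the comment above (Vuzix Wrap 920):
panel_w, panel_h = 640, 480    # per-eye panel resolution
input_w, input_h = 1024, 768   # maximum accepted input resolution

# In side-by-side stereo, each eye gets half the input's columns...
per_eye_w = input_w // 2            # 512 columns per eye
# ...stretched onto a 640-column panel.
h_scale = panel_w / per_eye_w       # 1.25x horizontal stretch
# Across both eyes, the input supplies this fraction of the
# panels' combined 1280 columns:
coverage = input_w / (2 * panel_w)  # 0.8

print(per_eye_w, h_scale, coverage)  # → 512 1.25 0.8
```

So each eye sees 512 source columns spread across 640 physical ones, which matches the observation that the “missing” columns are hard to notice.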

The general problems that John Carmack talks about are pretty much spot-on. HMDs are advertised as if they were a TV viewed at a given distance, which is all a ploy to make something with a 32-degree FOV sound impressive. Sure, it’s like a 60-inch TV at ten feet. Or a 3-inch TV pressed right up against your nose. WHOOPEE! Likewise, the blacks are more like reds, and while I haven’t played much with the head tracking (I will address that momentarily), it does have a small delay, which is more than a little frustrating.
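The “60-inch TV at ten feet” marketing trick is just angular size: any screen size and distance with the same ratio subtends the same angle. A quick sketch of the arithmetic (the function name and the specific small-screen numbers are mine, chosen to match the 60-inch/ten-foot ratio):

```python
import math

def angular_fov_deg(size_in, distance_in):
    """Angle (in degrees) subtended by a screen of the given
    width/diagonal viewed from the given distance, both in inches."""
    return math.degrees(2 * math.atan(size_in / (2 * distance_in)))

# A 60-inch TV viewed from ten feet (120 inches) away...
tv = angular_fov_deg(60, 120)
# ...subtends exactly the same angle as a 3.3-inch screen
# held 6.6 inches from the eye (same size/distance ratio).
tiny = angular_fov_deg(3.3, 6.6)

print(f"{tv:.1f} deg vs {tiny:.1f} deg")  # → 28.1 deg vs 28.1 deg
```

Which is why quoting a TV-equivalent instead of the FOV in degrees is pure spin: the degrees are the honest number, and the TV comparison can be made to sound as grand as you like.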

FYI, I suspect the much nicer HMD he mentions later is this one (available on eBay for about $10k), or maybe the upgraded model. (WANT a piSight!)

Where VR tech is really behind the curve is the absolute, utter dearth of usable software. The Vuzix drivers come with a set of patches that let you use them with several games (Half-Life anything, Portal 1 and 2, and Left 4 Dead, among others), but if it’s not on that list then… well, I haven’t put anything together for L4D2 yet. A Google search reveals that there is exactly one third-party program available that uses the tracker; it does exactly one thing, and it was abandoned several months ago. The prime reason for this is that Vuzix seems to have a counter-intuitive vendetta against the open-source community. I would expect that a company which makes all of its money selling hardware – and for which software support is an expense – would be more eager to offload the software aspect to consumers. Getting your hands on a dev kit, however, requires signing a license agreement that includes personal information, and all it gets you is the reference libraries for the .dll files (not even the .dll files themselves!), some header files, and a list of functions. Everything’s closed-source.

I’m about ready to end the license agreement I signed (a perfectly valid option included in the agreement) and attempt to reverse-engineer the libraries.

1. The Oculus Rift and monitors should be partners, not one replacing the other. Some people simply cannot handle these devices (all my relatives who did not play games before age 30 get motion sickness after 20 minutes and have to stop). For those of us who did grow up with them, though, the OR is the future, where we won’t have to rely upon a mouse. Although TrackIR already does what the OR does in a few games – I guess we need to interface it with a Wii remote for a complete experience. Good times to live in, I guess.

2. Einstein quote: ‘Everything should be made as simple as possible, but not simpler.’

John doesn’t load any business/political squirming into his speeches; he comes from an honest and straightforward perspective. He does not ultimately say things to sell things. (Valve manage everything they do and say for maximum profit, as do EA, Lionhead, and ActiBlizzard; it’s just that some choose different methods.)

Which is why I like him and will support his ideas (as they are good).

Regarding the VR headsets… as long as they are bulky things that look like, and weigh as much as, a sci-fi marine’s helmet, they’re not good enough, imo. The ideal end product should be no larger than the tinted glasses mountain climbers use.

As for that Kickstarter headset – well, isn’t it way too low-res for these days? Also, I know that a decade ago there was much discussion over how PC monitors were ruining kids’ eyes, etc. Regardless of how true that turned out to be, I’m semi-curious whether these VR headsets couldn’t be genuinely harmful, slapping displays so close to the eyes (a distance nobody would normally be able to focus on properly).

The current specs of the Oculus Rift are quite impressive, actually, and they claim they’re going to get better. It’s no Sensics piSight, but at the same time it’s in a price range that ordinary people can afford. (Not all of them should… but they can.) I would be a backer if I weren’t currently boycotting Kickstarter. I’ll definitely be among their first post-Kickstarter customers.

As for size, I’m actually not put off by the bulkiness (I recently added an HMD to my Minecraft skin), and the weight is generally less than the average gamer’s super-noise-cancelling-mega-bass-uber-headset. The simple fact is that you’re NEVER going to get “The Matrix” unless you’re intercepting nerve signals directly from the brain… like they were in The Matrix. (SPOILERS FOR A 13-YEAR-OLD MOVIE!)

As for eye damage… pretty sure that’s an undecided issue. The lenses are designed to push the focal length out to something more neutral than the single-digit-millimeter distance from your pupils at which the glasses actually sit, and I personally have had more discomfort from the arms of my Vuzix Wrap glasses than from the displays themselves. I played a Virtual Boy demo unit back when they were on the market, and come to think of it, I believe the damage those could cause to children’s eyes and depth perception was never confirmed. Still enough to scare parents away from buying one, though.

At this point, I think the eye-strain issue is largely a scare story, and until existing medical treatments and research currently in active practice that require killing a child to work at all are ended, I don’t believe it’s worth discussing.
(Naming the practices in question would violate Shamus’s ban on certain conversations, so I will not, nor will I respond to inquiries thereof.)

From an engineering standpoint there's incredible value in being able to look at a mistake and saying “We screwed up”.

I estimate that about half of all the problems of human civilization could be solved relatively quickly if people did that. Of course, that would also require that, instead of looking for someone to blame, people start looking for solutions.

Shamus, something on your site seems to cause Firefox to choke on Flash Player, sending it into an endless memory-leak loop with a single thread using one CPU core at 100%. I know it’s not your fault that Firefox or Flash Player does that, but I have never experienced this anywhere else.

I have now experienced this for the second time, and both times it was here, just when I clicked “edit comment” because I made a typo*. The first time was a week or two ago, and I did not think much of it, but now it has happened again with exactly the same behaviour – even the zig-zag line of increasing memory consumption looked the same – so there must be a pattern.

*In this case, I improperly closed the <i>-tag around the word “all”. Due to my specific browser settings, cookies are deleted upon closing (I had to kill the Firefox process), so I cannot edit it myself – would you or some other moderator fix that, please? :)

Two of the things I look forward to this time of year are 1) Carmack’s keynote speech and 2) your annotations on it. I am totally going to bother my students with all this information once the school year has started. :-)

I know you dutifully starred the fact that Doom 3 “holds up” graphics-wise only, that gameplay is a separate issue.

I’d like to add a “yes, but” to that. The graphics are part of the reason the gameplay was so lackluster for me. Part of the fun of the original Doom games was having the chance to face so many dudes that, at several points in the game, aiming wasn’t a problem. Emptying your chaingun in one general direction would score a hit with every round, and you still wouldn’t have polished off all of your foes. Thanks to the huge amount of resources needed for even a trio of monsters in Doom 3, you knew going in that you’d never be facing a horde of baddies. It’s like discovering the maximum polygon count on an old (or Wii) console game for a given area. In Destroy All Humans, if you left enough vehicle wrecks in your immediate vicinity, you could guarantee that no more cars or tanks would show up until you moved away and allowed the game to remove them. Doom 3 was like this, though even more heavily scripted.

Sorry. Had to rant. I still can’t get the Doom 3 sequel/add-on game I bought on Steam over a year ago to run, even though the core game runs fine. Weird.