Disclaimer: Like I said at the start of this series, I am not a lawyer. This is a complicated case and I am not an expert on the law, VR, or corporate contracts. I'm working with incomplete records of complex events where there were often more than two sides to every story. I've done what I could to be accurate, but this series is intended as opinion and commentary, not authoritative historical record.

VR is a strange thing. For people who haven’t tried it, it’s natural to assume this is just another technological advance like plasma screens or surround sound. They think this is just the next step up in fidelity.

This is not the case. VR is as different from looking at a screen as a screen is different from a radio. VR engages parts of the brain that aren’t really involved or excited by traditional screen experiences.

Presence

A screen grab of the VR demo at Valve in 2014. This is back when they were still using the Oculus Rift, before they developed the Vive, their own competing headset.

A notable example is one that Valve was offering in its VR labs in 2014. In the demo, the user would find themselves standing on a narrow stone platform floating in a vast open space. The space wasn't even designed to look real. The skybox was composed of old webpages. The platform texture looked like something out of Half-Life 2. If you looked at this on a traditional screen it would be incredibly boring. It looks like "Baby's First Game Level". It's cheap and dull and you wouldn't give it a second look.

But in VR this stupid box room can be a visceral experience. If you’re at all nervous around heights then you’ll probably catch your breath, feel your knees lock up, and have an intense desire to grab onto something solid. You know you’re in a VR lab and you know it’s just a simulation, but the input reaches deep down and tickles the atavistic parts of your brain. You can see a similar idea at work in the Fear of Heights VR demo. While FoH makes for a better demo to watch, I think the Valve demo makes the more dramatic case for VR, since it accomplishes the same effect using only rudimentary visuals. It manages to convince you using unconvincing graphics, thus driving home just how different it is from traditional screen experiences.

This feeling of “being there” is called presence, and it’s only possible in VR. This effect isn’t a novelty. It persists, even in people who use VR regularly.

This is good, because it makes VR an amazing product with new possibilities. But it’s also bad, because it’s very difficult to make people understand how different it is. You can’t just show it on television or have them download a demo. If you want someone to understand how amazing VR is, then you need to stick a VR headset on their noggin and stand back.

In April of 2012 John Carmack reached out to Oculus and asked if he could try their prototype, called the Rift. Palmer Luckey – being a fan – sent him one. It was one of only two prototypes in existence. It’s entirely possible this was the best VR headset in the world.

The Rift was good, but like so many times during the evolution of VR, this breakthrough only revealed that VR was a little more complicated and challenging than everyone anticipated. There were several problems that would need to be solved before VR would be ready for the world.

Problem #1: Field of View

This was what passed for VR in the 1970s. The past sucked.

The whole point of VR is to envelop you in a virtual world, which means the scenery needs to fill as much of your field of view as possible. You might notice that bringing up a photograph of a relaxing mountain scene on your mobile phone and mashing your face against the screen does not produce the desired effect. You can’t comfortably focus on things that close to your eyes, and even if you could it would ruin the sense of depth and being enveloped in the virtual world.

This problem is easy to fix with lenses, assuming you don't mind cutting off the user's field of view. You can create something that's basically a View Master. You'll have a comfortable view of things that appear to be in the distance, but you'll have the same field of view as someone looking through binoculars. The drawback is that this pretty much ruins any sense of presence.

Programmer Michael Abrash gave a presentation at Steam Dev Days in 2014, talking about this exact problem. He pointed out that if you wanted to properly bend a rendered image to fill the user’s field of view, it would require a complex chain of nine precise lenses, the largest of which would be over a foot in diameter. That’s obviously not the sort of thing you can comfortably wear on your face.

The lenses required for un-distorted VR. Taken from the Abrash talk linked above.

Of course, that's what you get if you want a perfect, undistorted image delivered to the eye. Palmer Luckey experimented with different lens arrangements and came up with a system that allowed the image to fill the user's view using only two sets of lenses. The lenses were reasonably small and lightweight and could fit within the expected volume of a VR headset.

The problem with this solution is that bending an image that aggressively will inevitably cause distortions. When wearing the Rift prototype, users would see the world as if through a fisheye lens[1].

Once he was able to try the headset for himself, Carmack solved this problem on the software side. If the lenses create a fisheye effect, you can negate this by simply (actually, it's not simple at all) creating an image with the opposite distortion so that it is "corrected" after passing through the lens.
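As a rough sketch of what that software-side fix looks like: render normally, then warp the image with a radial function that's the inverse of what the lens does. The polynomial form below is a common way to model radial lens distortion; the coefficients (and even the sign of the warp) are invented for illustration and are not the real Rift values.

```python
# Illustrative radial warp coefficients -- NOT real Rift values.
K1, K2 = 0.22, 0.24

def pre_distort(x, y):
    """Displace a view coordinate (centered on the lens axis,
    roughly -1..1 in each direction) radially, so that the lens's
    own distortion cancels the warp and the eye sees an
    undistorted scene."""
    r2 = x * x + y * y                    # squared distance from center
    scale = 1.0 + K1 * r2 + K2 * r2 * r2  # polynomial radial warp
    return x * scale, y * scale

# The warp is zero at the center and grows toward the edges,
# matching how lens distortion behaves:
print(pre_distort(0.0, 0.0))
print(pre_distort(0.7, 0.0))
```

In a real renderer this runs as a post-processing shader (or a distortion mesh), and the actual coefficients have to be measured from the physical lens.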

Problem #2: Chromatic Aberration

You can see the effect is strongest at the edges of the view, and basically vanishes in the center.

Unfortunately, the strong lenses have other side effects. Different wavelengths of light bend at different angles, which means that while wearing the headset, the user will see the various color ranges appear displaced from one another. By coincidence, this looks sort of like stereoscopic images created using the red/blue anaglyph system in old 3D movies. But the two things are actually unrelated. In fact, in an anaglyph 3D image the color separation is required for the effect to work, while in VR the color separation kind of ruins it (or just makes it crappy).

Again, the solution seems to be to correct for the lens behavior in software. You can have the software render the image with the color ranges displaced in the opposite direction, so that after passing through the lens they will be properly re-combined into the final whole.

This explains why VR screenshots always have those strange blurry color auras around them, even though people wearing VR headsets don’t see that effect.
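One way to sketch that per-channel correction: sample the red, green, and blue channels at slightly different radial scales, so that after the lens spreads the wavelengths apart they land back on top of each other. The coefficients here are invented for illustration.

```python
def chroma_sample_coords(x, y):
    """Return separate sample coordinates for the R, G, and B
    channels.  Because the lens bends each wavelength by a slightly
    different amount, each channel is pre-displaced in the opposite
    direction so the colors recombine at the eye.  Coefficients are
    invented, not real hardware values."""
    r2 = x * x + y * y
    red_scale  = 1.0 - 0.006 * r2   # pull red inward a touch
    blue_scale = 1.0 + 0.014 * r2   # push blue outward a touch
    return ((x * red_scale,  y * red_scale),
            (x, y),                 # green is the reference channel
            (x * blue_scale, y * blue_scale))

# At the center the channels coincide; near the edge they separate,
# which is exactly the pattern visible in raw VR screenshots:
print(chroma_sample_coords(0.0, 0.0))
print(chroma_sample_coords(0.9, 0.0))
```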

Problem #3: Head Tracking

The problem here is that your head doesn't pivot perfectly around an axis located between your eyes. When you tilt your head forward, you also MOVE your head forward, and the distance moved varies from person to person.

I owned a Devkit 2, the second-generation headset from Oculus. The original Rift didn’t have positional head tracking. If you turned your head, the virtual view would turn as expected. But if you moved your head to the side then the world would feel like it was moving with you, as if the whole world was strapped to your face. (Because, you know, it was.)

With my Devkit 2, occasionally I’d move outside of the active area for head tracking and it would stop working. This gave me a few seconds to experience what the original Rift must have been like. The moment this happened, it would instantly bring waves of VR sickness.
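The pivot problem from the caption above can be illustrated with a simple "neck model": treat the eyes as sitting on an offset arm above and in front of a pivot in the neck, and rotate that arm with the head. The offsets below are rough illustrative numbers; in reality they vary from person to person, which is part of why orientation-only tracking falls short.

```python
import math

# Rough neck-to-eye offsets in meters -- illustrative only.
EYE_UP  = 0.075   # eyes sit above the neck pivot
EYE_FWD = 0.080   # ...and in front of it

def eye_offset(pitch_rad):
    """Where the eyes end up (forward, up) relative to the neck
    pivot when the head pitches forward by pitch_rad.  Sign
    conventions here are arbitrary."""
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    forward = EYE_FWD * c + EYE_UP * s
    up      = EYE_UP * c - EYE_FWD * s
    return forward, up

# Tilting the head 30 degrees moves the eyes by a few centimeters --
# translation that pure rotational tracking never reports, so the
# rendered view fails to move the way your inner ear expects.
print(eye_offset(0.0))
print(eye_offset(math.radians(30)))
```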

What does VR sickness feel like? It’s a bit like a headache. It’s also a bit like being dizzy, with maybe a bit of nausea. It feels kind of like all of those things, but it really is a distinct sensation. Regardless of how you describe the sensation, it feels terrible.

Worse, VR sickness can linger. You might start to feel a little uncomfortable in the simulation, so you take the headset off. But the problem doesn’t vanish immediately. If you continue to engage even after symptoms begin, the VR sickness might hang around for hours. It depends on the person.

In any case, head tracking is a key component of avoiding VR sickness. While everyone has different tolerances for VR and experiences differing levels of VR discomfort, I think Devkit 2 – the version of the Rift with head tracking – represented the minimum viable product for VR. It had just enough technology to deliver on the basic premise of "being there" and giving that sense of presence. While a small minority of people can enjoy VR without head tracking, the vast majority of us need it to avoid sickness.

Problem #4: Latency

The user's head moves. Then the camera takes a snapshot of the user. Then the computer processes the camera view and figures out where the user's head is now. Then a new frame is rendered at the updated position. Then it shows up on the screen. There are a lot of steps involved, and the longer it takes the worse it all feels.

Imagine you're wearing a VR headset that, for some reason, is only rendering things at one frame a second (protip: don't actually do this!). You're looking directly ahead at a virtual lamppost or some other landmark. Then you turn your head to the left. Because you've got such a horrible framerate, the display doesn't update as you move your head. Instead, the lamppost remains in the center of the screen, which means that it will appear to "come with you", always hovering directly in front of your eyes. (This will probably cause the VR sickness I mentioned above.) Then finally the rendering catches up. The headset updates, and the lamppost is suddenly on the right side of your field of view, where it should be. To someone wearing the headset, it feels like the lamppost floated to the left and then abruptly jumped over to the right.

As you improve framerate this effect will diminish, but it’s very hard to get it to go away entirely. Even at 60fps, it means the lamppost will float to the left for 16 milliseconds and then appear to jump a tiny bit to the right as the display catches up. To someone wearing the headset, it feels like the lamppost is sort of “vibrating” as they turn their head side-to-side. This effect is called judder, and it’s hard to get rid of.
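The judder math is simple enough to sketch: the apparent angular error is just head speed multiplied by the motion-to-photon latency. The 100 degrees/second figure below is a rough, ordinary head-turn speed I'm assuming for illustration, not a measured constant.

```python
def drift_degrees(head_speed_dps, latency_ms):
    """Angular error while turning the head: how far the world
    appears to lag behind = head speed x motion-to-photon latency."""
    return head_speed_dps * (latency_ms / 1000.0)

# A casual ~100 deg/s head turn at one 60fps frame of latency
# produces about 1.6 degrees of lag -- small, but visible as judder:
print(drift_degrees(100, 16))
# Cutting latency to 4 ms shrinks the error fourfold:
print(drift_degrees(100, 4))
```

This is why raw throughput doesn't help by itself: rendering more pixels per second does nothing if each frame still arrives a full frame-time after the head moved.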

You might think that this problem can be solved by buying a faster graphics card or rendering simpler scenes, but that’s not the case. Better graphics cards get us more throughput, but we need better (lower) latency.

I need to deliver all these pixels on time!

Let’s say Bob Nvidia is running a shipping company. His trucks can deliver me 1,000 packages by next week. Pretty soon he upgrades his fleet and now he can deliver 2,000 packages a week. A year after that, he can deliver 4,000 packages in a week.

Now imagine I don’t necessarily need a lot of packages, but I need them tomorrow. That’s the latency problem. The system is optimized for volume and now we’re trying to do something faster than anyone would have thought was reasonable when the system was designed. In 2012, most graphics cards, graphics drivers, display screens, and games had been designed under the assumption that the user wouldn’t have any use for more than sixty frames per second.

I’m 45 years old. When I was younger I could really tell the difference between 30fps and 60fps. But at this age the difference is pretty slight and I often don’t notice. As long as it’s consistent, I’m fine. But even my worn-out eyeballs are sensitive to framerate in VR. 30fps suffers from horrible judder that makes me want to slam my eyes closed when I turn my head. I would say that 60fps is basically tolerable, but only if I’m playing something slow-paced. For an action game, I’d probably want something closer to 90 or even 120fps.

To get the latency down to something comfortable, Carmack had to dig down in the rendering layer, look for bottlenecks, and then figure out ways to get around them. The raw power (in throughput) was there, but since there wasn’t previously a demand for extremely low latency rendering almost nobody had paid attention to this stuff until now.

Problem #5: Center of Projection

A lot of processing power is spent rendering two views that are almost - but not quite - entirely similar.

According to the Zenimax complaint, there was a fifth problem that Carmack solved, which was the "center of projection" problem. This could mean a lot of things and I'm not sure what they're talking about specifically. Presumably it has something to do with how to position and orient the user's virtual eyes within the simulation.

The software needs to render two different views – one for each eye. You can think of their virtual eyes as a pair of cameras floating around the world. If you aim both cameras in exactly the same direction then the 3D effect won’t quite work as expected. You won’t get the feeling of close objects floating right in front of your face. To get that, you need to angle the cameras inward – to make them slightly cross-eyed as it were – to make objects “pop”. But hang on – isn’t that something that, in the real world, they do with their own eyes? If we render a cross-eyed view and then they cross their eyes to look at it, won’t that be… wrong? But we can’t cross their eyes for them and we can’t track where their eyeballs are looking, so how do we know where to put the cameras? Hm.
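For what it's worth, the usual answer to that puzzle in stereo rendering is not to toe the cameras in at all: keep the two cameras parallel, offset each one sideways by half the interpupillary distance, and use an off-center projection, letting the viewer's own eyes do the converging exactly as they would for real objects. A toy disparity calculation under that scheme, with a typical-but-assumed 64 mm IPD:

```python
IPD = 0.064  # interpupillary distance in meters (typical value, assumed)

def disparity(depth_m, focal=1.0):
    """Screen-space separation between the left-eye and right-eye
    images of a point at the given depth, for two parallel cameras
    offset by the IPD.  Near points separate a lot (strong pop-out);
    distant points barely separate at all."""
    return focal * IPD / depth_m

print(disparity(0.5))    # something right in front of your face
print(disparity(100.0))  # distant mountains: nearly zero
```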

This is complicated stuff and there aren’t necessarily obvious answers. I wasn’t aware that Carmack made any particular breakthrough in this area. I’m not really disputing this claim. I’m just saying that I can’t nail down what specific advance they might be talking about in this instance.

And then the executives took credit for everything anyway.

I said above that Carmack solved these problems, but in truth it’s not at all clear who did the solving, which is important because that’s basically what this case is all about. Zenimax is trying to make the case that once Carmack had his hands on the prototype, all further advancements were his work alone.

But this was a collaboration between a hardware engineer and a programmer regarding a product that requires perfect integration of hardware and software. It would be a pretty big stretch to give either party credit for the whole thing.

For example, Luckey was certainly aware of problems #1, #2, and #3. Possibly he envisioned the solution himself, but lacked the coding expertise to realize it. In which case, Luckey was the "inventor" and Carmack was simply the engineer who followed the blueprint. Carmack's contributions were no doubt significant, but it's tough to prove he did any particular thing aside from write code. And since he now works for Oculus, Zenimax probably isn't interested in asking either Palmer Luckey or John Carmack who did the heavy lifting when it comes to new ideas.

On the other hand, innovation #4 is most certainly Carmack's work. There probably aren't many people in the world better qualified to optimize a rendering pipeline for low latency. Carmack even did some original research a few years earlier during the development of Quake Live, trying to figure out just where all the processor cycles went between the moment the user pushes a button and the moment the result shows up on screen.

This brings us to the details of the Zenimax complaint. We’ll get into that next time.

Footnotes:

[1] Actually, I think Luckey’s lens creates the OPPOSITE of a fisheye distortion, and fisheye is used to correct it. This is hard to confirm because I don’t think we have an accepted term for whatever the opposite of a fisheye lens is.

Tangentially, the brilliant idea of rendering the distortions into the image and having the imperfections of the lenses correct for them to give you the desired result is also something used in selling music systems.

Cheap stereos with poor quality speakers introduce distortions into the music reproduction. If you play a specific piece of music through the system you can record the output and measure the distortions. Then you can engineer a special CD that has the 'anti-distortion' baked in, so that when you play that specific CD through the system it was engineered for, you get close to perfect sound reproduction.

Which you can then use in shops to show how ‘good’ the system is.

So beware of shops selling stereos where they won’t let you put your own CD in to test the sound quality, if you are a sound reproduction junkie…

I will note that this idea is also used, directly, in the Hubble space telescope. Every active instrument on Hubble is now designed so that its optics distort the image in a way that precisely compensates for the way that the main mirror distorts the image. This means that you don’t have to have an additional set of corrective optics (COSTAR) adding errors and reducing throughput.

This is true. The main mirror on the Hubble was ground to the wrong shape. The error was a matter of microns I believe, but nevertheless it threw the image off badly. Instead of going into space carrying a god-knows-how-heavy main mirror replacement, they just stuck on a lens to correct for it, like spectacles.

The main mirror on the Hubble was ground to exactly the right shape, on Earth. Then it was taken into space, and it unbent under not-its-own-weight-anymore.

The net effect is the same, but “Forgetting that the thing you are machining is under the effects of its own weight” is a much more forgivable error than “Put a mirror into space that wasn’t exactly the right shape”.

In short, the main issue was there was a flaw in the measuring device they used to ensure the mirror was built to the correct tolerances. “The incorrect assembly of the [Calibration] device resulted in the mirror being ground very precisely but to the wrong shape.”

As a side note… this is why the difference between precision and accuracy is so important.
Precision – Minimum variation in what you’re doing (all your darts are grouped together, but not necessarily at the bullseye)
Accuracy – Hitting your primary target (all your darts are centered around the bullseye, even if spread out)

Because the construction still had high precision, even if the accuracy was off. They were able to account for it. But if the precision was off, it would have been near impossible to correct for it.
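The dartboard distinction above maps directly onto two standard statistics: spread (standard deviation) for precision, and bias (distance of the mean from the target) for accuracy. A quick illustration with made-up dart throws measured along one axis:

```python
import statistics

def spread_and_bias(throws, bullseye=0.0):
    """Precision = low spread (standard deviation); accuracy = low
    bias (distance of the mean from the target)."""
    spread = statistics.stdev(throws)
    bias = abs(statistics.mean(throws) - bullseye)
    return spread, bias

tight_but_off  = [4.9, 5.0, 5.1, 5.0]    # grouped tightly, wrong spot
centered_loose = [-3.0, 2.5, 0.6, -0.1]  # scattered around the bullseye

print(spread_and_bias(tight_but_off))    # tiny spread, bias near 5
print(spread_and_bias(centered_loose))   # big spread, bias near 0

# Like the Hubble mirror: the first group's error is systematic, so
# one known correction fixes everything.  The second group's error
# is random, and no single fixed correction can help.
```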

I went to the University of Hawaii at Hilo for my undergraduate degrees, which runs a "small" (~1 meter in diameter) telescope on the summit of Mauna Kea, ostensibly for students. During my time there, back around 2010-2011, the telescope was in the process of upgrading to a new, slightly larger mirror, which was very exciting for us astronomy students. When the new mirror was put on the telescope, it was ground subtly wrong and threw everything out of focus, and guess what…it turns out the mirror was made by the same company that made the Hubble main mirror, or so I heard from people who would know. I graduated and drifted away some months later, so I don't know if they ever managed to correct for it or not.

The slightly friendlier application of this is with amplifiers which use some test tone and a microphone to calibrate themselves to the attached speakers.

That said: Neither the "special CD" nor calibrating the amplifier will save actually bad speakers. You can "straighten" the frequency response and such but if a speaker is incapable of producing (say) a certain volume at low frequency then nothing can save it. Although you probably won't be able to hear the difference in a store that has a few more people in it, and probably several stereos playing in the background… the trick I've seen used most is for "low-end" customers to just use some bass-pumpin' music and turn it up a few notches. A: you can hear the music more clearly than from the other systems you've heard already, because this one is above the background noise. B: It's so loud you don't want to keep it on for long, so you're really in a hurry to turn it down and not examine too closely what you're hearing.
At the higher end, staff in small stores with quiet rooms for listening usually recognize that people bring their own music, and people shopping at that spectrum have calibrated themselves to what a particular piece should sound like and know what to listen for.

All that said, now I’m kind of tempted to try the reverse compensation trick with some of my music to see what quality my system could produce… I suppose it could be done in Audacity? But I also suppose I’d need a really good microphone…

You can be aware of, say, the lens distortion effect. And you may have the idea that it could be solved in software. None of that would be particularly hard, within the framework of developing the headset.

But I see a gulf between seeing WHAT must be done and seeing HOW to do it. It goes beyond what most people would call mere engineering, and into profoundly original research territory.

The lens distortion is an example that strikes me particularly, because it looks like it could get quite tricky. And in research, I've seen spectacular breakthroughs made by people who were able to do what everyone in the field knew had to be done, but no one else had managed to pull off. These breakthroughs are usually obtained by raw willpower and technical talent, and Carmack strikes me precisely as the kind of guy who can pull those off.

All this to say, having “the idea” may not really count for much when the idea is so much more obvious than how to pull it off. (At least in the real world. No idea how it plays out in courts.)

Not a lawyer, don’t actually know. But with all the issues the US has with Patent Trolls where companies own vague generalized ideas with no physical implementation of said ideas, and can sue people for using them, my guess is that the original idea is very important.

I agree with you in a normal world that the person who did the majority of the actual work resulting in physical results should have “the credit”, but I would be surprised if that’s the way the courts currently work

The way this works conceptually is to render as if to a deformed screen, using some form of mesh to define the deformations. You can apply the different RGB distortions in one pass quite easily.

This is simply a special case of texture and projection mapping, a thing which GPUs are very good at.
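Assuming the mesh approach described above, a minimal sketch looks like this: build a coarse grid over the view, give each vertex three pre-displaced texture coordinates (one per color channel), and let the GPU's ordinary triangle interpolation do the per-pixel work. The warp coefficients are invented for illustration.

```python
def build_distortion_mesh(n=8):
    """A coarse (n+1) x (n+1) vertex grid covering the view.
    Each vertex stores three UV coordinates -- one per color
    channel -- displaced by an illustrative radial warp.  Drawn
    as triangles, the GPU interpolates between vertices, so the
    distortion and chromatic corrections happen in one cheap pass."""
    def warped_uv(x, y, k):
        r2 = x * x + y * y
        return ((x * (1 + k * r2) + 1) / 2,   # map -1..1 to 0..1 UV space
                (y * (1 + k * r2) + 1) / 2)
    verts = []
    for j in range(n + 1):
        for i in range(n + 1):
            x, y = 2.0 * i / n - 1.0, 2.0 * j / n - 1.0
            verts.append({"pos": (x, y),
                          "uv_r": warped_uv(x, y, 0.20),   # made-up
                          "uv_g": warped_uv(x, y, 0.21),   # made-up
                          "uv_b": warped_uv(x, y, 0.23)})  # made-up
    return verts

mesh = build_distortion_mesh()
print(len(mesh))   # 81 vertices for the default 8x8 grid
```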

I am certain that this idea had nothing to do with Carmack, because a VR headset simply could not exist without this step.

You don’t build something with such severe known hardware limitations that mean it simply will not work at all, unless you also have a pretty good idea of how the software can correct for it – even if you don’t have the l33t coding skillz to actually implement it.

The hard part here is working out what the transformation needs to be, and how imprecise you can be and still get away with it.
This requires extremely accurate measurements and modelling of the lenses and eyeball, and more importantly, a thorough understanding of how human binocular vision actually works in detail.

The #4 gofaster stripes are a very big deal, and almost certainly Carmack’s work. Only a GPU driver writer would be as well placed to do that kind of thing.

"The hard part here is working out what the transformation needs to be, and how imprecise you can be and still get away with it."

Which Abrash already did to get his nine-lens setup.

The physics of optics is well understood. The equation for a nine-lens distortion is not a simple equation, but it is one single field equation. To get the inverse, take the Fourier transform, more or less. The hardest part would be convincing the computer to do the equations with sufficient accuracy.

“But I see a gulf between seeing WHAT must be done and seeing HOW to do it. It goes beyond what most people would call mere engineering, and into profoundly original research territory. ”

I’m wondering what you think qualifies as “mere engineering,” because the process needed to solve this problem is exactly the sort of work that engineers tackle on a regular basis. In short, it’s what we (engineers) do.

So my job is VR research into cybersickness, so I live this topic. I will point out that Shamus, as a migraine sufferer, would be rejected from most VR studies on ethical grounds, as a migraine history puts you at high risk for cybersickness… and many ethical boards won't let you make people sick when you know they will get sick. Also, if you know they will get sick, their data isn't that useful.

So an interesting thing you mention is the field of view, which in turn affects cybersickness. A wider field of view feels more natural, but it also allows you to feel more vection (visually induced motion sickness). So if we widen the FOV, people like it more, until they start moving, and then they start getting sicker. It's why some games are now starting to blacken your peripheral vision when you turn; it's slight, but it helps a lot.

Cybersickness sucks, as it has so many factors that if you try to isolate and solve a problem for one factor, you affect 5 others and possibly replace one problem with a new one. It makes it both a really interesting topic and a really frustrating problem to fix.

They are starting to create eye tracking for VR, which hopefully will allow us to control that depth of field problem you alluded to. At the moment the scene is focused around an area a few meters away, meaning things closer or further tend to be blurry. Some games will change the depth of field based on where the center of your view is… but it does not feel good. Here's hoping that's more of a solved problem in a few years' time.

Presence is really interesting. I’m badly afraid of heights, but in VR? It doesn't affect me. I have a lifetime of video game experience and my brain just refuses to see a VR environment as anything else. I find it really nice and interesting, but I can never seem to be engrossed by it. Which might not be a bad thing, higher presence has been found (in some studies) to lead to higher cybersickness.

I was actually surprised to learn that current VR headsets don't do eyeball-tracking. This is not something new, although I assume the reason they don't do it is that it's expensive or impossible to fit inside of a VR headset right now. At minimum you need one camera watching each eye, plus some processing power to do the image-recognition to get the pupil's position and orientation. More of each is better, though. :)

Eye tracking is expensive tech, and a full VR headset with tracking is expensive enough as it is.
Eye tracking is extra weight on an already heavy headset.
And every millisecond counts, so added image processing increases lag.

But give those smart engineers some time and that'll be solved; there are some add-on solutions coming out this year. Hell, I don't even know if eye tracking will help, but it's something that will be researched as the tech becomes available.

Eye tracking tech seems very hot right now. I read a lot about it in the context of self-driving cars. (Different problem, though, because of the different time scale.) So it's fair to hope that the state of the art will improve very fast soon.

My dad did his Doctoral thesis on presence in virtual reality. In 1996. I got to be part of the test group.

Ah, that's really cool, but I hear that old VR stuff was terrible to use. It is interesting that there were heaps of research in VR in the 90s, then it just stopped around 2000, and then picked up again around 2010. We had a decade where very little happened in VR because the theory was just too far ahead of the tech.

I’m avoiding presence in my thesis, except for some notes to it existing. I had to write a section about it and while researching I realised just how deep that rabbit hole goes and I’m staying well out of it.

I, too, have mild acrophobia, but find that VR does not trigger the same responses. I think my 3d art background does the same as your videogame experience. I realize that my head is moving a camera in a scene, and that this is just a way of viewing a bunch of geometry and textures.

Oddly, the people who get the sickest are the ones who have the most experience with the “real thing”. They see this a lot in flight simulators which are used to train fighter pilots. The trainee has no problems. The REAL pilot gets sick, because it LOOKS like flying but doesn’t FEEL like it.

What I struggled to get was this: I'm shortsighted, so I thought I wouldn't need my glasses on when using it (since I'm focusing on something a few cm from my eyeballs). Yet "distant" objects were still blurry?

I'm guessing it's something to do with bending beams of light with my eyeballs, but I still can't get my head around it.

The focus distance of the Rift is 2 meters. After the image passes through the lenses, your eyes are basically focusing on something 2 meters away. If you need glasses to look at objects 2 meters away, then you’ll have to wear your glasses under the headset. There’s enough room in there for eyeglasses, assuming you don’t have enormous 70s-style rims.
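The basic optics here are easy to check: for a nearsighted prescription, the far point (the most distant thing you can focus on unaided) is roughly one over the prescription strength in diopters. A quick sketch, treating the prescription as a simple spherical correction:

```python
def far_point_m(sphere_diopters):
    """Approximate far point for a myopic (negative) prescription:
    the most distant object still in focus without glasses is about
    1 / |sphere| meters away."""
    return 1.0 / abs(sphere_diopters)

# A mild -0.5 D myope can focus out to about 2 m -- right at the
# Rift's focus distance -- while a -2.0 D myope tops out at 0.5 m
# and definitely needs glasses inside the headset.
print(far_point_m(-0.5))   # -> 2.0
print(far_point_m(-2.0))   # -> 0.5
```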

This is another one of the strange things about VR. In the real world your focal distance always matches up with the apparent distance of the thing being viewed. But in stereoscopic 3D (movies, VR, etc.) the two can differ. If I'm watching a 3D movie then maybe it feels like an object is right in front of my face, but my eyes are still focusing on the movie screen, which is farther away. Maybe some mountains feel like they're far away, but again – I'm focused on a screen. This disconnect between apparent distance and focal distance is part of why 3D (not just VR, but 3D in general) feels uncomfortable.

As someone who is exceptionally nearsighted — I WISH I could see things clearly even a METER away — this is what has stopped me from even considering 3-D and VR. I need my glasses to see, dislike contact lenses, and have never found any case where I put something over my glasses at all comfortable. I’ll stick with HD; it seems to work well enough for me.

I have to imagine that someday VR will be able to correct for vision defects like it currently corrects for the lens “defects” – enter your prescription and it distorts in the opposite way that your eyes are distorting it.

It might even be possible to do without any additional performance cost, if it can be combined with the existing fisheye distortion step.

But… I started thinking, and it occurs to me that you could just build a virtual optometrist into the VR headset. When you use it for the first time you answer a bunch of “which is clearer, image 1 or image 2” type questions, and after that it produces an image tailored to your vision correction.

This should be able to correct for astigmatism as well as near/far sightedness.

>”which is clearer, image 1 or image 2″
>”They’re both frikkin’ the same! There’s been no discernible difference between them for the last three pairs of ostensibly-different images you’ve presented me with! I’m basically flipping a coin for my answer at this point!”

Yeah, my eyes only work at 6 inches, too. With glasses they work best at 1 meter, but I specifically got glasses optimized for my work (which is staring at a screen about 33″ from my head). I really need bifocals to simulate the vision normal people have. Soooo, I generally pass on movie theaters and especially on 3D movies.

I’m 46, so I’m not really expecting to be around when they have all this sorted for people like me (which will probably involve self-adjusting contacts and/or replacement eyes), but technology is moving pretty fast these days.

I have a feeling that as polarized 3d becomes popular and standardized, prescription 3d sunglasses will become a thing. Something that helps you drive in bright sun AND see the movie might be profitable, and shouldn’t cost much more than just prescription sunglasses.

Typically, sunglasses are made with vertically polarized filters to reduce glare (because light from glare is predominantly horizontally polarized).

For 3D glasses you need each eye to have a different polarization. If you had horizontal on one eye and vertical on the other you would get weird effects from the glare (I assume, never tried it).

Furthermore, a lot of 3D images are created with circular polarization rather than linear, so that you can tilt your head while watching the movie. This would at least cause both eyes to interact with horizontally polarized glare in the same way, but it would not reduce glare compared to the stuff that you actually want to see.

The naked (human) eye cannot distinguish between the various types of polarized light (this guy can though).

Based upon your observation I would *guess* that your TV never produces unpolarized light at all, and just turns off one of the two polarizations when you watch a 2D signal. I don’t think mine works that way, but I’ll check when I get a chance later.

To test this, try watching your TV in a mirror with the 3D off. Your other eye should go dark instead (but only if it is circularly polarized).

If the light is linearly polarized you should be able to tilt your head to change which eye goes dark.

No 3D TV, I just had experience with it in theaters. And I don't mean that just the screen gets dark; everything gets dark, even before anything is projected on the screen. Now that might be because of the lights they use in there (again, I did not try this outside). The whole lens seems almost opaque, while the other one is perfectly transparent.

I might test this further if I remember it next time I go to see a 3D movie, or if I ever shell out for a 3D monitor.

When unpolarized light strikes a linear polarizer it becomes polarized, and loses half of its intensity. That is probably why you notice the lights get dimmer when you put on the glasses.

Note that a circular polarizer is made out of a linear polarizer and a “quarter wave plate” (which does not affect intensity) so you will observe the dimming effect regardless of whether your glasses are linear or circular.
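The "loses half of its intensity" figure follows from Malus's law, I = I₀ cos²θ, averaged over all angles, since unpolarized light is a uniform mix of polarization directions. A quick numerical sanity check:

```python
import math

# Malus's law: an ideal linear polarizer transmits I0 * cos^2(theta),
# where theta is the angle between the light's polarization and the
# filter axis. Unpolarized light is a uniform mix of all angles, so
# the average transmission is the mean of cos^2 over a full turn.

N = 100000
total = sum(math.cos(2 * math.pi * k / N) ** 2 for k in range(N))
print(round(total / N, 3))  # → 0.5
```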

Yes. Circular polarization basically means that the electric field vector is rotating. In light the electric field vector is always perpendicular to the direction the light is traveling so there are only two ways that it can rotate.

One way (not the only way) to distinguish the two possibilities is to curl the fingers of your right hand in the direction that the electric field is rotating. You can distinguish the polarizations by whether your thumb points in the same direction that the light is moving or the opposite direction.

When the light bounces off of a mirror the rotation of the electric field does not change, but the direction that the light propagates DOES change, so the polarization flips.
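The handedness flip can be sketched with Jones vectors, where a circular polarization state is a complex two-component vector like (1, ±i)/√2. The sign-flip rule below is an assumed convention for illustration (it depends on how you set up the reflected beam's frame), but it shows how a normal-incidence mirror reflection swaps the two circular states:

```python
import cmath

# Jones-vector sketch of the handedness flip. Assumed convention:
# on normal-incidence reflection, the x component stays fixed and the
# y component flips sign when re-expressed in the reflected beam's
# own right-handed frame.

def handedness(jones):
    """Return +1 or -1 from the relative phase of y versus x."""
    x, y = jones
    return 1 if cmath.phase(y / x) > 0 else -1

left = (1, 1j)                        # one circular state (unnormalized)
after_mirror = (left[0], -left[1])    # y component sign-flips

print(handedness(left), handedness(after_mirror))  # → 1 -1
```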

Cyborg eyes or techno-contacts shouldn't be necessary. An additional lens (or lenses), or an otherwise more capable VR headset, would be able to make the corrections. As Restam points out above, this might even be possible with the current lenses, if the software anti-distortion can be customized sufficiently.

Remember that I need a different correction based on how far away I'm trying to focus. Software is the best bet, but I'm not sure how it can detect my focal intention other than just assuming I'm centering on the thing I want to look at (which isn't always the case, especially when simulations involve glass).

30 cm is how far I can see my finger before it gets blurry, so I understand your constraints. But I've seen a bunch of 3D movies in the past 5 years or so, and I can tell you that when the movie is filmed for 3D and not converted, and the projector is properly calibrated, there is no problem. Now as for the benefit of it… ehhh, if the extra price you have to pay for the effect is 50-75 cents, then it's worth it. More than that, probably not.

One caveat though: current 3D makes things darker. So if you are watching a bright cartoon or a bright and colorful movie, like an MCU movie, then you'll be fine. But if you watch something that's already going to be dark, say Batman v Superman, you really should go for 2D.

I experience these kinds of issues in regular movies too. The blurry image of the 18-wheeler above is how I see every movie scene in a theater that uses a pan shot. It’s very annoying. I’m not sure why it seems it’s just me who experiences this. On a small screen I can still see it but it doesn’t cause me problems. It just looks wrong in a very fake way. However it’s only annoying and not a problem. There are very few pan shots in movies. In video games it can also happen with screenshake and motion blur. Those are a problem. Those can wreck a game for me. Doesn’t matter if it’s just a small screen, I can’t play it for any length of time.

In 3D movies it is much more pronounced, and it is a real problem. I used to be able to watch 3D movies back in the day that used older technology. I cannot watch any modern 3D movie. As soon as the shot pans (for example), my eyeballs are having none of it. They immediately start aching badly. I don't feel nauseous or anything, but once my eyeballs start aching it is only a matter of time before it develops into a headache. That scene from the Hobbit is exceedingly bad for me (and looks exceedingly bad to me too.)

I haven’t tried anything close to modern VR. I have zero hopes for it. It’s just going to be all the problems I have now but much much worse.

For what it’s worth, pans in movies ARE blurry. They’re shot at 24fps, and the 180 degree rule means the shutter speed is 1/48th of a second. With even a modest pan, that results in a lot of motion blur. Drives me nuts – there’s no reason except nostalgia for movies shot digitally to use 24fps, and it looks crappy to people like us that consume a lot of content (games) at 50+fps

In my experience, 3D makes this even worse – 24fps in 3D just doesn’t read as real for my brain, so it turns into an unpleasant slideshow. I saw the first Hobbit film in 3D at a cinema that wasn’t capable of 48fps projection and it was a particularly bad offender.
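To put numbers on the 180-degree rule mentioned above: each 24fps frame is exposed for 1/48th of a second, so even a modest pan smears every point across a noticeable slice of the screen. A quick illustrative calculation (the pan speed is an assumption, not from any real shot):

```python
# Rough motion-blur estimate for a pan shot.
fps = 24
shutter = 0.5 / fps      # 180-degree rule: exposed for half the frame time

pan_speed = 960          # assumed pan: pixels/second across a 1920-px frame
blur_px = pan_speed * shutter
print(round(blur_px, 1))  # → 20.0 pixels of smear per frame
```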

Thanks for the explanation. Things start to blur for me at just over a meter though if I squint I can add a little more. I’ve been able to use a headset with glasses on but the first time I tried, I didn’t bother wearing my glasses and noticed it.

To my knowledge, there isn’t any customization. In fact, that didn’t even occur to me until this conversation. But now that we’re talking about it, it seems like offering different lenses for different focal distances could be good. If someone needs glasses to see at a distance but can see things at 1m just fine, then VR could let them have a glasses-free experience! Better than real life!

Uh, except for the VR headset, which is heavier and more obtrusive than any eyeglasses. Still though.

The image of a trucker speeding across the country to deliver a load of pixels puts me in mind of a movie or something about the anthropomorphized inner workings of some teenager’s gaming PC. Like Osmosis Jones or Inside Out, but a computer.

On both the Rift and the Vive, I have absolutely no problems when I’m not moving in the VR world. I can tolerate frame drops and judder, and all the other weird graphics artifacts. But move even a little in the VR world, and I want to rip the headset off!

It’s a shame, since I’ve spent a lot on headsets and really wanted to like VR. Hardly any demos are usable though. And since I use a wheelchair, and need both hands to push it, the games that put a controller in each hand and then expect you to move around and shoot are also impossible for me.

There’s one where you just sit at the bottom of the ocean and watch sea creatures go by. That one is perfect for me!

> A screen grab of the VR demo at Valve in 2014. This is back when they were still using the Oculus Rift, before they developed the Vive, their own competing headset.

Weird, because they had working prototypes since 2011. According to some visitors' reports, the stuff they had in early 2014 was "lightyears ahead" of the Oculus Rift DK1.
Maybe they used the Rift for some demos so as not to show their secrets. Or it required a special environment, like their room with QR codes on the walls.

About positional head tracking, I still wonder how it works on smartphone-based HMDs, since none of them have it. I didn't try any, but people seem fine.
I guess the experiences are specially tailored, and people know they shouldn't "translate" their heads.

Caveat: not a VR engineer, and more than that, I’m subject to simulator sickness even without VR, so take what follows with an appropriate quantity of NaCl.

I suspect that the “center of projection” problem is related to the fact that you can’t just point the incoming image at the eyeball and expect that the pupil will be there to receive it.

When you’re looking at objects 10m away, the horizontal or vertical movement of the pupil doesn’t have much effect on what is apparently where. When the “object” is a couple of cm from your eyes, that physical movement means that focusing can change pretty radically.

I took phrenology as poetic shorthand for the observable fact that "people's heads are all shaped slightly differently," with no implied approval for the associated discredited idea that these individual shapes somehow influence or reveal our personalities. (Given that the post was talking only about physical differences in head shape, rather than intangible personality attributes.)

The biggest issue in this whole shebang is the intricacies of the law surrounding it. If someone does THING for COMPANY, and they signed a contract saying their ideas while working for COMPANY belong to COMPANY, then what happens if later they want to do THING for someone else? You can't really forget an idea (unless you get amnesia), and if you try to approach it from a different angle, you will still strongly veer toward your original solution. So if you work from scratch and still arrive at the same conclusion, does that count as different or the same?

I *think* big companies just accumulate enough patents that they can counter-sue whenever somebody patent trolls them (think mutually assured patent destruction). That would render the question of whether you can continue thinking the same thoughts in a new job moot in most cases. If your idea happens to be in the rare "billion dollar" category then it might cause a lawsuit anyway.

If you work for a small company just keep your head down until a big company buys you.

The approach Compaq used to build an IBM-compatible was to have one set of engineers reverse-engineer the PC and use that to define a set of requirements, then have a completely isolated set of engineers come up with a solution to those requirements. That way, when IBM sued them, they could prove that they invented everything independently. Most other companies just pulled bytes from the BIOS and so died in court.

So how much of this is being done on a cpu or gpu, and how much is being done on a chip inside the headset? Is the headset anything more than screen, lenses, and audio? Problems #1 and #2 could likely be built into the headset now. Problems #3 and #4 seem susceptible to long and/or slow data cables. Even with eye tracking, #5 is going to involve a beefy gpu (or two) to pull off well. I suppose you could also do it by adjusting the lenses, but adding moving parts to a headset seems unwise.

There is no such thing as a "slow cable", latency-wise. Signal propagation in copper is around 2/3 the speed of light, so any reasonable cable will delay the signal by only a few tens of nanoseconds. Even at 120fps, you've got 8ms per frame, so the cable is pretty insignificant.

Cable latency has nothing to do with the speed of the signal through the cable, but with the amount of stuff that's being sent through it. It doesn't matter if you try to send your data at the speed of light, or at one third that speed; if you try to send 16 gigs of data, it will take some time for it all to arrive. Now I don't know how much data is being sent to and from the glasses, but it has to be significant, so slowdowns due to the cable bottleneck are one of the problems that had to be overcome.

Yes, but bandwidth can lead to latency problems. You send 5 frames to the glasses, but only 4 go through before the return signal about the head movement, so the 5th frame gets sent and displays a delayed image.
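The two effects in this exchange (propagation delay versus serialization time) can be separated with back-of-envelope arithmetic. All numbers below are illustrative assumptions, not specs for any real headset or cable:

```python
# Split cable "latency" into propagation delay (distance / signal speed)
# and serialization time (bits / bandwidth). Illustrative numbers only.

C = 3e8                       # speed of light in vacuum, m/s
cable_len = 5.0               # assumed tethered-headset cable, meters
prop_delay = cable_len / (2 / 3 * C)   # signal moves at ~2/3 c in copper

frame_bits = 2160 * 1200 * 24          # one uncompressed frame at 24 bpp
bandwidth = 18e9                       # assumed ~18 Gbit/s video link
serialization = frame_bits / bandwidth

print(f"propagation:   {prop_delay * 1e9:.0f} ns")    # → 25 ns
print(f"serialization: {serialization * 1e3:.2f} ms") # → 3.46 ms
```

So both commenters are right about their own half: the cable's propagation delay is negligible (tens of nanoseconds), while pushing a whole frame through a finite-bandwidth link takes milliseconds, which is the part that actually matters at 90-120fps.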