Here's a little tutorial about how to create the anaglyph stereoscopic 3D effect as I did it in Hyper Blazer.

Although it might need some tinkering to make it look good, and even though anaglyph 3D is not the most effective stereoscopic 3D technique, it's a trick that's so easy to add to an existing OpenGL game that I hope to see more games implementing it, as it can really add something to the experience. You do look like a total dork when you wear those silly red-cyan 3D glasses though.

So the idea is to render the image twice, once for your left eye and once for the right eye. Both images are filtered in a way that each eye only sees one image, and are blended together.

Both the direction and position of the left and right eye sights need to be changed a bit so that:
1) They are about 10 cm or so apart
2) They look slightly 'cross eyed'

Then the images should be filtered so that if you wear the red-cyan glasses, your left eye will only see the image meant for the left eye, and the right eye only sees the image meant for the right eye. Since the left glass of red-cyan glasses is red, this means only the red colour component should be rendered for the left eye, and only green and blue for the right eye. Then the images should be merged together.

So what you do each frame is this:
1) clear colour and depth buffers
2) set glColorMask to only render red
3) render the frame translated and rotated for the left eye
4) clear only the depth buffer (not the colour buffer, because otherwise the previously rendered red image would get erased)
5) set glColorMask to only render green and blue
6) render the frame translated and rotated for the right eye

if (anaglyphMode) {
    // 1) clear colour and depth buffers
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    // 2) + 3) render the left eye with only the red channel
    GL11.glColorMask(true, false, false, false);
    renderEye(-eyeDistance, -crossEyedness);
    // 4) clear only the depth buffer so the red image survives
    GL11.glClear(GL11.GL_DEPTH_BUFFER_BIT);
    // 5) + 6) render the right eye with only green and blue
    GL11.glColorMask(false, true, true, false);
    renderEye(eyeDistance, crossEyedness);
    // reset the colormask again for things like HUD rendering
    // or other things that do not have depth
    GL11.glColorMask(true, true, true, true);
} else {
    // no anaglyph mode means rendering for just one 'eye'
    renderEye(0, 0);
}

renderHUD();

renderEye() renders everything in the game with 3D depth.
renderHUD() renders things with no 3D depth (like the HUD).

The 'eyeDistance' will be used in the renderEye() method to translate the camera position a bit along the x-axis (so that both virtual 'eyes' have a distance in between them)

The 'crossEyedness' will be used in the renderEye() method to slightly rotate the camera around the y-axis (the vertical axis, so that both lines of sight converge). This is VERY important because it determines how far the player is looking and how the depth will appear. Where the lines of sight of both eyes meet will be rendered with no depth. With no crossEyedness, the player would be looking infinitely far, which implies that the game is rendered such that everything pops out of the TV screen and the farthest object appears at the location of the screen, which doesn't look natural. What you want is for most of the image to appear 'behind' the screen, with some really close objects popping out of it, so you'll have to adjust the crossEyedness value for that. Without crossEyedness, the effect mostly doesn't work at all.
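To get a feel for how eyeDistance and crossEyedness interact, you can compute where the two lines of sight converge, which is the depth that ends up rendered exactly at screen level. A minimal sketch, assuming the eyes are eyeDistance apart and each one is rotated inward by crossEyedness degrees (the names mirror the variables above, but the formula is just trigonometry, not code from Hyper Blazer):

```java
public class Convergence {
    // Depth at which the two lines of sight cross: this is the
    // distance that will appear exactly at screen level, with no depth.
    static double convergenceDistance(double eyeDistance, double crossEyedDegrees) {
        // each eye sits eyeDistance / 2 from the centre line and is
        // rotated inward by crossEyedDegrees
        return (eyeDistance / 2.0) / Math.tan(Math.toRadians(crossEyedDegrees));
    }

    public static void main(String[] args) {
        // eyes 0.1 world units apart, each eye toed in by 0.5 degrees
        System.out.println(convergenceDistance(0.1, 0.5));
    }
}
```

Smaller crossEyedness pushes the convergence plane further away (so more of the scene pops out of the screen); larger values pull it closer.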

Some notes:
* You will have to avoid colours like full red, green or blue. They will appear in only one eye, which will give you a big headache and no depth. In Hyper Blazer, all colours are greyed out somewhat in anaglyph mode to avoid this.
* In my experience, the effect usually doesn't work very well on laptop screens, but it works quite well on my flatscreen TV and my CRT monitor.
* Adjust the eyeDistance and crossEyedness in such a way that you don't overdo the effect. Overdo it, and you'll get a headache.
* Obviously, using anaglyphic rendering costs performance, as the whole scene has to be rendered twice per frame.
* I used LWJGL in the example code, but of course any OpenGL binding will do.
* You can easily order red-cyan 3D glasses on the internet. They're dirt cheap.

You've made a mistake. If you bring the eyes apart and rotate the direction they look, objects infinitely far away will be rendered infinitely far apart on the screen. In reality, things infinitely far away are straight ahead for both eyes. What you really want to do is SKEW the projection matrices.

To make it perfectly correct, you need to know the eye separation of the player, the distance from the player to the monitor, and the size of the monitor. Then you need to set the fov to match what the player would actually be able to see through the screen, and calculate the skew so that objects at screen distance are rendered with an offset of 0, or so that objects at infinity are rendered exactly eye distance apart (these two will match up if you know the exact head->screen distance and do the fov calculation right).
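A sketch of the skewed ('off axis') projection being described, assuming a screen of width screenWidth at screenDistance from the viewer and an eye offset of ±eyeSeparation/2 along the x-axis (all names here are mine, not from any of the posts). The returned bounds are what you would pass as the left/right parameters of glFrustum; the scene is then translated by the opposite of the eye offset:

```java
public class OffAxisFrustum {
    // Left/right frustum bounds at the near plane for one eye.
    // eyeOffset is +eyeSeparation/2 for the right eye, negative for the left.
    static double[] frustumLeftRight(double screenWidth, double screenDistance,
                                     double eyeOffset, double near) {
        // project the physical screen edges onto the near plane
        double scale = near / screenDistance;
        double left  = (-screenWidth / 2.0 - eyeOffset) * scale;
        double right = ( screenWidth / 2.0 - eyeOffset) * scale;
        return new double[] { left, right };
    }

    public static void main(String[] args) {
        // 0.5 m wide screen, 0.7 m away, 6.5 cm eye separation, near plane at 0.1
        double[] lr = frustumLeftRight(0.5, 0.7, 0.0325, 0.1);
        System.out.println(lr[0] + " .. " + lr[1]);
    }
}
```

With this setup, an object exactly at screen distance projects to the same pixel for both eyes (zero parallax), and objects at infinity end up exactly eye-separation apart, matching the two conditions in the post.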

If you bring the eyes apart and rotate the direction they look, objects infinitely far away will be rendered infinitely far apart on the screen.

If you consider that you look slightly cross eyed if you look at an object near to you, then why shouldn't it be rendered that way? That's exactly what your eyes do, isn't it?

Quote

In reality, things infinitely far away are straight ahead for both eyes.

Yes, but only if you're looking at things infinitely far away. Not if you look at something close to you, because your eyes are then rotated inwards towards the object you're looking at.

Now I'm not saying that it's impossible I made a mistake, but I'm not sure I follow your argumentation of what I did wrong and why things should be skewed instead of rotated. In any case, I'm going to try what happens if I use your way of doing it.

The black boxes are the boxes we wish to render. The black circles are the eyes. The black lines are the screens. The colored lines show the projections. The bottom area shows how it shows up on the screen.

Still I can't really wrap my head around why my method is wrong. And thinking about it, this statement is actually wrong:

Quote

If you bring the eyes apart and rotate the direction they look, objects infinitely far away will be rendered infinitely far apart on the screen.

This implies that an object infinitely far away is not rendered because it would be outside of the screen in both view ports, but I think you're not taking the FOV into account. If you look at my attached awesome graph, you'll see what I mean. (The black box is what the 2 eyes are looking at. Even in that view, objects infinitely far away will not be rendered infinitely far apart on screen.)

Could it be that my method is just different from yours with perhaps a slightly different result and not actually plain wrong?

I have always done stereoscopic images with only a translation. Even a stereoscopic camera doesn't do rotation.

I think the trick is that cameras are not eyes. The camera will provide an image for each eye, and then your eyes will do the focusing. You can't predict where the user will focus (the background? a far away object? a near object?), so you can't define a rotation.

Oh, yes, you're right! I totally didn't think that one through. Yes, you can rotate and have things at infinity still show up on the screen. Thanks for graphing it so I understand.

My gut feeling is still that a skew is right, but I have no proof now.

There's another improvement to be done. If you just do the glColorMask thing, a pure red object will be black in one eye and white in the other, and a pure cyan object will be the reverse. To solve this, you need to make the red eye see a monochrome version, and mix the red colour into the cyan eye. I added this to my texture loader and anywhere I manually set a colour:
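A sketch of that idea as a per-colour filter (the luminance weights and the 0.3 mixing factor are my guesses, not values from the post): the red channel (left eye) carries a monochrome version of the colour, and some of the original red is mixed into green and blue (right eye), so a pure red object is no longer invisible to one eye:

```java
public class AnaglyphColorFilter {
    // Returns {r, g, b} adjusted for red-cyan anaglyph rendering.
    static float[] filter(float r, float g, float b) {
        // perceptual luminance of the original colour (Rec. 601 weights)
        float luma = 0.299f * r + 0.587f * g + 0.114f * b;
        float mix = 0.3f; // how much red leaks into the cyan eye (tweak to taste)
        float newR = luma;                      // left eye sees a monochrome version
        float newG = Math.min(1f, g + r * mix); // right eye gets some of the red
        float newB = Math.min(1f, b + r * mix);
        return new float[] { newR, newG, newB };
    }

    public static void main(String[] args) {
        // pure red is now visible to both eyes
        float[] red = filter(1f, 0f, 0f);
        System.out.println(red[0] + " " + red[1] + " " + red[2]);
    }
}
```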

I have always done stereoscopic images with only a translation. Even a stereoscopic camera doesn't do rotation.

I think the trick is that cameras are not eyes. The camera will provide an image for each eye, and then your eyes will do the focusing. You can't predict where the user will focus (the background? a far away object? a near object?), so you can't define a rotation.

In fact, cameras are very much like eyes.

"In fact slight rotation inwards (also called 'toe in') can be beneficial. Bear in mind that both images should show the same objects in the scene (just from different angles) - if a tree is on the edge of one image but out of view in the other image, then it will appear in a ghostly, semi-transparent way to the viewer, which is distracting and uncomfortable. Therefore, you can either crop the images so they completely overlap, or you can 'toe-in' the cameras so that the images completely overlap without having to discard any of the images."http://en.wikipedia.org/wiki/Stereoscopy

Your stereo camera setup defines what you look at, exactly like a single camera. You as a user can still look somewhere else, but it won't be where the camera intended. Again, exactly like a single camera. In an ideal world, you'd wear contact lenses with little monitors in them that would track the rotation of both eyes and adjust the view ports accordingly. But that's not really viable yet, so we're stuck with one 2D screen with a fixed view that we somehow want to translate to a stereoscopic image. Well, that's my theory anyway.

@Markus_Persson: Cool, I'll check that out when I get home! For now I just prevent it by greying everything out a bit, but maybe your method is better.

That's because they're red. With anaglyph, you lose one color channel.

You can't show a red color that has the same color intensity on both eyes, because to the left eye EVERYTHING is red, and to the right eye NOTHING is.

The 2nd picture does indeed look a lot easier on the eyes, but now you seem to lose the colour red completely. You're right that you can't display 100% red without wanting to claw your eyes out (or even being able to see depth with those colours), but there must be a way to shift those problematic colours a bit more towards gray without losing them altogether. Hmmm....

Well, you're losing an entire color channel, leaving you with just two. It might be possible to hue-shift red and compress the entire color spectrum. Since you're seeing things through a horrible color filter anyway, the brain might adapt and interpret the color as red. =D

Well, you're losing an entire color channel, leaving you with just two. It might be possible to hue-shift red and compress the entire color spectrum. Since you're seeing things through a horrible color filter anyway, the brain might adapt and interpret the color as red.

Compressing the entire color spectrum is exactly what I do, and it works fairly well. But then again, anaglyphs are the worst way of projecting stereoscopic images, given the way they abuse colour filtering, so it's all a big trade-off anyway.

Reading it again, I'm not exactly sure what I was thinking, but the idea is to bring all colors closer to gray. Perhaps strictly speaking 'compression' is not the right word, although that's probably a matter of interpretation.
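The 'bring all colors closer to gray' idea can be sketched as a simple blend between each colour and its own luminance. The blend amount is an assumption (something to tune), not a value from Hyper Blazer:

```java
public class GreyCompress {
    // Blend a colour toward its own grey (luminance) value.
    // amount = 0 leaves the colour alone, amount = 1 makes it fully grey.
    static float[] towardsGrey(float r, float g, float b, float amount) {
        float luma = 0.299f * r + 0.587f * g + 0.114f * b;
        return new float[] {
            r + (luma - r) * amount,
            g + (luma - g) * amount,
            b + (luma - b) * amount
        };
    }

    public static void main(String[] args) {
        // pure red pulled 50% toward grey: still reddish, but less extreme
        float[] c = towardsGrey(1f, 0f, 0f, 0.5f);
        System.out.println(c[0] + " " + c[1] + " " + c[2]);
    }
}
```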

Like he said, our brain makes corrections when focusing. What is good for a movie is not good for a game:
- in a movie, viewers are forced to focus on a particular subject. In this case, your view is wrong but you make corrections; your eyes don't have to do the focusing, so it is less stressful.
- in a game, you just can't make the player focus on something

Quote

Reading it again, I'm not exactly sure what I was thinking, but the idea is to bring all colors closer to gray.

I don't know the details, but the anaglyph driver from nVidia does a color filter too. It seems mostly more red to me.

Wow, that would be an epic mistake, as that camera is filming the most expensive movie ever. I hope you're wrong, because I'm kinda looking forward to that movie.

In Ice Age 3d, they played around a lot with the focus depth and stereo effect strength across shots. The end result was that, sure, stuff looked 3d both in closeups and wide shots, but the scale of everything varied widely. In wide shots, the huge lumbering mammoths looked like tiny toy figures. It was horrible.

It appears from that video that Cameron is going to do something similar.

Like he said, our brain makes corrections when focusing. What is good for a movie is not good for a game:
- in a movie, viewers are forced to focus on a particular subject. In this case, your view is wrong but you make corrections; your eyes don't have to do the focusing, so it is less stressful.
- in a game, you just can't make the player focus on something
I don't know the details, but the anaglyph driver from nVidia does a color filter too. It seems mostly more red to me.

That is indeed exactly the problem of using stereoscopic 3D in an interactive game: the rendered view ports are not corrected according to what the user is looking at. This is not a problem in 2D (although it becomes one as soon as you implement depth of field effects).

However, this doesn't make rotating the view ports inwards wrong. If you don't do it, the camera is focusing at infinity instead of closer by, and you still have exactly the same problem.

In Ice Age 3d, they played around a lot with the focus depth and stereo effect strength across shots. The end result was that, sure, stuff looked 3d both in closeups and wide shots, but the scale of everything varied widely. In wide shots, the huge lumbering mammoths looked like tiny toy figures. It was horrible.

It appears from that video that Cameron is going to do something similar.

No, I disagree; to me it sounds more like they made a few mistakes in Ice Age 3D that have nothing to do with rotating the viewports. More specifically, it sounds exactly like they separated the view ports too far apart in the wide shots (probably in an attempt to enhance the 3D effect), which makes everything look too small.

As explained in the video, they avoided that problem in the new camera by placing the lenses at the same distance from each other as human eyes (old film cameras were too big for that).

I tried to make a drawing to see how stereoscopic rendering with rotation differs from stereoscopic rendering without it, but I didn't manage... when using rotation, the projection planes are not parallel to the screen!

I see what you mean.

But I think it's a problem caused by rendering 2 different view ports to the same screen, while ideally it should be 2 screens right in front of both eyes. Remember I'm rotating the *cameras* and not the spectator's eyes (those are still more or less right in front of the screen, no matter how the cameras are rotated). I think it is a small problem related to the difference in viewing angle from your 2 eyes to the screen, and it does not actually have anything to do with the rotation of the cameras.

But I can't imagine it being really noticeable unless you're *really* close to the screen.

I think it all comes down to this (I'll quote myself here):

Quote

In an ideal world, you'd wear contact lenses with little monitors in them that would track the rotation of both eyes and adjust the view ports accordingly. But that's not really viable yet, so we're stuck with one 2D screen with a fixed view that we somehow want to translate to a stereoscopic image.

But I find it all quite mind boggling, so I'm not ruling out that I'm making a mistake somewhere. Still, I think (especially interactive) stereoscopic projection using just one screen and no tracking of the spectator's eyes is a trade-off no matter what, caused by the fact that the cameras' orientations will differ from those of the spectator's eyes.

You could say that the inward rotation of the cameras should be the same as that of the spectator's eyes, but then you still have the same problem as soon as you look at something at a different depth than the screen.

Getting back to Cameron's 3D camera, I think the fact that the lenses can be 'toed in' doesn't mean he's making the same mistake as I did. I guess it's still possible (and actually quite likely) that rotating the lenses inward will still film with an 'off axis' perspective by exposing the film (or CCD or whatever) at an angle. That'd probably even be the best way to film 'off axis'.

What glasses are you using? (And how effective are they?) With a quick search, I found those (sorry, the page is in French; it's the second pair).

I'm using the cheap paper ones. My understanding is that the plastic ones (the 2nd pair in the link) also do some focus correction on the red glass, so those might be easier on the eyes and give a sharper image.

As for ghosting, I found that the monitor makes the most difference here. I see a lot of ghosting on laptop screens (to the point that the 3D effect becomes almost ineffective), but on my CRT monitor and LCD TV there's much less ghosting and the effect is quite good.

Now I wonder how to do the optimized anaglyph. Of course, there is always the shader solution. Maybe I can use the GL_COLOR matrix, but I've only heard bad things about it? The accumulation buffer? Something else?
