When you select a smartphone profile, you essentially select where on the screen the center of each eye should be.

So for example, an iPhone 5 screen has a width of 90mm. If the distance between your eyes (IPD) is 60mm, then the center of each side-by-side image should be at around 66% from the center of the screen (where 0% is the center of the display and 100% is the edge). On an iPhone 6 Plus or a Samsung Note, the screen has a width of 160mm, so the center of each eye should be at around 37% of the screen.

To illustrate this mathematically:

center of eye position on screen = IPD / screen width (multiply by 100 if you want a percentage)
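The formula above can be sketched as a small helper (the function name and the example values from the text are illustrative):

```python
def eye_center_fraction(ipd_mm, screen_width_mm):
    """Normalized eye-center position: 0.0 = screen center, 1.0 = screen edge.

    Each eye sits IPD/2 from the screen center, and the half-screen is
    screen_width/2, so the ratio reduces to IPD / screen_width.
    """
    return ipd_mm / screen_width_mm

# Examples from the text (60mm IPD):
iphone5 = eye_center_fraction(60, 90)        # ~0.66 on a 90mm-wide screen
note = eye_center_fraction(60, 160)          # ~0.37 on a 160mm-wide screen
```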

The A and B settings are for different distances between your eyes: A is for adults (65mm) and B is for kids (45mm). Now of course some adults might have a smaller IPD (interpupillary distance), in which case the B settings might work better. It can go the other way around too.

I could use the default IPD I have for each HMD and allow for an adjustment override to get the same effect.

I want to make sure my understanding is correct before I make any framework changes...

1) If I adjust my IPD setting from the default of 65 to 45
2) Then the distance between my cameras should shrink
3) And the final rendered output would be centered closer together and smaller in size, because it would start to cross the middle point.

I was thinking some more: what are the min/max output dimensions in mm when using the Altergaze? When I look at other apps, they always seem to render to the full screen size, but the Rift app produces square images, so some of the top and bottom render space is black.

The distance between the two cameras inside the virtual environment doesn't matter that much. In fact the two cameras can even overlap, or be twice the distance they should be, inside Unity or whatever platform you're using. Moving the cameras apart just amplifies or diminishes the stereoscopic effect.

The 65mm / 45mm refers to the distance between the centers of the two images in the real world. The best way to illustrate this is with a simple photo rather than a 3D environment.

Here is an image to illustrate this. Please note: if you just use the same image SBS, the distance between the center of the two images is at 65mm. And it has to be at 65mm regardless of how big the screen is.

Now the problem is that this rule would only work if the center of each lens were exactly where the center of your eye is. Otherwise you also have to deal with a shift of the image that happens in the process of magnification / distortion. This shift will be towards the center of your phone if the distance between your eyes is greater than the distance between the two lenses, and outwards if it's the other way around.

So for example, on Altergaze the distance between the two lenses is 52mm. So if the person viewing the screen has an IPD of 65mm, then the distance between the two images on the screen needs to be a bit lower than 65mm. For an IPD of 45mm, because the image shifts outwards when magnified, the distance between the two images on the screen needs to be a bit higher than 45mm.

If you want to approximate it, I would just use 50mm (instead of 65mm) and 47mm (instead of 45mm). This is for Altergaze, of course; for Cardboard, where the distance between the two lenses is different, the shift will be different. The magnification of the lenses also affects how great the shift is.
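The approximation above can be captured as a small lookup plus the earlier position formula. The constants come straight from the text; the function and dictionary names are my own, and the values are Altergaze-specific, not a general rule:

```python
# Approximate on-screen image separations for Altergaze (52mm lens spacing),
# per the text; exact values depend on the lenses' magnification.
ALTERGAZE_SEPARATION_MM = {
    "A": 50.0,  # adult preset (65mm IPD); image shifts inward, so < 65
    "B": 47.0,  # kids preset (45mm IPD); image shifts outward, so > 45
}

def image_center_fraction(preset, screen_width_mm):
    """Each image center's position as a fraction of the half-screen width."""
    return ALTERGAZE_SEPARATION_MM[preset] / screen_width_mm

# On a 90mm-wide iPhone 5 screen, the A preset puts each image
# center at roughly 56% of the way from center to edge.
fraction_a = image_center_fraction("A", 90)
```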

Hope this makes sense. I wrote a long article about the way I calculated the stereoscopy in relation to the screen, but never published it. I have to go over the math again and do some more tests. I will do so in the next couple of weeks, before making the SDK public, and include all this information with it.

Thanks, that makes way more sense now. I was looking at the A/B output and they didn't seem that far apart, but now it makes sense because of the HMD warping.

I've been using the Cardboard for now and I think I've been crossing my eyes while using it, which now makes sense.

I don't know if I mentioned it before, but with my app you can render to a virtual device, so on your iPad Mini you can render at an iPhone 5 screen size.

In context of my app

I think for users without an HMD, those who have to cross their eyes, I could still offer the current render strategy where I fill the screen without any color/distortion effects. But maybe I can offer an option to square the eyes instead of filling the entire height.
- Adjust Camera IPD (default 65, range 0-80)
- No User IPD (screen is centered)
- Full/Square fill modes
- FOV (default 80, custom 60-110)
- No color correction
- No distortion correction
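The option list above could be bundled into one settings object; this is just a sketch of how those defaults and ranges might be expressed (all names here are hypothetical, not from the app):

```python
from dataclasses import dataclass

@dataclass
class CrossEyedRenderOptions:
    """Hypothetical settings for the no-HMD (cross-eyed) render path."""
    camera_ipd_mm: float = 65.0    # virtual camera separation, clamped to 0-80
    center_eyes: bool = True       # "no user IPD": each half is screen-centered
    fill_mode: str = "full"        # "full" or "square" eye fill
    fov_deg: float = 80.0          # default 80, custom range 60-110
    color_correction: bool = False
    distortion_correction: bool = False

    def __post_init__(self):
        # Clamp values to the ranges listed above.
        self.camera_ipd_mm = min(max(self.camera_ipd_mm, 0.0), 80.0)
        self.fov_deg = min(max(self.fov_deg, 60.0), 110.0)
```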

One thing I haven't seen mentioned yet: should the eyes be square? Right now I'm just using the full eye area, but it's not perfectly square. I'm thinking I may need to find the X scaler to adjust the outputted eye's mesh to be square.
