I wear corrective lenses mostly to correct crossed vision (strabismus). Without the lenses, the crossed vision prevents me from perceiving depth. Is the SDK going to have any mechanism for correcting this kind of vision problem? It should be a simple matter of applying a per-eye X/Y transform to the rendered image in the headset.
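To make the idea concrete, the per-eye X/Y correction could be expressed as a small post-projection translation in normalized device coordinates. This is just a sketch, not SDK code; the matrix and vector helpers below are hypothetical stand-ins:

```cpp
#include <array>

// Minimal 4-vector and 4x4 matrix for the sketch (row-major).
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;

// Build a translation that shifts the image by (dx, dy) in NDC.
// Shifting each eye's image horizontally by opposite amounts changes
// the apparent vergence, which is the kind of divergence correction
// being discussed here.
Mat4 makeNdcOffset(float dx, float dy) {
    Mat4 m = {{{1, 0, 0, dx},
               {0, 1, 0, dy},
               {0, 0, 1, 0},
               {0, 0, 0, 1}}};
    return m;
}

// Apply the matrix to a point (w = 1 for positions).
Vec4 apply(const Mat4& m, const Vec4& v) {
    return {
        m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
        m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
        m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
        m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w,
    };
}
```

In practice this matrix would be composed with the eye's projection matrix (or applied in a final post-process pass), with the dx sign flipped between the left and right eye.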

This is not something that is handled by the SDK. You are free to create your own stereo renderer with whatever corrections you may want. The SDK just handles device-specific things (like head tracking).

I kind of think this is a mistake. A lot of posts on the board have to do with various per-user configuration and calibration settings, and this is just another one. The SDK already includes utilities to produce StereoEyeParams instances for the left and right eye. Suggesting that every company using the SDK should then implement its own mechanism for customizing and applying individual divergence correction seems like a non-starter. The SDK should be both the toolkit for anyone wanting to work with VR displays and a best-practices model for those who don't want to use the SDK directly.

I agree with jherico on this one. I have hearing loss, and I honestly wouldn't know what I would do if subtitles/captioning weren't so prolific thanks to standardization. Seems like this is perhaps the same sort of thing, but for vision?

I started to worry about this after learning about the large variation in human eyesight that is apparent just by reading all of the various articles about it on Wikipedia.

If we take an abstract approach of providing settings in a centralized way to all Oculus applications, we run into two problems: the settings will not be all-inclusive (some people will have conditions that cannot be corrected for by what the settings allow), and different applications will not fully implement all of the possible settings (lack of manpower and testing).

If we take a less abstract, decentralized approach of asking all applications to expose their parameters and shaders so they can be modified, we get a kind of Tower of Babel, but any kind of correction becomes possible as long as the engine supports it.

Another solution is to provide a standard SDK that everybody uses, so that all of the testing can be centralized, but there is always the serious problem of Not Invented Here syndrome.

I like the idea that all Oculus applications should let their shading pipeline be overridden in some standard way. This would let even a person with a completely unique problem have a customized configuration written for them, and it would work across all applications. I could imagine someone making money by consulting with people who have visual problems and creating customized solutions for them.
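A minimal sketch of such a standard override point might look like the following. The interface and names are invented for illustration; the only assumption is that applications agree on one place where a user-supplied correction can replace the default final per-eye transform:

```cpp
#include <functional>
#include <utility>

// Screen-space shift applied to one eye's image just before present:
// (dx, dy) in normalized units.
using EyeTransform = std::pair<float, float>;

// Hypothetical standard hook: an application exposes exactly one of
// these, and a user's correction profile (possibly written by a
// third-party consultant) installs a callback into it.
struct PostProcessHook {
    std::function<EyeTransform(int eye)> userOverride;

    // Applications call this per eye; with no override installed,
    // the result is the identity (no shift).
    EyeTransform resolve(int eye) const {
        return userOverride ? userOverride(eye) : EyeTransform{0.0f, 0.0f};
    }
};
```

Because the hook is the same across applications, one customized correction (here just an opposite-signed horizontal shift per eye) would work everywhere without each engine reimplementing it.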

I'm just brainstorming. Maybe there is something more obvious I'm missing.