Thursday, October 01, 2009

Sony announced the development of a single-lens 3D camera technology capable of recording 3D images of fast-moving subjects, such as sports, at 240fps. A prototype camera incorporating this technology is to be demonstrated at "CEATEC JAPAN 2009" in Chiba city, Japan, from October 6.

Existing 3D camera systems have separate lenses for the left and right eyes. When operating the zoom and focus of such systems, however, complex technology is required to keep the two lenses closely coordinated, so that there are no discrepancies in optical axis, image size, or focus. The human eye is very sensitive to differences in the size and rotation of the two images, as well as to any vertical misalignment or difference in image quality. A single-lens system is said to resolve the issues that arise from the two eyes' views having different optical characteristics.

The optical scheme of the camera is shown below:

The main idea appears to be that the image from the single lens is split, so that one sensor receives the image formed by the left half of the lens, while the other sensor receives the image formed by the right half.
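A toy thin-lens model shows why splitting the aperture yields stereo information (this is my own sketch of the principle, not Sony's published design): a point away from the focal plane blurs into a disc, and the halves of that disc contributed by the left and right halves of the lens have centroids that are displaced from each other, with opposite signs for near and far points.

```python
from math import pi

def half_aperture_disparity(z, z_focus, f, aperture):
    """Signed disparity (in sensor-plane metres) between the images formed
    by the left and right halves of a single thin lens, for a point at
    depth z, with the sensor focused at z_focus. All distances in metres."""
    v = 1.0 / (1.0 / f - 1.0 / z)            # image distance for the point
    v0 = 1.0 / (1.0 / f - 1.0 / z_focus)     # sensor plane position
    blur_radius = 0.5 * aperture * (v0 - v) / v  # signed geometric blur radius
    # Each half-aperture forms a half-disc blur; the centroid of a half-disc
    # sits 4r/(3*pi) from the centre, so the two centroids are separated by:
    return (8.0 / (3.0 * pi)) * blur_radius

# 50 mm f/2 lens focused at 2 m: the in-focus point has zero disparity,
# while nearer and farther points acquire opposite-signed disparities.
for z in (1.0, 2.0, 4.0):
    print(z, half_aperture_disparity(z, 2.0, 0.050, 0.025))
```

The sign flip between near and far points is exactly what a two-lens stereo rig produces, which is why the two half-aperture images can be treated as a left/right pair.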

It looks to me like Sony could make a much more compact design on the same principle if it used a single stereo image sensor, such as the one described in a recent Kodak patent application, US20090219432. Kodak relies on exotic lenticular microlenses in that invention, but the same idea could probably be implemented with regular microlenses, although the exact reference for such an implementation escapes me.
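To illustrate the single-sensor idea: if the microlenses steered left-half-aperture light to one set of pixel columns and right-half-aperture light to the interleaved set, readout would just be a deinterleave. The even/odd column layout below is my guess at such a scheme, not the actual geometry of the Kodak patent.

```python
import numpy as np

# Dummy 4x8 sensor readout with hypothetical interleaved stereo columns:
# even columns under "left-looking" microlenses, odd under "right-looking".
raw = np.arange(4 * 8).reshape(4, 8)

left = raw[:, 0::2]   # left-aperture view
right = raw[:, 1::2]  # right-aperture view

print(left.shape, right.shape)  # two half-width views from one sensor
```

Each view has half the horizontal resolution, which is the usual price of multiplexing a stereo pair onto a single pixel array.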

I am still an amateur when it comes to 3D vision systems, but one thing has struck me as remarkable. In natural viewing, the human sense of depth seems to be rotationally invariant around the line of sight; that is, you perceive depth the same whether you are standing up or lying on your side. A stereoscopic display, however, bakes a fixed horizontal disparity into the picture, and that would seem to be a major issue for couch potatoes.

Someone in Japan (I had best not name names) mentioned to me that images taken with a horizontal stereo axis and displayed assuming a horizontal axis get messed up by the brain if you lie on your couch watching 3D TV instead of sitting up.
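A little vector arithmetic makes the problem concrete (my own sketch): resolving the screen's fixed horizontal disparity into the viewer's eye-baseline frame shows that a 90-degree head roll turns all of it into vertical disparity, which the visual system cannot fuse into depth.

```python
import math

def disparity_in_eye_frame(screen_disparity, head_roll_deg):
    """Resolve a screen-horizontal disparity into components along and
    across the viewer's eye baseline after a head roll about the line
    of sight."""
    a = math.radians(head_roll_deg)
    dx, dy = screen_disparity, 0.0  # 3D TV disparity is purely horizontal
    along = dx * math.cos(a) + dy * math.sin(a)   # fusible as depth
    across = dy * math.cos(a) - dx * math.sin(a)  # unfusible vertical disparity
    return along, across

# Sitting upright: all 10 px of disparity lies along the eye baseline.
print(disparity_in_eye_frame(10.0, 0))   # (10.0, 0.0)
# Lying on your side: the same 10 px becomes pure vertical disparity.
print(disparity_in_eye_frame(10.0, 90))
```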

I'd guess that Microsoft's Project Natal, which is supposed to win the war for our living rooms, could use its face recognition capabilities to identify a person lying on his side. It could then rotate the picture on the TV screen to maintain the 3D effect.
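The compensation itself is simple geometry, assuming the face tracker can report a head-roll angle (that reporting is my speculation, not a documented Project Natal capability): rotating the displayed stereo content by the detected roll keeps the disparity aligned with the eye baseline.

```python
import math

def rotate(vec, deg):
    """Rotate a 2D vector (dx, dy) by deg degrees."""
    a = math.radians(deg)
    dx, dy = vec
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

def eye_frame_disparity(head_roll_deg, screen_rotation_deg):
    """Disparity seen in the eye-baseline frame when the on-screen content
    is rotated by screen_rotation_deg and the head rolls head_roll_deg."""
    d = rotate((1.0, 0.0), screen_rotation_deg)  # content disparity on screen
    return rotate(d, -head_roll_deg)             # resolve into the eye frame

# Uncompensated: lying at 90 degrees turns all disparity vertical.
print(eye_frame_disparity(90, 0))
# Compensated: rotating the picture by the detected roll restores
# pure baseline-aligned (fusible) disparity.
print(eye_frame_disparity(90, 90))
```

Rotating the picture would crop the corners on a rectangular screen, so a practical system would also have to zoom or letterbox, but the disparity geometry works out.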