Immersive virtual reality is becoming an increasingly powerful tool for studying visuo-spatial perception in moving observers. However, the validity of results depends critically on an accurate calibration of the visual display.

We have developed a system for calibrating a head mounted display (HMD) using camera calibration techniques (Gilson et al., J. Neuroscience Methods, 173, 2008). The method represents a significant advance over previous methods that require subjective judgements by someone wearing the HMD (e.g. SPAAM, Tuceryan et al., Presence-Teleop. Virt., 11, 2002). Here, we report two refinements: (i) an extension to non-see-through HMDs and (ii) the modelling and reduction of non-linear distortions.

We placed a camera inside a stationary HMD and recorded a chequerboard image generated by the HMD. The HMD was then removed, without moving the camera, and images were taken of tracked objects. The chequerboard vertices permit object image locations to be translated to HMD coordinates. The position and orientation of the HMD and world objects were recorded by a 6 degrees-of-freedom tracking system. We used standard camera calibration techniques to recover the optical parameters of the HMD (not the camera) and hence derive appropriate software frustums for rendering virtual scenes in the binocular HMD. These parameters include the aspect ratio and angular subtense of the display, the location of the optic centres and the 3D orientation of each display, as well as non-linear distortions. We calibrated and tested on separate data sets to assess the generalizability of each calibration.
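The core of such a calibration is recovering a projection matrix from corresponding world and image points. The sketch below illustrates this step with the Direct Linear Transform, a standard camera-calibration technique; the intrinsic parameters, pose and point data are synthetic placeholders, not values from our apparatus, and the published method involves additional steps (tracker alignment, distortion modelling) not shown here.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 world/image point
    pairs via the Direct Linear Transform (linear least squares on
    the homogeneous projection equations, solved by SVD)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def reproject(P, world_pts):
    """Project 3D points through P and return pixel coordinates."""
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Hypothetical ground-truth camera for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])          # intrinsics
Rt = np.hstack([np.eye(3), [[0.1], [-0.05], [2.0]]])  # pose
P_true = K @ Rt

rng = np.random.default_rng(0)
world = rng.uniform(-1.0, 1.0, (20, 3))       # non-coplanar 3D points
image = reproject(P_true, world)              # their noise-free projections

P_est = dlt_calibrate(world, image)
err = np.linalg.norm(reproject(P_est, world) - image, axis=1)
print(f"max reprojection error: {err.max():.2e} px")
```

With noise-free synthetic data the reprojection error is at numerical-precision level; with real tracked points and detected chequerboard vertices, errors of a few pixels, as reported below, are typical.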

Our calibration method yields reprojection errors of around 3 pixels. The calibration generalizes well to other data sets, with reprojection errors typically less than 6 pixels, or less than 15 pixels for non-see-through HMDs. Additionally, modelling non-linear distortions in the HMD image can further reduce reprojection errors by as much as 30%.
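To illustrate how modelling non-linear distortion reduces residual error, the sketch below fits a single-parameter radial model, x_d = x_u(1 + k1 r^2), by least squares. The coefficient value and point data are hypothetical; the abstract does not specify which distortion model was used, so this stands in for the general idea only.

```python
import numpy as np

# Synthetic undistorted points in normalised image coordinates
# (hypothetical data for illustration).
rng = np.random.default_rng(1)
undist = rng.uniform(-0.5, 0.5, (100, 2))
r2 = (undist ** 2).sum(axis=1, keepdims=True)   # squared radius per point

k1_true = -0.15                                  # assumed barrel distortion
dist = undist * (1.0 + k1_true * r2)             # "observed" distorted points

# The model is linear in k1:  dist - undist = k1 * (undist * r2),
# so a closed-form least-squares fit suffices.
A = (undist * r2).ravel()
b = (dist - undist).ravel()
k1_est = (A @ b) / (A @ A)

# Mean residual before and after applying the fitted model.
resid_before = np.linalg.norm(dist - undist, axis=1).mean()
resid_after = np.linalg.norm(dist - undist * (1.0 + k1_est * r2),
                            axis=1).mean()
print(f"k1_est = {k1_est:.4f}, residual {resid_before:.4f} -> {resid_after:.2e}")
```

In practice the distortion coefficients would be estimated jointly with the linear frustum parameters against the tracked calibration points, and the improvement is partial rather than total, consistent with the ~30% reduction reported above.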