<p>Passive monocular 3-D position sensing is made possible by a new calibration scheme that relates depth to focus blur through a composite lens and aperture model. The calibration technique enables the recovery of absolute 3-D position coordinates from image coordinates and measured focus blur. A geometric model of the camera's position and orientation in space is used to transform the camera's imaging coordinates into world coordinates. The relationship between the world coordinate system and the screen coordinate system, which includes the amount of focus blur, is developed by modeling the camera imaging arrangement. The modeling proceeds first through the perspective view from a pinhole camera located anywhere in space; the camera's lens and aperture system is then analyzed to find the relationship between depth and focus blur, and the aspect ratio of the frame image is taken into account. Position accuracies comparable to those of stereo-based vision systems are achievable without the need to solve the difficult point correspondence problem.</p>
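<p>The depth-to-blur relationship described above can be illustrated with a simple thin-lens approximation. The sketch below is not the paper's composite lens and aperture model; it assumes an ideal thin lens with focal length <code>f</code>, a circular aperture of diameter <code>D</code>, and a sensor positioned to focus at depth <code>u_focus</code> (all names are hypothetical). A point at depth <code>u</code> then images to a blur circle whose diameter grows with defocus, and the relation can be inverted to recover depth from a measured blur diameter:</p>

```python
def image_distance(f, u):
    """Thin-lens equation: 1/f = 1/u + 1/v, solved for image distance v."""
    return f * u / (u - f)

def blur_diameter(f, D, u_focus, u):
    """Blur-circle diameter on the sensor for a point at depth u,
    given a lens focused at depth u_focus (thin-lens approximation)."""
    v0 = image_distance(f, u_focus)  # sensor plane location
    v = image_distance(f, u)         # where the point actually focuses
    return D * abs(v - v0) / v

def depth_from_blur(f, D, u_focus, c, far_side=True):
    """Invert the blur model: recover depth u from blur diameter c.
    far_side=True selects the branch beyond the focus plane (v < v0);
    far_side=False selects the branch in front of it (v > v0)."""
    v0 = image_distance(f, u_focus)
    v = D * v0 / (D + c) if far_side else D * v0 / (D - c)
    return f * v / (v - f)

# Example: a 50 mm lens with a 20 mm aperture, focused at 2 m.
f, D, u_focus = 0.050, 0.020, 2.0
c = blur_diameter(f, D, u_focus, u=3.0)   # blur of a point at 3 m
u = depth_from_blur(f, D, u_focus, c)     # recovered depth, ~3.0 m
```

<p>Note the two-sided ambiguity: a single blur measurement matches one depth in front of the focus plane and one behind it, so a practical system must disambiguate the branch, for example by a second image at a different focus setting.</p>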