Today's large and high-resolution displays coupled with
powerful graphics hardware offer the potential for highly
realistic 3D virtual environments, but they also make
target acquisition more difficult for users interacting
with these environments. We present an
adaptation of semantic pointing to object picking in 3D
environments. Essentially, semantic pointing shrinks
empty space and expands potential targets on the screen by
dynamically adjusting the ratio between movement in visual
space and motor space for relative input devices such as
the mouse. Our implementation operates in image space,
using a hierarchical representation of the standard
stencil buffer to allow real-time computation of the
closest target for every position on the screen.
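The CD-ratio adjustment described above can be sketched in a few lines. This is a minimal illustration under assumed parameters, not the paper's implementation: `activation_radius` and `min_gain` are hypothetical names introduced here. The idea is that the visual-space displacement shrinks as the cursor nears a target, so the target occupies a larger region of motor space, while movement over empty space passes through unchanged.

```python
def semantic_pointing_gain(dx, dy, dist_to_target,
                           activation_radius=30.0, min_gain=0.25):
    """Map a relative motor-space displacement (dx, dy) to visual space.

    Illustrative sketch of semantic pointing's dynamic CD-ratio:
    `activation_radius` and `min_gain` are assumed tuning parameters.
    """
    if dist_to_target >= activation_radius:
        # Empty space: motor movement maps 1:1 to visual movement,
        # so empty space is effectively "shrunk" relative to targets.
        return dx, dy
    # Near a target, interpolate the gain from min_gain (at the target)
    # up to 1.0 (at the edge of the activation radius).
    t = dist_to_target / activation_radius
    gain = min_gain + (1.0 - min_gain) * t
    return dx * gain, dy * gain
```

A displacement of 10 pixels in motor space thus shrinks to 2.5 visual pixels directly over a target, making the target feel four times larger in motor space.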
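For intuition on the image-space lookup, the following brute-force CPU stand-in computes what the hierarchical stencil-buffer representation delivers in real time on the GPU: the distance from every screen position to its closest target. This is a simplified sketch only; it omits the hierarchy entirely, and `nearest_target_distances` is a name introduced here.

```python
import math

def nearest_target_distances(stencil):
    """For every pixel, find the Euclidean distance to the closest
    target pixel. `stencil` is a 2D list of 0/1 values, 1 where a
    rendered target covers the pixel, as a plain stencil buffer
    would record after the picking pass.
    """
    h, w = len(stencil), len(stencil[0])
    targets = [(y, x) for y in range(h) for x in range(w) if stencil[y][x]]
    # O(pixels x targets): the hierarchical GPU version exists
    # precisely to avoid this cost at interactive rates.
    return [[min(math.hypot(y - ty, x - tx) for ty, tx in targets)
             for x in range(w)] for y in range(h)]
```

The resulting distance field is what drives the gain adjustment: the cursor's current pixel indexes the field to obtain its distance to the nearest target.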