Cognitive Sciences Stack Exchange is a question and answer site for practitioners, researchers, and students in cognitive science, psychology, neuroscience, and psychiatry.

I've seen a few neuroscience accounts of visual navigation and many A.I. projects, but no psychologically plausible accounts that actually solve the computational problem (i.e. produce a working model).

Obviously I realize that to ask for a general cognitive model of visual navigation would be too much, but surely someone has done some work on a small sub-domain (e.g. driving, navigating in video games, navigation in specific animals).

Any computational account along these lines would be appreciated, though a connectionist account would be preferred.

In a nutshell, our visual system combines knowledge about the task (e.g. the colour of a search target) with the salience of external stimuli to control where we look within a scene.
Land and Hayhoe's paper on eye movements during a sandwich-making task gives real insight into how gaze control works during a natural task: http://cvcl.mit.edu/SUNSeminar/LandHayhoe_eye_actions_VR01.pdf
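To make the idea concrete, here is a minimal toy sketch (not any specific published model) of combining a bottom-up saliency map with a top-down task-relevance map into a single gaze-priority map; the weighting scheme and the `top_down_weight` parameter are my own illustrative assumptions:

```python
import numpy as np

def combined_priority_map(saliency, task_relevance, top_down_weight=0.5):
    """Combine a bottom-up saliency map with a top-down task-relevance
    map into one gaze-priority map (a toy weighted sum, purely
    illustrative)."""
    def normalise(m):
        # Rescale to [0, 1] so the weighting between maps is meaningful.
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else m

    s = normalise(np.asarray(saliency, dtype=float))
    t = normalise(np.asarray(task_relevance, dtype=float))
    # Higher top_down_weight => task knowledge dominates gaze selection.
    return (1 - top_down_weight) * s + top_down_weight * t

# Toy example: a 5x5 "scene" where bottom-up saliency peaks at (1, 1),
# but the task (e.g. "find the red target") favours location (3, 3).
saliency = np.zeros((5, 5)); saliency[1, 1] = 1.0
task = np.zeros((5, 5)); task[3, 3] = 1.0
priority = combined_priority_map(saliency, task, top_down_weight=0.7)
# The model's next fixation is the location with the highest priority.
next_fixation = np.unravel_index(priority.argmax(), priority.shape)
print(tuple(int(i) for i in next_fixation))
```

With the top-down weight at 0.7, the task-relevant location wins over the merely salient one, which is the kind of trade-off Land and Hayhoe's data illustrate: during a task, gaze goes where the task needs it, not just where the scene is most salient.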

Navigation is a task that proceeds step by step. I am not aware of any computational model of visual navigation specifically, but I believe the same basic principles apply. I hope this helps!

Hi Javier, it's great to have someone who has written a PhD on the topic provide an answer. Would you be able to summarise how your thesis answers the question or alternatively quote any relevant passages (e.g., your abstract)? Also, is the full-text PDF available anywhere on the internet?
– Jeromy Anglim♦ Dec 2 '12 at 7:34