Although generic visual search in the real world involves integrating information over multiple fixations, most research has focused on single-fixation tasks where stimuli are presented briefly. At least two factors have held back progress in understanding visual processing in more natural search tasks: (1) the difficulty of precisely controlling and manipulating the stimulus on the retina and (2) the lack of a Bayesian ideal-observer theory for multiple-fixation (extended) visual search. To enable precise stimulus control in extended search tasks, we developed “gaze-contingent” software (see http://svi.cps.utexas.edu/) that allows real-time control of the content of video displays relative to the observer's current gaze direction (measured with an eye tracker). In our first experiment, we measured search time and eye movements while subjects searched for Gabor targets in 1/f noise. We parametrically varied target spatial frequency, noise contrast, and the rate of fall-off in display resolution from the point of fixation. This experiment provides quantitative data on how much information can be removed from the periphery (how much foveation can be tolerated) without affecting search time or the pattern of eye movements. We find that the shape of the function describing search time versus degree of foveation depends upon target spatial frequency but is (interestingly) independent of noise contrast. To provide an appropriate benchmark against which to evaluate search performance, and to provide a starting point for developing models of search performance, we derived the ideal observer for visual search in broadband noise, where the ideal searcher is constrained by an arbitrary function describing sensitivity across the retina and by some level of internal noise. We compare the eye movements and performance of ideal and real observers. More details of the ideal visual searcher are given in another presentation (Najemnik et al. VSS 2003).
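The stimulus construction described above (a Gabor target embedded in 1/f noise) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code; the specific parameter values (image size, pixels per degree, target frequency, envelope width, contrasts) are assumptions chosen only for the example.

```python
import numpy as np

def gabor(size, freq_cpd, ppd, sigma_deg, contrast):
    """Vertical Gabor: cosine carrier windowed by a Gaussian envelope.
    freq_cpd is cycles/degree; ppd is display pixels/degree."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half] / ppd  # coordinates in degrees
    carrier = np.cos(2 * np.pi * freq_cpd * x)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return contrast * carrier * envelope

def one_over_f_noise(size, rms_contrast, rng):
    """Noise whose amplitude spectrum falls as 1/f, built by assigning
    random phases in the Fourier domain and inverting."""
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size))
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0            # avoid division by zero at DC
    amp = 1.0 / f
    amp[0, 0] = 0.0          # zero-mean image: no DC component
    phase = rng.uniform(0, 2 * np.pi, (size, size))
    noise = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    return rms_contrast * noise / noise.std()

rng = np.random.default_rng(0)
stim = one_over_f_noise(256, rms_contrast=0.2, rng=rng)
# embed a 4 c/deg target at the image centre (assume 32 pixels/degree)
stim += gabor(256, freq_cpd=4, ppd=32, sigma_deg=0.25, contrast=0.3)
```

In an experiment, the target location and the parameters varied here (target frequency, noise contrast) would be drawn per trial.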
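The "rate of fall-off in display resolution from the point of fixation" can be illustrated with a simple eccentricity-based resolution map, the kind of quantity a gaze-contingent display would use to decide how much to blur each pixel. This is a hedged sketch: the half-resolution-eccentricity form and the value `e2 = 2.3` deg are assumptions for illustration, not parameters reported in the abstract.

```python
import numpy as np

def relative_resolution(ecc_deg, e2=2.3):
    """Fraction of foveal resolution retained at eccentricity ecc_deg,
    using a half-resolution-eccentricity fall-off: resolution halves at e2.
    Smaller e2 means steeper foveation (more information removed)."""
    return e2 / (e2 + ecc_deg)

def eccentricity_map(size, gaze_xy, ppd):
    """Eccentricity (degrees) of every pixel from the current gaze point,
    as would be updated in real time from the eye tracker."""
    y, x = np.mgrid[0:size, 0:size]
    return np.hypot(x - gaze_xy[0], y - gaze_xy[1]) / ppd
```

Mapping `relative_resolution(eccentricity_map(...))` over the image gives a per-pixel target resolution, which the display software can realize by blending progressively low-pass-filtered copies of the video frame.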