Everyday search tasks are performed in contextually rich environments that offer numerous high-level cues to likely target locations. Chun & Jiang (1998) studied such constraints using simple search stimuli, and Henderson, Weeks, & Hollingworth (1999) reported effects of semantic consistency on search. Still, the question of how scene-based constraints affect search behavior remains largely unexplored. We addressed this question by having subjects search for the presence or absence of a blimp, helicopter, or jeep in a pseudorealistic mountainous desert scene. Consistent with pre-existing scene expectations, the blimp appeared only in the sky, the jeep only on the ground, and the helicopter appeared as often in the sky as on the ground. Importantly, subjects were not instructed as to these contingencies, but were instead left to devise their own search strategies. There were 6 objects per scene, with at least one object of each type present in each scene; object color was manipulated to avoid duplicate items. Subjects (n = 11) were shown a semantically defined target (e.g., “Red Blimp”) for 1 second, followed by a search scene. Analysis of target-present (TP) trials revealed that scene-constrained (SC) targets (blimp, jeep) were detected 265 ms faster and acquired with 1.04 fewer eye movements than the scene-unconstrained (SU) target (helicopter). For SC targets, we also found that ∼75% of initial saccades landed in the target-consistent region and that subjects spent a greater proportion of their total search time in these regions. Interestingly, analysis of the SU target data revealed a high percentage of initial saccades to the sky region, suggesting that eye movements were guided by pre-existing scene constraints rather than learned probability matching. Smaller effects were found in the target-absent (TA) data.
We conclude that subjects can use scene-based contextual constraints to guide their search, and that this information is available to the initial eye movements in a scene.