Models developed to predict human search behavior in natural environments rank a potential eye movement target based on at least three factors: bottom-up salience (salience), similarity to the search target (relevance), and environmental features that predict where the object might be found (context). A model that combines all three of these factors accurately predicts (~90%) eye fixations when human subjects perform a naturalistic search task: searching for pedestrians in images of urban landscapes (Ehinger et al. 2009). It is essential to develop similar models to predict search behavior in the rhesus monkey, the preeminent animal model for investigating the neural mechanisms of eye movement control. For these experiments, two monkeys were trained to perform a pedestrian search task identical to the task used for human subjects. Monkey eye movement behavior was then compared to the predictions of the same models developed by Ehinger and colleagues to predict human behavior. Salience, relevance, and context models were all predictive of monkey eye fixations, and the combined model was accurate to a level that approached that for human behavior (~80%). A novel finding of these experiments is that rhesus monkeys appear to use scene context to guide their search. We attempted to disrupt the influence of scene context on search by testing the monkeys with an inverted set of the same images. Surprisingly, the monkeys located the pedestrian at a rate similar to that for upright images (68% upright; 64% inverted). Image inversion did not affect the predictive power of the salience model; predictions of the relevance and context models, however, were near chance for the inverted images. The predictive power of these models for monkey search behavior informs future studies of the neural mechanisms responsible for eye movement control during search in natural environments.
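To illustrate the kind of model described above, the sketch below combines three normalized priority maps (salience, relevance, context) into a single map and reads off the predicted fixation location. The weighted-product combination rule, the function names, and the weights are assumptions for illustration; the actual combination rule used by Ehinger et al. (2009) may differ.

```python
import numpy as np

def combined_priority(salience, relevance, context, weights=(1.0, 1.0, 1.0)):
    """Combine three feature maps into one priority map.

    Each input map is min-max normalized to [0, 1]; the maps are then
    combined as a weighted product. This combination rule is a common
    choice, assumed here for illustration.
    """
    norm = []
    for m in (salience, relevance, context):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        norm.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    w_s, w_r, w_c = weights
    return (norm[0] ** w_s) * (norm[1] ** w_r) * (norm[2] ** w_c)

def predicted_fixation(priority):
    """Return the (row, col) location of the priority-map maximum."""
    return np.unravel_index(np.argmax(priority), priority.shape)
```

In a model of this form, raising one weight sharpens the influence of that factor on the predicted fixation; setting a weight to zero removes the factor, which is how one would probe, for example, a context model whose predictions collapse to chance on inverted images.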