Classic visual search typically involves a single target in artificial, static displays. Real-world search tasks, however, can involve looking for multiple instances of multiple types of targets (hybrid foraging) among distractors. In this study, we investigated how the human "search engine" performs hybrid foraging while actively navigating through a 3D city terrain. We also investigated whether augmented reality, in the form of navigational cues, provides a benefit in this kind of complex, real-world search. In this videogame-style task, observers memorized 4, 8, or 16 target objects and were given two competing tasks: navigate to an endpoint before a time deadline and collect as many targets as possible along the way. Navigational cues were either an 'arrow' presented at street corners, pointing toward the endpoint, or a 'waypoint' cue, numerically indicating the distance to the endpoint, with the number decreasing as observers moved in the correct direction. Our analysis focused on the cost of memory load, the type of navigational cue, and the pattern of target selection (the rate at which targets were picked, and 'runs' of collecting multiple instances of the same target type). We found that navigational cues hindered search performance: observers given no navigational cues picked up more targets than those given cues (None: 0.492 targets/second; Arrow: 0.467; Waypoint: 0.393; p < .02). The rate at which targets were picked decreased as memory load increased (p < .01) and when navigational cues were provided (p < .01). The number of runs decreased significantly as memory load increased but did not differ significantly between navigation conditions. These results provide a first look at a complex search task in a dynamic display and at how the human search engine copes with navigational cues while performing visual search.