Theorizing about visual search has focused on whether search mechanisms operate over a single feature map (efficient) or over the conjunction of maps (inefficient). But why is it harder to search a conjunction of two maps than a single map? We propose that information pursuit models can account for this difference at the algorithmic level. Information pursuit algorithms query successively smaller segments of space for the presence of a signal, without estimating the signal's position within the relevant space. This contrasts with models that search narrowly and then passively accumulate signal to improve location estimates. Information pursuit models are optimal in that they always acquire the maximum Shannon information possible within a limited search time; they also gain information at a constant rate throughout a search episode. Crucially, to operate efficiently, information pursuit requires a single input signal: in visual search, one feature map. We propose that efficient search relies on information pursuit, whereas inefficient search relies on alternative algorithms. In support, we report several efficient search experiments with limited exposures (starting at 17 ms), after which observers clicked their best estimate of the target's position. We then characterized the microgenesis of an observer's knowledge of the target's position in terms of entropy (positional uncertainty), which declined at a constant rate and was fit better by an information pursuit model than by a control model. Moreover, by modeling these exposure-limited trials, we accurately predicted reaction-time distributions measured in a standard visual search procedure, as well as the shallow search slope typical of efficient search. Thus, an information pursuit algorithm can explain the small amount of inefficiency that is characteristic of efficient search.
Overall, our experiments, modeling, and mathematical derivations make explicit at the algorithmic level the kind of processing that can evolve rapidly with the inputs from a feature map.
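The core property claimed above — that information pursuit queries successively smaller segments of space and gains information at a constant rate — can be illustrated with a toy sketch. The snippet below is a minimal, hypothetical instance, not the authors' model: it assumes a noiseless yes/no query over a one-dimensional array of cells, so each query halves the candidate region and yields exactly one bit, making positional entropy (in bits) decline linearly.

```python
import math

def information_pursuit(n_cells, target):
    """Toy information-pursuit search over n_cells locations (illustrative only).

    Each query asks whether the signal lies in the left half of the
    remaining region; a noiseless answer yields 1 bit, so the entropy
    of the target's position falls at a constant rate of 1 bit/query.
    Assumes n_cells is a power of two for clean log2 values.
    """
    lo, hi = 0, n_cells
    entropies = [math.log2(hi - lo)]  # uniform prior over remaining cells
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if target < mid:              # query: "is the signal in the left half?"
            hi = mid
        else:
            lo = mid
        entropies.append(math.log2(hi - lo))
    return lo, entropies

# Example: 16 candidate locations, target at cell 5.
# Entropy trace is 4, 3, 2, 1, 0 bits: a constant-rate decline.
location, trace = information_pursuit(16, 5)
```

Note that the algorithm localizes the target only by shrinking the queried region; at no point does it estimate a position within the remaining region, matching the contrast drawn above with passive-accumulation models.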