The goal of our work is to understand the role of low-, mid-, and high-spatial-frequency bands in face recognition. Using stimuli containing only partial spatial frequency information (e.g., low-resolution images), our experiments seek to titrate out the contribution of the different bands by examining under what circumstances the loss of information causes recognition to fail. It is believed that featural information is carried by the high spatial frequencies (sf) while configural information resides in the lower sf bands. This predicts that featural changes should become harder to detect than configural ones as the sf content of an image shifts towards the low frequencies. Contrary to this expectation, however, we find that the detectability of configural and featural changes degrades at the same rate across this transformation. We additionally find that reaction times for recognition are longer for low-resolution images, and that observers' tolerance to image degradation is enhanced by familiarity with the individuals depicted.
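Band-limited stimuli of the kind used here can be approximated by masking an image's Fourier spectrum. The following is a minimal illustrative sketch, not the stimulus-generation procedure actually used in the experiments; the function name and the cycles-per-image band limits are our own assumptions.

```python
import numpy as np

def bandpass(image, low_cpi, high_cpi):
    """Keep only spatial frequencies whose radial frequency lies in
    [low_cpi, high_cpi] cycles per image; zero out all others.
    `image` is a 2-D grayscale array."""
    f = np.fft.fft2(image)
    # Frequency coordinates in cycles per image along each axis.
    fy = np.fft.fftfreq(image.shape[0]) * image.shape[0]
    fx = np.fft.fftfreq(image.shape[1]) * image.shape[1]
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.fft.ifft2(f * mask).real
```

Passing a low `high_cpi` yields the blurred, low-resolution-like stimuli; raising `low_cpi` instead isolates the high-sf featural detail.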

The pattern of results so far suggests that the human visual system might use an iterative process and prior experience with faces to compensate for missing information. This led us to implement a computational technique for information “fill-in” using a database of calibrated faces as “prior knowledge.” Relying on statistical dependencies between different parts of the image, the technique reconstructs information missing from a given image due to, say, occlusion or blurring, patch by patch from the database. This technique may serve as a model of the cognitive processes underlying top-down influences on image analysis.
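The patch-by-patch reconstruction can be sketched as a nearest-neighbor lookup against the face database: for each occluded patch, the database face that best agrees with the visible pixels supplies the missing ones. This is a deliberately simplified illustration under our own assumptions (aligned same-size faces, square patches, sum-of-squares matching); it is not the paper's actual algorithm.

```python
import numpy as np

def fill_in(image, mask, database, patch=8):
    """Fill occluded pixels patch by patch. `mask` is True where pixels
    are missing; `database` is a list of aligned face images of the
    same shape as `image`."""
    out = image.copy()
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            sl = (slice(y, y + patch), slice(x, x + patch))
            hole = mask[sl]
            if not hole.any():
                continue  # nothing missing in this patch
            visible = ~hole
            # Score each database face by its agreement with the
            # visible pixels of this patch (sum of squared errors).
            errs = [np.sum((db[sl][visible] - image[sl][visible]) ** 2)
                    for db in database]
            best = database[int(np.argmin(errs))]
            # Copy the best match's pixels into the hole.
            out[sl][hole] = best[sl][hole]
    return out
```

The key idea matches the text: statistical dependencies between visible and occluded regions (here, the shared identity of the best-matching database face) let the prior knowledge supply the missing content.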