Much of the research on face perception has focused on the processes involved in classifying or recognizing individual faces. We examined the stimulus properties that invoke these processes by asking which image characteristics are necessary to classify a stimulus as a face. These requirements are not well defined but are presumably weak, for vivid faces can often be seen in random or natural images such as cloud or rock formations. To characterize possible facial configurations, we measured where observers perceived faces in semi-random and naturalistic images defined by symmetric 1/f noise. Images were grayscale with an rms contrast of 0.35 and were symmetric about the vertical midline. In these stimuli many faces can be perceived along the vertical axis, appearing stacked at multiple scales, reminiscent of totem poles. Subjects identified which faces they saw and marked the center and outline of the face parts. These drawings were analyzed to examine the distribution of properties defining the faces. The analysis confirms the importance of stimulus dimensions such as symmetry, orientation, and contrast polarity in face perception, and reveals the relative salience and characteristics of features and their configurations. In particular, seeing a face required seeing eyes, and these were largely restricted to dark regions in the images. Other features were more subordinate and showed relatively little bias in polarity. In further measurements we also characterized the influence of chromatic variations in the images. Notably, many faces were rated as clearly defined, suggesting that once an image area is coded as a face it is reinterpreted and perceptually completed. We examined this process by asking how the same image areas are classified when salient facial features are present or absent. Collectively, these measurements help to reveal the basic perceptual templates underlying the initial stages of face coding.
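The stimulus class described above (grayscale 1/f noise at an rms contrast of 0.35, mirrored about the vertical midline) can be sketched as follows. This is a minimal illustrative reconstruction, not the study's actual stimulus code; the function name, image size, and random seed are assumptions.

```python
import numpy as np

def symmetric_pink_noise(size=256, rms_contrast=0.35, seed=0):
    """Illustrative sketch: grayscale 1/f noise image, mirror-symmetric
    about the vertical midline, scaled to a given rms contrast."""
    rng = np.random.default_rng(seed)
    # Start from white Gaussian noise and impose a 1/f amplitude spectrum.
    noise = rng.standard_normal((size, size))
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    spectrum = np.fft.fft2(noise) / f   # amplitude falls as 1/f
    spectrum[0, 0] = 0.0                # remove the DC component
    img = np.real(np.fft.ifft2(spectrum))
    # Mirror the left half onto the right: symmetry about the vertical midline.
    img[:, size // 2:] = img[:, : size // 2][:, ::-1]
    # Scale so the standard deviation (rms contrast) matches, about a mean
    # luminance of roughly 0.5.
    img = img / img.std() * rms_contrast
    return img + 0.5

img = symmetric_pink_noise()
```

Mirroring after filtering keeps the 1/f falloff while guaranteeing exact left-right symmetry, which is the property the stimuli exploit to evoke face-like configurations along the vertical axis.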