The face space hypothesis suggests that individual faces are encoded as points in a multidimensional space whose dimensions are formed through experience with faces (Valentine, 1991). Approaches based on Principal Component Analysis (PCA) have been widely used to extract dimensional information from faces, both in developing automated face recognition algorithms (Turk & Pentland, 1991) and in recent investigations of the psychological properties of face space dimensions (Said & Todorov, 2011). However, there has been no evidence that humans learn dimensional information from experience with faces in a manner similar to PCA. In the current study, we constructed a multidimensional stimulus space of synthetic faces that capture the major shape information in real faces. Adult participants (N = 10) studied a set of 16 synthetic faces sampled from this space and subsequently performed an old/new face recognition task in which the distracters were 16 faces drawn from a region of the stimulus space that did not overlap with the studied faces. In addition, participants judged 3 faces representing the average of the studied faces and the two directions of their first principal component (the eigenfaces). Participants learned the target faces well, as demonstrated by a high hit rate (.74) and a low false alarm rate (.12). However, they mistakenly reported having previously seen the average face and the eigenfaces, and did so at rates (.98, .95, and .97 for the average face and the two eigenfaces, respectively) even higher than their rate of correct recognition of the learned faces (ps < .01). These findings suggest that human adults implicitly learn the average and several principal components from experience with faces, offering direct evidence for the formation of face space dimensions.
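The average face and eigenface probes described above can be sketched computationally. The following is a minimal illustration, not the authors' actual stimulus-generation code: it uses random 64-dimensional vectors as stand-ins for the synthetic face shape vectors (the true dimensionality and data are assumptions), computes the average of a 16-face "studied" set, and derives the two directions of the first principal component via SVD, in the spirit of the eigenface approach of Turk & Pentland (1991).

```python
import numpy as np

# Hypothetical stand-in data: 16 "studied faces" as random 64-D
# shape vectors (the real study used structured synthetic faces).
rng = np.random.default_rng(0)
n_faces, n_dims = 16, 64
faces = rng.normal(size=(n_faces, n_dims))

# 1. The "average face" is the mean of the studied set.
average_face = faces.mean(axis=0)

# 2. Centre the data and take the SVD; the rows of Vt are the
#    principal components (the "eigenfaces").
centered = faces - average_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
first_pc = Vt[0]                  # unit-norm first principal component

# 3. The two "directions" of the first component correspond to the
#    average face displaced along +PC1 and -PC1; the displacement
#    scale (one SD of the scores along PC1) is an assumption.
scale = S[0] / np.sqrt(n_faces)
eigenface_pos = average_face + scale * first_pc
eigenface_neg = average_face - scale * first_pc
```

Together, `average_face`, `eigenface_pos`, and `eigenface_neg` correspond to the 3 probe faces judged by participants.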