Leopold, D.A., O'Toole, A.J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4(1), 89-94. http://www.kyb.tuebingen.mpg.de/
Abstract: We used high-level configural aftereffects induced by adaptation to realistic faces to investigate visual representations underlying complex pattern perception. We found that exposure to an individual face for a few seconds generated a significant and precise bias in the subsequent perception of face identity. In the context of a computationally derived 'face space', adaptation specifically shifted perception along a trajectory passing through the adapting and average faces, selectively facilitating recognition of a test face lying on this trajectory and impairing recognition of other faces. The results suggest that the encoding of faces and other complex patterns draws upon contrastive neural mechanisms that reference the central tendency of the stimulus category.

Deffenbacher, K.A., Johanson, J., Vetter, T., & O'Toole, A.J. (2000). The face typicality-recognizability relationship: encoding or retrieval locus? Memory and Cognition, 28(7), 1173-1182. http://www.kyb.tuebingen.mpg.de/
Abstract: Using a crossover recognition memory testing paradigm, we tested whether the effects on face recognition of the memorability component of face typicality (Vokey & Read, 1992, 1995) are due primarily to the encoding process occurring during study or to the retrieval process occurring at test. At study, faces were either veridical in form or at moderate (Experiment 1) or extreme (Experiment 2) levels of caricature. The variable of degree of facial caricature at study was crossed with the degree of caricature at test. The primary contribution of increased memorability to increased hit rate was through increased distinctiveness at study. Increased distinctiveness at test also contributed to substantial reductions in the false alarm rate. Signal detection analyses confirmed that the mirror effects obtained were primarily stimulus/memory-based, rather than decision-based. Contrary to the conclusion of Vokey and Read (1992), we found that increments in face memorability produced increments in face recognition that were due at least as much to enhanced encoding of studied faces as they were to increased rejection of distractor faces.

Blanz, V., O'Toole, A.J., Vetter, T., & Wild, H.A. (2000). On the other side of the mean: The perception of dissimilarity in human faces. Perception, 29(8), 885-891. http://www.kyb.tuebingen.mpg.de/
Abstract: We created a 'face space' using a laser-scan representation of faces. In this space, a caricature can be made by moving a face away from the average face, along the line connecting the particular face to the average face. Here, we move the face along this line in the other direction, proceeding through the mean and 'out the other side'. This results in a face that is 'opposite', in a computational sense, to the original face. We morphed several faces into their anti-faces and sampled the morph trajectory in five discrete steps. We then collected similarity ratings from human participants for all possible pairs of morphed faces to determine how the distances in the 'physical face space' related to the distances in the 'psychological face space'. The data indicate that there is a perceptual discontinuity of face identity as the face crosses over to the 'other side of the mean'. We consider these results in the context of face-space models of human face processing.

O'Toole, A.J., Price, T., Vetter, T., Bartlett, J.C., & Blanz, V. (1999). 3D shape and 2D surface textures of human faces: the role of "averages" in attractiveness and age. Image and Vision Computing, 18(1), 9-19. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf210.pdf
Abstract: Recent work in the psychological literature has indicated that attractive faces are in some ways "average" [J.H. Langlois, L.A. Roggman, Attractive faces are only average, Psychological Science, 1(2) (1990) 115-121] and that the apparent age of a face can be related to its proximity to the average of a computationally derived "face space" [A.J. O'Toole, T. Vetter, H. Volz, E.M. Salter, Three-dimensional caricatures of human heads: distinctiveness and the perception of facial age, Perception, 26 (1997) 719-732]. We examined the relationship between facial attractiveness, age, and "averageness", using laser scans of faces that were put into complete correspondence with the average face [T. Vetter, V. Blanz, Estimating coloured 3D face models from single images: an example based approach, in: H. Burkhardt, B. Neumann (Eds.), Proceedings of the Fifth European Conference on Computer Vision, Freiburg, Germany, 1998, pp. 499-513]. This representation enabled selective normalization of the 3D shape versus the surface texture map of the faces. Shape-normalized faces, created by morphing the texture maps from individual faces onto the average head shape, and texture-normalized faces, created by morphing the average texture onto the shape of each individual face, were judged by human subjects to be both more attractive and younger than the original faces. The study shows that relatively global, psychologically meaningful attributes of faces can be modeled very simply in face spaces of this sort.

O'Toole, A.J., Vetter, T., & Blanz, V. (1999). Three-dimensional shape and two-dimensional surface reflectance contributions to face recognition: an application of three-dimensional morphing. Vision Research, 39(18), 3145-3155. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf209.pdf
Abstract: We measured the three-dimensional shape and two-dimensional surface reflectance contributions to human recognition of faces across viewpoint. We first divided laser scans of human heads into their two- and three-dimensional components. Next, we created shape-normalized faces by morphing the two-dimensional surface reflectance maps of each face onto the average three-dimensional head shape, and reflectance-normalized faces by morphing the average two-dimensional surface reflectance map onto each three-dimensional head shape. Observers learned frontal images of the original, shape-normalized, or reflectance-normalized faces, and were asked to recognize the faces from viewpoint changes of 0, 30 and 60°. Both the three-dimensional shape and two-dimensional surface reflectance information contributed substantially to human recognition performance, thus constraining theories of face representation to include both types of information.

Deffenbacher, K.A., Vetter, T., Johanson, J., & O'Toole, A.J. (1998). Facial aging, attractiveness, and distinctiveness. Perception, 27(10), 1233-1243. http://www.kyb.tuebingen.mpg.de/
Abstract: A standard facial caricature algorithm has been applied to a three-dimensional (3-D) representation of human heads, those of Caucasian male and female young adults. Observers viewed unfamiliar faces at four levels of caricature (anticaricature, veridical, moderate caricature, and extreme caricature) and made ratings of attractiveness and distinctiveness (experiment 1) or learned to identify them (experiment 2). There were linear increases in perceived distinctiveness and linear decreases in perceived attractiveness as the degree of facial caricature (Euclidean distance from the average face in 3-D-grounded face space) increased. Observers learned to identify faces presented at either level of positive caricature more efficiently than they did with either uncaricatured or anticaricatured faces.
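The caricature operation used throughout these studies is linear extrapolation in a vectorized face space: a face is moved along the line through itself and the average face. A minimal sketch, assuming faces are represented as corresponding coordinate vectors (all array values illustrative):

```python
import numpy as np

def caricature(face, average, alpha):
    """Move a face along the line through itself and the average face.

    alpha > 1     : caricature (exaggerates distinctive features)
    0 < alpha < 1 : anticaricature (closer to the average)
    alpha < 0     : 'anti-face' on the other side of the mean
    """
    return average + alpha * (face - average)

# Toy 2D example: average face at the origin, one individual face.
avg = np.array([0.0, 0.0])
face = np.array([1.0, 2.0])

assert np.allclose(caricature(face, avg, 1.0), face)           # veridical
assert np.allclose(caricature(face, avg, 1.5), [1.5, 3.0])     # caricature
assert np.allclose(caricature(face, avg, -1.0), [-1.0, -2.0])  # anti-face
```

The same one-parameter family covers the anticaricature, veridical, caricature, and anti-face conditions in the papers above; only the choice of alpha differs.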
Using the same faces, 3-D representation, and caricature levels, O'Toole, Vetter, Volz, and Salter (1997, Perception, 26, 719-732) had shown a linear increase in judgments of face age as a function of degree of caricature. Here it is concluded that older-appearing faces are less attractive, but more distinctive and memorable, than younger-appearing faces, those closer to the average face.

Vetter, T. (1998). Synthesis of Novel Views from a Single Face Image. International Journal of Computer Vision, 28(2), 102-116. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf266.pdf
Abstract: Images formed by a human face change with viewpoint. A new technique is described for synthesizing images of faces from new viewpoints when only a single 2D image is available. A novel 2D image of a face can be computed without explicitly computing the 3D structure of the head. The technique draws on a single generic 3D model of a human head and on prior knowledge of faces based on example images of other faces seen in different poses. The example images are used to "learn" a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. The proposed method is interesting for view-independent face recognition tasks as well as for image synthesis problems in areas like teleconferencing and virtualized reality.

Jones, M.J., Sinha, P., Vetter, T., & Poggio, T. (1997). Top-down learning of low-level vision tasks. Current Biology, 7(12), 991-994. http://www.kyb.tuebingen.mpg.de/
Abstract: Perceptual tasks such as edge detection, image segmentation, lightness computation and estimation of three-dimensional structure are considered to be low-level or mid-level vision problems and are traditionally approached in a bottom-up, generic and hard-wired way. An alternative to this would be to take a top-down, object-class-specific and example-based approach. In this paper, we present a simple computational model implementing the latter approach. The results generated by our model when tested on edge-detection and view-prediction tasks for three-dimensional objects are consistent with human perceptual expectations. The model's performance is highly tolerant to the problems of sensor noise and incomplete input image information. Results obtained with conventional bottom-up strategies show much less immunity to these problems. We interpret the encouraging performance of our computational model as evidence in support of the hypothesis that the human visual system may learn to perform supposedly low-level perceptual tasks in a top-down fashion.

Vetter, T., & Troje, N.F. (1997). Separation of texture and shape in images of faces for image coding and synthesis. Journal of the Optical Society of America A, 14(9), 2152-2161. http://www.kyb.tuebingen.mpg.de/
Abstract: Human faces differ in shape and texture. Image representations based on this separation of shape and texture information have been reported by several authors [for a review, see Science 272, 1905 (1996)]. We investigate such a representation of human faces based on a separation of texture and two-dimensional shape information. Texture and shape were separated by use of pixel-by-pixel correspondence among the various images, which was established through algorithms known from optical flow computation. We demonstrate the improvement of the proposed representation over well-established pixel-based techniques in terms of coding efficiency and in terms of the ability to generalize to new images of faces. The evaluation is performed by calculating different distance measures between the original image and its reconstruction, and by measuring the time that human subjects need to discriminate them.

Vetter, T., & Poggio, T. (1997). Linear object classes and image synthesis from a single example image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 733-742. http://www.kyb.tuebingen.mpg.de/
Abstract: The need to generate new views of a 3D object from a single real image arises in several fields, including graphics and object recognition. While the traditional approach relies on the use of 3D models, we have recently introduced [1], [2], [3] simpler techniques that are applicable under restricted conditions. The approach exploits image transformations that are specific to the relevant object class, and learnable from example views of other "prototypical" objects of the same class. In this paper, we introduce such a technique by extending the notion of linear class proposed by Poggio and Vetter. For linear object classes, it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views.
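The linear-class idea can be sketched directly: if every object in the class is a linear combination of prototypes, and each prototype has been seen in two poses, then the pose-A-to-pose-B mapping can be learned from the prototype views alone and applied to a novel object seen only in pose A. A toy sketch with synthetic "views" as vectors (all names and sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each column of A is a prototype seen in pose A; B holds the same
# prototypes in pose B (here generated by a fixed linear pose transform).
n_points, n_prototypes = 6, 4
pose_transform = rng.normal(size=(n_points, n_points))
A = rng.normal(size=(n_points, n_prototypes))
B = pose_transform @ A

# Learn the pose mapping from the prototype basis alone.
L = B @ np.linalg.pinv(A)

# A novel object of the class: a linear combination of the prototypes.
coeffs = rng.normal(size=n_prototypes)
novel_pose_a = A @ coeffs

# Predict its never-seen pose-B view.
predicted = L @ novel_pose_a
assert np.allclose(predicted, pose_transform @ novel_pose_a)
```

The prediction is exact only because the novel object lies in the span of the prototypes, which is precisely the linear-class assumption the paper formalizes.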
We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively "rotate" high-resolution face images from a single 2D view.

O'Toole, A.J., Vetter, T., Volz, H., & Salter, E.M. (1997). Three-dimensional caricatures of human heads: distinctiveness and the perception of facial age. Perception, 26(6), 719-732. http://www.kyb.tuebingen.mpg.de/
Abstract: A standard facial-caricaturing algorithm was applied to a three-dimensional representation of human heads. This algorithm sometimes produced heads that appeared 'caricatured'. More commonly, however, exaggerating the distinctive three-dimensional information in a face seemed to produce an increase in the apparent age of the face, both at a local level, by exaggerating small facial creases into wrinkles, and at a more global level via changes that seemed to make the underlying structure of the skull more evident. Concomitantly, de-emphasis of the distinctive three-dimensional information in a face made it appear relatively younger than the veridical and caricatured faces. More formally, face-age judgments made by human observers were ordered according to the level of caricature, with anticaricatures judged younger than veridical faces, and veridical faces judged younger than caricatured faces. These results are discussed in terms of the importance of the nature of the features made more distinct by a caricaturing algorithm and the nature of human representation(s) of faces.

O'Toole, A.J., Vetter, T., Troje, N.F., & Bülthoff, H.H. (1997). Sex classification is better with three-dimensional head structure than with image intensity information. Perception, 26(1), 75-84. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/Perception-1997-26-75_375[0].pdf
Abstract: The sex of a face is perhaps its most salient feature. A principal components analysis (PCA) was applied separately to the three-dimensional (3-D) structure and grey-level image (GLI) data from laser-scanned human heads. Individual components from both analyses captured information related to the sex of the face. Notably, single projection coefficients characterized complex differences between the 3-D structure of male and female heads and between male and female GLI maps. In a series of simulations, the quality of the information available in the 3-D head versus GLI data for predicting the sex of the face has been compared. The results indicated that the 3-D head data supported more accurate sex classification than the GLI data, across a range of PCA-compressed (dimensionality-reduced) representations of the heads. This kind of dual face representation can give insight into the nature of the information available to humans for categorizing and remembering faces.

Vetter, T., Hurlbert, A., & Poggio, T. (1995). View-based Models of 3D Object Recognition: Invariance to Imaging Transformations. Cerebral Cortex, 5(3), 261-269. http://www.kyb.tuebingen.mpg.de/
Abstract: This report describes the main features of a view-based model of object recognition. The model does not attempt to account for specific cortical structures; it tries to capture general properties to be expected in a biological architecture for object recognition. The basic module is a regularization network (RBF-like; see Poggio and Girosi, 1989; Poggio, 1990) in which each of the hidden units is broadly tuned to a specific view of the object to be recognized. The network output, which may be largely view independent, is first described in terms of some simple simulations. The following refinements and details of the basic module are then discussed: (1) some of the units may represent only components of views of the object; the optimal stimulus for the unit, its "center", is effectively a complex feature; (2) the units' properties are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli and may be realized in terms of plausible biophysical mechanisms; (3) in learning to recognize new objects, preexisting centers may be used and modified, but also new centers may be created incrementally so as to provide maximal view invariance; (4) modules are part of a hierarchical structure: the output of a network may be used as one of the inputs to another, in this way synthesizing increasingly complex features and templates; (5) in several recognition tasks, in particular at the basic level, a single center using view-invariant features may be sufficient.
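The basic module just described can be sketched as a radial basis function network whose hidden units are Gaussian-tuned to stored example views, with the output pooling the units' activations. A minimal sketch (all parameters illustrative, not the authors' implementation):

```python
import numpy as np

def rbf_module(x, centers, weights, sigma=1.0):
    """Each hidden unit is broadly tuned to one stored view (its center);
    the output is a weighted sum of the units' Gaussian activations."""
    d2 = ((centers - x) ** 2).sum(axis=1)           # squared distance to each stored view
    activations = np.exp(-d2 / (2.0 * sigma ** 2))  # broad unit tunings
    return weights @ activations

# Two stored views of the same object, pooled with equal weights.
views = np.array([[0.0, 0.0], [4.0, 0.0]])
w = np.array([1.0, 1.0])

on_view = rbf_module(np.array([0.0, 0.0]), views, w)    # at a stored view
between = rbf_module(np.array([2.0, 0.0]), views, w)    # interpolated view
far_away = rbf_module(np.array([10.0, 10.0]), views, w) # unrelated input
assert on_view > between > far_away  # graded, view-interpolating response
```

The graded response between stored views is what gives the module its partial view invariance; broadening sigma or adding centers flattens the response across viewpoint.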
Modules of this type can deal with recognition of specific objects, for instance a specific face under various transformations such as those due to viewpoint and illumination, provided that a sufficient number of example views of the specific object are available. An architecture for 3D object recognition, however, must cope, to some extent, even when only a single model view is given. The main contribution of this report is an outline of a recognition architecture that deals with objects of a nice class undergoing a broad spectrum of transformations (due to illumination, pose, expression, and so on) by exploiting prototypical examples. A nice class of objects is a set of objects with sufficiently similar transformation properties under specific transformations, such as viewpoint transformations. For nice object classes, we discuss two possibilities: (1) class-specific transformations are applied to a single model image to generate additional virtual example views, thus allowing some degree of generalization beyond what a single model view could otherwise provide; (2) class-specific, view-invariant features are learned from examples of the class and used with the novel model image, without an explicit generation of virtual examples.

Vetter, T., & Poggio, T. (1994). Symmetric 3D objects are an easy case for 2D object recognition. Spatial Vision, 8(4), 443-453. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf1885.pdf
Abstract: According to the 1.5 views theorem (Ullman and Basri, 1991; Poggio, 1990), recognition of a specific 3D object (defined in terms of pointwise features) from a novel 2D view can be achieved from at least two 2D model views (for each object, under orthographic projection). In this note we discuss how recognition can be achieved from a single 2D model view by exploiting prior knowledge of an object's symmetry. We prove that for any bilaterally symmetric 3D object one non-accidental 2D model view is sufficient for recognition, since it can be used to generate additional "virtual" views. We also prove that for bilaterally symmetric objects the correspondence of four points between two views determines the correspondence of all other points. Symmetries of higher order allow the recovery of Euclidean structure from a single 2D view.

Vetter, T., Poggio, T., & Bülthoff, H.H. (1994). The importance of symmetry and virtual views in three-dimensional object recognition. Current Biology, 4(1), 18-23. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf679.pdf
Abstract: Background: Human observers can recognize three-dimensional objects seen in novel orientations, even when they have previously seen only a relatively small number of different views of the object. How our visual system does this is a key problem in vision research. Recent theories and experiments suggest that the human visual system might store a relatively small number of sample two-dimensional views of a three-dimensional object, and recognize novel views by a process of interpolation between the stored sample views. These sample views may be collected during a training phase as the visual system familiarizes itself with the object.
Results: Here, we investigate whether constraints on the shapes of objects commonly encountered in the real world can reduce the number of training views required for recognition of three-dimensional objects. We are particularly concerned with the constraint of object symmetry. We show that if an object is bilaterally symmetrical, then additional 'virtual views' can automatically be generated from one sample view by symmetry transformations. These virtual views should make it easier to recognize novel views of a symmetric than of an asymmetric object when a single sample view has been seen.
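The virtual-view construction is concrete: for a bilaterally symmetric object under orthographic projection, reflecting the image about the symmetry axis yields the projection of the same object with left and right mirror partners exchanged, i.e. a legitimate additional view. A minimal sketch (point coordinates illustrative):

```python
import numpy as np

# A bilaterally symmetric 3D object: mirror-symmetric about the x = 0 plane.
object_points = np.array([
    [ 1.0, 0.5, 2.0],
    [-1.0, 0.5, 2.0],   # mirror partner of the first point
    [ 0.0, 1.0, 1.0],   # lies on the symmetry plane
])

def orthographic_view(points):
    """Orthographic projection onto the image (x, y) plane."""
    return points[:, :2]

# The real sample view, and the 'virtual' view obtained by reflecting
# the image about the symmetry axis (negating image x).
sample = orthographic_view(object_points)
virtual = sample * np.array([-1.0, 1.0])

# The virtual view equals the projection of the mirrored object, which,
# for a bilaterally symmetric object, is the same object with its
# mirror-partner points swapped.
mirrored_object = object_points * np.array([-1.0, 1.0, 1.0])
assert np.allclose(virtual, orthographic_view(mirrored_object))
```

Together with the known correspondence between mirror partners, the sample and virtual views play the role of the two model views required by the 1.5 views theorem.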
Recognition should be particularly facilitated when the novel views are close to the virtual view. We present psychophysical results that bear out these predictions.
Conclusion: Our results show that the human visual system can indeed exploit symmetry to facilitate object recognition, and they support the model of object recognition in which a small number of two-dimensional views are remembered and combined to recognize novel views of the same object. These results raise questions about how symmetry is recognized, and symmetry transformations implemented, in real, biological neural networks.

Hwang, B.W., Blanz, V., Vetter, T., & Lee, S.W. (2000). Face Reconstruction from a Small Number of Feature Points. 15th International Conference on Pattern Recognition (ICPR 2000), Barcelona, Spain, pp. 842-845. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf1879.pdf
Abstract: This paper proposes a method for face reconstruction that makes use of only a small set of feature points. Faces can be modeled by forming linear combinations of prototypes of shape and texture information. With the shape and texture information at the feature points alone, we can achieve only an approximation to the deformation required. In such an under-determined condition, we find an optimal solution using a simple least-squares minimization method. As experimental results, we show well-reconstructed 2D faces even from a small number of feature points.

Hwang, B.W., Blanz, V., Vetter, T., & Lee, S.W. (2000). Face reconstruction using a small set of feature points. IEEE International Workshop on Biologically Motivated Computer Vision (BMCV 2000), Seoul, South Korea, pp. 311-317. http://www.kyb.tuebingen.mpg.de/
Abstract: This paper proposes a method for face reconstruction that makes use of only a small set of feature points. Faces can be modeled by forming linear combinations of prototypes of shape and texture information. With the shape and texture information at the feature points alone, we can achieve only an approximation to the deformation required. In such an under-determined condition, we find an optimal solution using a simple least-squares minimization method. As experimental results, we show well-reconstructed 2D faces even from a small number of feature points.

Wallraven, C., Blanz, V., & Vetter, T. (1999). 3D-reconstruction of faces: Combining stereo with class-based knowledge. 21. DAGM-Symposium (DAGM 1999), Bonn, Germany, pp. 405-412. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/DAGM-1999-Wallraven.pdf
Abstract: The recovery of the three-dimensional structure of faces with conventional stereo methods still proves difficult.
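The under-determined fitting step in the feature-point reconstruction above reduces to ordinary least squares: model coefficients are chosen so the linear combination of prototypes matches the observations at the feature points only, and the full face is then synthesized from those coefficients. A minimal sketch (all sizes and names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns of P are prototype faces (full vectorized shape/texture).
n_dims, n_prototypes, n_features = 50, 10, 6
P = rng.normal(size=(n_dims, n_prototypes))

# The unknown face, here itself a combination of the prototypes.
true_coeffs = rng.normal(size=n_prototypes)
face = P @ true_coeffs

# Only a few feature points of the face are observed.
feature_idx = rng.choice(n_dims, size=n_features, replace=False)
observed = face[feature_idx]

# Least-squares solution of the under-determined system (minimum-norm
# coefficients), then reconstruction of the complete face from them.
coeffs, *_ = np.linalg.lstsq(P[feature_idx], observed, rcond=None)
reconstruction = P @ coeffs

# The reconstruction is exact at the observed feature points; elsewhere
# it is the approximation the prototype statistics impose.
assert np.allclose(reconstruction[feature_idx], observed)
```

With fewer equations than coefficients, the minimum-norm solution is what makes the problem well posed; the quality of the unobserved regions depends entirely on how well the prototypes span real faces.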
In this paper we introduce a higher-order constraint based on linear object classes, which supplies a standard stereo algorithm with prior knowledge of the general structure of faces. This constraint has been learned by exploiting the similarities between 200 faces in a database and is represented in a morphable face model. This combined approach has been tested and compared against an existing method for estimating depth information using only prior knowledge, and against the standard stereo algorithm.

Blanz, V., & Vetter, T. (1999). A Morphable Model for the Synthesis of 3D Faces. 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99), Los Angeles, CA, USA, pp. 187-194. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf1878.pdf
Abstract: In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer-aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces, avoiding faces with an "unlikely" appearance.
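The naturalness regulation can be sketched with PCA statistics: faces are coded by coefficients along the example set's principal axes, and a face is flagged as "unlikely" when its coefficients are improbably large relative to the example variances (a Mahalanobis-style prior). A toy sketch, not the paper's implementation (data and thresholds illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Example set of vectorized faces (rows), e.g. stacked vertex coordinates,
# with different variances along different directions.
examples = rng.normal(size=(200, 8)) * np.array([5, 3, 2, 1, 1, 0.5, 0.3, 0.1])
mean = examples.mean(axis=0)
centered = examples - mean

# Principal axes and per-axis variances of the example faces.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
variances = s**2 / (len(examples) - 1)

def implausibility(face):
    """Mahalanobis-style distance of a face from the example statistics:
    large values flag an 'unlikely' face."""
    coeffs = vt @ (face - mean)
    return float((coeffs**2 / variances).sum())

typical = mean + 0.1 * (examples[0] - mean)   # close to the data
extreme = mean + 10.0 * (examples[0] - mean)  # far outside the data
assert implausibility(typical) < implausibility(extreme)
```

Penalizing this quantity during manual editing or automated matching keeps the model from drifting into regions of face space the examples never support.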
Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face, or its distinctiveness.

Vetter, T., & Blanz, V. (1998). Estimating coloured 3D face models from single images: An example based approach. 5th European Conference on Computer Vision, Freiburg, Germany, pp. 499-513. http://www.kyb.tuebingen.mpg.de/
Abstract: In this paper we present a method to derive 3D shape and surface texture of a human face from a single image. The method draws on a general flexible 3D face model which is "learned" from examples of individual 3D face data (Cyberware scans). In an analysis-by-synthesis loop, the flexible model is matched to the novel face image.
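The vector space representation underlying these models means a new face is simply a weighted combination of example faces in dense correspondence, computed separately for shape and texture. A minimal sketch (arrays illustrative):

```python
import numpy as np

# Example faces in dense correspondence: each row is one face's
# vectorized 3D shape (vertex coordinates) or texture (vertex colours).
shapes = np.array([[0.0, 0.0, 1.0, 2.0],
                   [2.0, 0.0, 1.0, 0.0],
                   [1.0, 3.0, 1.0, 1.0]])
textures = np.array([[0.2, 0.4], [0.6, 0.4], [0.4, 0.1]])

def morph(examples, weights):
    """Model a new face as a convex combination of the prototypes."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # barycentric: weights sum to 1
    return weights @ examples

# Midpoint of faces 0 and 1, in shape and in texture independently.
new_shape = morph(shapes, [1.0, 1.0, 0.0])
new_texture = morph(textures, [1.0, 1.0, 0.0])
assert np.allclose(new_shape, [1.0, 0.0, 1.0, 1.0])
assert np.allclose(new_texture, [0.4, 0.4])
```

Because shape and texture are combined independently, one can also mix the shape weights of one set of faces with the texture weights of another, which is what makes manipulations like shape-normalization possible.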
From the coloured 3D model obtained by this procedure, we can generate new images of the face across changes in viewpoint and illumination. Moreover, non-rigid transformations which are represented within the flexible model can be applied, for example changes in facial expression. The key problem for generating a flexible face model is the computation of dense correspondence between all given 3D example faces. A new correspondence algorithm is described which is a generalization of common algorithms for optic flow computation to 3D face data.

Vetter, T., & Blanz, V. (1998). Generalization to novel views from a single face image. NATO Advanced Study Institute on Face Recognition: From Theory to Applications 1997, Stirling, UK, pp. 310-326. http://www.kyb.tuebingen.mpg.de/
Abstract: When only a single image of a face is available, can we generate new images of the face across changes in viewpoint or illumination? The approach presented in this paper acquires its knowledge about possible image changes from other faces and transfers this prior knowledge to a novel face image. In previous work we introduced the concept of linear object classes (Vetter and Poggio, 1997; Vetter, 1997): in an image-based approach, a flexible image model of faces was used to synthesize new images of a face when only a single 2D image of that face is available. In this paper we describe a new general flexible face model which is now learned from examples of individual 3D face data (Cyberware scans). In an analysis-by-synthesis loop the flexible 3D model is matched to the novel face image. Variation of the model parameters, similar to multidimensional morphing, allows for generating new images of the face where viewpoint, illumination or even the expression is changed. The key problem for generating a flexible face model is the computation of dense correspondence between all given example faces. A new correspondence algorithm is described, which is a generalization of existing algorithms for optic flow computation to 3D face data.

Vetter, T., & Blanz, V. (1997). Generalization to Novel Views from a Single Face Image. Conference 3D Image Analysis and Synthesis '97, Erlangen, Germany, pp. 43-50. http://www.kyb.tuebingen.mpg.de/

Vetter, T., Jones, M.J., & Poggio, T. (1997). A bootstrapping algorithm for learning linear models of object classes. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 1997), San Juan, Puerto Rico, pp. 40-46. http://www.kyb.tuebingen.mpg.de/
Abstract: Flexible models of object classes, based on linear combinations of prototypical images, are capable of matching novel images of the same class and have been shown to be a powerful tool for solving several fundamental vision tasks such as recognition, synthesis and correspondence. The key problem in creating a specific flexible model is the computation of pixelwise correspondence between the prototypes, a task done until now in a semi-automatic way. In this paper we describe an algorithm that automatically bootstraps the correspondence between the prototypes. The algorithm, which can be used for 2D images as well as for 3D models, is shown to successfully synthesize a flexible model of frontal face images and a flexible model of handwritten digits.

Vetter, T. (1997). Recognizing faces from a new viewpoint. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 1997), München, Germany, pp. 143-146. http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf384.pdf
Abstract: A new technique is described for recognizing faces from new viewpoints. From a single 2D image of a face, synthetic images from new viewpoints are generated and compared to stored views. A novel 2D image of a face can be computed without knowledge about the 3D structure of the head. The technique draws on prior knowledge of faces based on example images of other faces seen in different poses and on a single generic 3D model of a human head. The example images are used to learn a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. The performance of the technique is tested on a data set of 200 faces of known orientation, for rotations up to 90°.

Vetter, T. (1997). Automated face morphing and image based modeling of faces. Imagina 97, Monaco, pp. 131-138. http://www.kyb.tuebingen.mpg.de/

Vetter, T., & Poggio, T. (1996). Ein "Bootstrapping-Algorithmus" zum Erlernen eines linearisierten Objektklassen-Modells [A bootstrapping algorithm for learning a linearized object-class model]. Workshop der GI-Fachgruppe 1.0.4 Bildverstehen, Hamburg, Germany, pp. 182-185. http://www.kyb.tuebingen.mpg.de/

Vetter, T. (1996). Learning novel views to a single face image. Second International Conference on Automatic Face and Gesture Recognition (FG 1996), Killington, VT, USA, pp. 22-27. http://www.kyb.tuebingen.mpg.de/
Abstract: A new technique is described for synthesizing images of faces from new viewpoints, when only a single 2D image from a known viewpoint is available. A novel 2D image of a face can be computed without knowledge about the 3D structure of the head. The technique draws on prior knowledge of faces based on example images of other faces seen in different poses and on a single generic 3D model of a human head. The example images are used to learn a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses.
Examples of synthetic “rotations” over 24° based on a training set of 100 faces are shown.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Learning novel views to a single face image150171542218927TVetterHeidelberg, Germany1996-09-0016116818. DAGM-SymposiumImages formed by a human face change with viewpoint. A new technique is described for synthesizing images of faces from new viewpoints, when only a single 2D image is available. A novel 2D image of a face can be computed without knowledge about the 3D structure of the head. The technique draws on prior knowledge of faces based on example images of other faces seen in different poses and on a single generic 3D model of a human head. The example images are used to learn a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. Examples of synthetic “rotations” over 24° based on a training set of 100 faces are shown.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Novel Views of a Single Face Image15017154224457VBlanzBSchölkopfHHBülthoffCBurgesVVapnikTVetterBochum, Germany1996-07-002512566th International Conference on Artificial Neural NetworksTwo view-based object recognition algorithms are compared: (1) a heuristic algorithm based on oriented filters, and (2) a support vector learning machine trained on low-resolution images of the objects. Classification performance is assessed using a high number of images generated by a computer graphics system under precisely controlled conditions. Training- and test-images show a set of 25 realistic three-dimensional models of chairs from viewing directions spread over the upper half of the viewing sphere. 
The percentage of correct identification of all 25 objects is measured.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf445.pdfpublished5Comparison of view-based object recognition algorithms using realistic 3D models150171542218917TVetterBochum, Germany1996-07-007157196th International Conference on Artificial Neural NetworksA new technique is described for synthesizing images of faces from new viewpoints, when only a single 2D image is available. A novel 2D image of a face can be computed without knowledge about the 3D structure of the head. The technique draws on prior knowledge of faces based on example images of other faces seen in different poses and on a single generic 3D model of a human head. The example images are used to learn a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. Examples of synthetic "rotations" over 24 degrees based on a training set of 100 faces are shown.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published4Learning novel views to a single face image150171542218907TVetterTPoggioCambridge UK1996-04-006526594th European Conference on Computer VisionThe need to generate new views of a 3D object from a single real image arises in several fields, including graphics and object recognition. While the traditional approach relies on the use of 3D models, we exploit 2D image transformations that are specific to the relevant object class and learnable from example views of other “prototypical” objects of the same class.
For linear object classes we show that linear transformations can be learned exactly from a basis set of 2D prototypical views. We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively “rotate” high-resolution face images from a single 2D view.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Image synthesis from a single example image150171542218947TVetterNFTrojeBielefeld, Germany1995-09-0011812517. DAGM-SymposiumHuman faces differ in shape and texture. This paper describes a representation of grey-level images of human faces based on an automated separation of two-dimensional shape and texture. The separation was done using the point correspondence between the different images, which was established through algorithms known from optical flow computation. A linear description of the separated texture and shape spaces allows a smooth modeling of human faces. Pictures of faces along the principal axes of a small data set of 50 faces are shown. We also show face reconstructions based on this small example set.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Separation of texture and two-dimensional shape in images of human faces15017154226697AJO'TooleHHBülthoffNFTrojeTVetterZürich, Switzerland1995-06-00326331International Workshop on Automatic Face- and Gesture RecognitionWe describe a computational model of face recognition that makes use of the overlapping texture and shape information visible in different views of faces. The model operates on view-dependent data from three-dimensional laser scans of human heads, which were registered onto a three-dimensional head model. We show that the overlapping visible regions of heads can support accurate recognition even with pose differences of as much as 90 degrees (full face to profile view) between the learning and testing view.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf669.pdfpublished5Face Recognition across Large Viewpoint Changes150171542218957TVetterFreiburg, Germany1994-10-001. Fachtagung der Gesellschaft für Kognitionswissenschaften (KogWis94)nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0An early vision model for 3D object recognition1501715422151346TVetterMJJonesTPoggio1997-03-001997-03-00A Bootstrapping Algorithm for Learning Linear Models of Object ClassesnonotspecifiedA Bootstrapping Algorithm for Learning Linear Models of Object Classes1501715422151446AJO'TooleTVetterHVolzEMSalter1997-03-001997-03-00As we get older, do we get more distinct?nonotspecifiedAs we get older, do we get more distinct?1501715422150746VBlanzMJTarrHHBülthoffTVetter1996-11-001996-11-00What object attributes determine canonical views?nonotspecifiedWhat object attributes determine canonical views?1501715422150546NFTrojeTVetter1996-10-001996-10-00Representations of human facesnonotspecifiedRepresentations of human faces1501715422148946TVetter1996-02-001996-02-00Synthesis of novel views from a single face imagenonotspecifiedSynthesis of novel views from a single face image1501715422148746AJO'TooleTVetterHHBülthoffNFTroje1995-12-001995-12-00The role of shape and texture information in sex classificationnonotspecifiedThe role of shape and texture information in sex classification1501715422147746TVetterNFTroje1995-04-001995-04-00A separated linear shape and texture space for modeling
two-dimensional images of human facesnonotspecifiedA separated linear shape and texture space for modeling
two-dimensional images of human faces1501715422147846TVetterTPoggio1995-04-001995-04-00Linear Object Classes and Image Synthesis from a Single
Example ImagenonotspecifiedLinear Object Classes and Image Synthesis from a Single
Example Image1501715422146846AJO'TooleHHBülthoffNFTrojeTVetter1995-01-001995-01-00Face Recognition across Large Viewpoint ChangesnonotspecifiedFace Recognition across Large Viewpoint Changes150171542290646NKLogothetisTVetterAHurlbertTPoggio1994-04-001111994-04-00View-based Models of 3D Object Recognition and Class-specific InvariancesnonotspecifiedView-based Models of 3D Object Recognition and Class-specific Invariances150171542270346TVetterTPoggioHHBülthoff1992-12-001992-12-003D Object Recognition: Symmetry and Virtual Viewsnonotspecified3D Object Recognition: Symmetry and Virtual ViewsO039TooleLVB20017AJO'TooleDALeopoldTVetterVBlanzSarasota, FL, USA2001-12-00332First Annual Meeting of the Vision Sciences Society (VSS 2001)Prototype referenced adaptation effects were found among face stimuli in a computationally derived multidimensional face space based on a 3D morphing algorithm. Individual faces can be described as points or vectors in this space. An “identity trajectory” connecting a face to the average of all faces, defines a gradient of face individuality. Anti-caricatures lie along the “identity” trajectory between an individual face and the average face. “Anti-faces” lie along this trajectory, but on the “other side of the mean”. While anti-caricatures look like less distinctive versions of the original face, anti-faces have the appearance of an entirely different individual. For example, faces with light complexions and light eyes yield anti-faces with dark complexions and dark eyes, and faces with roundish shapes yield anti-faces with a gaunt, skinny appearance. We found that pre-exposure to an “anti-face”, specifically facilitated the identification of briefly presented anti-caricatures along the same trajectory, while diminishing performance for other non-colinear faces. The perceptual bias following anti-face adaptation was strong enough to cause systematic mislabeling of the average face as the face complement to the pre-exposed anti-face. 
Additional experiments showed that the adaptation effect survived a range of changes in size and retinal location between pre-exposure and test. Combined, the results suggest that the subordinate perception and recognition of faces, and perhaps other objects, may draw upon contrastive neural mechanisms that reference the central tendency of the stimulus category.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-332Prototype-referenced shape perception: Adaptation and after-effects in a multidimensional face space1501715422150171542118807AJO'TooleVBlanzTVetterHAWildLos Angeles, CA, USA1999-11-002340th Annual Meeting of the Psychonomic SocietyWe created a “face space” using a laser scan representation of faces. In this space, a caricature can be made by moving a face away from the average, along the line connecting it to the average. If we go in the other direction, we can move the face through the mean and out the other side.
We call the result of this process an “antiface” because it is an opposite, in a computational sense, to the original face. We morphed faces into their antifaces and sampled the transition in five discrete steps. We then collected similarity ratings for all possible pairs of morphed faces.
The data revealed a perceptual discontinuity of face identity as the face crosses over to the other side of the mean. We consider these results in the context of face space models of human face processing.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-23On the other-side of the mean: Perceiving the dissimilarity of faces15017154223167AIRuppertsbergTVetterHHBülthoffTübingen, Germany1999-02-00512. Tübinger Wahrnehmungskonferenz (TWK 99)Burt & Perrett (1997) showed that subjects’ judgments of gender and expression were more influenced by the left than by the right side of the face (viewer’s perspective). We
investigated whether recognition performance differs for faces rotated to the right or to the left. In the learning stage, subjects were asked to study 10 frontal views of 3D-Cyberware head scans with their respective names for ten minutes. Immediately afterwards, they were tested in a naming task in which a face was shown on the computer screen and subjects had to press the corresponding name key on the keyboard. When their error rate was lower than 5% over the last 30 trials, they started the actual experiment. At that stage they had named each face at least three times. In a delayed-match-to-sample task, subjects were presented with a frontal view of a face for 100 ms, followed by a mask for 500 ms, and finally a side view (+/- 30 and 60 deg) of a face, again for 100 ms. The task was to assess whether
the two views depicted the same person or not. Subjects were asked to respond as fast as possible, and their response times and errors were recorded. In Exp. 1 we found the expected effect of orientation, but also a significant difference between the two directions of rotation: subjects made more errors when the faces looked to the left (viewer’s perspective) than when they looked to the right. This was found for familiar and unfamiliar faces. In Exp. 2 we made the heads symmetrical to exclude any effect of the face asymmetry. In the learning stage, the pictures were replaced by pictures of symmetrical faces. For the familiar faces we found the same result as in Exp. 1. But for
unfamiliar symmetrical heads subjects made more errors when the face was turned to the right. In Exp. 3 we studied whether this result is related to differences in hemispherical processing of faces. The side view could now appear at the fixation cross or +/- 2.6 deg to either side of it. Subjects made fewer errors when the side view was presented at fixation. We were not able to find performance differences depending on the side of the visual field, but rather differences depending on the side of the face.
When generalizing to a novel view of a face, object-relevant information seems to play a more important role than the specialized processing capabilities of the hemispheres.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf316.pdfpublished-51Asymmetrical face perception with in-depth rotated faces15017154222877IBülthoffFNNewellTVetterTübingen, Germany1999-02-00522. Tübinger Wahrnehmungskonferenz (TWK 99)Does the classification of faces by gender show the characteristic features of categorical perception?
Using an automated 3D morphing procedure, blended faces were synthesized from 3D laser scans of male and female heads. The morphing procedure allows both the texture and the shape of a face to be altered, so that pigmentation and shape can be continuously interpolated between male and female faces. Other gender-specific cues such as hairstyle, beard, make-up, or jewelry were omitted or removed by computer graphics. All faces were presented in frontal or 3/4 view with a neutral expression. Participants first performed a discrimination task (XAB test), and afterwards the subjective gender boundary along the morph continuum was determined in a categorization task.
All participants showed the typical step function in the categorization task. In the XAB test, however, it was no easier for participants to discriminate a face pair separated by the putative categorical gender boundary than face pairs at the more female or more male ends of the morph continuum.
Our experiments show that the gender of a face is not perceived categorically.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf287.pdfpublished-52Geschlechtswahrnehmung von Gesichtern, die durch 3D-Morph-Verfahren erzeugt wurden15017154223277TVetterTübingen, Germany1999-02-00382. Tübinger Wahrnehmungskonferenz (TWK 99)"Can you imagine?"
"Yes, I see ......"
In human language, mental imagery seems to be a natural ability. Imagery is often discussed as one of the basic forms of human cognition for the analysis of situations or scenes.
In my talk I will present a computational model for synthesizing new images of a face when only a single image of that face is available. New images of the face can be generated across changes in viewpoint, in illumination, and in facial expression. The approach presented acquires its knowledge about possible image changes from other faces and transfers this prior knowledge to a novel face image.
A general flexible face model is "learned" either from examples of images or from 3D-data (Cyberware-scans) of a large dataset of faces. In an analysis-by-synthesis loop the flexible face model is matched to the novel face image, thereby parameterizing the novel image in terms of the known face model. Variation of the model parameters, similar to multidimensional morphing, allows for generating new photorealistic images of the face.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-38Machine analysis and synthesis of face images150171542218877KADeffenbacherJJohansonTVetterAJO'TooleDallas, TX, USA1998-11-0039th Annual Meeting of the Psychonomic Societynonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The face typicality-recognizability relationship: encoding or retrieval locus1501715422BulthoffNVB19987IBülthoffFNNewellTVetterHHBülthoffOxford, UK1998-08-0012721st European Conference on Visual PerceptionWe investigated whether the judgment of face gender shows the typical characteristics of categorical perception. As stimuli we used images of morphs created between pairs of male/female 3-D head laser scans. In experiment 1, texture and shape were morphed between both faces. In experiment 2, either the average texture of all faces was mapped onto the shape continuum between the two faces or we mapped the texture continuum between each face pair onto an average shape face. Thus, either the shape or the texture remained constant in any one condition. The subjects viewed these morphs first in a discrimination task (XAB) and then in a categorisation task which was used to locate the subjective gender boundary between each male/female face pair. Although we found that subjects could categorise the face images by their gender in the categorisation task and that texture alone is a better gender indicator than shape alone, the subjects did not discriminate more easily between face images situated at the category boundary in any of our discrimination experiments. 
We argue that we do not perceive the gender of a face categorically and that more cues are needed to decide the gender of a person than those provided by the faces only.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-127Gender perception of 3-D head laser scans150171542218867AIRuppertsbergTVetterHHBülthoffFort Lauderdale, FL, USA1998-05-00173Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1998)nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf1886.pdfpublished-173A Face Specific Similarity Measure for Image Coding and Synthesis150171542211257FNNewellIBülthoffTVetterHHBülthoffFort Lauderdale, FL, USA1998-05-00173Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1998)nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-173Effects of shape and texture on the perceptual categorization of gender in faces150171542211247IBülthoffFNNewellTVetterHHBülthoffFort Lauderdale, FL, USA1998-05-00171Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1998)nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-171Is the gender of a face categorically perceived?15017154222867AIRuppertsbergTVetterHHBülthoffTübingen, Germany1998-02-001081. Tübinger Wahrnehmungskonferenz (TWK 98)Which regions of a face do we attend to in particular when we compare the similarity of faces? Vetter and Troje (JOSA A, 14, 2152-2161) introduced a correspondence-based face coding system in which the image information is divided into texture information and shape information. Using a principal component analysis on the separated texture and shape, a basis can be found in which any other face can be represented. To improve this coding system so that it matches human similarity perception, we introduced a specific weighting of the shape space. We also determined the threshold of the just noticeable difference (JND) between reconstruction and original. In Experiment 1, three different weightings of the shape space were applied. Weighting 1 took every pixel in the face into account, Weighting 2 the eyes, the nose, the mouth, and the face contour, and Weighting 3 the same regions as Weighting 2 except for the face contour. Participants were presented with three faces simultaneously. The upper face was the original; the images at bottom left and bottom right were reconstructions of the upper image. Participants had to indicate which of the two lower images was more similar to the upper one. In a quarter of the trials, one of the two lower images was the original; this ensured that participants could still reliably pick out the original. Responses and reaction times were recorded.
In Experiment 2, the regions of Weighting 2 were tested block-wise. For this, a principal component analysis was computed on each region, and each region was then reconstructed with an increasing number of principal components. In an experimental paradigm similar to the one described above, participants had to indicate which of the two lower images showed the original face region. Error rates and reaction times were recorded. In Experiment 1, participants mistook a reconstruction for the original in more than 20% of the trials; the mean reaction time was 2.8 s. Otherwise they preferred reconstructions with Weighting 2. Participants could distinguish between reconstructions with Weightings 1 and 2 and with Weightings 2 and 3, but not between 1 and 3; here the mean reaction time was 3.5 s. There was no dependence of reaction time on the weighting. In Experiment 2, the proportion of correct answers never fell below 75%. Performance varied most for the mouth, then the nose, and least for the eyes. This uncertainty was also reflected in the reaction times: the mean reaction time was shortest for the eyes (5.4 s) and longest for the mouth (6.8 s). When we judge the similarity of faces, we attend in particular to the regions of the eyes, the nose, the mouth, and the face contour. These regions make up only about one fifth of the total face area, so a face can be reconstructed in a way that looks more similar to the original than a reconstruction that takes every pixel of the face into account. Coding efficiency is thus increased considerably.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-108Evaluation eines gesichtsspezifischen Ähnlichkeitsmaßes150171542218887AJO'TooleTVetterHVolzEMSalterPhiladelphia, PA, USA1997-11-0038th Annual Meeting of the Psychonomic Societynonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published03D Facial Caricatures: Distinctiveness and the Perception of Face Age15017154224017KADeffenbacherTVetterJJohansonAJO'ToolePhiladelphia, PA, USA1997-11-0038th Annual Meeting of the Psychonomic Societynonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Relation of facial caricature to aging, attractiveness and distinctiveness effects15017154224177AJO'TooleTVetterHVolzEMSalterFort Lauderdale, FL, USA1997-05-00Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1997)Purpose: The aim of this study was to produce facial caricatures of 3D laser scans of human heads, using algorithms that exaggerate the distinctive information in faces. These algorithms operate by comparing a face's "feature" dimensions to those of an average face, and by exaggerating the feature dimensions that are unusual for the face. When applied to the 2D configural features in faces (e.g., distance between eyes, nose length, etc.), this algorithm produces more distinctive versions of the faces. Methods: We applied this algorithm to 60 pointwise-corresponded 3D heads and found, to our surprise, that the most salient effect of the algorithm was to increase the apparent age of the face.
Empirically, 10 human observers estimated the ages of the veridical faces, two levels of caricature, and one level of anti-caricature, and we measured the error of these estimates. Results: We found a highly reliable effect of caricature level on age estimate error, with errors ordered from youngest to oldest for all 10 observers as follows: anti-caricatures, veridicals, level one, and level two caricatures. Face age in this last case was overestimated by an average of 20 years. Discussion: Exaggerating the distinctive 3D information in a face increased the apparent age of the face, both at a local level by exaggerating small facial creases into wrinkles, and at a more global level via changes that made the underlying structure of the skull more evident.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Caricatures of three-dimensional human heads: As we get older do we get more distinct?15017154225837AJO'TooleTVetterNFTrojeHHBülthoffFort Lauderdale, FL, USA1996-04-00840Annual Meeting of the Association for Research in Vision and Ophthalmology 1996Purpose: We compared quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods: 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. 
A "leave-one-out" technique was applied to measure the generalizability of the perceptron's sex predictions. Results: While very good sex generalization performance was obtained for both representations, even with very low dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For 3D data, 96.9% correct generalization was achieved with 17 projection coefficients. Conclusions: These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/ARVO-1996-OToole.pdfpublished-840Classifying faces by sex is more accurate with 3D shape information than with texture15017154226717VBlanzTVetterHHBülthoffMJTarrTübingen, Germany1995-08-0011912018th European Conference on Visual PerceptionWe investigated preferred or canonical views for familiar and three-dimensional nonsense objects using computer-graphics psychophysics. We assessed the canonical views for objects by allowing participants to actively rotate realistically shaded three-dimensional models in real-time. Objects were viewed on a Silicon Graphics workstation and manipulated in virtual space with a three-degree-of-freedom input device. In the first experiment, participants adjusted each object to the viewpoint from which they would take a photograph if they planned to use the object to illustrate a brochure. In the second experiment, participants mentally imaged each object on the basis of the name and then adjusted the object to the viewpoint from which they imagined it.
In both experiments, there was a large degree of consistency across participants in terms of the preferred view for a given object. Our results provide new insights into the geometrical, experiential, and functional attributes that determine canonical views under ecological conditions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1What object attributes determine canonical views?15017154226907TVetterTPoggioHHBülthoffSarasota, FL, USA1993-05-001081Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1993)Purpose: Our ability to detect bilateral and skewed symmetry is well known. We provide evidence on psychophysical and theoretical grounds that this ability facilitates recognition of symmetric 3D objects. Methods: In psychophysical experiments with 25 paid subjects we tested recognition performance for novel views of 80 shaded wire objects shown previously only from a single view. Generalization fields were plotted by measuring recognition hit rate in horizontal, vertical and oblique directions on the viewing sphere. Results: In a first experiment we compared recognition of symmetric objects with nonsymmetric objects. The generalization ability from a given "model" view of an object to novel viewing directions (range ±90°) increased from 64% average recognition rate for nonsymmetric objects to 77% for symmetric objects. In additional experiments on viewpoint generalization for symmetric objects we found several peaks in the generalization fields. These findings are consistent with our theoretical results on recognition of bilaterally symmetric objects. The peaks observed in the generalization fields of symmetric objects are predicted by the "virtual views" (that can be generated by exploiting the symmetry property) together with a network model that successfully accounts for human recognition of generic 3D objects.
Conclusions: The agreement of the experimental results with the theoretical predictions supports the assumption that our visual system is capable of exploiting symmetry as prior information in object recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1081Recognition of symmetric 3D objects1501715422189310NFTrojeTVetterBlanzV2006VBlanzTVetter2003-04-24The present invention relates to a method for image processing, in particular to the manipulation (detecting, recognizing and/or synthesizing) of images of three-dimensional objects, such as human faces, on the basis of a morphable model for image synthesis. Furthermore, the present invention relates to an image processing system for implementing such a method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/publishedMethod and device for the processing of images based on morphable models1501715422
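Several of the abstracts above describe the same face-space construction: a face is a vector of shape/texture coefficients, and caricatures, anti-caricatures, and anti-faces are obtained by scaling the face's deviation from the average face along its identity trajectory. A minimal numerical sketch of that vector arithmetic (illustrative only; the function name and coefficient values are hypothetical, not the authors' code):

```python
import numpy as np

# Faces as vectors of hypothetical shape/texture coefficients in a
# PCA-derived "face space". Moving along the line through the face and
# the average face yields caricatures, anti-caricatures, and anti-faces.
def morph_along_identity_trajectory(face, mean_face, alpha):
    """alpha > 1: caricature; 0 < alpha < 1: anti-caricature;
    alpha < 0: anti-face ('the other side of the mean')."""
    return mean_face + alpha * (face - mean_face)

# Toy example with three made-up coefficients.
mean_face = np.array([0.0, 0.0, 0.0])
face = np.array([1.0, -2.0, 0.5])
anti_face = morph_along_identity_trajectory(face, mean_face, -1.0)
# anti_face == [-1.0, 2.0, -0.5]: the computational "opposite" of the face
```

Sampling alpha from 1 toward -1 in discrete steps reproduces the morph trajectory used in the face/anti-face similarity-rating studies.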