Abstract

In previous experiments, we provided further evidence that 3-D face stimuli can be learnt and recognized across the haptic and visual modalities. Our results suggested that information transfer across modalities is asymmetric, owing to differences between visual and haptic face processing (ie, configural versus featural). To test this hypothesis, we designed two experiments investigating visual, haptic, and cross-modal face-inversion effects. Experiment 1 used an old/new recognition task in which three upright faces were learnt visually, followed by three visual test-blocks (one with upright and two with inverted faces) and one haptic test-block with inverted faces. We found a strong inversion effect for visually learnt faces (visual-upright: d' = 2.07; visual-inverted: d' = 0.60; haptic-inverted: d' = 0.52). When we exchanged the learning and testing modalities in Experiment 2 (haptic learning of upright faces, followed by one haptic-upright, two haptic-inverted, and one visual-inverted test-block), we found no inversion effect for haptically learnt faces (haptic-upright: d' = 1.45; haptic-inverted: d' = 1.75; visual-inverted: d' = 1.16). These results suggest that, whereas visual face processing operates configurally, haptic face processing relies on featural information.