Infant perception of audiovisual synchrony in fluent speech


Perception of audio–visual (A–V) temporal relations is known to be affected by the type of stimulus used. This includes differences in A–V temporal processing of speech versus non-speech events and of native versus non-native speech. Similar differences have been found early in life, but no studies have investigated infant responses to A–V temporal relations in fluent speech. Extant studies investigating infant responses to isolated syllables (Lewkowicz, 2010) have found that infants can detect an A–V asynchrony (auditory leading visual) of 666 ms but not lower. Here, we investigated infant responses to A–V asynchrony in fluent speech and whether linguistic experience plays a role in responsiveness. To do so, we tested 24 monolingual Spanish-learning and 24 monolingual Catalan-learning 8-month-old infants. First, we habituated the infants to an audiovisually synchronous video clip of a person speaking in Spanish and then tested them in separate test trials for detection of different degrees of A–V asynchrony (audio preceding video by 366, 500 or 666 ms). We found that infants detected A–V asynchronies of 666 and 500 ms, and that they did so regardless of linguistic background. Thus, compared with previous results from infant studies with isolated audiovisual syllables, we found that infants are more sensitive to the A–V temporal relations inherent in fluent speech. Furthermore, given that responsiveness to non-native speech narrows during the first year of life, the absence of a language effect suggests that perceptual narrowing of A–V synchrony detection is not yet complete by 8 months of age.