Auditory feedback

Auditory feedback (AF) is an aid used by humans to control speech production and singing. It is assumed that auditory feedback, alongside other feedback mechanisms such as somatosensory feedback and visual feedback, helps to verify whether the current production of a passage of speech or singing is in accord with a person's acoustic-auditory intention. From the viewpoint of movement sciences and neurosciences the acoustic-auditory speech signal can be interpreted as the result of movements (skilled actions) of speech articulators (the lower jaw, lips, tongue, etc.), and thus auditory feedback can be interpreted as a feedback mechanism controlling skilled actions in the same way that visual feedback controls limb movements (e.g. reaching movements).

Auditory feedback is an important aid during speech acquisition by toddlers, who use it to control the learning of speech items. These are typically produced by a communication partner (e.g. a caretaker) and heard by the toddler, who subsequently tries to imitate them.[1][2] After speech acquisition (i.e. in adults), it is assumed[by whom?] that auditory feedback is used less intensively; the same is assumed to hold for other feedback mechanisms in speech, such as somatosensory feedback.

However, the well-known delayed auditory feedback (DAF) experiment shows that auditory feedback remains important during speech production even for adults: when the auditory perception pathway is altered, for instance by playing a speaker's own voice back with a short delay, speech production is noticeably disrupted.[3] A further well-known effect underlining the importance of auditory feedback throughout a person's lifetime is that the production of sounds such as sibilant fricatives (like /s/) begins to deteriorate in adults who become deaf.[citation needed]
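The manipulation in a DAF experiment can be sketched as a simple delay line: the speaker's voice is fed back after a fixed lag. The function name, delay value, and sample rate below are illustrative choices, not taken from any particular DAF device.

```python
def delay_feedback(signal, delay_ms, sample_rate=16000):
    """Return the signal as a DAF device would play it back:
    identical, but starting delay_ms milliseconds later."""
    # Number of samples corresponding to the requested delay
    delay_samples = int(sample_rate * delay_ms / 1000)
    # Prepend silence: the speaker hears their own voice only
    # after the delay has elapsed
    return [0.0] * delay_samples + list(signal)

voice = [0.1, 0.2, 0.3]            # stand-in for a recorded waveform
daf = delay_feedback(voice, 200)   # 200 ms = 3200 samples at 16 kHz
```

Delays on the order of 100–200 ms are the range typically reported as most disruptive to fluent speech.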

Because auditory feedback takes more than 100 milliseconds to produce a correction at the production level,[4] it is a slow correction mechanism compared with the duration (or production time) of individual speech sounds (vowels or consonants). Auditory feedback is therefore too slow to correct the production of a speech sound in real time. It has been shown, however, that auditory feedback can change speech-sound production over a series of trials (i.e. adaptation by relearning; see e.g. the perturbation experiments done with the DIVA model of neurocomputational speech processing). Around 10 minutes of exposure is typically sufficient for nearly complete adaptation.
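Such trial-by-trial adaptation can be sketched as an error-driven update: the auditory error measured on one trial adjusts the motor command for the next. The target value, perturbation size, learning rate, and trial count below are illustrative; this is a minimal caricature, not the DIVA model itself.

```python
def adapt(target, perturbation, learning_rate=0.3, trials=20):
    """Trial-by-trial sensorimotor adaptation: feedback is too slow
    to correct within a trial, but the auditory error on each trial
    updates the motor command used on the next one."""
    command = target          # the speaker initially aims at the target
    history = []
    for _ in range(trials):
        heard = command + perturbation    # perturbed auditory feedback
        error = target - heard            # auditory error on this trial
        command += learning_rate * error  # partial update for next trial
        history.append(heard)
    return history

# With a +100 Hz perturbation (e.g. a shifted formant), the heard
# value starts at 600 Hz and converges back toward the 500 Hz target:
h = adapt(target=500.0, perturbation=100.0)
```

The geometric decay of the residual error across trials mirrors the gradual compensation observed in formant-perturbation experiments.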

Forbrain is a headset based on auditory feedback principles. It uses a bone-conduction transducer and a series of dynamic filters to modify the perception of the user's own voice, which is claimed to improve concentration, attention, speech, coordination, and other sensory functions. It received an award at the 2015 BETT Show in the category "ICT Special Educational Needs Solutions".