The brain's critical ability to automatically detect rare, salient auditory events amongst common ones is well illustrated experimentally by the mismatch negativity (MMN), an event-related response elicited in oddball experiments that is larger for rare ("deviant") acoustic events than for frequently repeated ones. Recent evidence, however, indicates that, amongst deviant stimuli, familiar items (e.g., words) produce larger MMN responses than unfamiliar ones (e.g., pseudowords). While several mechanistic models can explain the former, change-detection ("CD-MMN"), data, no computational account exists for the latter. To explain these findings, we propose that the brain response to sounds includes two components: one resulting from short-term memory processes (neuronal adaptation, lateral inhibition), which produces the CD-MMN, and one reflecting the re-activation of long-term memory (LTM) traces for familiar sensory material, which underlies the latter, long-term memory ("LT-MMN"), effect. Taking language as our working domain, we implemented a neurobiologically grounded neural-network model of the language-related brain areas, incorporating both short-term (adaptation, inhibition) and long-term (Hebbian synaptic plasticity) cortical mechanisms. After teaching the network a limited set of artificial "words", we simulated MMN responses in it while modulating the strength of adaptation and inhibition.
While both of these mechanisms produced CD-MMN effects, adaptation-only networks failed to replicate the LT-MMN data. The present model of memory and perception provides the first unifying account of CD- and LT-MMN neurophysiological data, elucidates the roles of the putative mechanisms underlying long- and short-term memory components, and shows how inhibition-based (but not adaptation-based) accounts can explain brain responses reflecting auditory change detection.
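The adaptation mechanism invoked above can be conveyed with a minimal toy simulation (an illustrative sketch only, not the authors' actual network; the two-channel setup, decay factor, and gain are hypothetical assumptions). Repeated presentation of a "standard" stimulus builds up adaptation in its channel, so the response to a rare "deviant" in an unadapted channel is larger, mimicking a CD-MMN-like effect:

```python
import numpy as np

def simulate_oddball(n_std=9, decay=0.7, gain=0.3):
    """Toy adaptation model: two stimulus channels (0 = standard, 1 = deviant)."""
    adapt = np.zeros(2)                   # adaptation state per channel
    responses = []
    stimuli = [0] * n_std + [1]           # n_std standards, then one deviant
    for s in stimuli:
        drive = np.zeros(2)
        drive[s] = 1.0                    # unit input to the presented channel
        r = np.maximum(drive - adapt, 0)  # response suppressed by adaptation
        adapt = decay * adapt + gain * r  # adaptation builds with activity
        responses.append(r[s])
    return responses

resp = simulate_oddball()
# The deviant response (last entry) exceeds the adapted standard response.
```

With these assumed parameters the standard response settles toward a steady level while the deviant channel remains unadapted, so the final (deviant) response is the largest late response in the train.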