A variety of metaphors inspired by contemporary developments and issues in physics are identified as potentially helpful to theory and experiments directed at the mental lexicon. The developments chiefly concern systems regarded as complex from the perspective of established physical explanation. The issues are primarily those associated with the context dependence of properties and functions broadly evident in natural systems at both macroscopic and microscopic scales. Ideally, the metaphors may bring new questions, methods, principles, and formalisms to bear on the investigation of the mental lexicon. Minimally, they should enhance appreciation for the scientific challenges posed by the mental lexicon’s diverse structures and functions.

In semantic categorization, nonwords that are neighbors of exemplars (e.g., turple in an animal categorization task) cause interference, but neighbors of nonexemplars (e.g., tabric) do not. This can be explained by a cascaded activation model in which the decision process selectively monitors activation in a category-relevant semantic feature unit. However, it is shown that this holds only for some categories. With the broad category “Physical Object”, interference is produced by nonwords based on both exemplars (e.g., himmer) and nonexemplars (e.g., travity). By contrast, neither type of nonword produces interference when the category is changed to “Animal”. This shows that only some semantic feature units can be monitored. It is proposed that what is being monitored is not in fact semantic features per se, but rather links to semantic fields defined on the basis of patterns of lexical co-occurrence.
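The monitoring mechanism described here can be illustrated with a minimal computational sketch: a nonword partially activates its orthographic neighbors, activation cascades to semantic feature units, and the decision process reads out one category-relevant feature unit. The toy lexicon, feature assignments, and similarity measure below are invented for illustration and are not the actual model or stimuli from the study.

```python
# Minimal cascaded-activation sketch. A nonword input partially activates
# lexical neighbors in proportion to orthographic overlap; activation then
# cascades to the semantic feature units of those neighbors. The decision
# process monitors a single category-relevant feature unit.

# Toy lexicon with invented feature assignments (illustrative only).
LEXICON = {
    "turtle": {"animal", "physical_object"},
    "fabric": {"physical_object"},
    "hammer": {"physical_object"},
}

def overlap(a, b):
    """Crude orthographic similarity: letters shared at matching positions."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def feature_activation(nonword, feature):
    """Activation cascading into one feature unit from partially active neighbors."""
    return sum(overlap(nonword, word)
               for word, feats in LEXICON.items() if feature in feats)

# Monitoring the "animal" feature unit: turple (neighbor of turtle) drives
# it strongly, predicting interference; tabric (neighbor of fabric) does not.
print(feature_activation("turple", "animal"))
print(feature_activation("tabric", "animal"))
```

Under this sketch, interference is predicted whenever the monitored feature unit receives substantial cascaded activation from a nonword's neighbors, which is the pattern the abstract describes for exemplar-based nonwords.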

Two semantic variables, concreteness and morphological family size, were examined in single-word and primed lexical decision tasks. Single-word recognition latencies were faster for concrete than for abstract targets only when morphological family size was small. The magnitude of morphological facilitation was greater for primes related by inflection than for primes related by derivation, although both revealed a very similar interaction of concreteness and family size. In summary, concreteness influenced morphological processing so as to produce slower decision latencies for abstract than for concrete words with small families, both in a single-word and in a morphologically primed context. However, magnitudes of facilitation considered in isolation from baselines provided an incomplete account of morphological processing.

Against longstanding assumptions in the psycholinguistics literature, we argue for a model of morphological complexity that has all complex words assembled by the grammar from lexical roots and functional morphemes. This assembly occurs even for irregular forms like gave. Morphological relatedness is argued to be an identity relation between repetitions of a single root, distinguishable from semantic and phonological relatedness. Evidence for the model is provided in two MEG priming experiments that measure root activation prior to lexical decision. Both regular and irregular allomorphs of a root are shown to prime the root equally. These results are incompatible both with connectionist models that treat all morphological relatedness as similarity and with dual mechanism models in which only regular forms involve composition.

This auditory lexical decision study shows that cohort entropies, conditional root uniqueness points, and morphological family size all contribute to the dynamics of the auditory comprehension of prefixed words. Three entropy measures calculated for different positions in the stem of Dutch prefixed words revealed facilitation for higher entropies, except at the point of disambiguation, where we observed inhibition. Morphological family size was also facilitatory, but only for prefixed words in which the conditional root uniqueness point coincided with the conventional uniqueness point. For words with early conditional disambiguation, in contrast, only the morphologically related words that were onset-aligned with the target word facilitated lexical decision.
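The cohort entropy measures used in studies of this kind are typically Shannon entropies computed over the set of words still compatible with the input at a given position. A small sketch, with an invented cohort standing in for the Dutch materials:

```python
import math

def cohort_entropy(counts):
    """Shannon entropy (in bits) over a cohort of word continuations.

    counts: mapping from each candidate word to its frequency count.
    Probabilities are the relative frequencies within the cohort.
    """
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical cohort of candidates compatible with an initial "verb-"
# fragment (words and counts are invented for illustration):
cohort = {"verbod": 40, "verband": 25, "verbinding": 20, "verbouwing": 15}
print(round(cohort_entropy(cohort), 3))  # → 1.904
```

High entropy means many comparably likely continuations remain; as the input approaches the uniqueness point the cohort shrinks and entropy falls toward zero, which is where the abstract reports the switch from facilitation to inhibition.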

We investigated the performance of two connectionist neural networks with different architectures to explore how well they learn to generate the past participle form of Italian verbs on the basis of phonological characteristics. The networks were trained to generate the past participle form of verbs from different inflected input forms. We examined the degree of learning relative to the type of inflection given as input, the type of suffix produced, the classification of each verb according to the thematic vowel, and the regularity of the stem and of the suffix. The networks were able to learn both regular and irregular forms, but the effect of regularity depended on the distributional properties of the conjugation to which a verb belongs and on the information provided by the input.
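The kind of mapping such networks learn can be sketched with a deliberately tiny stand-in: a one-layer network trained with the delta rule to map a single phonological cue (the thematic vowel of the infinitive) to a regular participle suffix class. The architecture, verb sample, and encoding are invented for illustration and are far simpler than the networks in the study.

```python
import random

# Toy "network": one layer of linear output units trained with the delta
# rule. Input: one-hot thematic vowel of the infinitive (-are/-ere/-ire).
# Output: one-hot regular participle suffix class (-ato/-uto/-ito).
VOWELS = ["a", "e", "i"]
SUFFIXES = ["ato", "uto", "ito"]

# (thematic vowel, participle suffix) pairs for a few regular verbs,
# e.g. parlare→parlato, credere→creduto, dormire→dormito (illustrative).
DATA = [("a", "ato"), ("a", "ato"), ("a", "ato"),
        ("e", "uto"), ("e", "uto"),
        ("i", "ito"), ("i", "ito")]

def one_hot(item, inventory):
    return [1.0 if x == item else 0.0 for x in inventory]

def train(data, epochs=200, lr=0.5, seed=0):
    rng = random.Random(seed)
    # w[k][j]: weight from input unit j to output unit k
    w = [[rng.uniform(-0.1, 0.1) for _ in VOWELS] for _ in SUFFIXES]
    for _ in range(epochs):
        for vowel, suffix in data:
            x = one_hot(vowel, VOWELS)
            t = one_hot(suffix, SUFFIXES)
            y = [sum(wk[j] * x[j] for j in range(len(x))) for wk in w]
            for k in range(len(w)):  # delta rule: w += lr * error * input
                for j in range(len(x)):
                    w[k][j] += lr * (t[k] - y[k]) * x[j]
    return w

def predict(w, vowel):
    x = one_hot(vowel, VOWELS)
    scores = [sum(wk[j] * x[j] for j in range(len(x))) for wk in w]
    return SUFFIXES[scores.index(max(scores))]

weights = train(DATA)
print(predict(weights, "a"), predict(weights, "e"), predict(weights, "i"))
```

Because the toy input is a single fully predictive cue, this sketch only shows the regular case; the study's point is precisely that with richer phonological input and realistic conjugation statistics, learning of regulars and irregulars comes to depend on the distributional properties of each conjugation.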