

The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998, 2000), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, deal effectively with noisy learning data, and account for gradient wellformedness judgments. The case studies we examine involve Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/.
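The core of the Gradual Learning Algorithm can be illustrated with a minimal sketch: each constraint carries a numerical ranking value, evaluation adds Gaussian noise to those values (which is what lets the grammar produce free variation and tolerate noisy data), and on each error the learner nudges the values by a small plasticity step. This is an illustrative reconstruction, not Boersma's implementation; the constraint names, plasticity, and noise values below are invented for the example.

```python
import random

def gla_update(ranking, loser_viols, winner_viols, plasticity=0.1):
    """One GLA error-driven step (sketch): given violation counts for the
    learner's wrong output (loser) and the observed correct form (winner),
    promote constraints that penalize the loser more than the winner,
    and demote constraints that penalize the winner more."""
    for c in ranking:
        if loser_viols[c] > winner_viols[c]:
            ranking[c] += plasticity   # this constraint favors the correct form
        elif winner_viols[c] > loser_viols[c]:
            ranking[c] -= plasticity   # this constraint favors the wrong form
    return ranking

def noisy_ranking(ranking, noise=2.0):
    """Stochastic evaluation: each constraint's effective rank is its
    ranking value plus Gaussian noise; close-ranked constraints can
    therefore swap order across evaluations, yielding free variation."""
    return sorted(ranking,
                  key=lambda c: ranking[c] + random.gauss(0, noise),
                  reverse=True)
```

Because updates are gradual rather than all-or-nothing, a single noisy datum shifts the ranking only slightly, which is the source of the robustness the article discusses.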


In a series of papers, Petra Hendriks, Helen de Hoop, and Henriette de Swart have applied optimality theory (OT) to semantics. These authors argue that there is a fundamental difference between the form of OT as used in syntax on the one hand and its form as used in semantics on the other: whereas in the first case OT takes the point of view of the speaker, in the second case it takes the point of view of the hearer. The aim of this paper is to argue that the proper treatment of OT in natural language interpretation has to take both perspectives at the same time. A conceptual framework is established that realizes the integration of both perspectives. It will be argued that this framework captures the essence of the Gricean maxims and gives a precise explication of Atlas & Levinson's (1981) idea of balancing informativeness and efficiency in natural language processing. The ideas are then applied to resolve some puzzles in natural language interpretation.

...g this lag, a kind of bootstrap mechanism seems to apply that depends crucially on the robustness of comprehension, possibly by using a technique called robust interpretative parsing (Smolensky 1996; Tesar & Smolensky 2000). Consequently, when it comes to relating the two perspectives within a bidirectional OT, we have to acknowledge the close interrelation between them in the OT learning algorithm. In summary, harmony t...


The study of phonotactics (e.g., the ability of English speakers to distinguish possible words like blick from impossible words like *bnick) is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. Possible words are assessed by these grammars based on the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with any constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order to learn nonlocal phenomena such as stress and vowel harmony, it is necessary to augment the model with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model to English syllable onsets, Shona vowel harmony, quantity-insensitive stress typology, and the full phonotactics of Wargamay, showing that the learned grammars capture the distributional generalizations of these languages and accurately predict the findings of a phonotactic experiment.
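The scoring scheme described here — a weighted sum of constraint violations turned into probabilities by the maximum-entropy principle — can be sketched in a few lines. This is a generic maxent evaluator under assumed toy constraints and weights, not the authors' model or its constraint-induction machinery.

```python
import math

def harmony(violations, weights):
    """Harmony of a candidate: the negative weighted sum of its
    constraint violation counts."""
    return -sum(weights[c] * v for c, v in violations.items())

def maxent_prob(candidates, weights):
    """Maximum-entropy grammar: each candidate's probability is
    proportional to exp(harmony), normalized over all candidates."""
    scores = {cand: math.exp(harmony(v, weights))
              for cand, v in candidates.items()}
    z = sum(scores.values())
    return {cand: s / z for cand, s in scores.items()}

# Toy example (invented weights): a markedness constraint against
# bn- onsets is weighted heavily, so blick outscores *bnick.
weights = {"*ComplexOnset": 1.0, "*bn-Onset": 3.0}
candidates = {
    "blick": {"*ComplexOnset": 1, "*bn-Onset": 0},
    "bnick": {"*ComplexOnset": 1, "*bn-Onset": 1},
}
probs = maxent_prob(candidates, weights)
```

Because probabilities vary continuously with the weights, the same grammar can express both categorical bans (very high weights) and the gradient wellformedness differences the abstract mentions.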


Recent experimental work indicates that by the age of ten months, infants have already learned a great deal about the phonotactics (legal sounds and sound sequences) of their language. This learning occurs before infants can utter words or apprehend most phonological alternations. I will show that this early learning stage can be modeled with Optimality Theory. Specifically, the Markedness and Faithfulness constraints can be ranked so as to characterize the phonotactics, even when no information about morphology or phonological alternations is yet available. Later on, the information acquired in infancy can help the child in coming to grips with the alternation pattern. I also propose a procedure for undoing some learning errors that are likely to occur at the earliest stages. There are two formal proposals. One is a constraint ranking algorithm, based closely on Tesar and Smolensky’s Constraint Demotion, which mimics the early, “phonotactics only” form of learning seen in infants. I illustrate the algorithm’s effectiveness by having it learn the phonotactic pattern of a simplified language modeled on Korean. The other proposal is that there are three distinct default rankings for phonological constraints: low for ordinary Faithfulness (used in learning phonotactics); low for Faithfulness to adult forms (in the child’s own production system); and high for output-to-output correspondence constraints.
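The Constraint Demotion idea this proposal builds on can be sketched as Recursive Constraint Demotion: given winner–loser pairs of violation profiles, repeatedly place into the top stratum every constraint that prefers no loser, discard the pairs those constraints already account for, and recurse. This is a bare-bones illustration of Tesar and Smolensky's procedure, not the article's modified "phonotactics only" version, and the example constraints are invented.

```python
def rcd(constraints, pairs):
    """Recursive Constraint Demotion (sketch). `pairs` is a list of
    (winner_viols, loser_viols) dicts. Returns a ranking as a list of
    strata (sets of constraints), highest-ranked stratum first."""
    strata, remaining = [], set(constraints)
    pairs = list(pairs)
    while remaining:
        # A constraint may sit in the current stratum if it never
        # prefers a loser (never assigns the winner more violations).
        stratum = {c for c in remaining
                   if all(w.get(c, 0) <= l.get(c, 0) for w, l in pairs)}
        if not stratum:
            raise ValueError("data not rankable by constraint demotion")
        # Pairs decided by this stratum (some constraint prefers the
        # winner) need no lower-ranked constraint to explain them.
        pairs = [(w, l) for w, l in pairs
                 if not any(l.get(c, 0) > w.get(c, 0) for c in stratum)]
        remaining -= stratum
        strata.append(stratum)
    return strata
```

For instance, with a faithfulness constraint preferring the winner and a markedness constraint preferring the loser, the algorithm demotes the markedness constraint into a lower stratum; the default-ranking proposals in the abstract amount to different starting points for this process.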


The project presented here seeks to explain observed regularities in the direction of place assimilation. The best known among these is the fact that assimilation proceeds regressively in intervocalic clusters composed of alveolars, palato-alveolars, labials or velars. This fact is consistent with a variety of interpretations, some of which are discussed below. However the ...

...e distribution. This assumption is defended in section 9. The unoriginal second assumption is that constraint rankings in an Optimality Theoretic model of phonology can be indexed to phonetic scales (Prince and Smolensky 1993): the rankings we will discuss are those of correspondence constraints and the scales these rankings are indexed to are scales of perceived similarity. The idea is that if two contrasts a-b and x-y a...

...he input, as a kind of learning procedure, in situations where the choice of input is otherwise under-determined, by the principle of Lexicon Optimization (Ito et al. 1995, Prince and Smolensky 1993, Tesar and Smolensky 1998). Harmonic evaluation also selects the base in OO faithfulness (Benua 1995, 1997), and it plays a similar role in systems of multiple optimization (Wilson 1997). Though it is, in principle, a strai...