The process of argument selection, while seemingly completely productive, is restricted in actual usage. Though we cannot enumerate all possible objects of a verb like drink or eat, it is easy to observe empirically that arguments of drink are fewer, more repetitive, and less prone to innovation than those of eat in virtually any sample of usage data. Once a certain sample size has been reached, it becomes difficult to observe novel material for any argument selection process, but how quickly this point is reached differs across constructions. Though there are clear pragmatic reasons why this may be, some cases seem arbitrary. Why do speakers of English seem to shake so many different things but jog so few (other than [someone's] memory)? Why should near synonyms like start/begin/commence exhibit significantly different likelihoods of admitting novel objects? What makes speakers generate different-sized vocabularies for verbal complements of help to [VERB] vs. help [VERB]? This thesis is dedicated to answering such questions by positing productivity, a concept developed for morphological word formation, as the quantifiable property responsible for these differences, and by determining its theoretical status in syntactic argument selection in two ways.

First, I will show that speakers have implicit knowledge of how productive different constructions are. The sense of productivity meant here comprises multiple dimensions: most central is the likelihood of producing novel regular forms, a quantity estimated in the sense of Baayen's (1993) Potential Productivity in morphology. However, other aspects of what I call the 'Productivity Complex' (PC) are closely related, with attested and projected vocabulary sizes also playing an important role. By comparing these properties of constructions, I propose a multidimensional model of productivity in syntax. My main line of argumentation will show that: 1. empirical productivity estimates for the same construction behave consistently within and across datasets; 2. productivity cannot be accounted for on semantic grounds without resorting to 'per head semantic classes' (criticized by Dowty 1991), not least because (near) synonymous constructions and lexical heads show idiosyncratic productive behavior; and 3. many partially filled exponents of productive patterns have a saturated vocabulary which must be stored in the mental lexicon to account for the data. Knowledge of productivity complements speakers' knowledge of preferred arguments (cf. Stefanowitsch/Gries 2003) and explains the differential generation of novel forms.
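The core quantities behind these claims can be illustrated with a minimal sketch. Assuming Baayen's (1993) Potential Productivity as the proportion of hapax legomena among tokens, P = V1/N, the function below computes N (tokens), V (attested vocabulary size), V1 (hapaxes), and P for a sample of argument tokens. The word lists are invented for illustration, not corpus data:

```python
from collections import Counter

def productivity_stats(tokens):
    """Corpus-based productivity measures for a sample of argument tokens.

    Returns N (token count), V (attested vocabulary size),
    V1 (number of hapax legomena), and potential productivity
    P = V1 / N in the sense of Baayen (1993).
    """
    freqs = Counter(tokens)
    n = len(tokens)
    v1 = sum(1 for c in freqs.values() if c == 1)
    return {"N": n, "V": len(freqs), "V1": v1, "P": v1 / n}

# Hypothetical samples: objects of 'drink' (repetitive) vs. 'eat' (varied)
drink_objs = ["water", "coffee", "water", "tea",
              "coffee", "water", "beer", "water"]
eat_objs = ["bread", "soup", "rice", "apples",
            "bread", "noodles", "cake", "olives"]

print(productivity_stats(drink_objs))  # few hapaxes -> low P (0.25)
print(productivity_stats(eat_objs))    # many hapaxes -> high P (0.75)
```

On real data, comparing such estimates for the same construction across samples is what underlies claim 1 above; the toy contrast merely mirrors the drink/eat asymmetry described in the opening paragraph.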

Second, I will attempt to integrate these findings into a theory of how productivity is acquired from input. Work in Construction Grammar has shown that skewed distributional properties are more readily acquired by both children and adults (Casenhiser/Goldberg 2005). Constructions with a few very frequent types and many infrequent ones (a typical LNRE distribution, Baayen 2001) are acquired more quickly and extended more easily by speakers. In line with these findings, I show that productive syntactic constructions exhibit these properties and are acquired for productive use by speakers, who reproduce a similar distribution, possibly feeding into diachronic developments in productivity. Finally, since productivity is linked to regularity, I suggest that 'being a rule of grammar' is in fact a scalar property, corresponding to the extensibility of a construction as predicted by the PC model, which emerges from its representation in the mental lexicon.
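The skewed profile described above can be inspected via a frequency spectrum, which maps each frequency class m to V(m), the number of types occurring exactly m times; LNRE samples in Baayen's (2001) sense show a large V(1) and a thin tail of high-frequency types. A minimal sketch with an invented toy sample:

```python
from collections import Counter

def frequency_spectrum(tokens):
    """Map each frequency m to V(m): the number of types occurring
    exactly m times. Skewed (LNRE-like) samples have one or two
    dominant types and a large hapax class V(1)."""
    type_freqs = Counter(tokens)
    return dict(Counter(type_freqs.values()))

# Toy skewed sample: two dominant types plus several one-off types
sample = (["go"] * 10 + ["run"] * 5
          + ["amble", "saunter", "trudge", "lope", "strut"])
print(frequency_spectrum(sample))  # {10: 1, 5: 1, 1: 5}
```

The shape of this spectrum, rather than raw frequency alone, is the distributional property the acquisition argument turns on: a distribution with this profile is both a symptom of productive use and, on the account sketched here, an input property that promotes it.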