Previous accounts of hedges assume that they cause language to become vague or fuzzy (Lakoff 1973); however, hedges can actually sharpen numerical concepts by giving explicit information about approximation, especially where bare numbers appear misleadingly round or precise. They can also tell hearers about the direction of approximation (greater or less than). This article provides a first empirical account of interactions between hedging and rounding in numerical expressions. We demonstrate that hedges occur more commonly with round numbers than with non-round ones. However, we also provide evidence from user studies that in the absence of hedges, readers interpret round numbers as approximations and non-round ones as precise; and that placing a hedge before a round number has no effect on its interpretation, whereas placing it before a non-round number shifts people’s interpretations from precise towards approximate. We attempt to explain this conundrum.
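As a toy illustration of the reported pattern (not the authors' materials or model; the function names and the roundness heuristic below are hypothetical), a few lines of Python make the asymmetry concrete:

```python
# Hypothetical sketch: a naive roundness heuristic plus a toy model of the
# interpretation pattern the abstract reports. Nothing here is taken from
# the paper itself.

def is_round(n: int) -> bool:
    """Crude heuristic: treat multiples of ten as round numbers."""
    return n % 10 == 0

def likely_interpretation(n: int, hedged: bool) -> str:
    """Toy model of the reported findings:
    - round numbers are read as approximate, hedged or not;
    - non-round numbers are read as precise unless a hedge shifts them."""
    if is_round(n):
        return "approximate"  # a hedge adds no further shift here
    return "approximate" if hedged else "precise"

for n, hedged in [(400, False), (400, True), (403, False), (403, True)]:
    prefix = "about " if hedged else ""
    print(f"{prefix}{n} -> {likely_interpretation(n, hedged)}")
```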

This response discusses the experiment reported in Krahmer et al.’s Letter to the Editor of Cognitive Science. We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from “normal” (i.e., non-semantically-balanced) corpora.

A substantial amount of recent work in natural language generation has focused on the generation of “one-shot” referring expressions whose only aim is to identify a target referent. Dale and Reiter’s Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the “preference order” (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms that, like the Greedy Algorithm, do not work with a fixed PO.
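For readers unfamiliar with the IA, the following is a minimal Python sketch of its core loop, assuming a simple attribute–value representation of domain objects; it simplifies details such as the obligatory head noun, and the object and attribute names are invented for illustration:

```python
# Minimal sketch of Dale and Reiter's Incremental Algorithm: try attributes in
# a fixed preference order, adding each one that rules out at least one
# remaining distractor, and stop once the target is uniquely identified.

def incremental_algorithm(target, distractors, preference_order):
    """Return (attribute, value) pairs distinguishing `target` from `distractors`."""
    description = []
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:  # the attribute has discriminatory power: include it
            description.append((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:  # all distractors excluded
            return description
    return description  # may be non-distinguishing if attributes run out

# Example: referring to a small red chair among other furniture.
target = {"type": "chair", "colour": "red", "size": "small"}
distractors = [{"type": "chair", "colour": "blue", "size": "small"},
               {"type": "table", "colour": "red", "size": "large"}]
print(incremental_algorithm(target, distractors, ["colour", "type", "size"]))
```

Swapping the preference order in this sketch (e.g., trying size before colour) already yields a different description for the same target, which is the sensitivity to POs that the abstract reports.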

This paper takes as its starting point the problem of characterizing, in a precise way, situations in which two people collaborate to achieve a common goal. It is suggested that collaboration is normally based on an apparently paradoxical state of mind which I call “mutual intention”. Mutual intention is a concept belonging to the same family as Lewis's and Schiffer's “mutual knowledge”. These concepts have the paradoxical feature that they require, for their definition, an infinite series of propositions of the form X knows p, where X is a single agent and p is a proposition. The source of these infinite series is traced, and it is shown that they can be represented in a plausible and enlightening way by means of a recursive notation. Finally, three applications of the concept of “mutual intention” are given: in the semantic analysis of certain sentences with plural subjects; in the analysis of agreement and related speech acts; and in the clarification of the phenomenon of “implicit agreement”.
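As a rough formal gloss, using the standard fixed-point rendering from the mutual-knowledge literature rather than the paper's own notation, the infinite hierarchy and its recursive compression for two agents A and B can be written as:

```latex
% The unfolded infinite hierarchy for agents A, B and proposition p:
\[
  K_A\,p,\quad K_B\,p,\quad K_A K_B\,p,\quad K_B K_A\,p,\quad K_A K_B K_A\,p,\;\dots
\]
% A recursive (fixed-point) definition generating the same infinite content:
\[
  M(p) \;\equiv\; K_A\bigl(p \wedge M(p)\bigr) \;\wedge\; K_B\bigl(p \wedge M(p)\bigr)
\]
```

Unfolding M(p) once reproduces the first levels of the hierarchy, and repeated unfolding generates the rest, which is the sense in which a recursive notation can capture the infinite series.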