(This tutorial written by Noah Goodman and Eli Bingham, following Kao,
Wu, Bergen, and Goodman (2014).)

"My new kettle cost a million dollars."

Hyperbole – using an exaggerated utterance to convey strong opinions – is
a common non-literal use of language. Yet non-literal uses of language
are impossible under the simplest RSA model. Kao et al. suggested that
two ingredients could be added to enable RSA to capture hyperbole.
First, the state conveyed by the speaker and reasoned about by the
listener should include affective dimensions. Second, the speaker only
intends to convey information relevant to a particular topic, such as
“how expensive was it?” or “how am I feeling about the price?”;
pragmatic listeners hence jointly reason about this topic and the state.

As in the simple RSA example, the inference helper Marginal takes an
un-normalized stochastic function, constructs the distribution over
execution traces by using Search, and constructs the marginal
distribution on return values (via HashingMarginal).
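The core idea behind HashingMarginal can be pictured without any Pyro machinery: collapse weighted execution traces into a normalized distribution over (hashable) return values. The sketch below is illustrative only – the function name and representation are not Pyro's API.

```python
import math
from collections import defaultdict

def marginalize(weighted_values):
    """Collapse (value, log_weight) pairs into a normalized distribution
    over distinct return values, in the spirit of what HashingMarginal
    does for the traces produced by Search. (Sketch; not Pyro's API.)"""
    totals = defaultdict(float)
    for value, log_w in weighted_values:
        totals[value] += math.exp(log_w)
    z = sum(totals.values())
    return {value: w / z for value, w in totals.items()}

# Two traces return 50, one returns 10000:
traces = [(50, math.log(0.5)), (50, math.log(0.25)), (10000, math.log(0.25))]
marginalize(traces)  # {50: 0.75, 10000: 0.25}
```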

The domain for this example will be states consisting of price (e.g. of
a tea kettle) and the speaker’s emotional arousal (whether the speaker
thinks this price is irritatingly expensive). Priors here are adapted
from experimental data.
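Concretely, the state space and priors might look like the following dependency-free sketch: states pair a price with an arousal value, low prices are a priori much more probable, and high prices are more likely to provoke arousal. All numbers here are placeholders, not the experimentally elicited values the tutorial uses.

```python
from collections import namedtuple

State = namedtuple("State", ["price", "arousal"])

prices = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
# Placeholder prior over prices: cheap kettles are far more probable.
price_prior = dict(zip(prices, [0.42, 0.39, 0.055, 0.055, 0.022,
                                0.022, 0.011, 0.011, 0.007, 0.007]))
# Placeholder P(arousal = "high" | price): expensive kettles are irritating.
arousal_given_price = {50: 0.3, 51: 0.3, 500: 0.8, 501: 0.8, 1000: 0.9,
                       1001: 0.9, 5000: 0.95, 5001: 0.95,
                       10000: 0.99, 10001: 0.99}

def state_prior():
    """Joint prior over (price, arousal) as a dict of probabilities."""
    joint = {}
    for p, wp in price_prior.items():
        pa = arousal_given_price[p]
        joint[State(p, "high")] = wp * pa
        joint[State(p, "low")] = wp * (1 - pa)
    return joint
```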

Now we define a version of the RSA speaker that only produces relevant
information for the literal listener. We define relevance with respect
to a Question Under Discussion (QUD) – this can be thought of as
defining the speaker’s current attention or topic.

To implement this as a probabilistic program, we start with a helper
function project, which takes a distribution over some (discrete)
domain and a function qud on this domain. It creates the
push-forward distribution, using Marginal (as a Python decorator).
The speaker’s relevant information is then simply information about the
state in this projection.
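In plain Python terms, project takes a distribution represented as a value-to-probability dict and a QUD function, and pools the probability of all states that the QUD maps to the same answer – a dependency-free sketch of the push-forward that Marginal constructs:

```python
from collections import defaultdict

def project(dist_dict, qud):
    """Push a distribution over states forward through qud:
    states with equal qud-value pool their probability. (Sketch of the
    push-forward; the tutorial builds it with Marginal instead.)"""
    out = defaultdict(float)
    for state, p in dist_dict.items():
        out[qud(state)] += p
    return dict(out)

# Collapsing (price, arousal) states onto price alone:
dist_dict = {(50, "high"): 0.1, (50, "low"): 0.6, (10000, "high"): 0.3}
project(dist_dict, lambda s: s[0])  # {50: 0.7, 10000: 0.3}
```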

How does this listener interpret the uttered price “10,000”? On the one
hand this is a very unlikely price a priori, on the other if it were
true it would come with strong arousal. Altogether this becomes a
plausible hyperbolic utterance:
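The trade-off can be seen in a toy Bayesian calculation: if a speaker says "10,000" either when the price really is 10,000 or when arousal is high, then even a small prior probability of hyperbolic use can outweigh the tiny prior on the literal price. All numbers below are illustrative, not the model's.

```python
# Toy numbers (illustrative only): prior over (price, arousal) states and
# the probability that a speaker in each state utters "10,000".
prior = {(10000, "high"): 0.004, (10000, "low"): 0.001,
         (50, "high"): 0.095, (50, "low"): 0.9}
speaker_says_10000 = {(10000, "high"): 0.9, (10000, "low"): 0.9,
                      (50, "high"): 0.4, (50, "low"): 0.01}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {s: prior[s] * speaker_says_10000[s] for s in prior}
z = sum(unnorm.values())
posterior = {s: w / z for s, w in unnorm.items()}
# The most probable state is (50, "high"): the listener concludes the
# kettle was cheap but the speaker is aroused - the hyperbolic reading.
```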

“It cost fifty dollars” is often interpreted as costing around 50 –
plausibly 51; yet “it cost fifty-one dollars” is interpreted as 51 and
definitely not 50. This asymmetric imprecision is often called the
pragmatic halo or pragmatic slack.

We can extend the hyperbole model to capture this additional non-literal
use of numbers by including QUD functions that collapse nearby numbers
and assuming that round numbers are slightly more likely (because they
are less difficult to utter).

In [10]:

# A helper to round a number to the nearest ten:
def approx(x, b=None):
    if b is None:
        b = 10.
    div = float(x) / b
    rounded = int(div) + 1 if div - float(int(div)) >= 0.5 else int(div)
    return int(b) * rounded

# The QUD functions we consider:
qud_fns = {
    "price": lambda state: State(price=state.price, arousal=None),
    "arousal": lambda state: State(price=None, arousal=state.arousal),
    "priceArousal": lambda state: State(price=state.price, arousal=state.arousal),
    "approxPrice": lambda state: State(price=approx(state.price), arousal=None),
    "approxPriceArousal": lambda state: State(price=approx(state.price), arousal=state.arousal),
}

def qud_prior():
    values = list(qud_fns.keys())  # list() so the keys are indexable
    ix = pyro.sample("qud", dist.Categorical(probs=torch.ones(len(values)) / len(values)))
    return values[ix]

def utterance_cost(numberUtt):
    preciseNumberCost = 10.
    return 0. if approx(numberUtt) == numberUtt else preciseNumberCost

def utterance_prior():
    utterances = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
    utteranceLogits = -torch.tensor(list(map(utterance_cost, utterances)),
                                    dtype=torch.float64)
    ix = pyro.sample("utterance", dist.Categorical(logits=utteranceLogits))
    return utterances[ix]

In the above hyperbole model we assumed a very simple model of affect: a
single dimension with two values (high and low arousal). Actual affect
is best represented as a two-dimensional space corresponding to valence
and arousal. Kao and Goodman (2015) showed that extending the affect
space to these two dimensions immediately introduces a new usage of
numbers: verbal irony in which an utterance corresponding to a
high-arousal positive valence state is used to convey a high-arousal but
negative valence (or vice versa).