A traditional method of distinguishing language from gesture relies on mode of communication: language is produced by the mouth or written on a page, while gesture occurs on the hands. However, decades of research have conclusively shown that sign languages, while produced in the same mode as traditional gestures, have all of the complexity and structure of other natural languages. This raises new questions, such as how one draws the line between language and gesture when signing, and even whether there is such a line at all. Sign language classifier predicates walk this line most precariously, combining aspects of language and gesture to convey analog spatial information. In this talk I discuss a semantic analysis of classifiers that likens them to quotative predicates (“say”, “be like”, “mutter”, etc.), in that both introduce iconic event modification by way of demonstration. This view brings the iconic aspects of spoken language quotation into focus while requiring no separate iconic machinery for signed languages: both make use of the same system, and written quotation is just one highly restricted case. By viewing quotation and other demonstrational language through the lens of sign languages, the relationship between language and gesture gains nuance beyond merely the mode of communication.