Title: How to refer like a god: aligning speaker-dependent meaning through formal distributional representations

Abstract: Reference -- the ability to talk about things -- is one of the most fundamental features of human language. Still, we are far from understanding how this ability comes about. In formal semantics, reference is theoretically well-defined, but the framework fails to provide a generic account of learnability. In data-driven approaches such as distributional semantics, the very notions of entity and event remain undefined, making those accounts unsuitable to even conceptualise reference. In this talk, I will hypothesise omniscient individuals with a fully shared lexicon and formal semantic model, showing that in such a 'godly' setup, reference can trivially be defined as perfect semantic alignment between interlocutors. Turning to a more realistic setting, I will then propose that distributional semantics provides the necessary tools to infer speaker-dependent models that are as close as possible to that idealisation. The suggested framework requires a tight integration of formal and distributional accounts at the representational level, whilst capitalising on the learning algorithms specific to distributional approaches.

11:00 - 11:45 Invited Talk (Presenter: Grzegorz Chrupała)

Title: Linguistic interpretability in neural models of grounded language learning

Abstract: Modeling language learning with neural networks and analyzing the nature of the emerging representations have a long tradition, going back to the seminal papers by Elman in the early 1990s. In this talk I will present a modern take on this enterprise. Specifically, I will focus on the setting where language is learned in a visually-grounded scenario, from naturalistic text or speech coupled with visually correlated input. This task of learning language in a multisensory setting, with weak and noisy supervision, is of interest both to scientists trying to understand the human mind and to engineers trying to build smart conversational agents or robots.

I will discuss what representations recurrent neural network models learn in this setting and present analytic tools for understanding them better. I will show to what extent these representations encode structures posited by linguists, such as phonemes, words and constructions, and explore the role of network depth in the encoding of these different levels of linguistic abstraction.

Research carried out in collaboration with Afra Alishahi, Ákos Kádár, Lieke Gelderloos and Marie Barking.