FIRST WORKSHOP ON NATURAL LANGUAGE INTERFACES (CHALLENGES AND PROMISES)

Held in conjunction with ACL 2020 July 10th, Seattle, Washington

Introduction

Natural language interfaces (NLIs) have been the "holy grail" of human-computer interaction and information search for decades. However, early attempts at building NLIs to databases did not achieve the expected success, due to limitations in language understanding, extensibility, and explainability, among other factors. The last five years have seen a major resurgence of NLIs in the form of virtual assistants, dialogue systems, and semantic parsing and question answering systems. The horizon of NLIs has also expanded significantly beyond databases to, e.g., knowledge bases, robots, the Internet of Things, Web service APIs, and more.

This resurgence has been driven by two profound shifts: (1) In the big data era, as digitalization continues to grow, there is a rapidly increasing demand for interfaces that connect users to the ever-expanding data sources, services, and devices of the computing world. NLIs are a promising technology for this purpose because they give users a unified way to interact with the entire computing world through language, their natural means of communication. (2) The renaissance of deep learning has moved the field from rule and feature engineering to neural architectures and data engineering, promising better language understanding, adaptability, and scalability. As a result, many commercial systems such as Amazon Alexa, Apple Siri, and Microsoft Cortana, as well as academic studies on NLIs to a wide range of backends, have emerged in recent years.

Many research communities have been advancing NLI technologies in recent years: NLP and machine learning, data management and databases, programming languages, and human-machine interaction, among others. This workshop aims to bring together researchers and practitioners from these communities to review recent advances, revisit the challenges that led to the failure of earlier NLI systems, and discuss which challenges remain and what to expect in the short- and long-term future.

Call For Papers

This workshop aims to bring together researchers and practitioners from different communities related to NLIs. As such, the workshop welcomes and covers a wide range of topics around NLIs, including (non-exclusively):

Linguistic analysis and modeling. What are the linguistic characteristics of human-machine interaction via NLIs? How to develop better models to accommodate and leverage such characteristics?

Interactivity, continuous learning, and personalization. How to enable NLIs to interact with users to resolve the knowledge gaps between them, for better accuracy and transparency? Can NLIs learn from interactions to reduce human intervention over time? How can NLIs (learn to) be customized and adapt to user preferences? Relevant topics include interaction design, faithful generation, learning from user feedback, and online learning.

Data collection and crowdsourcing. Modern machine learning models are data-hungry, while data collection for NLIs is particularly expensive because of the domain expertise needed for formal meaning representation and grounding. How to collect data for NLIs at scale and at low cost?

Scalability, adaptability, and portability. How to construct NLIs that can reliably and efficiently operate at a large scale (e.g., on billion-scale knowledge graphs)? How to construct NLIs that can simultaneously support multiple inter-connected domains of possibly different nature? How to transfer knowledge learned from existing domains to help learning in new domains?

Explainability and trustworthiness. How to make the reasoning process and the results explainable and trustworthy to users? How to help users understand how an answer is obtained or a command is executed?

Privacy. How to ensure NLIs are compliant with privacy constraints? How to train, monitor, and debug NLIs within the compliance boundary?

Evaluation and user study. How to systematically evaluate the different usability aspects of an NLI as perceived by users? What are the protocols for conducting a reproducible user study? Is there a significant gap between in vitro and in vivo evaluation, and how can it be bridged?

Submission Guidelines

We welcome two types of papers: regular workshop papers and cross-submissions. Only regular workshop papers will be included in the workshop proceedings. All submissions should be in PDF format and made through the Softconf website set up for this workshop (https://www.softconf.com/acl2020/nli/).

In line with the ACL main conference policy, camera-ready versions of papers will be given one additional page of content.

Regular workshop papers: Authors should submit either a long paper of up to 8 pages or a short paper of up to 4 pages, in both cases with unlimited pages for references (references only; any appendix must be included in the main text and counts toward the page limit), following the ACL 2019 formatting requirements (see the ACL 2020 Call For Papers for reference: https://acl2020.org/calls/papers/). The reported research should be substantially original. All submissions will be reviewed in a single track, regardless of length. Accepted papers will be presented as posters, and the best papers may be given the opportunity for a brief talk to introduce their work. Reviewing will be double-blind, so no author information should be included in the papers; self-references that identify the authors should be avoided or anonymised. Accepted papers will appear in the workshop proceedings.

Cross-submissions: In addition to previously unpublished work, we also solicit papers on relevant topics that have appeared in a non-NLP venue (e.g., workshop or conference papers at NeurIPS/ICML/AAAI/SIGKDD/ICRA/VLDB/WWW/SIGIR/ISWC/SIGCHI, among others). Accepted cross-submissions will be presented as posters, with an indication of the original venue, but will not be included in the workshop proceedings. Cross-submissions are ideal for related work that would benefit from exposure to the NLI audience. Submission length is determined by the original venue. Interested authors should submit their papers in PDF format through the NLI Softconf website (https://www.softconf.com/acl2020/nli/), with a note on the original venue. Papers in this category do not need to follow the ACL format, and selection will be determined solely by the organising committee.

Important Dates

Workshop Paper Due Date: April 20, 2020 (extended from April 6, 2020 due to the pandemic)

Notification of acceptance: May 11, 2020 (extended from May 4, 2020)

Camera-ready papers due: May 22, 2020 (extended from May 18, 2020)

Workshop date: July 10, 2020

Keynote Speakers

Joyce Chai is a Professor in the Electrical Engineering and Computer Science Department at the University of Michigan. Previously, as a Professor at Michigan State University, she received the William Beal Outstanding Faculty Award in 2018. She holds a Ph.D. in Computer Science from Duke University and was a Research Staff Member at the IBM T. J. Watson Research Center. Her research interests include natural language processing, situated dialogue agents, human-robot communication, artificial intelligence, and intelligent user interfaces. Her recent work focuses on situated language processing to facilitate natural communication with robots and other artificial agents. She served as Program Co-chair for the Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) in 2011, the ACM International Conference on Intelligent User Interfaces (IUI) in 2014, and the Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL) in 2015. She received a National Science Foundation CAREER Award in 2004 and the Best Long Paper Award at the Annual Meeting of the Association for Computational Linguistics (ACL) in 2010.

Monica Lam has been a Professor in the Computer Science Department at Stanford University since 1988. She is the faculty director of the Open Virtual Assistant Lab (OVAL). She received a B.Sc. from the University of British Columbia in 1980 and a Ph.D. in Computer Science from Carnegie Mellon University in 1987. Monica is a Member of the National Academy of Engineering and a Fellow of the Association for Computing Machinery (ACM). She is a co-author of the popular text Compilers: Principles, Techniques, and Tools (2nd Edition), also known as the Dragon Book. She received an NSF Young Investigator Award in 1992, the ACM Most Influential Programming Language Design and Implementation Paper Award in 2001, an ACM SIGSOFT Distinguished Paper Award in 2002, and the ACM Programming Language Design and Implementation Best Paper Award in 2004. She authored two of the papers in "20 Years of PLDI--a Selection (1979-1999)" and one paper in the "25 Years of the International Symposia on Computer Architecture". She received the University of British Columbia Computer Science 50th Anniversary Research Award in 2018.

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

Luke Zettlemoyer is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and a Research Scientist at Facebook. His research focuses on empirical methods for natural language understanding, and involves designing machine learning algorithms and building large datasets. Honors include multiple paper awards, a PECASE award, and an Allen Distinguished Investigator Award. Luke received his PhD from MIT and was a postdoc at the University of Edinburgh.