Session: "Explaining and explainable systems"

Paper abstract:
Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. To overcome this limitation, this paper contributes a visual recommender system called MOOCex. The system recommends lecture videos across different courses by considering both video contents and sequential inter-topic relationships mined from course syllabi; and more importantly, it allows for interactive visual exploration of the semantic space of recommendations within a learner's current context. When compared to traditional methods (e.g., content-based recommendation and ranked list representations), MOOCex suggests videos from more diverse perspectives and helps learners make better video playback decisions. Further, feedback from MOOC learners and instructors indicates that the system enhances both learning and teaching effectiveness.
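The content-based recommendation baseline the abstract compares against can be illustrated with a minimal sketch: rank candidate videos by cosine similarity of bag-of-words vectors built from their textual content. The function names, preprocessing, and `top_k` parameter here are illustrative assumptions, not part of MOOCex.

```python
from collections import Counter
import math

def tf_vector(text):
    """Bag-of-words term-frequency vector (illustrative preprocessing)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(current_video, candidates, top_k=3):
    """Rank candidate video descriptions by similarity to the current one."""
    cur = tf_vector(current_video)
    scored = sorted(candidates, key=lambda c: cosine(cur, tf_vector(c)), reverse=True)
    return scored[:top_k]
```

A system like MOOCex goes beyond this baseline by also weighting sequential inter-topic relationships mined from syllabi, which this sketch omits.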

CraftML: 3D Modeling is Web Programming

Paper abstract:
We explore web programming as a new paradigm for programmatic 3D modeling. Most existing approaches subscribe to the imperative programming paradigm. While useful, there exists a gulf of evaluation between procedural steps and the intended structure. We present CraftML, a language providing a declarative syntax where the code is the structure. CraftML offers a rich set of programming features familiar to web developers of all skill levels, such as tags, hyperlinks, the document object model, cascading style sheets, jQuery, string interpolation, a template engine, data injection, and scalable vector graphics. We develop an online IDE to support CraftML development, with features such as live preview, search, module import, and parameterization. Using examples and case studies, we demonstrate that CraftML offers a low floor for beginners to make simple designs, a high ceiling for experts to build complex computational models, and wide walls to support many application domains such as education, data physicalization, tactile graphics, assistive devices, and mechanical components.

Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda

Paper abstract:
Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal.
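The co-occurrence analysis mentioned in the abstract can be sketched in a few lines: count how often pairs of topic keywords appear together across papers, then keep frequent pairs as edges of a domain graph. The function names and the edge threshold below are hypothetical, not taken from the paper.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(papers):
    """Count how often pairs of topic keywords appear in the same paper.

    `papers` is an iterable of keyword sets, one per paper.
    """
    counts = Counter()
    for keywords in papers:
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
    return counts

def strongly_linked(counts, threshold=2):
    """Keep only pairs co-occurring at least `threshold` times (graph edges)."""
    return {pair: n for pair, n in counts.items() if n >= threshold}
```

Domains that survive the threshold form the "closely connected" clusters; keywords with no surviving edges correspond to the "mostly isolated" domains the paper identifies.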

I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence

Paper abstract:
Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user–AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.