To behave in a believable and realistic way, a social agent needs a wide range of information, from low-level value-based data to high-level semantic knowledge. In this work we propose a system that places a virtual reality layer between the real world and an agent's knowledge representation. This mirror world allows the agent to use its abstract representation of the environment and inferred events as an additional source of knowledge when reasoning about the real world. Additionally, users and developers can use the mirror world, with its visualized data and highlighting of the agent's reasoning, to better understand the agent's behavior, to debug and test it, or to simulate additional sensor input.

Some recent approaches in context-aware systems deploy ontologies to benefit from their expressive power and extensive availability within the World Wide Web. However, wide-ranging ontologies tend to grow into large and complicated constructs, making them difficult to maintain or reason over. In this work, we present a lightweight system design that combines the advantages of minimal, distributed, or modularized ontologies with the computational power of a state-of-the-art real-time interactive system. To this end, we introduce a simple data structure called blueprints, which describes various reasoning operations to allow the dynamic integration of domain-specific knowledge for time-critical tasks, e.g., in multi-agent systems. Following this concept, we formulate four major use cases, described through exemplary problems and proposed solutions. The presented design aims at portability and adaptability while maintaining real-time capability.
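The blueprint idea can be illustrated with a minimal sketch. All names here (the `Blueprint` class, its fields, and the example kitchen domain) are hypothetical illustrations, not the paper's actual implementation: a blueprint bundles a small set of domain concepts with only the reasoning rules relevant to that domain, so an agent can load and forward-chain over a tiny, self-contained module instead of one monolithic ontology.

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """A minimal, loadable bundle of domain concepts and reasoning rules."""
    domain: str
    concepts: set
    # Rules are (premise-set, conclusion) pairs for simple forward chaining.
    rules: list = field(default_factory=list)

    def infer(self, facts):
        """Forward-chain over this blueprint's rules only (no global ontology)."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in self.rules:
                if premise <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

# Hypothetical example: a tiny "kitchen" blueprint the agent loads on demand.
kitchen = Blueprint(
    domain="kitchen",
    concepts={"Stove", "Pot", "Water"},
    rules=[({"stove_on", "pot_on_stove"}, "pot_hot"),
           ({"pot_hot", "water_in_pot"}, "water_boiling")],
)

print(kitchen.infer({"stove_on", "pot_on_stove", "water_in_pot"}))
```

Because each blueprint is small and self-contained, reasoning stays bounded and fast enough for time-critical tasks, and new domains can be added or swapped without touching the rest of the knowledge base.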

Today's physics engines mainly simulate classical mechanics and rigid body dynamics, with recent advances also enabling massive particle systems and some approximations of fluid dynamics. An accurate numerical simulation of complex non-mechanical processes in real-time is beyond the state-of-the-art in the respective fields. This article illustrates an alternative to a purely numerical solution: it uses a semantic representation of physical properties and processes, together with a reasoning engine, to model cause and effect between objects based on their material properties. Classical collision detection is combined with semantic rules to model various physical processes, e.g., in the areas of thermodynamics, electrodynamics, and fluid dynamics, as well as chemical processes. Each process is broken down into fine-grained sub-processes capable of approximating continuous transitions with discretized state changes. Our system translates these high-level state descriptions into low-level value changes, which are directly mapped to a graphical representation of the scene. We demonstrate our framework's ability to support multiple complex, causally connected physical and chemical processes by simulating a Goldberg machine. Our performance benchmarks validate its scalability and potential application for entertainment or edutainment purposes.
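The combination of collision detection with semantic rules can be sketched roughly as follows. This is an illustrative toy, not the paper's framework: the material table, rule functions, and all coefficients are made up, and the "conduction" step is a discretized approximation rather than a physically accurate solver. The point is the structure: a contact pair reported by classical collision detection is matched against semantic rules keyed on material properties, and a matching rule emits a discrete state change (here, ignition) driven by accumulated low-level value changes (here, temperature).

```python
# Hypothetical material properties used by the semantic rules.
MATERIALS = {
    "steel_ball": {"conductivity": 50.0, "flammable": False},
    "wood_plank": {"conductivity": 0.1, "flammable": True},
}

def heat_transfer_rule(a, b, state, dt):
    """Discretized conduction step between two touching objects.

    The coupling coefficient is purely illustrative, not a physical constant.
    """
    coupling = 0.5 * (MATERIALS[a]["conductivity"] + MATERIALS[b]["conductivity"])
    delta = coupling * (state[a]["temp"] - state[b]["temp"]) * dt
    state[a]["temp"] -= delta
    state[b]["temp"] += delta

def ignition_rule(a, b, state, dt):
    """High-level state change: a flammable object above a threshold ignites."""
    for obj in (a, b):
        if MATERIALS[obj]["flammable"] and state[obj]["temp"] > 300.0:
            state[obj]["burning"] = True

RULES = [heat_transfer_rule, ignition_rule]

def on_contact(a, b, state, dt=0.016):
    """Called by classical collision detection for each persistent contact pair."""
    for rule in RULES:
        rule(a, b, state, dt)

# A hot steel ball rests on a wooden plank for a number of frames.
state = {"steel_ball": {"temp": 600.0, "burning": False},
         "wood_plank": {"temp": 20.0, "burning": False}}
for _ in range(10):
    on_contact("steel_ball", "wood_plank", state)
print(state["wood_plank"]["burning"])  # → True
```

New processes are added by registering further rules, so causally connected chains (heating, then ignition, then, say, a chemical reaction) emerge from independent fine-grained sub-processes rather than from one monolithic solver.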

This paper introduces an interactive surface concept for Mixed Reality (MR) tabletop games that combines a variable (LCD and/or projection) screen configuration with the detection of finger touches, in-air gestures, and tangibles. It is low-cost and minimally requires an ordinary table, a TV screen, and a Kinect v2 sensor. Existing applications can easily be connected by complying with standards. The concept is intended to foster further research on collaborative tabletop situations, not limited to games, but also including learning, meetings, and social interaction.
