For any engineering system to be trusted, it must be safe. One of the most significant challenges facing designers of next-generation robots is how to make them safe and trustworthy. Robots designed to share human workspaces and physically interact with humans must be safe, yet guaranteeing safe behaviour is extremely difficult because the robot's human-centred working environment is, by definition, complex and unpredictable. It becomes even more difficult if the robot is also capable of learning and adapting. In this short talk I will argue that a key technology for tackling this problem exists. It is the robot simulator. But we need to use simulators in a radically new way. By incorporating a simulation of the robot, and of its working environment, inside the robot's control system, we can enable the robot to ask 'what if' questions about the consequences of its own actions. This approach would provide the system with a level of functional self-awareness, and might be the best way to build future adaptive systems that are safe and trustworthy.
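A minimal sketch of this internal-simulation loop, with every name and the toy world dynamics invented purely for illustration: before committing to an action, the controller runs each candidate through its internal model, discards actions whose predicted outcome is unsafe, and picks the safest of the rest.

```python
# Hypothetical 'what if' loop: the robot carries a simulation of itself and
# its environment inside its own control system, and queries it before acting.
# All function names and the one-dimensional world are illustrative assumptions.

def simulate(world_state, action):
    """Stand-in internal simulator: predicts the next world state.
    Here the action simply moves the robot along one axis."""
    robot_x, hazard_x = world_state
    return (robot_x + action, hazard_x)

def safety_score(world_state):
    """Higher is safer: distance between the robot and the hazard."""
    robot_x, hazard_x = world_state
    return abs(robot_x - hazard_x)

def choose_action(world_state, candidate_actions, min_safe_distance=2):
    """Ask 'what if?' for every candidate action, keep only those whose
    simulated consequence stays safe, then pick the safest of them."""
    safe = [(safety_score(simulate(world_state, a)), a)
            for a in candidate_actions
            if safety_score(simulate(world_state, a)) >= min_safe_distance]
    if not safe:
        return 0  # no candidate is predicted safe: fall back to doing nothing
    return max(safe)[1]

# Robot at position 0, hazard at position 2: moving +1 gets dangerously close,
# so the internal simulation steers the robot the other way.
print(choose_action((0, 2), [-1, 0, 1]))  # -> -1
```

The point of the sketch is the architecture, not the physics: the safety check happens on *simulated* consequences, before any real actuation.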

The human capacity for sensorimotor learning allows us to learn efficiently to use new tools and to master control tasks. Realizing this ability in complex systems such as robots remains elusive. One promising way to approach this problem is to transfer human sensorimotor skills to robots using a human-in-the-control-loop setup. The idea is to treat the target robotic platform as a tool that can be controlled by a human. Provided with an intuitive interface for controlling the robot, the human learns to perform a given task using the robot. After sufficient learning, the human's skilled control of the robot provides data points from which an autonomous controller can be obtained, so that the robot can perform the task without human guidance. The feasibility of this framework is supported by neuroscientific findings on body schema, and it has been shown to work in several robot skill-synthesis scenarios. From an engineering point of view, the approach relies on techniques from teleoperation and machine learning, and shares its goals with robot learning by demonstration. The key difference is that the proposed framework includes the human in the control loop and employs the human brain as the adaptive controller for accomplishing the given task. Once control proficiency has been attained, the data generated by the human's performance allows the human policy to be transferred to the robot. In this talk, I will introduce the human-in-the-loop framework and outline the current challenges that must be tackled to achieve wide impact in the development of adaptive systems for complex environments.
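The policy-transfer step can be sketched as follows; the data, the linear policy and all names are assumptions for illustration, not the framework's actual implementation. Once the human has become proficient through the teleoperation interface, the logged (state, action) pairs are fit with a regressor, yielding a controller that imitates the skilled operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log of human-in-the-loop demonstrations:
# states are 2-D sensor readings, actions are 1-D motor commands.
states = rng.uniform(-1.0, 1.0, size=(200, 2))
true_policy = np.array([0.8, -0.5])   # the skill the human operator converged to
actions = states @ true_policy + rng.normal(0.0, 0.01, size=200)  # noisy human output

# Fit a linear policy by least squares: action ~ state @ w
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The learned controller can now act without the human in the loop.
new_state = np.array([0.5, 0.2])
autonomous_action = new_state @ w
print(np.round(w, 2))  # recovers approximately [0.8, -0.5]
```

In practice the regressor would be far richer than a linear map, but the shape of the transfer is the same: human skill in, autonomous policy out.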

Operational natural language processing systems are becoming more and more popular and are reaching more types of users: individual users, through the Siri personal assistant on the iPhone or the S-Translator on Samsung Galaxy phones providing travellers with translation tools; and professional users, such as technical-documentation translators who rely on the accuracy and fluency of automated translation to accelerate their work through post-editing. These systems face one of their most challenging problems: the variety of user inputs and the variety of user expectations. Think of the Siri personal assistant, whose goal is to handle seamlessly a huge variety of pronunciations, voice tones and elocution speeds; of the S-Translator, which must deal with whatever "chat" variants each user invents to communicate; or of the human translator, who expects the system to follow his or her own specific style and terminology requirements. Usually trained on huge volumes of data, these systems need to adapt consistently to each specific user from the tiny amounts of implicit feedback they obtain from that user's interaction with the system. Beyond their overall performance, and based on a specific application, we will show why such systems need to integrate adaptive features to keep being qualified, and used, as language assistants.
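As a toy illustration of adaptation from implicit feedback (not any product's actual mechanism): when a translator post-edits the system's draft, the word-level substitutions they made can be remembered as per-user terminology preferences and applied to that user's future drafts.

```python
from difflib import SequenceMatcher

def learn_from_postedit(overrides, system_output, user_edit):
    """Record word-for-word replacements the user made to the system's draft."""
    sys_words, edit_words = system_output.split(), user_edit.split()
    matcher = SequenceMatcher(None, sys_words, edit_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        # keep only clean one-word-for-one-word substitutions
        if op == "replace" and (i2 - i1) == 1 and (j2 - j1) == 1:
            overrides[sys_words[i1]] = edit_words[j1]
    return overrides

def apply_overrides(overrides, system_output):
    """Adapt a new draft using this user's learned terminology preferences."""
    return " ".join(overrides.get(w, w) for w in system_output.split())

user_prefs = {}
learn_from_postedit(user_prefs, "open the setup menu", "open the configuration menu")
print(apply_overrides(user_prefs, "close the setup menu"))
# -> "close the configuration menu"
```

Real adaptive translation systems work on model parameters and translation memories rather than word tables, but the feedback loop is the same: each user's implicit corrections personalise that user's system.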

Joanna Bryson is an associate professor in the Department of Computer Science at the University of Bath (England).

Prof. Bryson's principal scientific passion is understanding human behaviour, human culture and, more broadly, natural intelligence. Her main methodology is designing intelligent systems to model and test scientific theories: she builds theories of intelligence into cognitive systems — working AI models. Most (but not all) of her research has focused on the unintentional and non-linguistic aspects of human intelligence: understanding primate behaviour and action selection, and how consciousness, religion, economics (particularly costly punishment), language and other social behaviours have evolved in humans and (where appropriate) in other, non-primate social species, including even bacteria. Her applications span a variety of domains besides science, including cognitive robotics, computer-game characters and intelligent environments / "smart homes".

What will it take until mobile domestic assistants are as ordinary as washing machines? Both the discipline of robotics (as demonstrated by industrial robotics) and those of machine learning and vision (as demonstrated by driverless cars) have advanced to the point that reliability is not the main issue. The current problems are 1) systems integration and 2) the business case. Starting with the latter, I suggest "domestic" robots will invade as PCs did: not by replacing low-wage workers completely, but by making other low-wage workers more productive. The PC spread when it became cheaper than the annual salary of a secretary, and it enabled a lead secretary to do the work of two or three. Once popular, PCs became even cheaper, and were then adopted for private ownership. Currently, mobile, human-sized industrial robots with force control cost about €30,000. We need to make these into viable domestic assistants and office cleaners, working under the direction of lead cleaners. Such robots will only be accepted if, on a sufficient number of tasks, they simplify the supervisor's management task compared with directing human assistants. For example, a robot must learn the requirements and affordances of a particular house – what it is allowed to touch, what it must clean. It should also be able to generalise – it should recognise similar objects and offer to clean them in similar ways, and eventually not need to ask. In the final few minutes, I will also review the UK's robot ethics code, the EPSRC Principles of Robotics.