Human-Robot Interaction

User study on Human-Robot Interaction for recovering from industrial assembly problems

In the considered scenario, humans and robots share the same workspace but perform different assembly tasks. While the robot performs its tasks, problems such as the blocking of workpieces during a mating operation can arise. This raises the question of how a human can best interact with the robot to help it recover from such problematic situations. Therefore, a user study with 31 participants was performed to compare typical input devices available in industrial scenarios. Manual control pendants often contain a space mouse and a keyboard; beyond these, the robot itself can be used as an input device by applying a kinesthetic teaching approach. Additional information about the study, its results, and details of the assembly task are given in a video and a paper presented at the conference RO-MAN 2017.

Gesture-based robot control

In everyday life, large parts of the population are used to gesture-based input, e.g. on smartphones. It is therefore an obvious idea to use gesture-based input for controlling a robot.

First, a low-cost tactile surface sensor was developed and deployed on an industrial manipulator. This makes it feasible to execute touch gestures on the robot itself, so the interpretation of a performed gesture is not limited to the gesture alone: additional information, such as the robot joint on which the gesture was performed, can be used during interpretation.
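The idea that the touched joint changes the meaning of a gesture can be illustrated with a small dispatch table. The gesture names, joint names, and commands below are hypothetical examples, not the mapping used in the actual system:

```python
# Hypothetical command table: the same touch gesture maps to a
# different robot command depending on the joint it was performed on.
# ("any", …) entries apply regardless of the touched joint.
COMMANDS = {
    ("swipe", "joint_1"): "rotate_base",
    ("swipe", "joint_6"): "rotate_wrist",
    ("double_tap", "any"): "stop",
}

def interpret(gesture, joint):
    """Resolve a (gesture, joint) pair to a robot command.
    Joint-specific entries take precedence over joint-agnostic ones;
    unknown combinations yield None."""
    return COMMANDS.get((gesture, joint)) or COMMANDS.get((gesture, "any"))
```

A real implementation would feed the classifier output and the sensor segment that registered the contact into such a lookup, but the principle is the same: the joint identity acts as context for the gesture.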

Different approaches for touch-gesture recognition and robot-command generation have been developed. The first is based on invariant geometric gesture parameters and machine-learning classifiers; this approach was published at the conference ETFA 2016. A more advanced approach reduces the number of hand-crafted features by introducing a compact touch-gesture representation that is invariant w.r.t. translation, rotation, and geometric and time scaling. Using this representation, the number of samples needed to train a classifier could be reduced significantly. This approach is covered by a paper presented at the conference ICRA 2017, and a demonstration is given in a video.
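The invariances named above can be sketched with standard trajectory normalization. This is a generic illustration of how translation, rotation, scale, and time-scale invariance are typically achieved, not the compact representation from the ICRA 2017 paper; the function names are illustrative:

```python
import numpy as np

def resample(pts, n=32):
    """Time-scale invariance: resample the trajectory to a fixed
    number of points spaced evenly along its arc length."""
    pts = np.asarray(pts, dtype=float)
    d = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(pts, axis=0), axis=1)])
    t = np.linspace(0.0, d[-1], n)
    return np.column_stack([np.interp(t, d, pts[:, i]) for i in range(pts.shape[1])])

def normalize_gesture(points):
    """Normalize a 2-D touch-gesture trajectory so the result is
    invariant to translation, rotation, and uniform geometric scaling."""
    pts = np.asarray(points, dtype=float)
    # Translation invariance: shift the centroid to the origin.
    pts = pts - pts.mean(axis=0)
    # Rotation invariance: align the principal axes (via SVD/PCA).
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    pts = pts @ vt.T
    # Scale invariance: divide by the RMS distance from the origin.
    scale = np.sqrt((pts ** 2).sum(axis=1).mean())
    if scale > 0:
        pts /= scale
    return pts
```

Features computed on the normalized trajectory no longer depend on where, at what angle, how large, or how fast the gesture was drawn, which is why far fewer training samples are needed per gesture class.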

Currently, a classic robot programming approach is being compared with a gesture-based one.

User study on Kinesthetic Teaching in assembly operations

Kinesthetic teaching is widely regarded as an intuitive approach to robot programming. Much research has focused on pick-and-place tasks, while demanding assembly tasks have received less attention so far. Therefore, this user study with 78 participants focused on assembly scenarios. The study is motivated by the following considerations:

Effective physical Human-Robot Interaction (pHRI) requires the robotic system to be easy to handle, intuitive, and adaptive to human habits and preferences; the interaction must be smooth and ergonomically well designed. Unpredictable variations in lot sizes, production volumes, and product cycles pose a challenge for flexible automation. In contrast to conventional industrial robotics, where robots are programmed to accomplish a fixed, repetitive task, current scenarios demand flexible robotic systems in which the robot assists the human worker through collaboration. Instead of consolidating the robot's role as a tool that performs tasks on command, it is mutually beneficial to exploit the human's cognitive and perceptual skills to transfer knowledge from the human operator to the robot.

Although it is evident that pHRI can improve flexibility and productivity by taking advantage of the human's cognitive and perceptual skills, it is still unclear how this interaction can be made ergonomic and pleasant for the user. The main stumbling block is the substantial variation in human interaction forces, which depends on erratic factors coupled with unpredictable human behavior. In addition, each operator has different physical capabilities, such as maximum and minimum interaction forces, and varying body proportions such as height and arm length. The operator's personal preferences, such as favorite posture, the distance kept from the task, and which hand is used for the primary task, all need to be considered when designing a controller or a task.

These aspects motivate our current research on adaptive stiffness and variable impedance control schemes, which can tackle the uncertainties arising from Human-Robot Interaction and take pHRI to the next level.
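The core of such schemes is a Cartesian spring-damper (impedance) law whose stiffness gain is not fixed but updated online. The sketch below shows the standard impedance law together with a deliberately simple, hypothetical adaptation rule; the actual adaptive schemes under study are more sophisticated:

```python
import numpy as np

def impedance_force(x, x_dot, x_des, K, D):
    """Cartesian impedance control law: the commanded force pulls the
    end effector toward x_des like a spring-damper,
        F = K (x_des - x) - D x_dot
    (the desired velocity is assumed to be zero here).
    K and D are (3, 3) stiffness and damping gain matrices."""
    return K @ (x_des - x) - D @ x_dot

def adapt_stiffness(K, f_human, K_min, K_max, f_thresh=5.0, rate=0.9):
    """Toy adaptation rule (an illustrative assumption, not the
    published scheme): while the human applies a large force, lower
    the stiffness so the robot yields; otherwise let it recover.
    K is kept within the element-wise bounds [K_min, K_max]."""
    if np.linalg.norm(f_human) > f_thresh:
        return np.maximum(K * rate, K_min)
    return np.minimum(K / rate, K_max)
```

Variable impedance control generalizes this by scheduling both K and D over the task, e.g. stiff during precise insertion phases and compliant while the human guides the arm.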

A Comparison Study on Personalization and Task Specificity in Human Robot Collaboration

A user study with 49 participants was conducted to validate the importance of personalized adaptive control modes. A new control scheme was introduced: a Personalized Adaptive Stiffness controller for physical Human-Robot Interaction that is calibrated to each user's force profile, so the control mode is personalized for every single user. The study compares the new scheme with conventional fixed-stiffness and gravity-compensation controllers on the 7-DOF KUKA LWR IVb in two typical joint-manipulation tasks. The experiments suggest that for simpler tasks a standard fixed controller may perform sufficiently well, and that task dependency strongly prevails over individual differences. In the more complex task, quantitative and qualitative results clearly show differences between the control modes, with both a performance gain and a user preference for the Personalized Adaptive Stiffness controller. The study also validates the importance of considering task-specific and human-specific parameters when designing control modes for pHRI. It yielded important results that form a basis for improvements in HRI; the results and experimental details can be found in a paper published at the conference RO-MAN 2017.
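The notion of calibrating a controller to a user's force profile can be illustrated as follows. All numbers, bounds, and the mapping itself are illustrative assumptions, not the published calibration procedure:

```python
import numpy as np

def personalized_stiffness(force_samples, k_lo=50.0, k_hi=800.0):
    """Hypothetical calibration: map a user's recorded interaction-force
    profile to a stiffness gain, so users who comfortably apply larger
    forces get a stiffer robot. The population force range (2..40 N)
    and the stiffness bounds are illustrative, not measured values.
    force_samples: 1-D array of force magnitudes [N] recorded while the
    user guides the robot through a short calibration motion."""
    f = np.asarray(force_samples, dtype=float)
    # Use a robust upper bound of the user's forces (95th percentile).
    hi = np.percentile(f, 95)
    # Normalize against the assumed population range and clamp to [0, 1].
    strength = np.clip((hi - 2.0) / (40.0 - 2.0), 0.0, 1.0)
    return k_lo + strength * (k_hi - k_lo)
```

Running such a calibration once per user, before the task, is what makes the stiffness controller "personalized": two users performing the identical task interact with differently tuned dynamics.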