The Cobot Experience: Changliu Liu & The Difference Between Technology and Fantasy

Carnegie Mellon's Changliu Liu talks cobot safety, the importance of having realistic expectations of what cobots can and can't do, and her vision of a manufacturing world after cobots.

Changliu Liu is an assistant professor at Carnegie Mellon University, where she leads the Intelligent Control Lab. Her research interests include human-robot interaction (HRI) in manufacturing environments, cobot safety, and enhancing human-robot collaboration (HRC) through control and planning of robot behavior and through learning and prediction of human behaviors. With a special focus on HRI and cobot safety in manufacturing, Liu also envisions, and is helping to build, a future beyond cobots.

Liu kindly agreed to be interviewed about her research and the importance of having realistic expectations of collaborative robots.

The Interview

EC: Humans and robots interact in all sorts of scenarios, from elder care to surgery and bomb disposal. With so many opportunities for researchers, what attracted you to the study of human-robot interaction in manufacturing?

CL: From my earliest college studies in mechanical engineering, I've been fascinated by the idea of robots with human-like intelligence that can function as true colleagues. Developing such robots involves understanding how humans and robots interact with each other.

The best place to study these interactions is in industry and manufacturing, where automation has enjoyed the most popularity and success.

I'm also very interested to see how technology and society will evolve following the introduction of robots with human-like intelligence.

EC: 'Human-like intelligence' seems a world away from today's industrial robots and cobots...

CL: Currently, we are not worried that robots are too smart. We are worried that robots are not smart at all. [Laughs.]

EC: True. [Laughs.] But aren't the limited capabilities of cobots (e.g., payload, speed, and force) what makes them safe to work with?

CL: In terms of current technology, yes. But in terms of their future capabilities, no. Today's cobots are safe. But to make them safe, they are designed not to be as capable or fast as traditional industrial robots. Understanding this trade-off between functionality and safety is key to having a realistic understanding of cobots.

EC: What makes for a realistic understanding of cobots?

CL: One of our goals as a research group is to make sure that people are not overoptimistic about what cobots can do. End users shouldn't view cobots as 'superhumans.' At the same time, people should not be afraid of cobots in terms of personal safety risk.

EC: Are end users generally best served by viewing their cobot as a colleague, a tool, a form of prosthesis, or some other category?

CL: I think it's best for end users to see cobots as something between a colleague and a tool. Cobots are definitely more colleague-like than traditional robots, which are typically seen as tools, but cobots are not colleagues to the same level as a human. So, something in-between.

EC: What can be done to build realism and trust between humans and cobots?

CL: The first step is training and education. End users need to understand how the robot works. Human workers need a realistic understanding of the capabilities and limitations of cobots, so that they know what the cobot can and can't do.

The second step is to get on and work with the cobot for a while before reaching judgment. In this context, cobots can be thought of as tools. You only get used to any tool after playing with it multiple times. If you don't spend time gaining firsthand experience with it, I don't think trust will develop.

****

Here, Changliu Liu is training and testing a safety controller for cobots...

****

EC: Trust and maintaining a sense of personal safety go together. How do you approach cobot safety issues?

CL: We divide end-user protection into two main stages. The first is 'interactive safety.' This is the stage where we aim to prevent a collision from happening. In this stage, cobots should be smart enough to correctly perceive human motion and to plan and control their own motion accordingly.

FANUC's CR series of cobots, for example, come with force and speed limiting. An additional software package allows human integrators and end users to specify a soft or virtual fence for the cobot. This invisible, software-defined fence allows you to limit the motion of the cobot, so you can instruct it to slow down when a human enters the area.
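The speed-limiting behavior of a virtual fence can be sketched in a few lines. This is a hypothetical illustration of the idea, not FANUC's actual soft-fence software; the zone shape, speed values, and function names are all assumptions made for the example.

```python
# Hypothetical sketch of a software-defined ("virtual") fence.
# The cobot runs at full speed while the human is outside a
# rectangular zone and slows down when the human enters it.

def speed_limit(human_pos, fence_min, fence_max,
                full_speed=1.0, slow_speed=0.2):
    """Return the allowed speed scale for the cobot.

    human_pos  -- (x, y) position of the human worker
    fence_min  -- (x, y) lower corner of the fenced zone
    fence_max  -- (x, y) upper corner of the fenced zone
    """
    x, y = human_pos
    inside = (fence_min[0] <= x <= fence_max[0] and
              fence_min[1] <= y <= fence_max[1])
    return slow_speed if inside else full_speed

# Human well outside the zone: cobot may run at full speed.
print(speed_limit((5.0, 5.0), (0.0, 0.0), (2.0, 2.0)))  # 1.0
# Human inside the zone: cobot slows down.
print(speed_limit((1.0, 1.0), (0.0, 0.0), (2.0, 2.0)))  # 0.2
```

In a real integration the zone would be defined in the robot controller's safety software and tied to certified sensing, but the core logic, mapping human position to an allowed speed, is the same.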

If the first stage of end-user safety fails and a collision between human and robot occurs, we enter the second stage, 'intrinsic safety.' Here it is crucial to ensure that only limited force is applied to the human, certainly not enough to create any risk of severe injury.

So, cobots are safe, but all types of safety need to be defined in certain contexts and the application is key.

EC: A cobot handling a sharp blade, for example, presents a greater risk than a cobot that's handling small plastic parts?

CL: Exactly.

EC: How important are ISO guidelines and standards when it comes to cobot safety?

CL: They are very important because they specify exactly how much contact speed and force is allowed when there is a collision between a human and a robot in different scenarios. Standards help cobot makers and provide end users with a sense of security. Cobot manufacturers are doing very well in getting their robots to satisfy ISO standards.

EC: What advice do you have for end users, especially those that have not experienced a cobot up close before?

CL: To really get a good understanding of cobots, you have to spend time with them. There are opportunities for enthusiastic end users to gain hands-on experience of cobots outside of manufacturing facilities too, through cobot makers and integrators offering participation in human subject tests and through university and technical college open houses and demos.

****

In our second video from the Intelligent Control Lab, a cobot yields and then quickly returns to collaborative behavior as a human behaves somewhat randomly in its vicinity...

****

EC: When asked if there was one thing that could send robotics in a completely new direction, Universal Robots co-founder and CTO Esben Østergaard replied “liquid steel.” You also envision a world far beyond today's cobots, with your work focusing on the software side, on the “human-like intelligence” that will enable enhanced collaboration between manufacturing workers and robots in future. What's going on with cobot advocates?!

CL: I think cobots will do really well in manufacturing over the next few decades, but I don't think cobots, like the ones we find in factories today, are the ultimate end-goal for industrial automation. When robots eventually get more intelligent they will be able to handle all the jobs without collaborating with humans. Cobots will prove to be a great solution for the next 20 or 30 years, until more versatile robots can take over.

EC: 20-30 years is a long time in terms of manufacturing technologies though, isn't it?

CL: Yes. And during those years, cobots will learn more from their human colleagues until automation finally takes over.

EC: How are you and your colleagues helping to design cobots with human-like intelligence?

CL: We're looking at ways to make cobots autonomous and human-like so they can be like a real colleague to human workers.

To make this happen we need cobots to have a high level of intelligence; not just the intelligence to slow down and stop when a human gets too close, but the intelligence to even more effectively interact and collaborate with humans on different industrial tasks.

So, we need a lot of data, especially data about normal human behavior in factories. Using deep-learning methods, we work a lot on trying to predict human behavior and to safely control robot motion based on those predictions.

To build models of human behaviors using a data-driven approach, we ask human subjects to come to the lab and perform the different types of motions that they would perform on production lines. We record those motions and use deep-learning methods to learn the behavior patterns. We then apply this model in real-time prediction and motion control.
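The record-predict-control pipeline described above can be sketched at a toy scale. The lab uses deep-learning models trained on recorded human motion; here a constant-velocity predictor stands in as a placeholder, so this shows only the overall structure (predict the human's next position, then adjust robot speed), not the lab's actual method. All function names and thresholds are assumptions for illustration.

```python
# Toy version of a predict-then-control loop for human-robot
# collaboration. A constant-velocity model stands in for the
# deep-learning predictor used in real research systems.

def predict_next(trajectory):
    """Predict the human's next (x, y) position from recorded samples,
    assuming roughly constant velocity over the last two samples."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def safe_speed(predicted_pos, robot_pos, slow_radius=1.0):
    """Scale robot speed down when the predicted human position
    falls within slow_radius of the robot."""
    dx = predicted_pos[0] - robot_pos[0]
    dy = predicted_pos[1] - robot_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return 0.2 if dist < slow_radius else 1.0

# Example: recorded samples show a human walking toward the robot.
trajectory = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
nxt = predict_next(trajectory)        # predicted next position: (3.0, 0.0)
speed = safe_speed(nxt, (3.0, 0.0))   # predicted proximity -> slow down
print(speed)                          # 0.2
```

The value of prediction is visible even in this toy: the robot slows down before the human arrives, rather than reacting only once they are already close.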

(Note: The interview was edited for length and clarity. It was conducted for educational purposes and the views expressed therein are those of the expert and do not necessarily reflect those of Robotiq.)

A freelance robotics writer since 2006, Emmet is an Economist contributor, and a regular contributor to Robotics Business Review and Robotics Trends. His writing on robots has also appeared in Wired, BBC Future, BBC Focus magazine, Space Quarterly, and numerous other outlets.