The Future of Cobots: Adaptive Thought Control

Cobots are now so integrated into business that it’s something of a cliche to compare this new generation of automation to the clumsy, cage-bound robotics common two decades ago. Moving machines from sequestered danger zones to shared workspaces is now nothing new, and the future of cobots will likely depend on integrating humans and cobots safely and effectively.

To understand the future of cobots and predict the next generation of automation, we need to look forward rather than back. In this context, it’s worth noting that the central idea behind cobots is likely to stand the test of time: by allowing human beings and machines to work together in the same space, we harness the creativity and problem solving of people and the repeatability and tireless precision of robotics.

Business takeaways:
• The future of cobots will see widespread usage among manufacturers of all sizes, as prices continue to fall
• Better machine learning and the ability to recognize human actions will bring increased safety to cobots
• Cobots won’t contribute to increased unemployment, as they serve to assist human workers rather than take away their jobs

In tasks like the final assembly of cars, judgment and decision-making are necessary, but a range of repetitive, predictable tasks remain, opening space for cobots to work alongside human beings. In short, despite their popularity, cobots have only begun to populate the world of work.

The Growing Market for Cobots

By Barclays' estimate, the cobot market, led by Universal Robots, Rethink Robotics, and FANUC, will be worth a staggering $3.1 billion by 2020. This growth is driven in no small part by declining prices for these systems, which have fallen by a steady 3-5% per year. From an average of about $28,000 in 2015, the price of a cobot is expected to drop to roughly $17,500 by 2025.
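These figures imply a steady compound decline. As a quick back-of-the-envelope check (the prices above are the only inputs; everything else is arithmetic), the implied annual rate can be computed as:

```python
# Back-of-the-envelope check: what constant annual price decline takes an
# average cobot from ~$28,000 (2015) to ~$17,500 (2025)?
start_price, end_price, years = 28_000, 17_500, 10

# Solve end = start * (1 - r)^years for r
annual_decline = 1 - (end_price / start_price) ** (1 / years)

print(f"Implied annual decline: {annual_decline:.1%}")  # ~4.6%
```

A constant decline of roughly 4.6% per year takes a $28,000 system to about $17,500 over a decade, squarely within the 3-5% range cited.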

Though critics have been skeptical of sales figures, these unit prices are impossible to ignore against the background of increasingly expensive labour. With the hourly cost of labour at approximately $12 in the U.S. and £7.40 in the U.K. and still rising, growing interest in automation is all but guaranteed.

Moreover, the range of tasks this new generation of automation can tackle is impressive. Whether handling fragile samples in a laboratory, small items for packaging, or assuming the truly heavy lifting on assembly lines, the future of cobots will see humans working with these systems, rather than being replaced by them, a fact sure to ease the fears of human employees.

A great example is DHL. In April 2017, it announced that its Tennessee facility had partnered with Locus Robotics to assist its pickers with order fulfillment. Rather than pushing a bin or cart, pickers work alongside Locusbots, which help them locate and collect items for shipment. DHL expects this to increase picker speed and reduce mistakes, leading to a more efficient supply chain, while making the pickers' work easier and less taxing.

Making Cobots Safer

As the success of automation has illustrated, though, it's not enough to be broadly useful; our mechanical coworkers also need to be safe if they're to share space with people. As expected, the broad implementation of cobotics has spurred critical attention to safety. ISO/TS 15066 requires that this new generation of automation include robust safety measures, and the two provisions most often implemented in cobotics are power and force limiting and speed and separation monitoring.
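To give a feel for how speed and separation monitoring works in practice, here is a deliberately simplified sketch in Python. The distance formula and thresholds below are illustrative assumptions, not the full ISO/TS 15066 protective-distance calculation, which includes further terms for sensor uncertainty and intrusion distance:

```python
# Simplified sketch of speed and separation monitoring. The figures and
# thresholds here are illustrative only, not the ISO/TS 15066 formula.

def protective_distance(human_speed, robot_speed, reaction_time, stop_time):
    """Rough minimum separation (metres) to maintain between human and robot."""
    human_travel = human_speed * (reaction_time + stop_time)   # human keeps moving
    robot_travel = robot_speed * reaction_time + 0.5 * robot_speed * stop_time  # robot decelerating
    return human_travel + robot_travel

def monitor(separation, human_speed, robot_speed, reaction_time=0.1, stop_time=0.3):
    """Return the action the controller should take for the current separation."""
    required = protective_distance(human_speed, robot_speed, reaction_time, stop_time)
    if separation < required:
        return "stop"            # protective stop: human is too close
    if separation < 1.5 * required:
        return "slow"            # reduce speed as the human approaches
    return "run"                 # full speed is safe

print(monitor(0.5, human_speed=1.6, robot_speed=1.0))  # → stop
```

The point of the sketch is the shape of the logic: the robot continuously compares measured separation against a distance budget that accounts for both parties' motion during the system's reaction and braking time, and degrades gracefully from full speed to slow to a protective stop.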

But for cobots to be truly safe, and for people to work efficiently with them, this isn't enough.

Recent research from Przemyslaw Lasota and Julie Shah at MIT has illustrated that by teaching cobots to recognise the actions of their human coworkers, automation can avoid actions that might confound the people working alongside it. In these tests, using a PhaseSpace motion capture system coupled with force limitation and separation monitoring, an ABB IRB-120 cobot was able to sense and adjust to the presence of its human companions, making their shared work both faster and safer.

This research demonstrated that working in tandem with robots that were able to recognise and predict human behaviour also allowed the human employees to feel safer and more satisfied with their interaction with the machines.

No less importantly, however, a cobot's flesh-and-blood coworkers need to be able to predict the motions of the machines they work with if they're to work efficiently beside them. Even the safest system can slow work with unpredictable movements and extreme ranges of motion.

The future of cobots is more human-like

Given the hyper-flexibility of automation, whose movements need not resemble ours at all, this suggests that future cobot designs may more carefully mimic ordinary human movement. The future, in other words, may see smarter, more human cobots.

Lasota and Shah’s work represents a tremendous step in the right direction, and we believe that such adaptive interaction will become ubiquitous in the next few years. But advances in neurotechnology are promising much greater things.

For nearly a decade, researchers have been successfully reading the tiny electrical signals that constitute the brain’s thoughts. Early experiments demonstrated that images could be recreated solely by reading the brain’s activity, and more recent tests reveal a startling capacity to control machines with nothing more tangible than thought.

The key to this tech is the brain-computer interface (BCI), a non-invasive wearable that measures brain activity and translates it into a language a computer can understand. In the world of robotics, this tech may soon enable prosthetic limbs that function much as their biological counterparts do, driven by nothing more than their users' intention to move.

New control mechanisms

As far-fetched as this may sound, another team of researchers at MIT has recently published a paper revealing the results of a closed-loop test of the ability of Rethink Robotics' Baxter cobot to learn from a human observer's recognition of its errors.

Perhaps as a natural part of our ability to learn through trial and error, our brains produce an error-related potential (ErrP) signal when we see a mistake being made, whether it's our own or someone else's. A wearable BCI can detect this tiny signal within milliseconds, immediately providing feedback to the cobot.

In carefully constructed tests, the MIT team found that Baxter’s performance improved by as much as 20% merely by responding to an observer’s ErrPs.
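The closed-loop idea can be sketched in a few lines. Everything below is a toy simulation under stated assumptions: `detect_errp` stands in for a real EEG classifier, and the 85% detection accuracy is an illustrative figure, not the MIT team's result:

```python
import random

def detect_errp(correct_target, chosen_target, accuracy=0.85):
    """Stub for an EEG classifier: reports whether the observer's brain
    produced an ErrP, with a simulated (imperfect) detection accuracy."""
    saw_error = chosen_target != correct_target
    return saw_error if random.random() < accuracy else not saw_error

def pick_target(correct_target, targets=("left", "right")):
    choice = random.choice(targets)                   # robot's initial guess
    if detect_errp(correct_target, choice):           # observer's ErrP detected?
        choice = targets[1 - targets.index(choice)]   # switch to the other option
    return choice

random.seed(0)
hits = sum(pick_target("left") == "left" for _ in range(1_000))
print(f"accuracy with ErrP feedback: {hits / 1_000:.0%}")
```

Even with an imperfect detector, flipping the robot's binary choice whenever an ErrP is sensed lifts its accuracy well above the 50% it would achieve guessing alone, which is the essence of the closed loop.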

A significant advantage this approach to machine learning offers is that it’s the robot that’s adapting to us and not the other way round. Our natural, involuntary response drives the change in the robot’s behaviour. As this technology advances, we think it’s likely that future generations of cobots will work alongside human beings who direct their movements and refine their performance unconsciously, all the while remaining focussed on the tasks at hand.

The future of cobots, then, is far more adaptive and human-centric than current designs. Eventually, human coworkers may wear BCIs that direct the machines with whom they share work, correcting and teaching them without conscious effort.

International keynote speaker, trend watcher and futurist Richard van Hooijdonk offers inspiring lectures on how technology impacts the way we live, work and do business. Over 420,000 people have already attended his renowned inspiration sessions, in the Netherlands as well as abroad. He works together with RTL television and presents the weekly radio program ‘Mindshift’ on BNR news radio. Van Hooijdonk is also a guest lecturer at Nyenrode and Erasmus Universities.