As a robot animator I can attest to the fact that robots don’t “need” heads to be treated as social entities. Research has shown that people will befriend a stick as long as it moves properly [1].

We have a long-standing habit of anthropomorphizing things that aren’t human by attributing to them human-level personality traits or internal motivations based on cognitive-affective architectures that just aren’t there. Animators have relied on the audience’s willingness to suspend disbelief and, in essence, co-animate things into existence: from a sack of flour to a magic broom. It’s possible to incorporate the user’s willingness to bring a robot to life by appropriately setting expectations and being acutely aware of how the context of interaction affects possible outcomes.

In human lifeforms, a head usually circumscribes a face, whereas in a robot a face can be placed anywhere. In high degree-of-freedom (DOF) robot heads, the facial muscles, though wonderfully complex, can be challenging to orchestrate with sufficient timing precision. If your robot design facilitates expression through careful control of the quality of the motion rendered, a head isn’t necessary to communicate essential non-verbal cues. As long as you provide some means of revealing the robot’s internal state, a head simply isn’t needed: a robot’s intentions can be conveyed through expressive motion and sound regardless of its form or body configuration.

Are you curious about what your future robotic assistants will look like?

My bet is that by the time you buy your very first robotic butler, it will have a friendly head on it that moves. In fact, it would be a good idea to make robots with heads if they are intended to share spaces and objects with people. That’s because the head is a really expressive part of our body we naturally use (a lot) to convey essential information to each other. Robots will need to do the same if they are going to hang out with us soft-tissued human beings at our homes and offices.

For example, when people are attending to something, they tend to look at it. People also look in the direction they are headed when they walk, and make eye contact when they talk. People nod when they want to show agreement with what is being said. Without these nonverbal cues from the head, interacting with each other would be much more difficult, because we wouldn’t know what the other person was doing.

Rodney Brooks, a pioneer in robotics and now Chairman and CTO of Rethink Robotics, had this in mind when he built Baxter. Although Baxter’s arms are as bulky-looking as those of its traditional industrial predecessors, one of its innovative features is a moving head that makes interaction with not-so-trained users very intuitive.

If robots are to do meaningful things around us in a safe manner, it’s essential that we know what the robot is attending to, where it is headed, and what it is about to do – a lot of which a robot head can help with. That way, you won’t have to be a roboticist to know when it is safe to be around a robot holding a giant knife to make you a cucumber salad.

I don’t know about you, but if something has a head I assume it has thoughts. When watching a movie I stare at the character’s face because I want to know what they feel. So for me a head’s a pretty important thing. If I’m going to talk with a robot I’d like it to have some kind of discernible head. It’s a useful thing if you want people to have warm fuzzy feelings about your robot. It’s useful if people are interfacing with the robot.

Simply: a head allows a face, and a face allows interface.

So a head’s only needed if the robot has to interface with people (or other headed animals, say). A head is a design feature, but the main function of an android is its form: it has to look human. Giving it a head is a function-follows-form decision. Wasn’t it Hunter S. Thompson who wrote, “Kill the head and the body will die”? Well, this should not be the case for military robots: the beheaded design can be improved upon. Saying that all robots need to have faces is like saying all animals need to have gills. For the deadly, dangerous, and downright dastardly work that robots today need to perform, like gastro-intestinal surgery or military surveillance, a head won’t do much more than get stuck or blown off.

A head, like hands or a face, is a design decision that’s best left for the robots working directly with humans.

The obvious answer to this question is “No: there are lots of robots without heads.” It’s not even clear that social robots necessarily require a head, as even mundane robots like the Roomba are anthropomorphized (taking on human-like qualities) without a head. A follow-up question might be, “How are heads useful?” For humans, the reasons are apparent: food intake, a vessel for our brain, a locus for sensors (eyes and ears), and high-bandwidth communication via expression. What about for robots …?

Food intake: Probably not.

Computational storage: Again, probably not.

Location for sensors: Indeed, the apex of a robot is a natural, obstacle-free vantage point for non-contact sensors. But a “head” form factor is not a strict requirement.

Emotion and expression: Ah, the real meat of this question… “Do robots need to express emotion?”

This is a funny question to ask someone who once (in)famously advocated for either (A) extremely utilitarian designs: “I want my eventual home robot to be as unobtrusive as a trashcan or dishwasher”, or (B) designs unconstrained by the human form factor: “Why not give robots lots of arms (or only one)? Why impose human-like joint limits, arm configurations, and sensing? We can design our own mechanical lifeforms!”

My views have softened a bit over time. Early (expensive) general-purpose home robots will almost certainly have humanoid characteristics and heads with the ability to express emotions (i.e. be social) — if nothing else, to appeal to the paying masses. And these robots will be useful: doing my laundry, cleaning my dishes, and cooking my meals. In the early generations, I will still find their shallow attempts at emotion mundane, and I will probably detest the sales pitches about “AI” and “robots that feel.” But as the emotional expressions become more natural and nuanced, and the robots become more capable, I will probably warm up to the idea myself.

TL;DR: No, many robots do not need heads. Even social robots may not need heads, but (whether I want them to or not) they probably will, because paying consumers will expect it.

Policy is really about long-term thinking — something we should do but, for various reasons, don’t. Though China is a notable exception, very few governments make long-term planning a priority.

Corporations are more disciplined and less prevailed upon by conflicting interests than governments; hence long-term planning is a regular part of their management practice. But corporations have neither ethics nor loyalties, and often do marginally (if not outright) immoral things to preserve the profitability of the company over the welfare of the community and workforce.

Economic policy may not jump to mind as a hot topic for roboticists, but it is a fundamental and influential driver behind the failure or success of the robotics community as a whole. After all, economic policy is what’s behind how governments set their interest rates, determine their budgets, enforce their rules for the labour market and deal with questions of national ownership.

This month we asked Robotics by Invitation panel members Rich Mahoney and Frank Tobe for their take on what policy-makers need to do to keep economic development apace with important developments in robotics. Here’s what they have to say …

I am not sure how to describe the specifics of what policy makers should do, but I think there are two gaps that policy makers should think about that are associated with the economic development impact of robotics:

sufficient funding to support an emerging robotics marketplace, and

detailed descriptions of the innovations needed to solve specific problems.

[RBI Editors] As an active robotics investor, a leading authority on the business of robotics, and the author of The Robot Report and Everything Robotic, you are at the pulse of the field’s economic development. In a nutshell, what’s happening in robotics today?

[Frank Tobe]
I think the biggest thing happening today is the acceptance of the low-cost Baxter and Universal robots into SMEs and small factories everywhere. Sales will likely be 2% of the total; 5% in 2014, and possibly 15% in 2015. That’s growth! And that’s before the big four robot makers start selling their low-cost entry robots for SMEs. This has more near-term promise than unmanned aerial or ground vehicles in agriculture and elsewhere. These co-robots are proving that we need more high-tech people and fewer low-skilled people in this globally competitive economy.

‘Bean-counting’ is a dull but necessary component of every grant proposal; it helps to keep our plans realistic, doable and accountable. But what if we weren’t tied to grants and budgets? Would it change the way we approach our work?

This month we asked the Robotics by Invitation panel to tell us what kind of research they would undertake if money weren’t an obstacle. Here’s what Mark Tilden and Illah Nourbakhsh have to say …

So robotics research is excellent for those with ADHD – the field’s problem, and its feature, is that it’s not just anything; it’s everything that’s techno fun. However, every now and again there’s something that skitters, flops, pronks, spins, walks, tumbles, or bounces across the desk that could really use … a brain.

So the short answer is I’d put (other people’s) money into researching affordable competent minds that could help organize any mechanical body, sensor or environment they are given. Small, quick, cheap, and with a voice interface so I can encourage it to effectiveness without a million keystrokes. Power on and it asks “Hello, what is my name?”

There is no question we live in a world that is changing. Pollutants are changing the dynamics of the air we breathe, the water we drink and even the soil on which we live. Yet the power to measure pollution, measure human behavior (including Emergency Room visits) and correlate the values is held tightly by government and corporate players. They have the money to focus on sensors and values that make their case, and they have the marketing skills to then present those values in the best possible light for reelection and for corporate profit.

But in fact those most touched by a changing world are ordinary citizens, and it is the citizen who has the potential to make decisions that immediately impact health and future legislation, from what neighborhood to live in to which politician to elect. Robotic sensing technologies are rapidly becoming less expensive, and with the right infusion of research I believe we could develop the networking, data visualization and interaction smarts to have global, publicly accessible information about all sources of pollution. This would empower citizens and communities to make far more informed decisions, and to fight biased information presentations with their own re-interpretation of source data. This will take new innovation in sensing technologies, networking, Big Data storage, search, retrieval and evaluation.
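The aggregation side of this vision can be sketched in a few lines: many cheap sensors reporting readings that are pooled into open, per-neighborhood summaries anyone can inspect and re-interpret. Everything here (the neighborhood names, pollutant fields, and units) is invented for illustration; this is a toy sketch of the idea, not a real sensing platform.

```python
# Toy sketch of community sensing: pool readings from many hypothetical
# low-cost sensors into open per-neighborhood summaries.

from collections import defaultdict
from statistics import mean

# Each reading: (neighborhood, pollutant, value in hypothetical units).
readings = [
    ("riverside", "pm2.5", 38.0),
    ("riverside", "pm2.5", 42.0),
    ("hilltop",   "pm2.5", 12.0),
    ("hilltop",   "pm2.5", 14.0),
]

def summarize(readings):
    """Group readings by (neighborhood, pollutant) and average them."""
    groups = defaultdict(list)
    for neighborhood, pollutant, value in readings:
        groups[(neighborhood, pollutant)].append(value)
    # Publishing the aggregates (and the raw readings) openly is what
    # lets residents re-interpret the source data themselves.
    return {key: mean(values) for key, values in groups.items()}

print(summarize(readings))
```

The real research challenges Nourbakhsh names — networking, Big Data storage, search, retrieval and evaluation — all live behind this trivially simple interface: collect, aggregate, publish.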

It is the stuff of robotics, through and through, applied to the deep goal of community empowerment at an international scale.

Well, it depends on what you mean by mainstream. For a number of major industry sectors robotics is already mainstream – in assembly-line automation, for instance, or undersea oil well maintenance and inspection. You could argue that robotics is well established as the technology of choice for planetary exploration. And in human culture, too, robots are already decidedly mainstream. Make-believe robots are everywhere, from toys and children’s cartoons to TV ads and big-budget Hollywood movies. Robots are so rooted in our cultural landscape that public attitudes are, I believe, informed – or rather misinformed – primarily by fictional rather than real-world robots.

But I understand the sentiment behind the question. In robotics we have a shared sense of a technology that has yet to reach its true potential; of a dream unfulfilled.

The question asks what is the single biggest obstacle. In my view some of the biggest immediate obstacles are not technical but human. Let me explain with an example. We already have some very capable tele-operated robots for disaster response. They are rugged, reliable, and some are well field-tested. Yet why is it that robots like these are not standard equipment with fire brigades? I see no technical reason that fire tenders shouldn’t have, as standard, a compartment with a tele-operated robot – charged and ready for use when it’s needed. There are, in my view, no real technical obstacles. The problem, I think, is that such robots need to become accepted by fire departments and the firefighters themselves, with all that this entails for training, in-use experience and revised operational procedures.

In the longer term we need to ask what it would mean for robotics to go mainstream. Would it mean everyone having a personal robot, in the same way we all now have personal computing devices? Or when all cars are driverless, perhaps? Or when everyone whose life would be improved by a robot assistant could reasonably expect to be able to afford one? Some versions of mainstream are maybe not a good idea: I’m not sure I want to contemplate a world in which there are as many personal mobile robots as there are mobile phones now (~4.5 billion). Would this create robot smog, as Illah Nourbakhsh calls it in his new book Robot Futures?

Right now I don’t have a clear idea of what it would mean for robots to go mainstream, but one thing’s for sure: we should be thinking about what kind of sustainable, humanity-benefitting and life-enhancing mainstream robot futures we really want.

The biggest obstacle to broader adoption of robotics is that only experienced roboticists can develop robotics applications. To make a robot reliably and robustly do something useful, you need a deep understanding of a broad variety of topics, from state estimation to perception to path planning. While few people in the world have this expertise, many people can write software. What we need is more of those software developers involved in the business of developing robotics applications.

I say “applications” to distinguish this work from that of developing new algorithms or core building blocks. Making an analogy to traditional software development, I don’t need to understand how process schedulers, or file systems, or memory managers work in order to develop useful desktop applications. And I don’t need to know the details of DNS, web servers, or web sockets to develop portable web applications. Knowing more about the underpinnings of the system will always be useful, of course. But the key is that, once the building blocks are established, understood, documented, and tutorialized, the barrier has been greatly lowered: you just need to be able to write code.
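To make the analogy concrete, here is a toy sketch of what application-level robotics code could look like once the building blocks are hidden behind a small facade. The `Robot` class and its `go_to`/`pick` methods are hypothetical, not any real library’s API; the point is only that, with state estimation and planning tucked behind such an interface, the application itself becomes ordinary code that any competent programmer could write.

```python
# Sketch of the abstraction argument: the application developer works
# against a small high-level facade, while the "building blocks"
# (localization, path planning, grasping) hide behind it.
# All names here (Robot, go_to, pick) are invented for illustration.

class Robot:
    """Hypothetical facade over lower-level robotics building blocks."""

    def __init__(self):
        self.position = "dock"
        self.holding = None
        self.log = []

    def go_to(self, place):
        # In a real system this would invoke localization and path
        # planning; the application developer never sees those details.
        self.position = place
        self.log.append(f"moved to {place}")

    def pick(self, item):
        # Likewise, perception and grasp planning stay hidden here.
        self.holding = item
        self.log.append(f"picked {item}")

# An "application" is now just ordinary sequential code:
def fetch(robot, item, location):
    robot.go_to(location)
    robot.pick(item)
    robot.go_to("dock")
    return robot.holding

robot = Robot()
print(fetch(robot, "mug", "kitchen"))  # -> mug
```

The design choice mirrors the desktop and web analogies above: the app author never touches the scheduler-equivalent layers, only the documented facade.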

Beyond just getting more people working with robots, we need better ideas for how robotics technology can be usefully and profitably employed to support people in their everyday lives. My experience in the robotics community over the last 15 years has convinced me that roboticists are pathologically bad at coming up with application ideas. We’re enamored of the technology, which is good in that it motivates us to work hard on important problems. But it also leads us to concentrate on “robotic” solutions to problems, without regard to what people who experience those problems really need. We can fix this problem by adding orders of magnitude more developers to our community, each of whom comes with a new and different perspective. And we can do that by making the development of robotics applications accessible to any competent programmer.

The Android and iOS platforms made it possible for people with no more than a passing understanding of 3G, GPS, or touch screens to build useful, even world-changing mobile applications. We can do the same for robotics. We’re on the right path, with a lot of effort going into open, shared software platforms for robotics. We just need to keep pushing, and to keep the non-robotics engineer in mind when we’re building things.