Let’s stop talking about bad robots and start talking about what makes a robot good. A good or ethical robot must be carefully designed. Good robot design is about much more than just the physical robot, and at the same time good robot design is about ‘less’. Less means no extra features, and in robotics that includes not adding unnecessary interactions. It may seem like a joke, but humanoids are not always the best robots.
‘Less’ is the closing principle of the “10 principles of good design” from world-famous industrial designer Dieter Rams, and design thinking has informed the discussion around guidelines for good robot design, as ethicists, philosophers, lawyers, designers and roboticists try to proactively create the best possible robots for the 21st century.
Silicon Valley Robotics has launched a Good Robot Design Council and our “5 Laws of Robotics” are:

Robots should not be designed as weapons.

Robots should comply with existing law, including privacy.

Robots are products, and as such should be safe, reliable and not misrepresent their capabilities.

Robots are manufactured artifacts: the illusion of emotions and agency should not be used to exploit vulnerable users.

It should be possible to find out who is responsible for any robot.

These have been adapted from the EPSRC’s 2010 “Principles of Robotics”, and we are grateful to all the researchers and practitioners informing this ongoing discussion.
Silicon Valley is at the epicenter of the emerging service robotics industry. Service robots are no longer just factory workers; they will be interacting with us in many ways, at home, at work, even on holiday.
In 2015, we produced our first Service Robotics Case Studies featuring robotics companies: Fetch Robotics, Fellow Robots, Adept and Savioke. We will shortly release our second report featuring Catalia Health, Cleverpet, RobotLab and Simbe.
Design guidelines can not only create delightful products but can also fill the ethical gap between standards and laws.
After all, if our robots behave badly, we have only ourselves to blame.

We are already living with robots. The future is here, but as William Gibson says, it’s not evenly distributed yet. Or as I like to say, I believe that we often don’t recognize the future when we see it.

How do we recognize robots? We usually look for humanoid robots, the stuff of science fiction. Even the classic robot ‘arm’ is part of a ‘human’. But technically, a robot is simply a machine that ‘senses, thinks and acts’. Even the ISO for industrial robots – the international standard describing industrial robot arms – is somewhat broad in definition. A robot is “an actuated mechanism, programmable in two or more axes, with a degree of autonomy, moving within its environment to perform intended tasks.”
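That ‘sense, think, act’ loop can be sketched in a few lines of code. This is an illustrative toy, not any particular robot’s software; the simulated sensor and the 0.5 m stopping threshold are invented for the example:

```python
import random

class Robot:
    """Minimal illustration of the 'sense, think, act' definition.

    The sensor, decision rule and actuator are stand-ins invented for
    illustration; a real robot would read hardware instead.
    """

    def sense(self):
        # Stand-in sensor: distance to the nearest obstacle, in meters.
        return random.uniform(0.0, 2.0)

    def think(self, distance):
        # Degree of autonomy: the robot chooses its own action.
        return "stop" if distance < 0.5 else "forward"

    def act(self, command):
        # Stand-in actuator: report what the motors would do.
        return f"motors: {command}"

robot = Robot()
reading = robot.sense()
print(robot.act(robot.think(reading)))
```

By this definition a washing machine qualifies just as readily as a humanoid: the loop is the same, only the sensors and actuators differ.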

Is a car a robot? Yes. Even without full autonomy, a car consists of many autonomous systems. Elon Musk called the Tesla S ‘a computer in the shape of a car’. But really, it’s a robot.

Is a washing machine a robot? Visually, we would find it hard to think of it as a robot. All the ‘humanoid’ bits are hidden inside a box. Yet the modern washing machine, soon to be a washing-drying-folding machine, is a very sophisticated piece of machinery, sensing, thinking and acting in the environment.

Because we see the world through human eyes, it is very hard for us to see things outside human categories. We divide the world into humans, and things. Robots change everything. As we build robots, we are really reshaping what it means to be human. What does it mean when our devices start to look like humans? What does it mean when they don’t? And what does it mean when we use so many different devices to communicate with other people?

Our technologically blended reality is asynchronous, mediated and indirect. Our technologies allow us to communicate across distance and time, and expand our scale, creating a larger richer world. This is nothing new. Civilization is the story of technology taming space and time.

Since we invented writing, we’ve been able to communicate with other people at a distance, at different times and at larger scale than direct communication. And as we invented reproduction technologies, like the printing press and photography, the scale of our communications increased. This has had a huge impact on the world, reshaping our cultural, religious and political structures.

The last 200 years have seen the introduction of many new communication technologies: telegraph, telephone, radio and television. But they have all had one thing in common: until very recently, we’ve been able to see who is ‘pulling the strings’. The subject or object of communication has been visible or known.

In the last decade, we’ve seen an explosion of information and communication technologies and we’ve gone wireless and unplugged. Internet technologies in the 80s and 90s were supposed to usher in an era of anonymity, but in reality they largely just increased the scale of known communications. And our connections to the devices of communication were much more obvious.

As social beings, our reality is very much defined by our communication technologies. These days, even when we are in the same physical place as other people, we are no longer sharing the same reality. We are experiencing different worlds, as if we were in our own reality bubble.

And even when we are communicating, we are no longer certain to be communicating with other people. Ray Kurzweil predicts that in the future we will mainly communicate with machines and not other people. We will experience this technologically blended reality as an extension of ourselves, as a proxy for other people, and as something with its own ‘alien’ identity.

Sharp’s new RoBoHoN phone, created by Tomotaka Takahashi, is the epitome of a blended reality device. It acts as an extension of ourselves. It provides a proxy for other people, and it has its own very distinct identity. Our categories of ‘you’, ‘me’ and ‘it’ are more fluid than we think. Robots are blending the me and you into the it.

Heidegger was one of the first to describe technology as an invisible extension of our identity. Heidegger’s hammer is ‘present’ when we look at it and think about it. But when the hammer is in the hand of a builder, then it becomes invisible. The hammer is ‘ready at hand’ when the builder thinks of building, not hammering or the hammer. The tool is well known and the focus is on the task instead. The hammer becomes an extension of our identity, an expression of our intent in the world.

Our technological extensions also augment our senses. A lady with feathers on her hat, as described by Merleau-Ponty, has enhanced her spatial awareness. She has increased her sense of the whereabouts of walls and doorways. Just like the whiskers on a cat, we are augmenting our world with technological whiskers.

Similarly, as technology acts as an extension of others, or a proxy, it also becomes invisible to us. Telepresence robots offer an illusion of real presence and become transparent as technologies. Our focus shifts from the tool to the task. In this case the task is the social interaction. Suitable Technologies even prefer that we don’t call their telepresence devices robots because they want our focus to be on the experience not the device.

Robots are becoming popular and as more of them enter our world, they bring their very own personalities and appearances. But any device with a screen, or speakers and connectivity, is capable of being a gateway for many other people. We can have relationships that are indirect, asynchronous and at scale. Our relationships can be with you, me and it and many mixtures in between.

We are going to see more and more social robots in the service industry, including health, manufacturing and logistics, and in the consumer end, including the home, retail and hospitality. And we are just starting to understand the scope of this blended technological reality with robots.

People enjoy meeting Savioke’s Relay, the robot butler now at 4 hotel chains in California. You can communicate with Relay, although the robot behaves more like R2-D2 than C-3PO. Relay is functional too: it is designed to deliver small items to guest rooms when the front desk staff are busy.

After collecting a lot of feedback, Savioke found that, as well as enjoying their communication with Relay, guests also appreciate not having to talk to a person at times when they are not feeling social, i.e. late at night. The robot starts to become an extension of their wishes, but still has just enough personality to improve the experience.

A robot like Mabu from Catalia Health is acting as a proxy for a doctor or primary health care physician. Mabu will stay in the home of patients on a specialty pharma treatment where Mabu’s AI engages the patient directly in conversation and it’s only the data that is communicated to the doctor. And while Mabu the robot may sit at home, Mabu the app can travel with the patient anywhere.

And Fellow Robots OSHBot is really mixing all our relationships up. OSHBot can act as a simple extension. When you enter the hardware store you can ask the robot for directions and then simply follow the map. Or the robot can autonomously guide you to the correct location inside the store. You can engage the robot in conversations about the parts you’re looking for.

Robots are great at remembering 10,000s of SKUs and where on the shelves they all are. But people are really great at problem solving and understanding complex communications. So if you ask questions like “What sort of glue should I use on a roof tile like…”, then OSHBot can call an expert in for a video call with you. So you can be talking to both the robot and another person.
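That division of labor, where the robot answers inventory lookups itself and hands open-ended questions to a human expert, can be sketched roughly as follows. The inventory, function names and phrasing here are invented for illustration; OSHBot’s actual software is of course far more sophisticated:

```python
# Illustrative sketch (invented names and data): a store robot answers
# simple inventory lookups itself and escalates open-ended questions
# to a human expert, as described for OSHBot.

INVENTORY = {            # a real system would hold tens of thousands of SKUs
    "wood glue": "aisle 12, shelf B",
    "roof tile": "aisle 3, shelf A",
}

def handle_query(query: str) -> str:
    query = query.lower()
    for item, location in INVENTORY.items():
        if item in query:
            # Simple lookup: the robot handles this on its own.
            return f"{item}: {location}"
    # No direct match: hand off to a person via video call.
    return "connecting you to a store expert by video call"

print(handle_query("Where is the wood glue?"))
# → wood glue: aisle 12, shelf B
print(handle_query("what glue works best outdoors?"))
# → connecting you to a store expert by video call
```

The design choice is the fallback: the robot does what machines do well (exhaustive recall) and defers to people for what people do well (open-ended problem solving).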

For the customer, this is just a great shopping experience. But this could change the nature of daily work for the store associates, leaving them free to focus on solving the things they enjoy, with their social and expert knowledge, rather than walking miles of aisles, tracking thousands of small items.

So robots are really augmenting our reality in a multitude of ways. Robots are the embodiment of information. And in our new blended reality, they extend and augment our senses, they are the proxies or avatars for others. And they also have their very own alien identity.

As a child, I wanted to be an astronaut, to explore the universe and to meet aliens, but it turns out the aliens are here, and they can teach us a lot about what it is to be human. In research areas from neuroscience, to biomechanics and psychology, we’re using robots to better understand humans.

Alex Garland’s first feature film as a director, Ex Machina, had its US debut at SxSW on March 14. This stylish idea film explores the Turing Test in a very Pinteresque fashion as a young coder falls in love with an advanced AI. Ex Machina is beautifully framed, but Garland’s stark script succeeds on the strength of the acting from Domhnall Gleeson, Alicia Vikander and Oscar Isaac.

Garland’s writing career launched in 1997 with the best-selling novel “The Beach”, which the Times called the Gen X answer to Lord of the Flies. After a string of cult successes like 28 Days Later, Sunshine, Dredd and drafts of Halo and Logan’s Run, Garland became fascinated with the emerging promise and perils of AI. In the Q&A following the SxSW screening, Garland talked about feeling a zeitgeist, a technological and cultural turning point, compelling him and other filmmakers and writers to address robots and artificial intelligence.

Although he says he’s on the side of the robots, it’s an uneasy truce. Garland describes his film as the story of ‘two brains torturing each other’. That’s true. In Ex Machina, Tony Stark meets John Searle in a gripping drawing room theater, when a billionaire tech genius recruits a young coder to administer the Turing Test to his secret advanced embodied AI.

And it’s a stark film: there are only 4 characters, 2 men and 2 women; 2 AIs and 2 humans. And which two are the brains? That is supposed to be uncertain, but anyone who has used the Bechdel Test to analyze films or popular culture for gender issues knows exactly where the ‘brains’ are.

The Bechdel Test started as a gender litmus test and has become a remarkably useful indicator of power imbalance. The test is named after Alison Bechdel, the cartoonist who outlined the rules in a 1985 comic strip. To pass the Bechdel Test, a film has to have two women in it, who talk to each other, about something other than a man.

Sometimes the proviso is added that the women have to have names, because some films can have many women characters, but if the characters are all “girl at checkout” and “girl with gun” then they are just devices to add color or move the action forward. And of course, possession of a name is an important indicator of personhood, or identity awareness, so it’s always one of the first steps to separate the beings from the machines.
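The test’s rules are simple enough to write down as code. Here is a toy encoding, with the scene format and the “girl at checkout” name heuristic invented purely for illustration:

```python
# A minimal encoding of the Bechdel Test as described above, including
# the optional proviso that the women must be named characters.
# The scene data format is invented for illustration.

def passes_bechdel(scenes, require_names=True):
    """scenes: list of dicts with 'speakers', 'genders' and 'topic' keys."""
    for scene in scenes:
        women = [s for s, g in zip(scene["speakers"], scene["genders"])
                 if g == "woman"]
        if len(women) < 2:
            continue
        if require_names and any(name.startswith("girl") for name in women):
            continue  # "girl at checkout" is a device, not a character
        if scene["topic"] != "a man":
            return True
    return False

film = [
    {"speakers": ["Ava", "Kyoko"], "genders": ["woman", "woman"],
     "topic": "a man"},
]
print(passes_bechdel(film))  # prints False: they only discuss a man
```

Three rules, a dozen lines, and a surprisingly large share of cinema still fails.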

Many films seem at first glance to have badass female characters but when put to the Bechdel Test, it becomes clear that they never talk to anyone but the main man, or if they talk to each other, it’s about the main male characters. So really, they have no interiority, no self awareness and are probably going to fail a Turing Test. That’s where I think it would be very interesting if the Turing Test were to meet the Bechdel Test more often.

Garland is also playing games with gender and the alienness of AI in Ex Machina. There is a beautiful scene where Ava, the AI, performs a reverse strip tease, putting on her human body.

But I’m afraid that Ex Machina falls at the final fence, as does just about every other science fiction film I’ve ever seen, aside from Alien. The Bechdel Test is useful for more than examining gender representation. It can be our Turing Test for creating believable alien or artificial life forms. If you look at our filmic or cultural representations of the other or alien, then you have to be struck by the singular nature of them all. From Frankenstein to Big Hero 6, do they have any reality without the central human characters?

No, they are alone. Even Alien is alone. At least in Frankenstein, it is the utter aloneness of the new form that is the whole story. Films that have pushed the envelope are few. And doing a quick mental check, I was left feeling empathy for the ‘others’ in only a couple, like Westworld, Blade Runner and Planet of the Apes, and in the books of writers like Brin and Cherryh.

How believable are our ‘other’ AIs and robots? Brad Templeton said that an autonomous vehicle isn’t autonomous until we tell it to go to the office and it decides to go to the beach instead. A life outside of our anthropomorphic story is what’s missing from our AIs, aliens and others. We don’t really care about them or their lives outside of their impact on our own. And this makes us poorer.

The final shot is a haunting homage to Plato’s Cave, although Garland credits his Director of Photography entirely for it. In The Republic, Plato asked what it would be like if humans were born chained to face a cave wall, seeing the world only as the shadows passing in front of a fire behind them at the mouth of the cave. Imagine the difference when you see the world, unchained from the cave.

I can’t say more. Go see Ex Machina. And use the Bechdel Test on everything.

Just last week at the Grace Hopper Celebration of Women in Computing, Microsoft’s CEO Satya Nadella gave women some questionable career advice: “It’s not really about asking for the raise, but knowing and having faith that the system will actually give you the right raises as you go along. Because that’s good karma.” The event moderator, Professor Maria Klawe, president of Harvey Mudd College and a Microsoft director, immediately disagreed with Nadella’s advice, suggesting instead that women do their homework on salary levels and practice asking for pay raises.

How do we talk to our machines in the 21st century? From typing to swiping, 20th century interfaces translated the world of switches, gears and punch cards into a language that anyone, even the smallest toddler, could speak.

In the 21st century, the graphical user interface is giving way to a social user interface. We have multiple devices with inputs ranging from vision, words, speech, touch and gesture to emotion. While humans have adapted to communicating with devices in very ‘dumb’ ways, we are on the verge of much smarter interfaces, which can make sense of a combination of inputs and even understand the context. The GUI becomes the SUI, a whole social way of interacting with our machines.
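One way to picture a social user interface is as a fusion step that resolves one input channel against another, plus context. The channel names and rules below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical sketch of a social user interface: several input
# channels plus context are fused into one interpretation. The
# channels and rules are invented for illustration only.

def interpret(speech=None, gesture=None, gaze=None, context=None):
    """Fuse multimodal inputs into a single (intent, argument) pair."""
    context = context or {}
    if speech == "put it there" and gesture == "point":
        # Speech alone is ambiguous; the pointing gesture resolves 'there'.
        return ("place_object", context.get("pointed_location", "unknown"))
    if gaze == "screen" and speech is None:
        # Attention without speech: the interface waits politely.
        return ("await_input", None)
    # No confident reading of the combined inputs: ask the user.
    return ("clarify", None)

print(interpret(speech="put it there", gesture="point",
                context={"pointed_location": "kitchen table"}))
# → ('place_object', 'kitchen table')
```

The point is not the rules themselves but the shape: no single channel carries the meaning; the interpretation emerges from the combination.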

Avatars are our force-feedback loop. We have already started interacting with voice avatars. The next step is visual avatars that include speech, face, body and even simulated emotion. The avatar translates the machine into human language and expression, and avatars will be the ‘face’ of our new social user interfaces.

What is already happening today? What are the problems? What are the possibilities?

The 1953 New Yorker cartoon that started the “Take me to your leader” meme showed two aliens newly arrived on earth asking a donkey to, effectively, give them policy guidance. This is exactly what our ‘brave new’ human-robot world looks like. Complex technologies can have profound and subtle impacts on the world and robotics is not only a multidisciplinary field, but one which will have impact on every area of life. Where do we go for policy?

Ryan Calo’s recent report for the Brookings Institution, “The Case for a Federal Robotics Commission”, calls for a central body to address the lack of competent and timely policy guidance in robotics. For example, the US risks falling far behind other countries in the commercial UAV field due to the failure of the FAA to produce regulations governing drones. Calo points out the big gap between policy set at the research level (i.e. the OSTP) and at the commercial application end of the scale (i.e. the FAA).

However, with robotics being a technology applicable in almost every domain, there will always need to be multiple governing bodies. One central agency is insufficient. Perhaps the answer lies in central information points, like the Brookings Institute, or Robohub, which provides a bridge between robotics researchers and the ‘rest of the world’. Informed discussion is at the heart of democracy and in a complex technical world, scientists, social scientists and science communicators must lead the debate.

I suggest that our current robotics policy agenda needs to be reformed and better informed. This article provides a review of some recent policy reports and considers the changing shape of 21st century scientific debate. In conclusion, I make several recommendations for change:

The creation of a global robotics policy think tank.

That the CTO of the USA and global equivalents make robotics a key strategic discussion.

That a US Robotics Commission be created – while robotics is an emerging field – to implement a cross-disciplinary understanding of this technological innovation and its impacts at all levels of society.

That funding bodies make grants available for cross-disciplinary organizations engaged in creating a platform for informed debate on emerging technologies.

The Pew Report and the problem with popular opinion

Much of today’s information comes via the media and popular opinion, from policy, analysis or government groups that are just plain out of touch, or unable to absorb or use information across disciplines. In the worst cases a feedback loop is created, with bad opinions being repeated until they are accepted as truth. Recent reports from the Brookings Institution and the Pew Research Center demonstrate both the good and the bad of current policy debates.

The recent, widely reported Pew Research Center report on “AI, Robotics and the Future of Jobs” highlights the ridiculousness of the situation. The report canvassed more than 12,000 experts sourced from previous reports, targeted listservs and subscribers to Pew’s research, who are largely professional technology strategists. Eight broad questions were presented, covering various technology trends; 1,896 experts and members of the interested public responded to the question on AI and robotics.

The problem is that very few of the respondents have more than a glancing knowledge of robotics. To anyone in robotics, the absence of people with expertise in robotics and AI is glaringly obvious. While there are certainly insightful people and opinions in the report, the net weight of this report is questionable, particularly as findings are reduced to executive summary level comments such as:

“Half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers – with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”

These findings are simply popular opinion without basis in fact. However, the Pew Research Center is well respected and considered relevant. The center is a non-partisan organization which provides all findings freely “to inform the public, the press and policy makers”, not just on the internet and future of technology, but on religion, science, health, even the impact of the World Cup.

How do you find the right sort of information to inform policy and public opinion about robotics? How do you strike a balance between understanding technology and understanding the social implications of technology developments?

Improving the quality of public policy through good design

Papers like Heather Knight’s “How Humans Respond to Robots” or Ryan Calo’s “The Case for a Federal Robotics Commission” for the Brookings Institution series on “The Future of Civilian Robotics”, and organizations like Robohub and the Robots Association, are good examples of initiatives that improve public policy debate. At one end of the spectrum, an established policy organization is sourcing from established robotics experts. At the other end, a peer group of robotics experts is providing open access to the latest research and opinions within robotics and AI, including exploring ethical and economic issues.

Heather Knight’s report “How Humans Respond to Robots: Building Public Policy through Good Design” for the Brookings Institution is a good example of getting it right. The Brookings Institution is one of the oldest and most influential think tanks in the world, founded in Washington, D.C. in 1916. It is non-partisan and generally regarded as centrist in agenda. Although based in the US, the institution has global coverage and attracts funding from both philanthropic and government sources, including the governments of the US, UK, Japan and China. It is the most frequently cited think tank in the world.

Heather Knight is conducting doctoral research at CMU’s Robotics Institute in human-robot interaction. She has worked at NASA JPL and Aldebaran Robotics, she cofounded the Robot Film Festival, and she is an alumna of the Personal Robots Group at MIT. She has degrees in Electrical Engineering, Computer Science and Mechanical Engineering. Here you have a person well anchored in robotics with a broad grasp of the issues, who has prepared an overview on social robotics and robot/society interaction. This report is a great example of public policy through good design, if it does indeed make its way into the hands of people who can use it.

As Knight explains, “Human cultural response to robots has policy implications. Policy affects what we will and will not let robots do. It affects where we insist on human primacy and what sort of decisions we will delegate to machines.” Automation, AI and robotics are entering the world of human-robot collaboration, and we need to support and complement the full spectrum of human objectives.

Knight’s goal was not to be specific about policy but rather to sketch out the range of choices we currently face in robotics design and how they will affect future policy questions, and she provides many anecdotes and examples, where thinking about “smart social design now, may help us navigate public policy considerations in the future.”

Summary: “How Humans Respond to Robots”

Firstly, people require very little prompting to treat machines or personas as having agency. Film animators have long understood just how simple it is to turn squiggles on the screen into expressive characters in our minds and eyes. We are neurologically coded to follow motion and to interpret even objects as having social or intentional actions. This has implications for future human relationships as our world becomes populated with smart moving objects; many studies show that we can bond with devices and even enjoy taking orders from them.

There is also the impact of the “uncanny valley” – a term that describes the cognitive dissonance created when something is almost, but not quite, human. This is still a fluid and far from well-understood effect, but it foreshadows our need for familiarity, codes and conventions around human-robot interactions. Film animators have created a vocabulary of tricks that create the illusion of emotion. So, too, have robot designers, who are developing tropes of sounds, colors, and prompts (that may borrow from other devices like traffic lights or popular culture) to help robots convey their intentions to people.

With regard to our response to robots, Knight draws attention to the fallacy of generalizing across cultures. Most HRI (Human-Robot Interaction) studies show that we also have very different responses along other axes, such as gender, age, experience and engagement, regardless of culture.

Similarly, our general responses have undergone significant change as we’ve adapted to precursor technologies such as computers, the internet and mobile phones. Our willingness to involve computers and machines in our personal lives seems immense, but raises the issues of privacy and also social isolation as well as the more benign prospects of utility, therapy and companionship.

As well as perhaps regulating or monitoring the uses of AI, automation and robots, Knight asks: do we need to be proactive in considering the rights of machines? Or at least in considering conventions for their treatment? Ethicists are doing the important job of raising these issues, ranging from what choices an autonomous vehicle should make in a scenario where all possible outcomes involve human injury, to whether we should ‘protect’ machines in order to protect our social covenants with real beings. As Kant said in his lectures on ethics, we have no direct moral obligation towards animals, and yet our behavior towards them reflects our humanity.

“If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.” Kant

This suggests that, as a default, we should create more machines that are machine-like, machines that by design and appearance telegraph their constraints and behaviors. We should avoid the urge to anthropomorphize and personalize our devices, unless we can guarantee our humane treatment of them.

Knight outlines a human-robot partnership framework across three categories: Telepresence Robots, Collaborative Robots and Autonomous Vehicles. A telepresence robot is comparatively transparent, acting as a proxy for a person, who provides the high-level control. A collaborative robot may be working directly with someone (as in robot surgery) or working on command while interacting autonomously with other people (e.g. a delivery robot). An autonomous vehicle extends the previous scenarios and may be able to operate at a distance or respond directly to the driver, pilot or passenger.

The ratio of shared autonomy is shifting towards the robot, and the challenge is to create patterns of interaction that minimize friction and maximize transparency, utility and social good. In conclusion, Knight calls for designers to better understand human culture and practices in order to frame issues for policy makers.

Brookings Institution and NY Times: Creating a place for dialogue

The Brookings Institution also released several other reports on robotics policy directions as part of its series on The Future of Civilian Robotics, which culminated in a panel discussion. This format is similar to the NY Times Room for Debate, which brings outside experts together to discuss timely issues. However, there is a preponderance of law, governance, education and journalism experts on the panels, perhaps because these disciplines attract multidisciplinary or “meta” thinkers.

Is this the right mix? Are lawyers the right people to be defining the policy scope of robotics? Ryan Calo’s contribution to robotics as a law scholar has been both insightful and pragmatic, and well beyond the scope of any one robotics researcher or robot business. However, Calo has made robotics and autonomous vehicles his specialty area and has spent years engaged in dialogue with many robotics researchers and businesses.

Before moving to the University of Washington as Faculty Director of its new Tech Policy Lab, Calo was the Director of Robotics and Privacy at Stanford Law School’s Center for Internet & Society. Calo has an AB in Philosophy from Dartmouth College and a Doctorate in Law, cum laude, from the University of Michigan. His writings have won best paper at conferences, have been read to the Senate, have provoked research grants, and have been republished in many top newspapers and journals.

Which comes first, the chicken or the egg? As technologies become more complex, can social issues be considered without a deep understanding of the technology and what it can or can’t enable? Equally, is it the technology that needs to be addressed or regulated, or is it the social practices, which might or might not be changed as we embrace new technologies?

It’s not surprising that lawyers are setting the standard for the policy debate, as writing and enacting policy is their bread and butter. But the underlying conclusion seems to be that we need deep engagement across many disciplines to develop good policy.

Summary: “The Case for a Federal Robotics Commission”

When Toyota customers claimed that their cars were causing accidents, the various government bodies involved called on NASA to investigate the complex technology interactions and separate mechanical issues from software problems. Ryan Calo takes the position that robotics, as a complex emerging technology, needs an organization capable of investigating potential future issues and shaping policy accordingly.

Calo calls on the US to create a Federal Robotics Commission, or risk falling behind the rest of world in innovation. Current bodies are ill-equipped to tackle “robotics in society” issues other than in piecemeal fashion. Understanding robotics requires cross-disciplinary expertise, and the technology itself may make possible new human experiences across a range of fields.

“Specifically, robotics combines, for the first time, the promiscuity of data with physical embodiment – robots are software that can touch you,” says Calo.

Society is still integrating the internet and now “bones are on the line in addition to bits”. There may be more victims, but how do we identify the perpetrators in a future full of robots? Law is, by and large, defined around human intent and foreseeability, so current legal structures may require review.

Calo considers the first robot-specific law passed by Nevada in 2011 for “autonomous vehicles”, which defined autonomous activity in a way that included most modern car behaviors, and thus had to be repealed. Where that error was due to a lack of technical expertise, Calo foresees the problem of a new class of behaviors being introduced.

Human driving error accounts for tens of thousands of fatalities. While autonomous vehicles will almost certainly reduce accidents, they might create some accidents that would not have occurred if humans were driving. Is this acceptable?

Calo also describes the ‘underinclusive’ nature of robotics policy, citing the FAA developing regulations for drones, which often serve as delivery mechanisms for small cameras. However, the underlying issue of privacy is raised any time small cameras are badly deployed – in trees, on phones, on poles, planes or birds – not just on drones.

Other issues raised by Calo include: the impact of high frequency automated activity with real world repercussions; the potential for adaptive, or ‘cognitive’, use of communications frequencies; and potential problems swapping between automated and human control of systems, if required by either malfunction or law.

Calo then describes his vision for a Federal Robotics Commission modeled on similar previous organizations. This FRC would advise other agencies on policy relating to robots, drones or autonomous vehicles, and also advise federal, state and local lawmakers on robotics law and policy.

The FRC would convene domestic and international stakeholders across industry, government, academia and NGOs to discuss the impact of robotics and AI on society, and could potentially file ‘friend of the court’ briefs in complex technology matters.

Does this justify the call for another agency? Calo admits that there is overlap with the National Institute of Standards and Technology, the White House Office of Science and Technology Policy, and the Congressional Research Service. However, he believes that none of these bodies speaks to the whole of the “robotics in society” question.

Calo finishes with an interesting discussion with Cory Doctorow about whether or not robotics can be considered separate from computers “and the networks that connect them”. Calo posits that the physical harm an embodied system, or robot, could do is very different from the economic or intangible harm done by software alone.

In conclusion, Calo calls for a Federal Robotics Commission to take charge of early legal and policy infrastructure for robotics. It was the decision to apply the First Amendment to the internet, and to immunize platforms for what users do, that allowed internet technology to thrive – and that has, in turn, created new 21st century platforms for legal and policy debate.

Robohub – Using 21st century tools for science communication

In the 21st century, science has access to a whole new toolbox of communications. Where 19th century science was presented as theater, in the form of public lectures and demonstrations, 20th century science grew an entire business of showcases, primarily conferences and journals. New communication media are now disrupting established science communication.

There is an increasing expectation that science can be turned into a top 500 YouTube channel, like Minute Physics, or an award-winning Twitter account, like Neil deGrasse Tyson’s @neiltyson, which has 2.34 million followers. We are witnessing the rise of MOOCs (massive open online courses), like the Khan Academy, and Open Access journals, like PLOS, the Public Library of Science.

UC Berkeley has just appointed a ‘wikipedian-in-residence’, Kevin Gorman. The ‘wikipedian-in-residence’ initiative started with museums, libraries and galleries, making information about artifacts and exhibits available to the broader public. This is a first for a university, however, and the goal is twofold: to extend public access to research that is usually behind paywalls or simply obscure; and to improve the writing, researching and publishing skills of students. Students are encouraged to find gaps in Wikipedia and fill them, with reference to existing research.

In between individual experts and global knowledge banks there is space for curated niche content. Robohub is one of the sites that I think can play an integral role in both shaping the quality of debate in robotics and expanding the science communication toolbox. (Yes, I’m deeply involved in the site, so am certainly biased. But the increasing number of experts who are giving their time voluntarily to our site, and the rising number of visitors, give weight to my assertions.)

Robohub had its inception in 2008 with the birth of the Robots Podcast, a biweekly feature on a range of robotics topics, now numbering more than 150 episodes. As the number of podcasts and contributors grew, the non-profit Robots Association was formed to provide an umbrella group tasked with spinning off new forms of science communication, sharing robotics research and information across the sector, across the globe and to the public.

Robohub is an online news site with high quality content, more than 140 contributors and 65,000 unique visitors per month. Content ranges from one-off stories about robotics research or business, to ongoing lecture series and micro lectures, to open debates about robotics issues, like the ‘Robotics by Invitation’ panels and the Roboethics polls. Other initiatives are in development, including report production, research video dissemination, and serving as a hub for robotics jobs, crowdfunding campaigns, research papers and conference information.

In lieu of a global robotics policy think tank, organizations like Robohub can be of service by developing a range of broad policy reports, or by providing public access to a curated selection of articles, experts and reports.

In Conclusion

“Take me to your leader?” Even if we can identify our leaders, do they know where we are going? I suggest that our current robotics policy agenda needs to be reformed and better informed. This article provides a review of some recent policy reports and considers the changing shape of 21st century scientific debate. In conclusion, I make several recommendations for change:

The creation of a global robotics policy think tank.

I believe that a global robotics policy think tank would create informed debate across all silos and all verticals – a better solution than regulation or the precautionary principle.

That the CTO of the USA and the global equivalents make robotics a key strategy discussion.

Robotics has been identified as an important global and national economic driver. The responsibility or impetus to bridge silos – which impede both policy and innovation – must come from the top.

That a US Robotics Commission be created – while robotics is an emerging field – to implement a cross-disciplinary understanding of this technological innovation and its impacts at all levels of society.

At a national rather than a global level, NASA is stepping in to bridge the gaps between technology developed under the aegis of bodies like the OSTP, NSF, DARPA etc. and the end-effector regulatory bodies, like the DOF, DOA, DOT etc. Perhaps a robotics-specific organization or division within NASA is called for.

That funding bodies make grants available for cross-disciplinary organizations engaged in creating a platform for informed debate on emerging technologies.

Cross-disciplinary organizations with a global reach are very hard to get funded, as most funding agencies restrict their contributions either geographically or by discipline. A far-reaching technology like robotics needs a far-reaching policy debate.