Humans, Do You Speak !~+V•&T1F0()?

Software that will let people and robots communicate and plan difficult, complex tasks, such as dismantling a nuclear power plant, is under development at the University of Aberdeen, Scotland. It will translate symbols of mathematical logic into text and vice versa, so humans and robots can share two-way communication in their own respective languages.

Researchers at the university's School of Natural and Computing Sciences expect their technology to be used in several industries. These include unmanned exploration of hostile environments, such as the deep sea or the Martian surface, as well as more mundane tasks, such as maintaining and repairing railway lines.

Software that will let people and robots communicate to plan difficult and complex tasks, such as dismantling a nuclear power plant, is being developed at a Scottish university. (Source: Wikimedia Commons/Stefan Kühn)

In these situations, robots could become more autonomous if they could operate for long periods without continuous guidance from humans, as well as make their own decisions after processing data. The problem is, as it stands now, robots can make mistakes that aren't apparent to humans or to themselves, or do things that humans don't understand. In an operation as dangerous and complex as decommissioning a nuclear power plant, the results could be disastrous.

"Evidence shows there may be mistrust when there are no provisions to help a human to understand why an autonomous system has decided to perform a specific task, at a particular time, and in a certain way," said Dr. Wamberto Vasconcelos, senior lecturer in the Department of Computing Science, in a press release. "What we are creating is a new generation of autonomous systems, which are able to carry out a two-way communication with humans. The ability to converse with such systems will provide us with a novel tool to quickly understand, and if necessary correct, the actions of an automated system, increasing our confidence in, and the usefulness of, such systems."

To develop the autonomous robotics systems, the project will use Natural Language Generation (NLG), which translates complex information and data into simple text summaries. The university's School of Natural and Computing Sciences staff includes several NLG researchers.

In NLG, the information and data begin as symbols of mathematical logic. (Some representative logic symbols are shown in this article's headline, in no particular order.) They are automatically transformed into simple text, so that humans and robots can discuss and plan a set of tasks before the robot carries them out.
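To make the idea concrete, here is a toy sketch of logic-to-text translation in Python. This is not the Aberdeen team's software; the nested-tuple formula encoding and the English templates below are invented for illustration only.

```python
# Toy logic-to-text translator in the spirit of NLG.
# The formula encoding and English templates are invented for this
# sketch, not taken from the Aberdeen system.

def to_text(formula):
    """Recursively render a logic formula as a plain-English sentence."""
    op = formula[0]
    if op == "atom":                 # ("atom", "door_7_is_sealed")
        return formula[1].replace("_", " ")
    if op == "not":                  # ("not", f)
        return "it is not the case that " + to_text(formula[1])
    if op == "and":                  # ("and", f, g)
        return to_text(formula[1]) + " and " + to_text(formula[2])
    if op == "implies":              # ("implies", f, g)
        return "if " + to_text(formula[1]) + ", then " + to_text(formula[2])
    raise ValueError(f"unknown operator: {op}")

plan_step = ("implies",
             ("and", ("atom", "radiation_is_below_threshold"),
                     ("atom", "door_7_is_sealed")),
             ("atom", "the_robot_enters_chamber_2"))

print(to_text(plan_step))
# -> if radiation is below threshold and door 7 is sealed,
#    then the robot enters chamber 2
```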

Later, when the robot is engaged in a task, the human can communicate with it using a keyboard. Humans can ask the robot questions about why it's taking certain actions or making specific decisions, and request justifications for them. Humans can also provide the robot with additional information it can integrate into its plans, suggest alternatives, and point out problems with the robot's chosen course of action.
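As a rough illustration of that kind of exchange, the sketch below (all rules and sensor fields are invented, not the project's actual code) shows a robot that records the rule behind each decision so it can answer a "why" question afterward:

```python
# Minimal sketch of a robot decision log that can answer "why?".
# The rules and sensor fields are invented for illustration.

class ExplainableRobot:
    def __init__(self):
        self.log = []  # list of (action, justification, sensor_snapshot)

    def decide(self, sensors):
        """Choose an action and record the rule that justified it."""
        if sensors["radiation"] > 5.0:
            action, rule = "retreat", "radiation is above the 5.0 mSv/h limit"
        elif not sensors["door_sealed"]:
            action, rule = "seal_door", "the door must be sealed before entry"
        else:
            action, rule = "proceed", "all preconditions are satisfied"
        self.log.append((action, rule, dict(sensors)))
        return action

    def why(self):
        """Answer 'why did you do that?' for the most recent action."""
        action, rule, sensors = self.log[-1]
        return f"I chose '{action}' because {rule} (sensors: {sensors})"

robot = ExplainableRobot()
robot.decide({"radiation": 7.2, "door_sealed": True})
print(robot.why())
# -> I chose 'retreat' because radiation is above the 5.0 mSv/h limit ...
```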

Vasconcelos said his team hopes the systems they are developing will be applicable not only to robots, but also to mobile phones, "which can interact with a human in useful ways, which up until now haven't been explored."

The research is funded by a £1.1 million (US$1.7 million) grant from the UK's Engineering and Physical Sciences Research Council, a government agency that funds research and training.

I have programmed industrial robots, and the closest those robots came to "insight" was knowing that they had to slow down in order to make a turn accurately. This presents a quandary of sorts when the robot is doing something like putting a sealant along a seal surface, where a large-radius rounded corner is not what is needed. The solution was to bring the robot to a point, make a separate move from that point to the direction-change point, and then start off in the new direction. A simple work-around (sketched in code after this comment). But if the robot had been able to tell me that it needed to do something in order to change direction, it might have been easier to figure out. Instead, it was necessary to read the 4,000-page instruction manual.

The problem with attempting to give robots insight is that it may easily lead to giving the robots self-awareness, which would probably lead to robots having emotions, and that could be VERY BAD. That is because robot source code is written by programmers, and programmers are not normal people. We need to always remember that, and beware.
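For readers who haven't fought this particular battle, William's corner workaround might look something like the sketch below. The move_to call is a hypothetical stand-in, not any real controller's API:

```python
# Sketch of the corner workaround William K. describes.
# move_to() is a hypothetical controller command, not a real API.

def dispense_around_corner(move_to, pre_corner, corner, exit_dir, step=1.0):
    """Break the path at the corner so the controller cannot blend
    (round off) the direction change with a large radius."""
    # 1. Run the seam up to a point just short of the corner and stop.
    move_to(pre_corner, blend_radius=0.0)
    # 2. A separate move covers the short hop to the direction-change point.
    move_to(corner, blend_radius=0.0)
    # 3. Only then start off in the new direction.
    next_point = (corner[0] + exit_dir[0] * step,
                  corner[1] + exit_dir[1] * step)
    move_to(next_point, blend_radius=0.0)

# Stub controller that just prints its targets, to show the call pattern:
dispense_around_corner(
    lambda p, blend_radius: print("move to", p, "blend", blend_radius),
    pre_corner=(9.0, 0.0), corner=(10.0, 0.0), exit_dir=(0.0, 1.0))
```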

Absolutely, William K. That is a very good description of the issue. Even routine functions can quickly change to ones that require past experience. That is why a lot of experts can operate on "gut feel". They can't explain their correct actions because they are based on experience of similar occurrences. This simply cannot be captured in a program.

ttemple, everything you said is correct about non-autonomous robots. This research team, like several others, is developing intelligent, autonomous robots, something very different. William's comment below, "Human-Robot communications", captures this difference.

The robot is following some program that some human loaded into it. The robot can only do what the programmer told it to do.

So, the human tells the robot what to do (via the program), and then the human says, "Why are you doing that?" The answer is always the same: "Because you told me to."

I would think the obvious solution to this supposed problem is to send all sensor data to a computer that is running the same decision-making software as the robot, and watch what the program is doing, as sketched after this comment. (It will be doing what you told it to do, which may or may not be what you thought you told it to do.)

This article somehow makes it sound like the robot has a mind of its own. It doesn't. It can only do what some human told it to do, so why ask it why? The answer is, "I'm doing what you told me to do, given these sensor values."
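ttemple's monitoring suggestion amounts to "shadow execution": replay the robot's sensor stream through an identical copy of its decision code and watch the output. A minimal sketch, with a placeholder decision function invented for illustration:

```python
# Sketch of "shadow execution": run the robot's own decision code on a
# mirrored sensor stream. decide() is a placeholder for illustration.

def decide(sensors):
    """The same decision function the robot runs onboard."""
    return "retreat" if sensors["radiation"] > 5.0 else "proceed"

def shadow_monitor(sensor_stream):
    """Replay mirrored sensor frames through the robot's logic so an
    operator can see what the program is actually doing and why."""
    for t, sensors in enumerate(sensor_stream):
        print(f"t={t}: sensors={sensors} -> robot will choose "
              f"'{decide(sensors)}'")

shadow_monitor([{"radiation": 1.2}, {"radiation": 6.8}, {"radiation": 3.1}])
```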

William, thanks for clarifying. I agree; when I read the initial report, I thought, why the heck hadn't somebody already figured this out and implemented it ages ago? OTOH, I don't think the state of hardware (sensors and processors) and comms tech was advanced enough for robots that could take advantage of this "translation" program.

Yes, Ann, the robots have only their sensor information to base decisions on, and that is often not enough to make the very best choice. That was part of the basis for my comments about the value of experienced humans in the loop. Robots lack insight and understanding; they can only make the decisions that they are programmed to make, which may well be safe, but probably not optimal.

Giving the robots more data through accurate communication will certainly offer the potential for better choices, and communicating the basis for those choices to a human is a good idea that should have been put into practice about 25 years ago.

Mydesign, wireless communication with remote-controlled robots is already used in military, nautical, and rescue robots, among many other types, as we've mentioned before:
http://www.designnews.com/author.asp?section_id=1386&doc_id=247687
http://www.designnews.com/author.asp?section_id=1386&doc_id=242527
http://www.designnews.com/author.asp?section_id=1386&doc_id=246206
But that does not solve the communication problem. Most robots can only report back very limited types of data, and communication runs one way in one direction and then one way in the other; it does not allow full-duplex, two-way conversations. Plus, the robots are not intelligent enough, or autonomous enough, to perform the delicate operations of decommissioning a nuclear power plant.

Thanks, William, I think you captured the point of this research in your comments about autonomy. It is aimed at more autonomy in robots, which is why communication has to be much more detailed and accurate than it has been to date. But inadequacy of the human operator is not the issue: inadequacy and incompleteness of information about why the robot makes the decisions it makes was one of the main spurs to this research. The two-way logic-to-text and text-to-logic communication will also let humans make informed suggestions and provide more data once they understand the situation as reported by the robot.

Ann, I have another idea for disaster management: humans could interact with robots via WiFi or any other communication channel. That would allow remote-control operation from a master facility, controlling each and every wing of a nuclear station and enabling safe shutdown in case of disaster.
