Could robots become ‘aware’ of their own limitations?

April 3, 2013

MIT researchers have developed software that makes robots more “aware” of their own limitations, such as uncertainty about the whereabouts of an object or about their own location within a room.

Most successful robots today tend to be used either in fixed, carefully controlled environments, such as manufacturing plants, or for performing fairly simple tasks such as vacuuming a room.

But carrying out complicated sequences of actions in a cluttered, dynamic environment such as a home will require robots to be more aware of what they do not know, and therefore need to find out, says Leslie Pack Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT.

That’s because a robot cannot simply look around the kitchen and determine where all the containers are stored, for example, or what you would prefer to eat for dinner. To find these things out, it needs to open the cupboards and look inside, or ask a question.

Uncertainty principles

The system is built around a state-estimation component, which calculates the probability that any given object is what, or is where, the robot believes it to be. If that probability is too low, meaning the robot is not sufficiently certain that an object is the one it is looking for, it knows it needs to gather more information before taking any action.

So, for example, if the robot were trying to pick up a box of cereal from a shelf, it might decide its uncertainty about the position of the object was too high to attempt grasping it. Instead, it would first take a closer look at the object, in order to get a better idea of its exact location, Kaelbling says. “It’s thinking always about its own belief about the world, and how to change its belief, by taking actions that will either gather more information or change the state of the world.”
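The decision rule described above can be sketched in a few lines. This is a hedged illustration, not MIT's actual implementation: the threshold value, the function names, and the simulated "closer look" are all invented for the example.

```python
import random

# Illustrative sketch of the belief-threshold idea: the robot acts on
# the world only when its belief (a probability) that the target is
# where it thinks it is crosses a confidence threshold; otherwise it
# takes an information-gathering action first.

GRASP_THRESHOLD = 0.9  # assumed confidence required before grasping

def choose_action(belief: float) -> str:
    """Pick an action based on the robot's current belief."""
    if belief >= GRASP_THRESHOLD:
        return "grasp"        # confident enough to change the world
    return "look_closer"      # instead, act to change the belief

def look_closer(belief: float) -> float:
    """A closer look sharpens the estimate (simulated observation)."""
    return min(1.0, belief + random.uniform(0.1, 0.3))

# Start uncertain about the cereal box's position; observe until confident.
belief = 0.4
while choose_action(belief) == "look_closer":
    belief = look_closer(belief)
print(choose_action(belief))  # "grasp" once belief crosses the threshold
```

The key point, matching Kaelbling's description, is that both branches are actions: one changes the state of the world, the other changes the robot's belief about it.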

The system also simplifies the process of developing a strategy for performing a given task by making up its plan in stages as it goes along, using what the team calls “hierarchical planning in the now.”

“There is this idea in AI that we’re very worried about having an optimal plan, so we’re going to compute very hard for a long time, to ensure we have a complete strategy formulated before we begin execution,” Kaelbling says.

But in many cases, particularly if the environment is new to the robot, it cannot know enough about the area to make such a detailed plan in advance, she says.

Baby steps

So instead the system makes a plan for the first stage of its task and begins executing this before it has come up with a strategy for the rest of the exercise. That means that instead of one big complicated strategy, which consumes a considerable amount of computing power and time, the robot can make many smaller plans as it goes along.
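The interleaved plan-then-act loop can be sketched as below. This is a minimal toy version of the idea, assuming a made-up plate-carrying task; the helper functions and state flags are illustrative, not the researchers' code.

```python
# Sketch of "hierarchical planning in the now": rather than computing one
# complete plan up front, the robot plans only the next stage, executes
# it, and then plans again from the state that actually resulted.

def plan_next_stage(state: dict) -> list[str]:
    """Plan only far enough to make progress from the current state."""
    if not state["holding_plate"]:
        return ["walk_to_counter", "pick_up_plate"]
    if not state["table_clear"]:
        return ["clear_space_on_table"]  # need discovered mid-task
    return ["put_plate_on_table"]

def execute(action: str, state: dict) -> None:
    """Apply an action's effect to the (simulated) world state."""
    if action == "pick_up_plate":
        state["holding_plate"] = True
    elif action == "clear_space_on_table":
        state["table_clear"] = True
    elif action == "put_plate_on_table":
        state["done"] = True

state = {"holding_plate": False, "table_clear": False, "done": False}
while not state["done"]:
    for action in plan_next_stage(state):  # short plan for one stage
        execute(action, state)             # act before planning further
print(state["done"])  # True
```

Note that this toy robot exhibits exactly the "silly mistake" described below: it picks up the plate before discovering that the table still needs clearing, then recovers with a fresh short plan.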

The drawback to this process is that it can lead the robot into making silly mistakes, such as picking up a plate and moving it over to the table without realizing that it first needs to clear some room to put it down, Kaelbling says.

But such small mistakes may be a price worth paying for more capable robots, she says: “As we try to get robots to do bigger and more complicated things in more variable environments, we will have to settle for some amount of suboptimality,” Kaelbling says.

In addition to household robots, the system could also be used to build more flexible industrial devices, or in disaster relief.

Ronald Parr, an associate professor of computer science at Duke University, says much existing work on robot planning tends to be fragmented into different groups working on particular, specialized problems. In contrast, the MIT work breaks down the walls that exist between these subgroups, and uses hierarchical planning to address the computational challenges that arise when attempting to develop a more general-purpose, problem-solving system.

Comments (23)

Slightly off the wall but I read in Cory Doctorow’s Makers about using RFID tags (printed with an ink-jet printer so the cost was basically negligible) to mark one’s possessions.

When teaching/training a robot, wouldn’t it be more helpful to identify the various tools, implements, utensils, containers, etc. with individual tags that a robot could “read” via radio waves within a certain room-sized radius? It would be able to know which containers were in which cupboard, how many packages of corn-flakes were left on the premises, how many clean cups were in the cabinet (or the dishwasher), how many “objects/signals” were currently located on the table or counter. Some products could be associated with a barcode of some sort so the robot could identify and tag them when new products were acquired. Let the objects communicate with the robot instead of the robot having to do all the learning.

It seems like we’re trying to make robots learn organic recognition when we could bypass this: basically skip teaching it English and go straight to binary communication. Less need for “recognition capabilities” in the early stages, with a capacity for self-learning (all packages of flour have this identifying tag, but all packages of flour also have this UPC code, so all instances of this UPC code will be flour).
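The commenter's tag-to-UPC inference could be sketched roughly as follows. Everything here is hypothetical: the tag IDs, UPC codes, and locations are made up, and a real RFID system would involve a reader protocol rather than a dictionary.

```python
# Sketch of the idea: each cheap printed RFID tag maps to the UPC of the
# product it was stuck onto, and a UPC catalog tells the robot what that
# product is, so every instance of a known UPC is identified without any
# visual recognition.

upc_catalog = {"016000275": "corn flakes", "054100012": "flour"}

# tag ID -> (UPC on the package, last location the tag was read at)
tag_registry = {
    "tag-001": ("016000275", "pantry"),
    "tag-002": ("016000275", "counter"),
    "tag-003": ("054100012", "pantry"),
}

def inventory(product: str) -> list[str]:
    """List locations of every tagged instance of a product."""
    return [loc for upc, loc in tag_registry.values()
            if upc_catalog[upc] == product]

print(inventory("corn flakes"))  # ['pantry', 'counter']
```

As the comment says, the objects do the "communicating" here; the robot's job reduces to lookups rather than perception.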

This is the low hanging fruit and the best way to milk the consumer as a cow and as such surely will be pursued!

But imagine all the overhead for standardization (ISO committees), updates, security concerns (imagine a gun impersonating a fork via RFID hacking), the complete imbecility of a robot facing pre-RFID furniture or tools, etc. Not a good idea, IMHO.

Those are not just robot problems. I also make silly mistakes, such as picking up a plate and moving it over to the table without realizing that I first need to clear some room to put it down! MIT group, pleeeeaase… can you help me? Fix me?

The human need to eat on a plate (plus other wasteful rituals), to the artilects, is a silly mistake.
Actually, for artilects that are clever enough to radically redesign human beings’ physical structure, the human need to eat itself is a silly mistake.
Ray said that, by the 2020s, we will have cyborgizational implants and in-body nanobots to obsoletize eating. And, of course, there will also be implants and nanobots for augmenting your intelligence (not just the commonsensical intelligence in your example, but also the kind of intelligence to figure out that you should not waste your commonsensical intelligence on such extraneous, unnecessary things as eating).

felt so bad for poor blinky… but that’s just one of many blinkys, and surely others had better experiences etc., and since they’re all connected to raputa I’m sure it works out in the end, or not, meh, I just want my own blinky…

In addition to Moore’s Law, quantum computers will allow even more radical improvements on AI.
Let us hope that D-Wave Systems and their clients (Lockheed-Martin and CIA) figure out a breakthrough in QC.
That way, the Singularity can arrive much earlier.

They will catch up (with the help from human scientists) and become equal (in many ways) to us in the next 10 – 20 years.
This is a speed much greater than the speed of Natural Selection.
Then, a few years after that, they will surpass us.
That event will be known as the Singularity.

In my view, it’s not the morality of the robots which is a cause for concern…

It’s this type of all-too-human behavior towards others (not just robots) which has me doubting whether the singularity will usher in an enlightened age for all or if some will use their enhanced status to look down on others from a different vantage point.

“Treat your inferiors as you would be treated by your superiors.” — I.J. Good as quoted by Vernor Vinge.

Kurzweil’s predictions seem to be focused on nanotechnology, but won’t large-sized robots become common before nanobots do? Technology usually starts big and becomes more miniaturized over time (witness the progress of computers). Yet Kurzweil’s predictions seem to gloss over the arrival of large-sized robots and what momentous changes they could bring to society. Will we have robot slaves? Robot soldiers? Malfunctioning killer robots? Robots are mentioned surprisingly little in Kurzweil’s predictions.

I think Kurzweil might argue that most people (myself included) equate robots and androids when the two are actually quite different. Robots are already fairly commonplace in manufacturing because they can perform specific tasks with speed and precision. In the home environment, the increasingly common Roomba would fall into this category. That they are task-specific and have limited mobility sometimes obscures the fact that these really are robots.

An android would have mobility to move freely in the same environments as humans and would be expected to be able to perform tasks that humans can do. Ideally, these would be semi-autonomous with at least some “decision-making” power within the performance of their tasks. That involves much more programming and processing power to take into account the variability within the physical environment in which performing a task will take place and the variance in the number and types of objects which will be manipulated.

Nanotechnology, on the other hand, is similar to robotics in that it involves simpler tasks manipulating a limited number of things/elements. And the manipulation is being performed at a scale where the physical environment is fairly constant/predictable. Once miniaturization is perfected and cost effective, this field will be able to expand fairly quickly.

So Ray isn’t wrong in his predictions when you parse the terminology differently. The large-size robots are already here and their presence is probably expanding, just in areas where we don’t see them being used. Nanotechnology is robotics too, and like current manufacturing robotics it too will be mostly “hidden” from view, this time by scale. But nanotechnology is probably destined to become more widespread. Robots and nanotechnology are easier to foresee/predict since the usage is more limited within the environment in which they will be deployed to perform their tasks. These are prime candidates for rapid improvement as the technological singularity approaches, since doubling of processing power/efficiency will make them more cost-effective.

But I’d speculate that androids, the type of robots we think of when we hear or read the term “robots,” are probably going to take longer to develop, because the variability involved in dealing with humans will be so much more difficult to effectively program. Which is kind of a shame, because I think most of us would appreciate having android companions who can perform all the functions that humans can.

Manufacturing robots are fine, but since nanobots are expected to be able to intelligently move around in their environment, I would expect the same thing of large-size robots (macrobots?).

I like the term macrobots, since I think of androids as being more humanoid in appearance and the term “robots” has been appropriated by the makers of industrial arm equipment.

And since technology usually starts big, I’m not going to expect the age of nanotechnology until I see humanoid-type robots navigating and intelligently interacting with the environment.

I mean, do we really expect that it’s going to be easier to create nanobots that can identify and destroy invaders in each of our bodies’ trillions of cells and interface directly with the neurons in our brains before we can create a robot that can interact with common household objects?

I think we have a tendency to underestimate the challenges of nanotechnology since it is, after all, something that we can’t see, but we’re all too aware of the challenges faced by humanoid-type robots and view them more realistically, since we have to deal with the same challenges ourselves every day.

But, who knows? Humanoid-type robots may become commonplace by the end of this decade (or shortly thereafter). I just find it surprising that they’ve been left out of most of Kurzweil’s predictions.

They’re designed to be mobile within a certain environment and to seek and attach themselves to specific materials. Within that limited environment they’ll individually repeat certain tasks (find a molecule with X characteristics, attach yourself, release a payload; or gather molecules/atoms with Y characteristics, anchor the materials you find using these instructions, repeat). Enough nanobots working together can fight a disease or gather and partially assemble the stuff needed to make something more complex.

Having several different nanobots working in concert can create more complexity when they combine the lower-order elements created by the lower-level bots. But I don’t believe intelligence is present, at least not inherently, in the nanobots.

Nanobots can be built to do things, but the intelligence that supplies the instructions will have to come from elsewhere. This is why I view them as analogous to industrial robots.

And, of course, I could be fundamentally wrong about this. This is just the kind of explanation that I can comprehend and repeat. So this is more of an opinion than science. And my suggestion that nanobots lack intelligence could certainly be leveled at me as well. ;-)