
DARPA Seeking to Revolutionize Robotic Manipulation

The four-year Autonomous Robotic Manipulation, or ARM, program aims at developing both software and hardware that would enable robots to autonomously perform complex tasks with humans providing only high-level direction.

But as has happened with other DARPA initiatives, the program could have a broader impact in non-military robotics as well. To find out more about the agency’s plans, I spoke to Dr. Robert Mandelbaum, the program manager who conceived the ARM program hoping to move robotic manipulation “from an art to a science.”

In the software track, six teams will each receive a half-million-dollar two-armed robot [photo above, and still nameless], to be used as a standard platform that they will have to program.

In the hardware track, three teams will design and build new multi-fingered hands, which will have to be both robust and low cost.

The program also has an outreach component, in which DARPA is essentially inviting anyone to participate by developing software and testing it out on one of its half-million-dollar robots. The ARM program is still in its beginnings, but Dr. Mandelbaum, who has just left DARPA after concluding his four-year stint as program manager, expects a big impact.

Only those that manage to win an NSF grant or some chunk of money from somewhere are able to build these systems and maybe support a grad student for a couple of years.

What is needed, I believe, and DARPA is exactly the right agency to do this, is a major attack on manipulation to galvanize and move the field forward.

EG: Is the goal to develop highly specialized robots skilled at specific manipulation tasks or more adaptable robots that can tackle a broad range of tasks?

And by autonomously what I mean is that instead of having to control every joint of a robot, which is the current approach, you would be able to communicate with the robot by saying things like, “Pick up the IED [improvised explosive device], cut the blue wire, bring the rest back here for forensic testing.” And it would be able to do the hand-eye coordination and other motions itself to get the task done.

The reality is that, as we build robots to assist humans, we’re not going to redesign human spaces and we’re not going to create a whole new set of tools that robots can use.

For the software track, what we’re doing is we’ve created a standard robotic manipulation platform for participants to use.

Or another example, manipulation without vision: reach inside the gym bag that you just opened and find the gun that I hid inside.

Let’s say for example the robot has to grab a coffee mug by the handle but the robot can’t see the handle because it’s behind the mug.

At this point the robot is a fixed platform, but in the last phase of the program we’ll look into making it mobile, which would extend the workspace of the robot.

This explains in part why car companies don’t change their models very often—because they’d have to reprogram all their robots!

The companies can amortize the cost of the robots easily and can afford to support expensive robots and calibration systems.

There’s only one problem: If I have to send a robot that costs me half a million dollars and it costs the enemy like 10 dollars to put together a fertilizer bomb, well, they’ve won already.

We should be able to do the same job almost with one of those 5 dollar plastic hands that you can buy in toy shops—I’m exaggerating a little bit here.

But the basic concept I’m not exaggerating: I’d like to be able to have a hand that costs in the tens or maybe hundreds of dollars, not thousands, and that can go and defuse that IED.

So you take the cost of producing hardware, which is hard to bring down, sometimes even if you have quantity, and replace it with clever software.

Imagine you have a very sloppy hand that isn’t repeatable: That is, if you tell it to go to the same place 10 times in a row it doesn’t really go to the exact same point.
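The software answer to such sloppiness is closed-loop sensing: keep measuring where the hand actually ended up and command a correction. Here is a minimal sketch of that idea (the function name, gain, and noise figures are my own illustrative choices, not from the program): a proportional feedback loop that converges on a target even though every individual command lands with random error.

```python
# Hypothetical sketch: cheap, non-repeatable actuation plus a sensing
# feedback loop can still reach a target accurately. Each command lands
# with bounded random error ("slop"), but repeatedly correcting the
# *sensed* position drives the error down anyway.
import random

def servo_to(target: float, steps: int = 50, slop: float = 0.2,
             gain: float = 0.8, seed: int = 0) -> float:
    """Return the final sensed position after closed-loop correction."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    pos = 0.0
    for _ in range(steps):
        command = gain * (target - pos)            # correct the sensed error
        pos += command + rng.uniform(-slop, slop)  # the actuator is sloppy
    return pos

final = servo_to(5.0)
print(abs(final - 5.0) < 0.5)  # True: close to target despite sloppy hardware
```

Each step shrinks the remaining error by the gain while adding at most `slop` of new error, so the steady-state error stays within a fraction of the slop rather than accumulating.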

And an interesting thing is that these tasks are the same ones we’re expecting the software guys to do autonomously—with their software running on the standard robotic platform.

For the hardware guys, Phase 2 consists of making five or six of the hands they designed, and we’ll use them to replace the hands that we currently have on our standard platform.

If we compare BigDog to an autonomous car, which had a big push in the ’90s: to control an autonomous car you deal with basically two degrees of freedom. You control the steering and you control the gas or the brakes; that is, you have a linear and an angular acceleration.

When you want to control BigDog and make sure it doesn’t fall over even when it slips on black ice and things like that, you’re controlling four legs, each with four actuators, at the same time.

Control in a 16-dimensional space, as any engineer would know, is not just a little bit harder than in a two-dimensional space: it’s exponentially harder.
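A toy sketch (mine, not from the article) makes that scaling concrete: if each actuator is discretized into, say, 10 candidate positions, a naive planner’s search space is 10 raised to the number of degrees of freedom.

```python
# Toy illustration of why high-DOF control is exponentially harder:
# discretizing each actuator into k candidate positions gives a
# brute-force search space of k**dof joint configurations.

def configurations(dof: int, positions_per_joint: int = 10) -> int:
    """Size of a brute-force search space over all joint settings."""
    return positions_per_joint ** dof

print(configurations(2))   # car-like system: 100 configurations
print(configurations(16))  # BigDog's 16 actuators: 10**16 configurations
```

Going from 2 to 16 degrees of freedom multiplies the naive search space by a factor of 10**14, which is why the extra dimensions are not “a little” harder.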

The piece of paper itself has hundreds of degrees of freedom and in many cases they are not predictable because you don’t know ahead of time what the mass is, or the friction, or how bendable the object is, or how it’s going to react to the forces that you apply to it.

When you put the whole system together, meaning the hand and the object that it’s manipulating, you’re in the hundreds of degrees of freedom.

And another reason why this is so different from, say, autonomous driving is, when you do navigation you’re trying to not interact with the world—you’re trying to avoid a crash.

So my observation was, in order to move us from the current state of the art, which is a small number of researchers and a small number of very expensive systems, we have to get rid of that barrier to entry.

It’s the fact that if you do an experiment, write a paper, and publish the results, then I can read your paper and verify your results by doing the same experiment myself.

The community went through a transformation from art into science. In the early 1990s, people would self-evaluate their results, and there was no way to verify whether those results were really good or not; what changed was the adoption of shared benchmarks.

If you wanted to prove that you had a great new stereo processing algorithm, you had to evaluate it against the same set of images that previous researchers had used.
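That practice can be sketched in a few lines (the data and function name here are made up for illustration): every algorithm is scored against the same shared ground truth, so error figures published by different groups are directly comparable.

```python
# Hypothetical sketch of benchmark-based evaluation: all algorithms are
# scored on the SAME fixed ground truth, making their results comparable.

def mean_abs_error(predicted, ground_truth):
    """Average absolute error against the shared ground truth."""
    return sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(ground_truth)

# Shared benchmark: the same ground-truth values for everyone (invented here).
ground_truth = [10, 12, 11, 9, 14]

algo_a = [10, 13, 11, 8, 14]   # one group's output on the benchmark
algo_b = [12, 12, 13, 9, 10]   # another group's output on the same images

print(mean_abs_error(algo_a, ground_truth))  # 0.4
print(mean_abs_error(algo_b, ground_truth))  # 1.6
```

Because both scores come from the same inputs, the comparison is meaningful in a way that two self-reported numbers on different private datasets never could be.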

When DARPA sponsored the Grand Challenge and the Urban Challenge [autonomous driving competitions], one thing we noticed was the huge number of people who came out of the woodwork to participate.

Autonomous Robotic Manipulation (ARM) (Archived)

The Autonomous Robotic Manipulation (ARM) program is creating manipulators with a high degree of autonomy capable of serving multiple military purposes across a wide variety of application domains.

The outreach track engages a larger community by placing robotic systems in public museums (presently the National Air and Space Museum) and by encouraging unfunded participants to develop autonomy algorithms and test them through the web on a real system.
