
cylonlover writes "Maya Cakmak, a researcher from Georgia Tech, spent the summer at Willow Garage creating a user-friendly system that teaches the PR2 robot simple tasks. The kicker is that it doesn't require any traditional programming skills whatsoever – it works by physically guiding the robot's arms while giving it verbal commands. After inviting regular people to give it a try, she found that with few instructions they were able to teach the PR2 how to retrieve medicine from a cabinet and fold a t-shirt."

You meant teaching, I guess.
The difference is quite clear if you look at the extremes; otherwise it would be like asking for the difference between white and black while observing shades of grey.
So the difference, as a teacher vs. a programmer, is your level of expectation toward your student / compiler.
There is a joke that goes, "this damn compiler is doing exactly what I tell it to instead of what I want it to."
With a student, you do expect them to get what you mean instead of what you say.
But of course, t

Isn't that still an important and useful distinction, since the vast, vast majority of people can't, shouldn't and don't fucking want to write code? And, I would argue, nor should they ever need to. Writing arbitrary invented languages, with awkward syntax and extremely-non-human thought structures, to accomplish esoteric tasks has never been an intuitive or optimal way of getting shit done.

Trust me, everybody would loooooooove for the computer to take instructions like a human, but it's not going to happen because of everything that's implicitly understood. Say you can teach this computer to fold a shirt: if you hand it an XS shirt and an XXL shirt, will it figure out that it must adapt the folding action to the size of the shirt? I bet you any 5yo would figure that out all on their own, because they've understood the basic concept of folding a shirt. Or take a simple sentence like "put the black and white pants on the top shelf" - did we mean the black pants and the white pants, or the black-and-white checkered pants?

All that happens is that some really smart people will try really hard to write code that guesses what people actually meant, but without actually knowing the context and purpose they'll fail miserably. Not to mention all the times they'd have to guess at "do what I meant, not what I said," because normal people, when facing a choice between the reasonable and the absurd, pick the reasonable one. Say you have a knife and a chicken and you ask what to do with the knife, and they answer "cut the chicken to pieces and put it in the oven" - most people will understand that you're to put the chicken in the oven, not the knife, even though you didn't ask what to do with the chicken.

Or the TL;DR version: Good luck, I don't think we'll be unemployed any time soon.

So... how did that 5yo come to "implicitly understand" so much that you never had to write code to teach her how to adapt the folding action to the size of the shirt? DNA defines how to grow a brain, not really how it will understand the world it encounters, how it will respond to that world, or the methods of thought internally used to process either of those things. Is there really any reason why artificial creatures shouldn't follow biology's lead in the whole "learning" thing?

Actually, I'd say how much the brain is "preprogrammed" by DNA is a pretty complex question. Clearly all the inputs like sight, hearing, taste, smell and touch are hooked up in some fashion with some form of processing; some basic outputs like crying, a lot of reflexes and instincts, and possibly also knowledge are considered innate; and studies on twins vs. siblings vs. half-siblings vs. adopted children have shown considerable correlation on "how it will understand the world it encounters, how it will respond to that world."

Trust me, everybody would loooooooove for the computer to take instructions like a human, but it's not going to happen because of everything that's implicitly understood. Say you can teach this computer to fold a shirt: if you hand it an XS shirt and an XXL shirt, will it figure out that it must adapt the folding action to the size of the shirt?

Yes, that's the hard problem in learning from demonstration - working back from the demonstration to a model which can be generalized to new tasks. One way to approach this is by doing the same task with variations - guide the robot through folding various different shirts, and then use a machine learning system to separate the commonalities from the differences. There's been some progress in recent years in making this work. It's not very powerful yet, but it's getting to be good enough for teaching assembly tasks.
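The "separate the commonalities from the differences" idea can be sketched very crudely: record several demonstrations as waypoint lists, resample them to a common length, and treat low-variance waypoints as invariant structure and high-variance ones as task parameters. This is a toy illustration of the principle, not any system Willow Garage actually used; the function names and the 2-D waypoint representation are my own assumptions.

```python
# Toy sketch: given several demonstrations of one task, each recorded as a
# list of (x, y) gripper waypoints, find which parts of the motion are
# invariant across demos and which parts vary (and so must be generalized).
from statistics import mean, pstdev

def resample(traj, n):
    """Linearly resample a waypoint list to exactly n points."""
    out = []
    for i in range(n):
        t = i * (len(traj) - 1) / (n - 1)
        lo = int(t)
        hi = min(lo + 1, len(traj) - 1)
        frac = t - lo
        out.append(tuple(a + frac * (b - a) for a, b in zip(traj[lo], traj[hi])))
    return out

def summarize(demos, n=20, threshold=0.05):
    """Return [(mean_waypoint, "fixed" | "parameter"), ...] per time step."""
    demos = [resample(d, n) for d in demos]
    model = []
    for i in range(n):
        xs = [d[i][0] for d in demos]
        ys = [d[i][1] for d in demos]
        spread = max(pstdev(xs), pstdev(ys))  # how much demos disagree here
        kind = "fixed" if spread < threshold else "parameter"
        model.append(((mean(xs), mean(ys)), kind))
    return model
```

A real system would align trajectories with something like dynamic time warping and fit a probabilistic model rather than thresholding a standard deviation, but the shape of the idea is the same: variation across demonstrations is the signal that tells you where the task has free parameters (like shirt size).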

That's an amazingly complicated task. If the robot can be taught to do that, that's a pretty advanced robot. I wonder how anybody can teach a robot to fold a t-shirt unless you have a load of constraints on movements. In which case, you'd be better off folding your own t-shirt.

The Nao robot from the French company Aldebaran Robotics already does that - it's last year's news, not tomorrow's, Charlie. It has been known in robotics as Gepetto programming... but then again, if it doesn't come from "some_American_academic_institution" it didn't happen. Also, the Battle for Seattle in 1999 started the current world-wide revolution... right? Or at least something along those lines.

"Hey Robot, look at you just sitting there! It's because you don't have any programming! I'm going to sharpie a penis on your case! Ooh! Don't like that? If you had some programming you could do something about it! And you'd have the ability to not like it!"

It's pretty easy to teach points by running external software that looks for a location deviation of 0.001 on an axis, and then moves the robot in that direction repeatedly until there's no longer a deviation versus where the servo thinks it should be on that axis.
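The deviation-chasing loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in for a real controller API (`RobotAxis`, `jog`, the gain value); the one number taken from the comment itself is the 0.001 deadband. While the operator pushes the arm, the servo's actual position drifts from its commanded position, and the loop keeps re-commanding the axis toward the actual position until the deviation closes.

```python
# Sketch of compliant point-teaching on a single axis: chase the externally
# imposed position until the commanded/actual deviation is within a deadband.
DEADBAND = 0.001  # smallest deviation that counts as an operator push

class RobotAxis:
    """Toy axis model standing in for one channel of a servo controller."""
    def __init__(self, commanded=0.0, actual=0.0):
        self.commanded = commanded  # where the servo thinks it should be
        self.actual = actual        # where the operator has pushed the arm

    def jog(self, delta):
        # Re-command the axis; a real controller would also servo the motor.
        self.commanded += delta

def follow(axis, gain=0.5, max_steps=10000):
    """Repeatedly jog toward the pushed position; return the taught point."""
    steps = 0
    while abs(axis.actual - axis.commanded) > DEADBAND and steps < max_steps:
        axis.jog(gain * (axis.actual - axis.commanded))
        steps += 1
    return axis.commanded
```

Running `follow` on all axes at once, and recording `commanded` whenever the arm settles, gives you a taught waypoint without the operator ever touching a pendant.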

I never got around to more than a test program to validate the idea for the Google Touchbot, but it's quite common practice in the industry to do that sort of thing with Toshiba CA-100 and similar robot controllers. All the c