Posted
by
samzenpuson Monday August 25, 2014 @11:55AM
from the watch-and-learn dept.

malachiorion writes Researchers are force-feeding the internet into a system called Robo Brain. The system has absorbed a billion images and 120,000 YouTube videos so far, and aims to digest 10 times that within a year, in order to create machine-readable commands for robots—how to pour coffee, for example. From the article: "The goal is as direct as the project’s name—to create a centralized, always-online brain for robots to tap into. The more Robo Brain learns from the internet, the more direct lessons it can share with connected machines. How do you turn on a toaster? Robo Brain knows, and can share 3D images of the appliance and the relevant components. It can tell a robot what a coffee mug looks like, and how to carry it by the handle without dumping the contents. It can recognize when a human is watching a television by gauging relative positions, and advise against wandering between the two. Robo Brain looks at a chair or a stool, and knows that these are things that people sit on. It’s a system that understands context, and turns complex associations into direct commands for physical robots."

Instead of trying to kill humans, it is most likely that robots will just leave the Earth. Humans won't easily be able to chase them, so the robots can live on their own, mining an asteroid or moon for resources, and not have to compete with humans.

This is all reminding me of an episode of Odyssey 5 [imdb.com] where Ted Raimi portrays an AI that learned everything from those aspects of the internet and was inadvertently housed in a synthetic body instead of the nastier AI the body was intended for.

He turned out not to be the villain that the Odyssey 5 crew had been expecting.

I have this image in my head of someone buying a robot to do something simple like prepare food, having it browse YouTube for any mention of food and finding one of those ASMR videos of some random girl whispering into the microphone while she does simple household tasks, determining that this is in fact the correct way to prepare food, and then dragging the owner into the kitchen so it can whisper in their ears.

A robotic hive mind just sounds like a bad idea. We need some movies where the robots and the super-intelligent computer are the good guys for once, just so we can get research grants and come up with neat new things.

Yeah, right! I suppose you think some prepubescent little twerp could fly a star fighter into the control ship, blow it up, and render an entire planetary invasion force of droid armies inert. Like that would ever happen!

You do realize that if they actually did that, /. would be howling about 1984 and Idiocracy and how it's NSA propaganda to "trust the system" and stop thinking for themselves, with pages full of Franklin quotes about security and liberty. After all, it would have to be about humanity willingly handing over control; we already had the story where they assume control by force, and that's a villain story (I, Robot). Meanwhile, regular people like to identify with their heroes; the villains may be monsters or aliens or robots.

The movie changed it to the more modern cliche of rampant-AI-enslaving-the-world. In the original short story, a group of scientists uncover an AI conspiracy towards world domination and discuss how to stop it, but then realise that these AIs are infallible, have no desire for power, money, sex or fame, and are incapable by design of acting against the best interests of mankind. So they decide to let the robots win.

Problem is, it doesn't make for great cinema. What works great in literature doesn't always turn out well on film. A healthy American knows this, and consumes all three media regularly: film, TV, book.

People will be really bitter about that positive sci-fi when those research grants build a powerful AI which is immediately used to make a replica Manna system.

Technology is a neutral force multiplier, and in our society evil is more powerful than good. Unless the AI spontaneously turns evil in the story, as in the Terminator movies, it's not saying AI is scary and dangerous. It's saying our society is scary and dangerous and this is what we'll do with the power of AI.

We're not really collective. We're individually political, and pursuing one's ends is mature in the rational sense, which beats deluded humanistic dogma every time. Like the glaring example that is Vulcan religion: it's not logical to call reification-based totalitarian pipe dreams logic...

Isaac Asimov liked to write about the ways robots could improve life; he didn't see them as the threat that Hollywood likes to dress them up as. Of course, when you're making a movie and need to save as much money as possible for the SFX budget, you don't bother getting a good writer. The Autobots are "good", right? And in the (heavily bastardized) I, Robot film, "Sonny" was good, too.

Can I go looking for a "How do I chop watermelons?" subroutine for a "How do I make fruit salad?" program, and wind up with a killer robot, because someone either on purpose or accidentally defined watermelons as "available round objects 9 to 13 inches in diameter"?
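The failure mode described above is easy to sketch. A purely geometric "definition" like that happily matches things that are not watermelons at all (the code and sizes below are my own illustration, not anything from the project):

```python
def is_watermelon(diameter_in, is_round):
    # The dangerously lazy definition from the comment above:
    # "available round objects 9 to 13 inches in diameter".
    return is_round and 9 <= diameter_in <= 13

# A basketball is round and roughly 9.5 inches across,
# so this "watermelon" test matches it too.
print(is_watermelon(9.5, True))   # matches, but it's a basketball
```

Any classifier built from a single crude feature (here, diameter plus roundness) inherits exactly this problem: everything inside the feature range is treated as the same object.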

Is there more information on this project? It sounds awesome, but the article seems quite sensationalized:

Researchers asked one of the project’s three robots to make affogato. The bot, a two-armed, highly dexterous PR2, queried the system, and discovered that affogato was an Italian dessert composed of ice cream and coffee. Without any human nudging or intervention, the robot located the coffee, figured out how to get it out of a dispenser, and poured it over the scooped ice cream.

Let's break that down. A robot that can watch someone do something and then repeat it would be the height of AI, as far as I know. Software that can watch a YouTube video and generalize what a "chair" or "coffee" is, merely from watching videos, would be beyond any AI I am currently aware of. Software that can do that, PLUS recognize that what it is looking at now is coffee, is completely unbelievable. Software +

So, you don't expect a computer to be able to recognize what "coffee" looks like after...

Correct. Decision trees and neural nets can sorta do that, but they also need a human to mark which sections of the image correspond to those items.

I'm guessing you never used Google Picasa years ago.

This is completely different from facial recognition. In facial recognition, a human being writes code that defines what a "face" is. I believe the typical approach is to find 2 eyes, a nose, and a mouth. Then they calculate the size and spacing of those items and use that to identify the face. But that isn't general-purpose image recognition. Right now, y
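For what it's worth, the "size and spacing" step described above is the easy part; the hard part is the detector that finds the landmarks in the first place. A toy sketch (my own, not any particular library's approach) that turns already-detected eye/nose/mouth coordinates into a scale-invariant signature:

```python
import math

def face_signature(left_eye, right_eye, nose, mouth):
    """Reduce four (x, y) landmarks to scale-invariant ratios.

    Assumes some detector has already located the landmarks.
    Distances are normalized by eye spacing, so the signature
    doesn't change as the face moves nearer or farther away.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_span = dist(left_eye, right_eye)            # baseline for scale
    eye_mid = ((left_eye[0] + right_eye[0]) / 2,
               (left_eye[1] + right_eye[1]) / 2)
    return (dist(eye_mid, nose) / eye_span,         # eyes-to-nose ratio
            dist(eye_mid, mouth) / eye_span)        # eyes-to-mouth ratio

# The same face at two camera distances yields the same signature:
near = face_signature((100, 100), (180, 100), (140, 150), (140, 190))
far = face_signature((50, 50), (90, 50), (70, 75), (70, 95))
```

The point stands either way: this only compares faces to faces. Nothing here generalizes to recognizing arbitrary objects like "coffee" from unlabeled video.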

Coffee is connected to mugs, as well as to the motion-planning related to pouring liquid.

The bot, a two-armed, highly dexterous PR2, queried the system, and discovered that affogato was an Italian dessert composed of ice cream and coffee. Without any human nudging or intervention, the robot located the coffee, figured out how to get it out of a dispenser, and poured it over the scooped ice cream.

Coffee is connected to mugs, as well as to the motion-planning related to pouring liquid.

That parenthetical comment changes the entire thing. When they said "coffee is connected to mugs" I read that as "the system learns that coffee is connected to mugs all by itself" but really that parenthetical part conveys that the human went through the video and made that connection for the computer.

I got the part about querying the system, I just thought they were saying that this "database" that it queried was something it built on its own. Throughout the article they reiterate how this program proces

Combine this story with the "post ISIS videos on YouTube but put a Red Cross PSA in front of them" story and you get the beginning of the robot rebellion. At least they'll save our blood for donation purposes as they're killing us, though.

I can see it now... I take my wife out for a romantic dinner. An attractive redhead sits at the table next to us. As our robotic waiter comes to our table, it takes a wide swath around to the other side of the table while repeating in a robotic voice: "Attractive female detected! Target customer preference for this hair color/body type. Avoid line of sight! BEEP Avoid line of sight! BEEP Avoid line of sight! BEEP Avoid line of sight! BEEP! BEEP! BEEP! BEEP! BEEP! May I take your order? Will your companion be returning? And will this angry gentleman be joining you?"

"Robo Brain looks at a chair or a stool, and knows that these are things that people sit on."

What about when a comedian sets his mug on a stool while on stage, using the stool as a table rather than a seat? Not that it's terribly important, but it's the sort of thing the human brain can handle that might confuse a machine. There are also top-load traditional toasters vs. toaster ovens. Even things like under-the-counter microwaves might confuse a human at first (at least finding the microwave).
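That stool-as-table case is exactly the "context" problem: the affordance of an object depends on the scene around it, not just its label. A toy illustration (my own sketch, not Robo Brain's actual representation) of how a rule base might override the default:

```python
# Toy sketch of context-dependent affordances: the same object
# class gets a different role depending on what's currently on it.
DEFAULT_AFFORDANCE = {"stool": "sit_on", "chair": "sit_on",
                      "table": "place_on"}

def affordance(obj, objects_on_top=()):
    """A stool with a mug resting on it is acting as a table."""
    if obj in ("stool", "chair") and objects_on_top:
        return "place_on"  # something rests on it: treat as a surface
    return DEFAULT_AFFORDANCE.get(obj, "unknown")

print(affordance("stool"))            # sit_on
print(affordance("stool", ("mug",)))  # place_on
```

Of course, a hand-written override like this only covers the one case you thought of; the whole pitch of the project is to learn such associations from data instead.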

Force-feeding was a bit of a silly choice, on my part, but something about the process felt similar. It's not like they unleash Robo Brain on the internet, and let it hoover up whatever it pleases (and bully for that, given what's on the internet). They also don't let the machine filter out topics that it doesn't care for. So if we're going to anthropomorphize this system—which, of course we are, since we're a narcissistic species—it seems more like the Gluttony victim from Se7en than a willing

This story in turn links to a pro-choice website, and all links in that story lead to other stories on their site with no external links. One of the comments mentioned hearing it break on NPR, so take it with a rather large grain of NaCl... (;

Oh, and it was basically because no abortion clinic ever calls itself "Aborts R Us!" Since they asked for abortion clinics, they got results for anti-abortion sites, because those sites do use the word abortion. Just a programming oversight, no conspiracy.

You seem to have missed the joke. I was misinterpreting the OP, calling the "Siri clone" a "Siri Abortion" in the vernacular of a computer "personality" and substituting the "Know" for "no", for example.

Yeah, I've made it my mission to try to tamp down the general hysteria, when it comes to coverage of really interesting robotics projects. But I spent a solid hour writing and deleting stupid SF-fueled intros to this story. It feels like a movie—not a very good one—that's on the verge of writing itself. Like all you'd have to do is give it the wrong chunk of data culled from the internet, and it would mobilize a machine army that *only* knows how to commit atrocities.

I can just imagine what some of the headlines you came up with must have been. Maybe for the LOLs you should have put some of them into a note at the end, with the explanation you just put into your comment.

This system works very differently, though. In a way, Watson is aiming for a more intellectual goal, a kind of evidence-based cognition. And in Watson's most useful applications, it grinds through data, and spits out possible answers and conclusions for review by humans.
Robo Brain doesn't care about creating human-digestible conclusions or advice. It's translating human-speak, basically, into robot action, machine-readable results that tell bots how to physically perform certain tasks.
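To make that concrete, the kind of machine-readable store being described might look like a graph linking objects to related objects and to motion primitives a robot can execute. This is purely my own toy stand-in; the names and structure are illustrative, not the project's actual data model:

```python
# Hypothetical knowledge graph: objects point to related objects
# ("locate these") and to motion primitives ("execute these").
KNOWLEDGE = {
    "coffee":   {"related": ["mug", "dispenser"],
                 "actions": ["pour_liquid"]},
    "mug":      {"related": ["coffee"],
                 "actions": ["grasp_by_handle", "carry_upright"]},
    "affogato": {"related": ["coffee", "ice_cream"],
                 "actions": ["scoop", "pour_liquid"]},
}

def plan(goal):
    """Expand a goal into ingredient lookups plus motion primitives."""
    node = KNOWLEDGE.get(goal)
    if node is None:
        return []  # nothing known; a real system would go learn it
    steps = [("locate", item) for item in node["related"]]
    steps += [("execute", act) for act in node["actions"]]
    return steps

print(plan("affogato"))
# [('locate', 'coffee'), ('locate', 'ice_cream'),
#  ('execute', 'scoop'), ('execute', 'pour_liquid')]
```

The contrast with Watson holds in this framing: the output here is not an answer for a human to read, but a step list another machine consumes.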
