Guest Post: Robotics, Part 1 – Where Are We Today?

In the field of robotics, we have no Newton. No one who, assisted by a falling fruit, cried out, “Eureka, I have it, and it is called a… I know… a robot.”

No, the concept of a robot first occurred to some unknown person in some far distant time, as he or she, engaged in a grinding, repetitive task, dreamed of a mechanical contrivance that could do some of the dirty work. We know that moment was more than five hundred years ago, because we have sketchbooks from the incomparable Leonardo da Vinci, dated 1495, that contain detailed plans for one.

There is no hard evidence that Leonardo actually went on to build his robot. But such was his genius that, five centuries later, engineer Mark Rosheim was able to create a fully functional version faithful to the technology of the time. It works.

Leonardo believed that the human body is basically machine-like in structure, and that he could duplicate its intricate movements through the use of levers and pulleys. A rather simplistic view, of course. Nevertheless, da Vinci was able to use his remarkable knowledge of anatomy and kinetics to design a robot with the capability to walk, stand, and sit, raise its arms, move its head from side to side, and open and close its jaw.

While Leonardo’s creation is often considered the initial introduction of a robot in human form, many believe the honor should go instead to Al-Jazari, the engineering genius of the Islamic world during the Middle Ages. The Kurdish Al-Jazari designed, and apparently built in 1206, a boat that floated on a lake to entertain guests at royal drinking parties. Within the boat were four musical automata – two drummers, a harpist, and a flutist. They appear to have been somewhat programmable, although they lacked the sophisticated articulations of da Vinci’s design.

And some researchers want to push the invention of robots even further back into antiquity. But whatever the robot’s provenance turns out to be, we can date the appearance of the world’s first modern humanoid robot to 1939. Elektro, built by Westinghouse, was placed on exhibit at the 1939 World’s Fair in New York. The robot, standing a robust 6’ 9” tall, could smoke, blow up balloons, and speak more than 700 words. Elektro was still around in 1960, when it landed the role of Thinko in the movie Sex Kittens Go to College. As if that weren’t humiliating enough, it was then decapitated, with its head given to a retiring Westinghouse employee and its body sold for scrap.

Since Elektro, well, electrified audiences at the World’s Fair, robots have entrenched themselves in the popular psyche. It’s been a real love/hate relationship, too. When they’re not depicted as the kindly, human-friendly machines typified by C-3PO and R2-D2 in Star Wars, they tend to be cast as pure evil, like the malevolent metallic monster lurking beneath the artificial skin of the Terminator.

But so much for fiction. Even as the robot was conjuring up all sorts of creatures in movie screenwriters’ minds, the science of robotics was progressing out in the real world. So where are we today? Short of the Terminator, but well along in other ways. Let’s have a look.

First, however, we must ask, just what is a robot, anyway? Unfortunately, there’s no consensus answer, although we generally tend to think of clanking metal things constructed more or less along the lines of a human. And most people would probably say they know one when they see one.

The primary definition, according to Webster’s Collegiate Dictionary, is “a machine that looks like a human being and performs various complex acts (as walking or talking) of a human being.” Many would expand that to include any machine that does one or more of the following: move around, operate a mechanical limb, sense and manipulate its environment, and exhibit intelligent behavior – especially behavior that mimics humans or other animals. And it’s probably useful to add that a robot can be either physically or mentally anthropomorphic, if not both.

Designing a robot is a matter of thinking from brain to bones, so to speak. We’ve gotten very good at the brain part. Given the giant leaps forward in computer software development, we can write programs to do pretty much anything we want, provided we have a device that can execute the instructions.

That’s where the external hardware comes in, and it is a different story. Spectacular CGI effects have become so commonplace in movies, we might be led to think that engineers are constructing the creatures from Avatar in their Bay Area laboratories. They aren’t. Physical robotics hasn’t remotely kept up with advances in CGI. We are, however, slowly closing the gap.

Let’s start with the brain. Legendary computer scientist Alan Turing proposed, in his 1950 paper “Computing Machinery and Intelligence,” a simple test to determine whether a computer had achieved true artificial intelligence. Let a person converse with both a machine and another human, each trying to appear human, and see if the person can tell which respondent is the machine. To ensure the test wasn’t hampered by the limitations of the technology of the time, he proposed that both conversations happen not in person, and not by telephone with a synthesized computer voice, but by simple text chat.
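Turing’s setup can be sketched as a tiny text-chat protocol: a judge exchanges typed messages with two hidden respondents, then guesses which one is the machine. The toy Python below is purely illustrative; the canned-reply “machine,” the function names, and the judge interface are all my own inventions, not anything from Turing’s paper.

```python
import random

def machine_reply(message):
    # Scripted "machine" respondent -- a purely illustrative stand-in.
    canned = {
        "hello": "Hi there. How are you today?",
        "are you a machine": "What a strange question. Of course not.",
    }
    return canned.get(message.lower().rstrip("?!. "), "Interesting. Tell me more.")

def imitation_game(questions, human_reply, judge_guess, seed=None):
    """Run one round of Turing's game: the judge chats (text only) with
    respondents 'A' and 'B', one human and one machine, then guesses
    which label hides the machine. Returns True if the guess is right."""
    rng = random.Random(seed)
    labels = ["A", "B"]
    rng.shuffle(labels)  # the judge must not know the assignment
    players = {labels[0]: machine_reply, labels[1]: human_reply}
    transcript = {
        label: [(q, players[label](q)) for q in questions] for label in "AB"
    }
    guess = judge_guess(transcript)  # judge returns "A" or "B"
    return guess == labels[0]
```

A machine “passes” when, over many rounds, the judge can do no better than chance.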

As anyone who’s attempted to use one of those interactive voice telephone agents now popular with banks, airlines, and other soul-draining mega-agencies can attest, we are far, far away from any system passing “The Turing Test.”

Researchers around the world are hard at work on trying to solve the AI problem, with mixed success. But in the meantime a whole other band of engineers is instead working on creating robots that tackle the challenge Turing chose to conveniently ignore: Can we make a machine that not only communicates via text, but also appears to act and think like a human, with all the nuance of non-verbal communication, the subtlety of speech, and the myriad other factors that (for now) separate us from the machines?

This fairly realistic representation of Einstein’s late-life head was built by Hanson Robotics in Texas and programmed by scientists at the University of California, San Diego.

That they were able to get it (him?) to move and exhibit recognizable human facial expressions is remarkable enough. But the head also contains a camera, linked to software that can interpret what it sees. Thus Einstein can learn who you are, including guessing your age and gender, respond to your audio cues, and mimic your own expressions and simple gestures in a sort of faux-empathy. Give him a wireless connection to a decent computer and he could probably beat any of us at chess, not unlike his real-life inspiration. (Well, not quite yet, but he may be getting there.)

For another look down “Creepy Lane,” here’s a full-size, modestly interactive Japanese robot from the 2009 Robotec Expo in Brazil.

As well as demonstrating our present capabilities, these machines also dramatically illustrate our limitations. Giving a robot a brain – the “intelligence” to move, speak, do some rudimentary learning, play chess – is the easy part.

Faithfully recreating even a small part of the human body, that’s tough. Our structure does not readily lend itself to simulation. Muscles, bones, joints, tendons, and ligaments are not a series of servos and actuators but a collection of tens of trillions of individual biological machines, all connected in highly intricate ways that allow us to laugh and cry, wrestle with our children, kick a soccer ball, and assume the lotus position. Just imagine how much is involved in the elementary act of stair climbing, as ASIMO the robot discovered to his chagrin (if “he” could feel embarrassment).

Speaking of soccer, getting robots to act cooperatively, in addition to kicking a ball, might seem like an insurmountable problem given the state of robotics today. But researchers are working on it. Each year, there is a RoboCup soccer match, and each year the participants get better. The goal (so to speak) is to field a robotic team by 2050 that can defeat the human World Cup champions. For a glimpse of where we stand with that, check out some of the “action” from the 2010 RoboCup.

Neither Einstein’s head, nor the Japanese spokesrobot, nor Asimo, nor our mechanical soccer stars are likely to be confused with real life.

You see, now that the private sector has been trashed and the public sector is being thrown under the bus, anyone who still has a job will be replaced by a robot, so no one will have to work (or eat).

Sometime in the next five to nine years, robotics will start to impact the broad economy as information technology did in the nineties, and will drive the next eighteen-year bull run. Let’s hope the country ends up in a better place as well.

The Japanese will never allow significant immigration, so their entire future depends on robot slaves and automation. It might be a very surreal place in 100 years: a huge number of robots and a small population of robot owners and programmers.

Correct, some of the most racially divisive people on the planet, totally convinced of their superiority (Uchi-Soto). If you delve deeper, you will find most of these robots are not sex toys; most are being developed for elder care, and secondarily for industrial production and sex.

"No, the concept of a robot first occurred to some unknown person in some far distant time, as he or she, engaged in a grinding, repetitive task, dreamed of a mechanical contrivance that could do some of the dirty work."

The idea for robots is at least as old as the Greeks. In mythology Hephaestus fashioned robots to help him because he was crippled.

Tim Geithner has always appeared somewhat robotic to me. Has anyone ever seen his birth certificate or had him x-rayed?

The Turing test is all well and good, but the big money will be made when robots become sexy and can fool a partner into believing they are more than inflatable party dolls: virtually identical to a human, but without the complaining or whining.

"In about 60 AD, a Greek engineer called Hero constructed a three-wheeled cart that could carry a group of automata to the front of a stage where they would perform for an audience. Power came from a falling weight that pulled on string wrapped round the cart’s drive axle, and Sharkey reckons this string-based control mechanism is exactly equivalent to a modern programming language."

Ahhh, automation and robotics... Finally a topic for me to take a definitive position on. Today we can consider welding robots (or manipulator arms) a big achievement; all the “sci-fi” Honda ASIMOs or Sony QRIOs have little to do with reality, and in my opinion are just marketing stunts. I can’t say how things will look in 50 years, but if anyone is expecting a Japanese robot maid making them great coffee and giving wonderful blowjobs in 10 years, they will probably end up with a pair of ruined Dockers and a lacerated (if not amputated) penis...

From what I know, robots at the moment are wonderful at repetitive tasks that take place in highly controlled environments... and that’s all... Sure, we have the science of neural network programming and self-learning developing fast, but a lot of water will flow under my bridge before it is REALLY (not just experimentally) put to use in conjunction with a mechanical design that can keep up with any kind of working environment that humans handle today.

So if it comes down to the economy, robots won’t kick Al Bundy out of the shoe store anytime soon.

I'm currently working on a new smart-ass grid. It puts wireless receivers in all IBM employees. When they decide to create something like the tabulating machine, which was the key technical machinery that made Social Security possible, you can express your displeasure by pushing a button that will jam the employees in the nuts with a screwdriver. Shock them with something similar to electroshock treatment, except there will be a putrid smell of burnt flesh. Or simply put them in convulsive shocks till they fall in a pit of robotic fucking machines that rape every orifice they can find with special 80-grit sandpaper-covered dildos.

I think my wake-up moment came with a drunk surgeon who wanted to know what on earth I was doing with a robotics engineer with AI... don't they make real men anymore? I have seen their world. They are not even close. Besides, with third-world labor costs, what's the impetus? Who the heck will pay for a machine that will vacuum and clean and answer the door, when the labor is much more affordable with humans? So, it can paint cars cheap? Whoa! Hold the presses! That is a repetitive job; do you realize how many sensors and how much programming go into each move? What if you were to change it? What about a completely new endeavor? Do we realize how expensive and labor-intensive it is? Push it out another couple of decades. It ain't happening till then.

Computers should be seen as robots; robots are machines which do what the people who control them want them to do, e.g. paint cars or tally groceries. We are rapidly devolving into two groups of vastly different sizes: the very large group which, figuratively, stands in front of the machine and interacts with it, and the very small group who, again figuratively, stand behind and control the machines which control the masses. When the machines are programmed to react to human emotion, the masses will be completely enslaved; if humans can gain some benefit, no matter how small, by displaying anger to a machine, then the masses are lost. Anger is the instinctual way to deal with problems, so the masses display their instincts. The new rulers of the world will be citizens of the post-instinctual world and will use the masses' instincts to control them, somewhat like they do now with mass media but with much greater efficiency.

The field of robotics is so important, and changing so quickly, that this article hardly does it justice, I'm sad to say. It certainly doesn't live up to its title. I'm hoping that the next parts in the series will be better.

What was the point of this article?! I was expecting some HFT rabbit to be pulled out of the hat. There is nothing in here that most who know AI don't already know. I feel sorry for Turing, to be cited in such a primitive primer on "robots".

If you clicked on this article, and actually wanted to know about the state of robotics today, I've copied a better article below, a Q&A from the journal Proceedings of the National Academy of Sciences.

QnAs with Terrence J. Sejnowski

Prashant Nair, Science Writer

For most of his career, Terrence Sejnowski, a professor of computational neuroscience at the Salk Institute for Biological Studies and a Howard Hughes Medical Institute investigator, has peered at the brain with pin-sharp precision. By using simulations to make sense of experimental data, Sejnowski has helped link biophysical processes in the brain to human behavior. His research has revealed insights into a raft of phenomena from vision to sleep to brain disorders. They could lead to practical benefits: Bestriding the fields of computational biology, neuroscience, psychology, and education, Sejnowski and other researchers hope to usher the age of machine learning into the real world. Sejnowski tells PNAS how using machines to model and emulate human behavior could make a difference in our lives.

PNAS: How did you become interested in machine learning?

Sejnowski: One of the most challenging questions in neuroscience is how social behaviors emerge from brain processes underlying sensation, emotions, language, memory, and cognition. When we first set out to address this challenge, it occurred to us that one way by which physicists figured out phenomena like gravity and aerodynamics was by building devices that exploited those phenomena. So, we needed to build machines that work like the brain by using software and computer chips that would form circuits capable of interacting with humans through social signals. In collaboration with Paul Ekman, an expert on reading facial expressions, we set the goal of making machines capable of interpreting facial expressions so that, someday, social robots could communicate with humans on their own terms.

PNAS: And where would we use these social robots?

Sejnowski: Javier Movellan, a computational neuroscientist at the University of California, San Diego’s Institute for Neural Computation, has built a social robot he calls Rubi that interacts with toddlers who are just beginning to learn language. One of the challenges for preschool teachers is classroom control; the kids are running all over the place, so it’s difficult for the lone teacher to help kids focus. Rubi engaged the kids, encouraged dialogue, and facilitated learning. So, the idea is to use robots as teaching assistants. But it’s still early days.

PNAS: How do you make robots emulate human social learning?

Sejnowski: The first step is to get the child to accept the robot as a learning partner rather than as a toy. By using mathematical theory and demonstration, Javier showed that the most crucial variable for interacting with humans is response time. If a robot does not respond to a child’s question within a certain time window, the child loses interest. Also, a child will look at an object to which a teacher is pointing, so robots should be capable of shared attention, another hallmark of human learning. Robots must also be capable of other important features of human learning, such as empathy and imitation, which come from recognizing human emotions. But again, it’s early days.

PNAS: All this smacks of artificial intelligence.

Sejnowski: This is very different from traditional approaches in artificial intelligence, where the goal is to create a cognitive machine that creates a model of the world and computes responses based on that model. That’s not how the brain generates behavior. With its limited capacity, the brain selects only the most important sensory inputs to process and the most effective responses to store. Thanks to its capacity for learning and memory, the brain is able to interact in a social way with relatively low bandwidth, which is partly what makes social robots feasible. By emulating biological intelligence, machine learning is heralding a new era.

PNAS: To many, a robot in the classroom is the stuff of science fiction. How do you convince policymakers that the investment is worth the payoff?

Sejnowski: First of all, the cat’s already out of the bag. It’s now a question of optimizing the technology for our own benefit. For example, social robots can serve as personal cognitive enhancers. Second, the idea would not be to replace teachers but to provide them with assistants. Besides helping teachers to hold toddlers’ attention in the classroom, social robots can stand in when teachers need to be briefly absent. Robots can help relieve teachers of some of their mundane duties so that teachers can serve as role models and tailor attention to individual students. That said, we can’t predict the full impact of these transformative technologies.

PNAS: Fair enough. So where’s the rub?

Sejnowski: It’s mainly in the resources. We’ve made sufficient progress in neuroscience and engineering to be able to overcome technical challenges to using machines in social contexts. But we need to scale up lab experiments, clearly calling for a major investment of resources. If we had a thousand Rubis, we could accelerate research and reduce costs. The other problem is societal. Will our institutions be able to adapt to the new environments that such endeavors will help create? That’s an open question.

PNAS: How will the new environment help children improve their cognitive skills?

Sejnowski: There’s a lot of emphasis on classroom learning of subjects like language, mathematics, and science, but to improve learning, we also need an emphasis on acquiring basic cognitive skills like attention, listening, and memory. We have evidence that social robots can help improve attention. Paula Tallal, codirector of the Temporal Dynamics of Learning Center (TDLC) in San Diego, has developed software already being used in classrooms across the country that can help children who have difficulties listening and hence, understanding language. Hal Pashler, also at TDLC, has studied a well-known phenomenon in memory research—the spacing effect—to find the optimal intervals for refreshing memory to help children retain learned material for many years. These are just a couple of examples of wide-ranging research in neuroeducation, a field dedicated to helping children become better learners.
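The spacing effect Sejnowski mentions is about choosing when to review material; Pashler's work searches for the optimal intervals for a given retention goal. The flavor can be sketched with a simple expanding-interval schedule. The doubling rule below is illustrative only, my own simplification, not Pashler's actual model.

```python
def review_schedule(first_interval_days=1, reviews=5, factor=2.0):
    """Generate expanding review intervals: each gap between successive
    reviews grows by a constant factor, so early reviews are close
    together and later ones spread out. Returns the day offsets (from
    the initial study session) on which each review falls."""
    days, gap, day = [], first_interval_days, 0.0
    for _ in range(reviews):
        day += gap
        days.append(round(day))
        gap *= factor
    return days

# Study on day 0, then review on days 1, 3, 7, 15, 31.
schedule = review_schedule()
```

The empirical question, which software in classrooms can answer at scale, is what spacing actually maximizes retention years later.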

PNAS: Your own work in the mid-1990s shed surprising light on reinforcement learning.

Sejnowski: We developed a computational model of the brain’s dopamine system, involved in reward-based learning, to understand how the dopamine neurons learn to make predictions about future rewards. This computational model has been confirmed in a wide range of settings using brain imaging in humans. As they learn new facts about the world, children use the dopamine system as a guide to finding the best sequence of steps to solve problems and to reach a goal. We are just beginning to understand how the different learning systems in the brain work together to produce the astonishing range of behaviors humans are capable of.
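The model Sejnowski describes is temporal-difference (TD) learning, in which the phasic dopamine signal is read as a reward-prediction error: reward received, plus the discounted value of the next state, minus the value predicted. Here is a minimal TD(0) sketch; the three-state task and all parameter values are made up for illustration.

```python
def td0_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step. The prediction error delta plays the role of the
    dopamine signal: positive when outcomes beat the prediction,
    near zero once the prediction is accurate."""
    delta = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * delta
    return delta

# A toy chain -- cue, delay, reward port -- where only the final
# transition delivers reward.
values = {"cue": 0.0, "delay": 0.0, "reward_port": 0.0}
episodes = [("cue", "delay", 0.0), ("delay", "reward_port", 1.0)]
for _ in range(200):
    for state, nxt, r in episodes:
        td0_update(values, state, nxt, r)
# With training, value propagates backward: the cue comes to predict
# the reward, and the error at the rewarded step shrinks toward zero.
```

This backward transfer of the prediction error, from the reward itself to the earliest predictive cue, is the signature behavior that dopamine-neuron recordings were found to match.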

our friend tyler'Z friend, al jazari ("Al") left us this incredible schematic, which Casey and Durden have served up2 slewie with his morning jo and hiz first 20 tokes of cornsilkishness. Dug bent over to shew us his great ass, one of his passports fell outa a concealed intellect, somewhere, and one of my robotic spot welderZ from the (now defunct) Toyota/GM plant in Freemont spot-welded his plumber's crack shut.

so here's doug, pogo-ing like there's no tomorrow, in a way, passports flying all over the place, but, apparently, he is gonna stick aroung, at least via his digi-bot-axatrix, kinda like slewie. and you. plus, if you read between the limes, you cas see how patriotic he is becoming. even if he IS canadian, he has a forged long form birth certificate to BE an american if those hoserZ up there partying in moose head w/ the mcKenZee bro's gang up on him. eh?

robotics, part deux: with Al's fresco, or whatever it is, myne pi-ratical eye is drawn to the top, for some reason. the scale sez: must be read from top-down to understand how the water-clock works? i wonder what those strange symbols in the upper arc mean? actually, at this point, that question only matters to those who are attached to a millstone, about to be arced everboard, by the noose @ their neck, i would opine.

slewie also like the 2 gasl/guys who look like the owly thing on the ancient greek coins. from athens, right tyler? plus, they seem to be wearing epauletteZ? shoulder pads? then there are the 4 musicians, two drummers, one conch-player, and a "floutisctic" who seems to be looking thru a telescope of some kind...on, shit! one tkoe ober the line... again.

but, to be balanced, i wanted to point out the triptychal framework and foundation; so fra angelic-esque...

p.s. don't forget the irish "general election", tomorrow. then, later, they have a "presidential election". betweed these two great civic events, i think they have a heluva Fyte Club thingy, too. Fight Club.

listen. yesterday i read where one of the candidates, probably a lowly mogambo-type footsojer, "suggested" that they pass a law outlawing the Olde AND nEWE native speak-ease.e.e.e stuff. i guess that would leave, uhh...