The Everything-Robotic blog, by The Robot Report, provides stories that matter about topical and interesting aspects of the robotics industry. It supplements articles appearing on The Robot Report website.

Wednesday, March 30, 2011

In a presentation at InnoRobo, the Innovation Robotics Summit held March 23-25 in Lyon, France, Sang-Rok Oh of the Korea Institute of Science and Technology (KIST), an advisor to the government on r-learning, described the early-childhood project that many of us have read about. Most of the stories, however, have elicited concern that our children will soon be schooled by robots instead of humans.

This is quite a different message from the one Mr. Oh delivered when he described the classroom use of the iRobi Q robot and how it supplements, augments, and assists the teacher.

Teaching and tutoring English is one of the tasks, true, but the project is significantly broader. It is a paradigm shift from traditional methods to digital educational practices, augmenting teachers with the assistants they can't afford and relieving them of mundane tasks so they have more time to teach.

Yujin Robot's
iRobi Q, teacher's assistant

There are 8,400 kindergarten classes in South Korea. By the end of 2011, 3,000 of them will be involved in the r-learning program and equipped with an iRobi Q teaching-assistant robot. Some classes will also have a Genibo robot dog. Mr. Oh says that by the end of 2012, 5,000 classrooms will be part of the program.

His group at KIST, along with teachers and teacher groups, is working to develop additional tasks for the robots - tasks that take the mundane, time-consuming chores from the teacher and distribute them to the students and the robot, thereby making more quality teacher time available for the students. They are also working on adding content so that the children don't outgrow the platform.

He discussed delays due to the conservative infrastructure but stressed the commitment of the South Korean government to move forward with r-learning not only in kindergarten classes but onwards from there.

Dasa Robot's Genibo

He showed a video of the kids checking themselves in with the robot at the beginning of the day (attendance taking) and getting the robot to photograph and store their art and other materials in an online digital library that their parents and teachers can view whenever they choose to log in. He also showed a video of Dasa Robot's Genibo robots leading PE and story-telling sessions.

Kyung Shin, the President of Yujin Robot, the manufacturer of the iRobi Q, described the service-robotics marketplace and used a phrase about his company's vision for service robots that I think is fitting: that they be attentive partners... human-friendly attentive partners... in the various tasks of daily life.

The iRobi Q produced by Shin's company has an object-recognition camera, a voice-recognition microphone, speakers for sound playback, IR, ultrasound, bumper, and floor-detection sensors, emotional facial expressions, a display and touch screen, as well as learning, gaming, and tutoring content.

Both speakers referenced the South Korean government's 2008 plan (the Special Law) and the $1 billion investment involved in developing r-learning, English-language proficiency, and agricultural automation. Mr. Shin also told of the Japanese government's program to stimulate agricultural automation, to provide consumer-level and professional cleaning robots, and to sustain Japan's 20% annual growth in supplying robotics worldwide, as well as China's focus on manufacturing, space, defense, and cleaning and cooking robots. All of these public-private partnerships are jointly funded by governments and industry, with the governments spearheading the scientific challenges.

Mr. Shin said that South Korea is attempting to achieve technology leadership within 10 years. [This is particularly timely in that the UK's Royal Society, the national academy of science, just released a study indicating that China will surpass the US in scientific output within the next few years, perhaps as early as 2013, but surely within this decade.] Other goals for South Korea are to export surveillance robots and to provide eldercare robots within five years and personal-assistant robots within ten.

One way the government's support helps is by enabling massive test markets directed toward a national strategic goal - for example, ensuring that Korean children are able to speak English so that they can be schooled abroad and bring their education back into play at home.

Part of the Special Law - and the $1 billion stimulus - is to launch up to 500 businesses and provide 80,000 new jobs in the robotics industry by the end of the decade.

Bruno Bonnell, President of the French Union of Service Robotics (Syrobo) and Chairman of RoboPolis, said in his presentation at InnoRobo in Lyon, France, that the service-robotics market would grow to 30 times its current rate this decade, becoming a $100 billion-per-year industry by 2020.

These governmental robotic stimulus programs (Japan's, South Korea's, Taiwan's, and China's) have been undertaken with this $100 billion industry in mind. Gaining market share in this industry will provide jobs and revenues to these strategic early planners -- America, take note.

Monday, March 28, 2011

In a March 22 presentation at Automate 2011 in Chicago, the NASA and GM team leaders provided new details about Robonaut2 A and B, called R2A and R2B, the two products of their four-year collaboration.

Marty Linn of GM talked about the flexibility and quality that R2 will bring to GM's manufacturing lines. R2 will complement and support humans on the factory floor just as it will help the crew on the space station. It's designed to do work and will assist with ergonomically difficult tasks.

Ron Diftler, the NASA project leader, demonstrated R2's hand dexterity and fine motion with tendon-like tensioning, along with a patented tactile system enabling a haptic understanding of objects. R2's arms have springs for softness and stiffness control, enabling R2 to be safe around humans. Further, R2 has a 2-joint neck so that it can see fully down and around.

R2A watching launch of R2B

The R2 team has filed 44 patent applications. R2's computing power and sensors are several years old because NASA requires the boards to be certified, and because of other safety and bureaucratic delays; the actual flight of R2B to the space station was itself delayed for many months. But R2B is in the station now, and R2A was there to see it launched.

GM intends to farm out production of its new R2-like robots to a robotics manufacturer - a "development partner" - and will upgrade them with the latest chips, cameras, and sensors.

Linn, in an interview following the presentation, said that the real value of R2 for GM lies in the flexibility of the hands and the lower-arm springs. Motoman's two-armed robot lacks end-of-arm flexibility, sensors, and arm tensioning, making it neither flexible nor safe enough to work side by side with humans, as R2 is planned to do at GM.

A full-body training suit was designed for simulation and training. An astronaut dons the suit and records his movements while doing a task. R2 can then repeat the task and, if necessary, a programmer can enhance the recorded movements. Linn said that this simulation-and-training feature will be a valuable tool to help GM shortcut the usual extended time for initial programming and safety simulation.
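The record-then-replay-then-enhance workflow Linn described can be sketched in a few lines. This is only an illustration of the idea, not R2's actual software; the class and method names are hypothetical.

```python
class MotionRecorder:
    """Records timestamped joint poses so they can be replayed or edited later."""

    def __init__(self):
        self.trajectory = []  # list of (elapsed_seconds, joint_angles)

    def record(self, elapsed, joint_angles):
        # Called as the training suit streams the astronaut's poses.
        self.trajectory.append((elapsed, list(joint_angles)))

    def edit(self, index, new_angles):
        # A programmer can enhance a recorded pose after the demonstration.
        t, _ = self.trajectory[index]
        self.trajectory[index] = (t, list(new_angles))

    def replay(self, move_fn):
        # Replays the task by sending each recorded pose to the robot, in order.
        for _, angles in self.trajectory:
            move_fn(angles)


# Demonstration: the suit streams two poses, then the second one is touched up.
rec = MotionRecorder()
rec.record(0.0, [0.0, 0.5, 1.0])
rec.record(0.5, [0.1, 0.6, 1.1])
rec.edit(1, [0.1, 0.6, 1.2])  # programmer enhances the recorded movement

executed = []
rec.replay(executed.append)
print(executed)  # [[0.0, 0.5, 1.0], [0.1, 0.6, 1.2]]
```

The appeal for manufacturing is exactly what Linn suggested: the demonstration replaces most of the initial programming, and the programmer only edits the recording where needed.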

Sunday, March 6, 2011

IBM's achievement with its Watson system and software was more than good television:

It's a major language-processing achievement. Computing systems will no longer be limited to responding to simple commands.

The data-management aspect lends itself to specialization, i.e., medical subsets, legal data sets, call/support-center databases, etc. John Markoff, in a recent NY Times article on the subject, said "any job that now involves answering questions and conducting commercial transactions by telephone will soon be at risk. It is only necessary to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of what could happen."

The language processing is amazing and illuminating, and it lets one dream of a future where the promises of human-robot (or, for that matter, human-device) interaction and instantaneous translation really do come true soon.

A staggering amount of horsepower was harnessed to work harmoniously, using massively parallel technology on 2,700 processors spread over 90 servers, to enable the Jeopardy! win. If history is a guide, this capability will migrate to smaller devices within a few years. Ray Kurzweil, quoted in The Economist, notes that it was only five years after the massive and hugely expensive Deep Blue beat Mr. Kasparov in 1997 that Deep Fritz was able to achieve the same level of performance by combining the power of just eight personal computers. In part, that was because of the inexorable effects of Moore's Law halving the price/performance of computing every 18 months. It was also due to vast improvements in the pattern-recognition software used to make the crucial tree-pruning decisions that determine successful moves and countermoves in chess. Now that the price/performance of computers has accelerated to a halving every 12 months, Mr. Kurzweil expects a single server to do the job of Watson's 90 servers within seven years - and a PC within a decade. If cloud computing fulfills its promise, bursts of Watson-like performance could be available to the public at nominal cost even sooner.
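Kurzweil's seven-year figure is easy to check as back-of-envelope arithmetic: if price/performance halves every 12 months, one machine matches 90 servers after roughly log2(90) doublings, i.e., about that many years.

```python
import math

servers = 90  # Watson's server count, from the paragraph above

# One doubling of price/performance per year; a single machine matches
# 90 servers once it has doubled log2(90) times (about 6.5 doublings).
years_to_one_server = math.ceil(math.log2(servers))
print(years_to_one_server)  # 7
```

Which lands exactly on the "within seven years" Kurzweil predicts; the PC-within-a-decade claim follows from a few more doublings to close the gap between a server and a desktop.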

And most importantly, right after the Jeopardy! win, IBM announced partnerships with a few hospital groups to provide diagnostic physician assistance using Watson's DeepQA software and data management methods. And their website displays other areas where Watson might be particularly helpful. IBM is bringing Watson to the marketplace.

It's important to keep in mind that inside a computer there is no connection from words to human experience or cognition. To Watson, words are just tokens. In parsing a question such as those on Jeopardy!, a computer has to decide what's the verb, the subject, the object, the preposition, and the object of the preposition. It must remove uncertainty from words with multiple meanings by taking into account any and all contexts it can recognize. When people talk among themselves, they bring so much contextual awareness that answers become obvious. The computer must use logic to "disambiguate" incoming tokens into choices that can be measured (scored) against alternative choices. And it must do all that within seconds.

What about robots and robotics?
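As a toy illustration of that disambiguation-by-scoring step: pick, for an ambiguous word, the candidate sense whose description overlaps most with the surrounding context. The senses and glosses below are invented for illustration; Watson's actual scoring is vastly more sophisticated.

```python
# Candidate senses for an ambiguous word, each with a set of gloss words.
# These entries are made up purely for the example.
SENSES = {
    "bank": {
        "financial institution": {"money", "deposit", "loan", "teller"},
        "river edge": {"river", "water", "shore", "fishing"},
    },
}


def disambiguate(word, context_words):
    """Score each sense by gloss/context overlap and return the best one."""
    context = set(context_words)
    scores = {sense: len(gloss & context)
              for sense, gloss in SENSES[word].items()}
    return max(scores, key=scores.get)


best = disambiguate("bank", ["the", "teller", "counted", "the", "money"])
print(best)  # financial institution
```

Even this crude overlap count captures the essential move: turn an ambiguity into competing choices, score each against context, and keep the winner.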

The AI system managing a robot gathers facts through sensors or human input, compares them to stored data, and decides what the information signifies. The system then runs through the possible actions and predicts which will be most successful.

Some robots also have a limited ability to learn. Learning robots recognize whether a certain action achieved a desired result and store that information for the next time they encounter the same situation. Naturally, they can't absorb information the way a human can, but in Japan, roboticists have taught a robot to dance by demonstrating the moves themselves.
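The decide-then-learn loop of the two paragraphs above can be sketched as a controller that scores each candidate action by its recorded success rate in the current situation, picks the best, and stores each observed outcome. The situations and actions here are invented for illustration; a real robot controller is far more elaborate.

```python
class LearningController:
    """Picks actions by predicted success and learns from outcomes."""

    def __init__(self, actions):
        self.actions = actions
        self.history = {}  # (situation, action) -> (successes, trials)

    def predict_success(self, situation, action):
        # Unseen pairs start at 1/2: an optimistic prior so new actions get tried.
        s, n = self.history.get((situation, action), (1, 2))
        return s / n

    def choose(self, situation):
        # "Runs through various possible actions and predicts which will succeed."
        return max(self.actions,
                   key=lambda a: self.predict_success(situation, a))

    def learn(self, situation, action, succeeded):
        # "Stores that information for the next time it meets the same situation."
        s, n = self.history.get((situation, action), (1, 2))
        self.history[(situation, action)] = (s + int(succeeded), n + 1)


ctrl = LearningController(["push", "pull"])
ctrl.learn("door closed", "push", False)  # pushing failed
ctrl.learn("door closed", "pull", True)   # pulling worked
print(ctrl.choose("door closed"))  # pull
```

After one failure for "push" and one success for "pull", the stored rates are 1/3 versus 2/3, so the controller picks "pull" the next time it sees the same situation.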

It's important to remember that IBM isn't the only AI game in town. There are many companies and research facilities developing and providing AI software, the most visible of which is Google.

IBM 701 Computer

From Wired's Danger Room: Back in 1954, IBM announced that its 701 computer had crunched a bit of Russian text into its English equivalent. A Georgetown professor who worked on the project predicted the computerized translation of entire books “five, perhaps three years hence.”

Thus was born a scientific (and sci-fi) drive that’s lasted 57 years, from Star Trek to Babel Fish to Google Translate: instantaneous speech translation. But even though no one’s mastered that yet, the Pentagon’s out-there research branch is asking for even more with its Boundless Operational Language Translation, or BOLT, program. As outlined in Darpa’s fiscal 2012 budget request, for the low, low starting cost of $15 million, Congress can “enable communication regardless of medium (voice or text), and genre (conversation, chat, or messaging).”

Not only will BOLT be a universal translator (the creation of which would be a revolutionary human development), but it will “also enable sophisticated search of stored language information and analysis of the information by increasing the capability of machines for deep language comprehension.” In other words, a 701 translator that works.

So What's The Holdup?

There are many reasons for the delay in robotic training and interaction with humans - some of which can be seen in the mammoth resources it took IBM to achieve its Watson Jeopardy! victory. You cannot place those resources inside a robot, nor can you rely on a computer controlling a robot (or series of robots) over a wireless communication channel as they go about their various tasks.

Matthias Scheutz, Associate Professor of Cognitive Science, Computer Science and Informatics and Director of the Human-Robot Interaction Lab at Tufts University, adds research funding to the equation, saying:

The fields of robotics and human-robot interaction are growing, with the highest expected growth rates not in industrial, but service robots. Several countries (Japan, South Korea, the EU, etc.) around the world are heavily investing in service and social robotics. In the US, there are very few funding programs specifically targeted at artificial cognitive systems that would enable complex autonomous service robots. My hope is that this will be changing soon given enormous market potential of this area and the heavy investments other countries are making. To keep the US competitive and to enable, not Watson-like, but more modest, more natural interactions between humans and autonomous robots in natural language, we will need interdisciplinary funding programs that are aimed at developing the right kinds of integrated control architectures for these systems, which we are currently still lacking.

Scheutz goes on to say:

Computing power is obviously a critical component for a lot of AI technology (e.g., algorithms that are data-based and need to be trained on large data sets, or algorithms that have to explore large search spaces in a short amount of time). Equally important is the architecture of an intelligent system, the way in which different components operate and interact. And here is where we have made much less progress compared to the hardware side. Consequently, although the performance of Watson is very impressive and clearly a break-through, from an engineering perspective, it does not yet address the problem of human-like natural language processing as we will need it for robots. And while there will likely be applications in the context of recommender systems in the near future, it is not clear to me how the technology used on Watson can be put on a robot and make it have natural task-based dialogues with humans.

The EU, Japan, and Korea have roadmaps that lay out the science that needs to be tackled before effective products can be produced. And they have national direction and public-private funding to make their plans happen. America does not yet have such a plan, nor any national direction regarding robotics. And this is a critical holdup.

President Obama, in his State of the Union speech, specifically excluded robotics when he discussed the need for strategic investment in key areas of innovation. How the President could overlook robotics, when not a single sector is devoid of its applications, is one question. Another is whether he is aware that 12 of the 13 major manufacturers selling industrial and manufacturing robots in the US are offshore companies.