UAS & ROBOTICS

Robot learns from watching videos
Researchers find a way to turn visual cues into action

BY KEVIN McCANEY

Kids (and grownups) can learn a lot from instructional videos. So can chimpanzees. Now you can add robots to the list.

A research team at the University of Maryland, funded by a Defense Advanced Research Projects Agency program, has developed a system that enables a robot to interpret visual cues and then perform the task it just witnessed. Robot see, robot do. And the robot also will remember what to do the next time.

The university's research, led by computer scientist Yiannis Aloimonos, is being conducted under DARPA's Mathematics of Sensing, Exploitation and Execution, or MSEE, program, which aims to develop autonomous systems that use a minimalist grammar to respond to visuals.

"The MSEE program initially focused on sensing, which involves perception and understanding of what's happening in a visual scene, not simply recognizing and identifying objects," Reza Ghanadan, a program manager in DARPA's Defense Sciences Office, said in a release. "We've now taken the next step to execution, where a robot processes visual cues through a manipulation action-grammar module and translates them into actions."

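DARPA's release doesn't detail the module's internals, but the flow it describes, visual cues parsed by an action grammar into executable steps, can be sketched in a few lines of Python. Everything below (the cue fields, the rule table, the primitive action names) is illustrative, not the actual MSEE implementation:

    # Hypothetical sketch only: DARPA has not published the MSEE module's API.
    from dataclasses import dataclass

    @dataclass
    class VisualCue:
        tool: str    # utensil recognized on screen, e.g. "spatula"
        target: str  # object acted upon, e.g. "pan"
        motion: str  # observed motion class, e.g. "stir"

    # Toy "action grammar": (motion, tool) pairs rewrite to primitive actions.
    GRAMMAR = {
        ("stir", "spatula"): ["grasp(spatula)", "move_to(pan)", "stir(pan)"],
        ("pour", "pitcher"): ["grasp(pitcher)", "align(cup)", "tilt(pitcher)"],
    }

    def parse_to_actions(cue: VisualCue) -> list:
        """Translate one observed cue into a sequence of motor primitives."""
        return GRAMMAR.get((cue.motion, cue.tool), [])

    cue = VisualCue(tool="pitcher", target="cup", motion="pour")
    print(parse_to_actions(cue))  # ['grasp(pitcher)', 'align(cup)', 'tilt(pitcher)']

The analogy to a language grammar is that a small set of rewrite rules maps observed "sentences" of tool, object and motion onto sequences of motor primitives.
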
In this case, the action involved cooking. Several Baxter Research Robots watched a series of videos on how to cook and were able to recognize utensils on screen, grab the appropriate one in front of them and adroitly manipulate it, even neatly pouring liquid into a moving container.

The robots are also able to retain that knowledge and share it with other robots, an advance, DARPA said, over sensor systems that tend to see everything freshly from moment to moment.

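The release doesn't say how that shared memory is implemented. As a loose illustration only, a minimal version of such a store, here a JSON file of object-to-grasp pairings that any robot can read and extend, might look like this; the schema and function names are assumptions:

    # Illustrative only: a shared store of object-to-grasp knowledge.
    import json
    from pathlib import Path

    STORE = Path("grasp_knowledge.json")  # assumed shared location

    def load_knowledge() -> dict:
        """Return everything any robot has learned so far."""
        return json.loads(STORE.read_text()) if STORE.exists() else {}

    def learn_grasp(obj: str, grasp: str) -> None:
        """Record a new object/grasp pairing for every robot to reuse."""
        knowledge = load_knowledge()
        knowledge[obj] = grasp
        STORE.write_text(json.dumps(knowledge, indent=2))

    learn_grasp("pitcher", "handle-aligned power grasp")
    print(load_knowledge())  # a second robot reading the store now knows this too
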
Baxter Research Robots, made by Rethink Robotics, are used at research institutions around the world. The company's flagship Baxter also is used widely in manufacturing as an inexpensive platform for performing repetitive tasks, but the research robots are different in several key ways.

As Philip Dasler, a Ph.D. student in Maryland's computer science department, points out in a paper, a Baxter Research Robot's ability to watch and learn eliminates the programming required for each specific task. Just plug the robot in and start showing it what to do, for example by manipulating its arms to perform a task, after which the robot will be able to repeat it.

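That show-and-repeat approach is known in robotics as kinesthetic teaching, and its core loop can be sketched as record-and-replay of joint poses. The Robot class below is a stand-in for illustration; a real Baxter would be driven through Rethink's SDK rather than this toy interface:

    # Stand-in sketch: record a human-guided arm motion, then replay it.
    import time

    class Robot:
        """Toy stand-in; a real Baxter is driven through Rethink's SDK."""
        def read_joints(self):
            return [0.0] * 7   # placeholder 7-DOF joint angles

        def command_joints(self, pose):
            pass               # placeholder: send a pose to the arm

    def record(robot, seconds, hz=10.0):
        """Sample joint angles while a person physically guides the arm."""
        trajectory = []
        for _ in range(int(seconds * hz)):
            trajectory.append(robot.read_joints())
            time.sleep(1.0 / hz)
        return trajectory

    def replay(robot, trajectory, hz=10.0):
        """Drive the arm back through the recorded poses at the same rate."""
        for pose in trajectory:
            robot.command_joints(pose)
            time.sleep(1.0 / hz)

    robot = Robot()
    demo = record(robot, seconds=1.0)  # human guides the arm during this window
    replay(robot, demo)                # the robot repeats the demonstrated task
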
The MSEE research is taking Baxter to the next level, allowing it to learn through visual, rather than physical, instruction, which DARPA said could have an impact in military areas like repair and logistics.

"This system allows robots to continuously build on previous learning, such as types of objects and grasps associated with them, which could have a huge impact on teaching and training," Ghanadan said.

Photo: Computer scientist Yiannis Aloimonos with Baxter.