Marius Muja of University of British Columbia began his internship in the middle of Milestone 2 excitement. For several weeks, he worked on two important perception components: detecting outlets from far away, and detecting door handles.

Thereafter, Marius focused on tabletop object detection and wrote the tabletop_objects package. Determining the exact position, orientation, and identity of an object is very important when a robot is grasping objects, and especially crucial if the object in question is fragile. Tabletop_objects uses a two-stage approach: in the bottom-up stage, initial estimates of possible object locations are made, and in the top-down stage, 3D models are fit to those estimated locations. After fitting the correct 3D model, the object's identity, position, and orientation can be determined with high confidence. This approach can even distinguish between similar-looking drinking glasses. Marius worked with Ioan Sucan to integrate tabletop_objects with motion planning (move_arm), and together they were able to successfully detect, grasp, and manipulate fragile glass objects.
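
The two-stage idea can be sketched in a few lines of Python. This is an illustrative toy, not the tabletop_objects implementation: the points, the cluster radius, and the "models" (here just name/height pairs standing in for real 3D meshes) are all invented for the example.

```python
import math

# Hypothetical 3-D points (x, y, z) observed above a table at z = 0.
points = [(0.10, 0.20, 0.05), (0.11, 0.21, 0.08), (0.12, 0.19, 0.12),
          (0.50, 0.50, 0.04), (0.51, 0.52, 0.06)]

def cluster(points, radius=0.05):
    """Bottom-up stage: greedily group nearby points into candidate objects."""
    clusters = []
    for p in points:
        for c in clusters:
            if math.hypot(p[0] - c["cx"], p[1] - c["cy"]) < radius:
                c["pts"].append(p)
                c["cx"] = sum(q[0] for q in c["pts"]) / len(c["pts"])
                c["cy"] = sum(q[1] for q in c["pts"]) / len(c["pts"])
                break
        else:
            clusters.append({"pts": [p], "cx": p[0], "cy": p[1]})
    return clusters

# Top-down stage: fit each known model to each cluster and keep the best fit.
# Models here are just (name, height) pairs -- a stand-in for real 3-D meshes.
models = [("tall_glass", 0.14), ("short_glass", 0.07)]

def fit(candidate, models):
    height = max(p[2] for p in candidate["pts"])
    # Score = how closely the observed height matches the model's height.
    name, _ = min(models, key=lambda m: abs(m[1] - height))
    return name, (candidate["cx"], candidate["cy"])

for c in cluster(points):
    print(fit(c, models))
```

Each candidate cluster comes back with an identity and an estimated position, which is the information a grasp planner needs downstream.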

In addition to his work with tabletop_objects, Marius integrated FLANN (Fast Library for Approximate Nearest Neighbors) into OpenCV, and developed a phone-based teleoperation mode for PR2 based on Asterisk, an open source PBX.
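
To illustrate the problem FLANN solves, here is a brute-force nearest-neighbor query in plain Python. FLANN answers the same kind of query approximately but far faster by building index structures such as randomized kd-trees; the descriptor values below are made up for the example.

```python
def nearest_neighbor(query, dataset):
    """Exact nearest neighbor by exhaustive search -- O(n) per query.
    FLANN answers the same query approximately but much faster by
    building randomized kd-tree or hierarchical k-means indices."""
    best, best_d = None, float("inf")
    for i, p in enumerate(dataset):
        d = sum((a - b) ** 2 for a, b in zip(query, p))
        if d < best_d:
            best, best_d = i, d
    return best, best_d

descriptors = [(0.0, 1.0), (2.0, 2.0), (5.0, 1.0)]
print(nearest_neighbor((1.9, 2.2), descriptors))  # index 1 is closest
```

For the high-dimensional descriptors used in vision, exhaustive search becomes the bottleneck, which is why an approximate indexed search like FLANN's matters.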

Here are Marius's end-of-summer slides, where you can find more details about his work.

This summer, Dan Munoz of Carnegie Mellon University worked on helping the PR2 understand its environment using its 3-D sensors. Improving 3-D perception is important because it can help the PR2 with many tasks such as localization and object grasping. At CMU, Dan and collaborators are developing techniques to improve 3-D perception for an unmanned vehicle in outdoor natural and urban environments. These techniques first take in a cloud of 3-D points, usually collected from a laser scanner, with a label associated with each point. These labels identify objects such as buildings, tree trunks, plants, power lines, and the ground. Then, various local and more global features that describe the local shape and distribution of each object are extracted for each point and region of points. These labeled examples are then used to train an advanced machine learning tool that reasons about the best way to combine the local and global features that describe each object. In new environments, the feature extraction process is repeated and the features are given to the machine learning tool to determine what objects are present in the novel scene.
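
As a rough sketch of that pipeline, the toy Python below computes simple per-point "shape" features (the spread of a local neighborhood along each axis) and classifies new patches by nearest class mean. The patches, features, and nearest-mean "learner" are all invented stand-ins for the much richer descriptors and max-margin learning used in the real system.

```python
import statistics

def local_features(neighborhood):
    """Toy per-point features: the spread of a neighborhood along each
    axis.  A flat patch (ground, tabletop) has low z-spread; a scattered
    patch (vegetation) has high spread in every direction.  Real
    descriptors are richer, but follow the same recipe: summarize local
    3-D shape as a feature vector."""
    xs, ys, zs = zip(*neighborhood)
    return [statistics.pvariance(xs), statistics.pvariance(ys),
            statistics.pvariance(zs)]

def train(examples):
    """Stand-in for the learning stage: remember the mean feature vector
    per label, then classify new points by nearest mean."""
    by_label = {}
    for label, feats in examples:
        by_label.setdefault(label, []).append(feats)
    return {label: [sum(col) / len(col) for col in zip(*rows)]
            for label, rows in by_label.items()}

def classify(model, feats):
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], feats)))

flat_patch = [(0.0, 0.0, 0.00), (0.1, 0.0, 0.01), (0.0, 0.1, 0.00),
              (0.1, 0.1, 0.01)]
bushy_patch = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.3), (0.2, 0.0, 0.6),
               (0.0, 0.2, 0.9)]
model = train([("ground", local_features(flat_patch)),
               ("plant", local_features(bushy_patch))])
print(classify(model, local_features(flat_patch)))  # "ground"
```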

While at Willow Garage, Dan integrated this learning framework into ROS. As shown in the video, Dan experimented with helping the PR2 perceive objects on the room-sized scale, such as tables and chairs, as well as objects at the table-top-sized scale, including mugs and staplers. During the Intern Challenge, Dan also applied this same framework to distinguish between the three different types of bottles being served: Odwalla, Naked, and water. Dan developed the descriptors_3d package, the library used to compute various 3-D features for a point or region of points from a stereo camera or laser scanner. Additionally, he developed the functional_m3n package (Functional Max-Margin Markov Networks), the advanced machine learning tool that learns how to combine low-level and high-level feature information for each object.

ROS 0.8 has been released. ROS has been undergoing user testing, which has led to numerous improvements and bug fixes to the underlying tools. This update also includes several major updates to tools like roslaunch and rostopic, as well as updates to how roscpp handles tilde names. We have also separated out experimental packages into a "ros_experimental" stack. This separation reduces many of the prerequisites for installing ROS.

For a more detailed list of changes and instructions on how to use these updates, please see the changelist.

Associate Professor Mark Yim, of the University of Pennsylvania, visits Willow Garage a couple of times each year to collaborate with Willow researchers and engineers. This year, Mark experimented with adding a quick-change end-effector to the PR2.

Mark has designed, built, and controlled a wide array of modular robotic systems. His research group, the ModLab, works on all aspects of modular and self-reconfiguring robots, from mechanical and electrical design to high-level programming. Their CKBot robot modules can be rearranged into a variety of physical forms, from snake-like crawlers to legged walkers.

Together with Penn students Jimmy Sastra, Matt Piccoli, and Mohit Bhoite, Mark applied his modular approach to the design of an experimental quick-change end-effector system that allows PR2 to swap out its standard gripper for custom tools, like screwdrivers. Using this system, PR2 can single-handedly exchange one end-effector for another, storing the spares in a custom holster on its base. An end-effector attaches via a "stub" that provides both a mechanical and an electrical connection, allowing the power and data flow required to control the new tool.

PR2 is a research platform designed to support users who want to modify both the hardware and the software to meet their needs. From the start, PR2 was designed with modularity in mind, and Mark's work is testing that design, demonstrating what can and cannot be easily changed. It's exciting to see the first of what will hopefully be a long line of innovative customizations to PR2.

We have successfully completed migration of our "Personal Robots" code from SourceForge to code.ros.org. As part of this move, we took the opportunity to split personalrobots into two new projects: ros-pkg and wg-ros-pkg. ros-pkg contains software for a general robotics platform and has contributions from many external collaborators. From navigation to drivers to visualizations, this software runs on a variety of robots and enables researchers to focus on cutting-edge capabilities. wg-ros-pkg builds on top of ros-pkg to provide the software for the PR2 robot platform. We hope that the PR2 platform will accelerate collaboration between researchers by providing both common software and hardware. In addition to ros-pkg and wg-ros-pkg, we encourage you to check out the many other repositories of open source ROS code available from other institutions.

You may be wondering why we chose to move from SourceForge. We were stunned to discover that the Personal Robots repository was consistently ranking either #1 or #2 in daily SVN activity, out of more than 230,000 projects hosted at SourceForge. This heavy use was putting strain on SourceForge's infrastructure, and it was unfair to expect an external organization to support such heavy use. We are grateful to SourceForge for the support they have provided, as well as the tools we have needed to foster the ROS community. Now that we have launched ROS.org, it was time for us to support the community using our own infrastructure.

The pace of activity with ROS software has increased these past several weeks, which reflects our progress towards completing Milestone 3. We have done initial releases of nearly all the software we expect to deploy with the PR2 robot. We have launched ROS.org as a new home for documentation, tutorials, and news about ROS. And now we have launched code.ros.org, which will strengthen the infrastructure used to share code. There's still much more to do, from hardening the software to improving documentation. We will also need new tools for installing and managing these platforms to bring all the pieces together. We look forward to sharing more with you as these become ready.

Jorge Cham of PhD Comics and robotics fame is doing a comics series for us titled simply, "R.O.B.O.T. Comics." We hope you enjoy these as much as we do. Be sure to stop by next week or subscribe to our RSS feed for the next in the series. Also, feel free to leave a comment and let us know what you think.

Mrinal Kalakrishnan, one of three motion planning interns here at Willow Garage, is finishing up his summer project and returning to the University of Southern California. Mrinal has been working on a smooth motion planning and control pipeline for the PR2, introducing a new approach to object manipulation. The key component of this work was the implementation of CHOMP (Covariant Hamiltonian Optimization for Motion Planning), a motion planner developed at CMU and Intel Research. You can find this implementation in the chomp_motion_planner package for ROS.

Mrinal chose to implement this motion planner on the PR2 because CHOMP's method of planning away from obstacles produces very smooth, natural-looking movements. You can see in the video that the PR2's arm trajectory is rather fluid and avoids unusual or awkward joint angles. The animation shows the arm optimizing the trajectory away from the bookshelf while maintaining a smooth motion plan. Mrinal's work with CHOMP allowed for informative comparisons with the two other motion planners being researched and implemented here, ompl and sbpl_arm_planner. All three motion planners use the same interface, making it simple to switch between them.
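
The core CHOMP idea, gradient descent on the waypoints of a trajectory with fixed endpoints, trading off a smoothness cost against an obstacle cost, can be sketched in 2-D. Everything here (the circular obstacle, the cost shapes, the step sizes) is a simplified illustration, not the chomp_motion_planner implementation.

```python
import math

START, GOAL = (0.0, 0.0), (1.0, 0.0)
OBST, RADIUS = (0.5, 0.0), 0.25      # hypothetical circular obstacle

def obstacle_grad(p):
    """Gradient of a simple obstacle cost that is zero outside RADIUS;
    descending it pushes a waypoint radially away from the obstacle."""
    dx, dy = p[0] - OBST[0], p[1] - OBST[1]
    d = math.hypot(dx, dy)
    if d >= RADIUS:
        return 0.0, 0.0
    if d < 1e-9:                      # exactly at the center: pick a direction
        return 0.0, -RADIUS
    scale = (RADIUS - d) / d
    return -scale * dx, -scale * dy

def optimize(n=9, iters=300, step=0.02, w_obs=2.0):
    """Gradient descent on trajectory waypoints, CHOMP-style: endpoints
    stay fixed; each update trades off smoothness (discrete acceleration)
    against the obstacle cost."""
    traj = [(i / (n + 1), 0.0) for i in range(1, n + 1)]   # straight line
    for _ in range(iters):
        new = []
        for i, (x, y) in enumerate(traj):
            px, py = traj[i - 1] if i > 0 else START
            qx, qy = traj[i + 1] if i < n - 1 else GOAL
            gx, gy = 2 * x - px - qx, 2 * y - py - qy      # smoothness grad
            ox, oy = obstacle_grad((x, y))
            new.append((x - step * (gx + w_obs * ox),
                        y - step * (gy + w_obs * oy)))
        traj = new
    return traj

path = optimize()
# The initially straight path bows away from the obstacle.
print(min(math.hypot(x - OBST[0], y - OBST[1]) for x, y in path))
```

Because every update is a small gradient step on the whole trajectory, the result stays smooth while being pushed clear of obstacles, which is the behavior visible in the PR2 arm videos.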

In addition to his work with CHOMP, Mrinal wrote the distance_field package for ROS, which performs 3-D obstacle inflation to generate a cost map for arm planners. He also wrote spline_smoother, a library of algorithms that convert a set of waypoints, as typically generated by motion planners, into a smooth spline trajectory suitable for execution on a robot.
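
A distance field of this kind can be illustrated in 2-D with a breadth-first search seeded at every obstacle cell; the real distance_field package works on 3-D voxel grids with proper Euclidean distances, but the idea is the same. This grid and its values are purely illustrative.

```python
from collections import deque

def distance_field(grid):
    """Toy 2-D analogue of a 3-D distance field: for every cell, the
    number of grid steps to the nearest obstacle cell (1 = obstacle).
    Arm planners can turn these distances into costs that rise sharply
    near obstacles -- i.e., obstacle inflation."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:          # obstacle cells seed the wavefront
                dist[r][c] = 0
                queue.append((r, c))
    while queue:                          # BFS: expand one ring at a time
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
for row in distance_field(grid):
    print(row)
```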

Below is Mrinal's end-of-summer presentation, where you can find additional details about his work here at Willow Garage.

Min Sun is returning to the University of Michigan, Ann Arbor, where he does computer vision research with a particular interest in 3-D object recognition. During his summer here at Willow Garage, he focused on recognizing table-top objects like mice, mugs, and staplers in an office environment. Min is the primary creator of rf_detector (Random Forest), which recognizes objects and their poses. The detector uses the stereo camera along with the texture light projector to collect images and the corresponding dense stereo point clouds. From there, rf_detector predicts the object type (e.g., mouse, mug, or stapler) and its location and orientation. This information can be crucial to have before attempting object manipulation, as many object types, such as mugs, require careful handling.
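
To give the flavor of a random-forest classifier, the toy Python below trains a handful of decision stumps on bootstrap samples and random features, then classifies by majority vote. The (height, width) features and training data are invented; the real rf_detector works on far richer descriptors and also votes for object pose, not just class.

```python
import random

def train_forest(examples, n_trees=15):
    """Toy random forest: each 'tree' is a decision stump built from a
    bootstrap sample and a randomly chosen feature, thresholding at the
    midpoint between the two class means in that sample."""
    forest = []
    for _ in range(n_trees):
        sample = [random.choice(examples) for _ in examples]
        f = random.randrange(len(sample[0][0]))    # random feature index
        by_label = {}
        for feats, label in sample:
            by_label.setdefault(label, []).append(feats[f])
        if len(by_label) < 2:                      # one-class bootstrap: skip
            continue
        mu = {l: sum(v) / len(v) for l, v in by_label.items()}
        a, b = sorted(mu)
        thresh = (mu[a] + mu[b]) / 2
        low, high = (a, b) if mu[a] < mu[b] else (b, a)
        forest.append((f, thresh, low, high))
    return forest

def classify(forest, feats):
    """Majority vote over all stumps."""
    votes = {}
    for f, thresh, low, high in forest:
        vote = low if feats[f] < thresh else high
        votes[vote] = votes.get(vote, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical (height, width) features in meters for two object classes.
data = [((0.10, 0.06), "mug"), ((0.12, 0.07), "mug"),
        ((0.04, 0.12), "mouse"), ((0.03, 0.10), "mouse")]
random.seed(0)
forest = train_forest(data)
print(classify(forest, (0.11, 0.065)))  # "mug"
```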

In the future, Min will be looking for ways to scale up this approach to a wider range of object classes. He continues to look for other features and model representations that make object recognition more robust.

Min also wrote the geometric_blur package, which calculates geometric blur descriptors.
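
The idea behind geometric blur, blurring a signal by an amount that grows with distance from the feature point so that the descriptor tolerates deformation far from the point, can be shown on a 1-D signal. Real geometric blur descriptors operate on 2-D image channels; this 1-D version and its parameters are a simplified illustration.

```python
def geometric_blur_1d(signal, center, positions, alpha=0.5):
    """Toy 1-D geometric blur: sample the signal at offsets around
    `center`, averaging over a window whose radius grows with the
    offset's distance from the center.  Far samples are blurred more."""
    desc = []
    for off in positions:
        idx = center + off
        radius = max(1, int(alpha * abs(off)))   # blur grows with distance
        window = [signal[i] for i in range(idx - radius, idx + radius + 1)
                  if 0 <= i < len(signal)]
        desc.append(sum(window) / len(window))
    return desc

edge = [0.0] * 10 + [1.0] * 10                   # a step edge
print(geometric_blur_1d(edge, center=10, positions=[-6, -2, 0, 2, 6]))
```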

Here are the slides from Min's final internship presentation describing his work on rf_detector and the detection pipeline.