Video Friday: Pepper's Fish Mode, Deep Learning in the Warehouse, and Stealing From a Delivery Robot

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):

Daniel Claes wrote in to share some of his recent Ph.D. work on decentralized multi-robot warehouse commissioning:

The robots autonomously plan which actions to take based on the information they get from the warehouse management software, i.e., the number of active orders and the approximate positions of the other robots. The robots adapt directly to new incoming orders and to orders already picked by the other robots. Each robot has a limited capacity of three items, after which it must return to the depot to unload. The exact positions of the objects on the platforms are not known. Additionally, the platforms are at different heights.
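To make the behavior described above concrete, here is a minimal sketch (not the authors' algorithm) of a greedy, capacity-limited picker: each robot repeatedly claims the nearest active order until it holds three items, then heads back to the depot. The order positions and function names are hypothetical stand-ins.

```python
import math

CAPACITY = 3          # robots carry at most three items before unloading
DEPOT = (0.0, 0.0)    # hypothetical depot location

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_route(robot_pos, active_orders):
    """Greedily pick up to CAPACITY nearest orders, then return to the depot.

    active_orders: dict mapping order id -> (x, y) approximate position.
    Returns the pickup sequence and the total travel distance.
    """
    remaining = dict(active_orders)
    pos, route, travelled = robot_pos, [], 0.0
    while remaining and len(route) < CAPACITY:
        # Re-evaluate the closest order each step, so the plan adapts
        # when other robots claim orders in the meantime.
        oid = min(remaining, key=lambda o: dist(pos, remaining[o]))
        travelled += dist(pos, remaining[oid])
        pos = remaining.pop(oid)
        route.append(oid)
    travelled += dist(pos, DEPOT)  # unload at the depot
    return route, travelled
```

The real system is decentralized and coordinates through shared order state rather than a single planner; this sketch only illustrates the capacity constraint and the adaptivity to a changing order pool.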

A paper on this will appear at AAMAS 2017, but in the meantime, you can read more details at the link below.

In 20 years there will be 9.6 billion people to feed, and not enough food. Carnegie Mellon University's FarmView is tackling this problem with a team of researchers and robots that aims to increase crop yields with fewer resources by controlling and measuring the environment, then analyzing the data to provide solutions to farmers across the globe.

CMU RI Seminar: Peter Stone on “Robot Skill Learning: From the Real World to Simulation and Back.”

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of interacting skills. This talk begins by introducing "Overlapping Layered Learning" as a novel hierarchical machine learning paradigm for learning such interacting skills in simulation. While learning in simulation is appealing because it avoids the prohibitive sample cost of learning in the real world, unfortunately policies learned in simulation often fail when applied on physical robots. This talk then introduces "Grounded Simulation Learning" to address this problem by algorithmically altering the simulator to better match the real world, and connects this new algorithm to a theoretical analysis of off-policy evaluation in reinforcement learning. Overlapping Layered Learning was the key deciding factor in UT Austin Villa’s RoboCup robot soccer 3D simulation league championship, and Grounded Simulation Learning has led to the fastest known stable walk on a widely used humanoid robot.
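The core idea behind grounding a simulator, as described in the abstract, is to adjust the simulator so its predictions match trajectories observed on the physical robot before optimizing the policy in simulation. Below is a heavily simplified, one-dimensional sketch of that idea (not the Grounded Simulation Learning algorithm itself); both dynamics models are toy stand-ins.

```python
def simulate(param, action):
    # Toy simulator: response is linear in the action, scaled by param.
    return param * action

def real_robot(action):
    # Stand-in for the real-world response the simulator should match.
    return 1.7 * action

def ground_simulator(actions, lr=0.1, steps=200):
    """Fit the simulator parameter by gradient descent on the mean
    squared error between simulated and real responses."""
    param = 1.0  # initially mismatched simulator
    real = [real_robot(a) for a in actions]
    for _ in range(steps):
        grad = sum(2 * (simulate(param, a) - r) * a
                   for a, r in zip(actions, real)) / len(actions)
        param -= lr * grad
    return param
```

After grounding, a policy trained against `simulate` with the fitted parameter should transfer to the real system far better than one trained against the original mismatched simulator, which is the motivation the talk gives for the approach.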