AAAI-07: Sixteenth Annual AAAI Mobile Robot Competition

The Sixteenth Annual AAAI Mobile Robot Competition and Exhibition was held Monday through Thursday, July 23–26, in the Balmoral room.

This year's robot competition and exhibition brought together teams from universities, colleges, and research laboratories to compete and to demonstrate state-of-the-art research in robotics and artificial intelligence.

Mobile Robot Workshop

The robot events commenced with a workshop in which participants described the research behind their entries. The workshop included a panel of academic, industrial, and government roboticists addressing “The Personal Robotics Revolution: Where Does It Stand and Where Is It Going?”

Semantic Robot Vision Challenge

In this competition, robots are given a list of objects that they must locate and recognize. To learn what these objects look like, the robots are given an opportunity to search the web for example images before beginning their physical search. The challenge aims to push the state of the art in semantic image understanding by requiring robots to exploit the wealth of unstructured image data that exists on the Internet today.
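
The pipeline this implies (retrieve example images for each named object, learn an appearance model, then score what the camera sees) can be sketched in a few lines. The sketch below is purely illustrative: the contest prescribes no API, the fetch_web_images call is a stub that returns random images, and the color-histogram model stands in for whatever recognition method a real entry would use.

    import numpy as np

    def fetch_web_images(query, n=20):
        """Stub for a web image search: returns n random 64x64 RGB arrays."""
        rng = np.random.default_rng(abs(hash(query)) % 2**32)
        return [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
                for _ in range(n)]

    def color_histogram(img, bins=8):
        """Deliberately simple appearance model: normalized joint RGB histogram."""
        hist, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                                 bins=(bins,) * 3, range=((0, 256),) * 3)
        flat = hist.ravel()
        return flat / flat.sum()

    def build_models(object_list):
        """Learn one appearance model per named object from web images."""
        return {name: np.mean([color_histogram(im)
                               for im in fetch_web_images(name)], axis=0)
                for name in object_list}

    def recognize(frame, models):
        """Score a camera frame against every model by histogram intersection."""
        h = color_histogram(frame)
        scores = {name: float(np.minimum(h, m).sum())
                  for name, m in models.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

Histogram intersection is used here only because it is compact; a competitive entry would substitute learned detectors at both the modeling and recognition steps.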

The Robot Exhibition

The mission of the Robot Exhibition is twofold. First, it demonstrates state-of-the-art research in a less structured environment than the competition events, giving researchers an opportunity to showcase current robotics and embodied-AI research that does not fit into the competition tasks. Second, it provides a venue for faculty using robotics in education to present their approaches and experiences.

OPTIMOL is a novel, automatic dataset-collection and model-learning system for object categorization developed by a joint UIUC–Princeton team. Our algorithm mimics the human learning process of iteratively accumulating model knowledge and image examples. As a fully automated system, OPTIMOL uses the Internet as a (nearly) unlimited source of images. Learning and image collection are performed by applying object recognition techniques iteratively and incrementally. The goal of this work is to use this vast web resource to learn robust object category models for detecting and searching for objects in real-world cluttered scenes.
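
The iterate-and-accumulate loop described above can be approximated in miniature. The sketch below is an assumption-laden stand-in, not OPTIMOL itself: the published system learns a far richer category model, whereas here a diagonal Gaussian over feature vectors plays that role, and every name is illustrative.

    import numpy as np

    def optimol_style_loop(seed_feats, pool, rounds=5, accept=0.5):
        """Grow a dataset iteratively: fit a model to the images accepted so
        far, admit candidates the model scores confidently, and refit.

        seed_feats: (n, d) array of feature vectors for trusted seed images.
        pool: list of candidate feature vectors harvested from the web.
        """
        dataset = seed_feats.copy()
        for _ in range(rounds):
            # Stand-in category model: diagonal Gaussian over current dataset.
            mu = dataset.mean(axis=0)
            sigma = dataset.std(axis=0) + 1e-6
            kept, rest = [], []
            for x in pool:
                # Confidence in (0, 1]: high when x lies near the model mean.
                conf = np.exp(-np.abs((x - mu) / sigma).mean())
                (kept if conf > accept else rest).append(x)
            if not kept:               # nothing confident enough; stop early
                break
            dataset = np.vstack([dataset, np.array(kept)])  # incremental growth
            pool = rest                # accepted images leave the pool
        return dataset

The accept threshold is the interesting design choice: set it too low and the dataset contaminates itself with false positives; set it too high and it never grows beyond the seeds.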

University of British Columbia
UBC LCI Robotics
Event: Robot Competition

University of Manitoba
Keystone Mixed Reality
Event: Robot Exhibition

University of Washington
Team Sunflowers
Team Contact: Masaharu Kobashi
Event: Robot Competition and Exhibition

Our robot interprets its environment entirely through vision; it does not use a range finder. It has two video cameras whose pan, tilt, vergence, focus, and exposure are controlled by the onboard computers to perform active vision. The robot can accommodate up to five ATX-size computer motherboards to handle the CPU-intensive vision computation. Designed for both indoor and outdoor use, it is equipped with two powerful motors and a sturdy chassis that can carry up to 250 pounds of batteries for extended operation of the onboard computers.
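
The team's control software is not described here, so the following is only a generic illustration of what software-controlled pan and tilt involves: a proportional visual-servo step that converts the pixel offset of a tracked feature into small camera-joint corrections. The function name, sign conventions, and gain are all assumptions.

    def pan_tilt_step(target_px, frame_size, gain=0.05):
        """One proportional visual-servo step (all conventions assumed).

        target_px:  (x, y) pixel position of the tracked feature.
        frame_size: (width, height) of the camera image.
        Returns (d_pan, d_tilt), small joint corrections in radians that
        drive the target toward the image center.
        """
        cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
        err_x = (target_px[0] - cx) / cx   # normalized error in [-1, 1]
        err_y = (target_px[1] - cy) / cy
        # Assume positive pan turns the camera right and positive tilt up;
        # image y grows downward, hence the sign flip on tilt.
        return gain * err_x, -gain * err_y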

“DARwIn: Dynamic Anthropomorphic Robot with Intelligence” is a humanoid bipedal robot research platform for studying dynamic gaits and locomotion. Outfitted with a sensor suite and computers, DARwIn can also perform complex high-level tasks and autonomous behaviors such as playing soccer. DARwIn will be the first and only US entry in the humanoid division of the international autonomous soccer competition, RoboCup.