Project Ideas for Intelligent Robotics

Each student in CS 395T will do a term project: creating, documenting,
and evaluating a system on Vulcan (the wheelchair) or Lassie (the
Magellan). Here are some suggestions.

This list of ideas will change, and the individual ideas will change,
as we learn more about the problems. (And as I provide better
references.)

Learning Skilled Control Laws

A control law helps a robot accomplish a low-level task. However, it is important
not only to do the task, but to do it well.

Here are a few projects that involve first designing a control law,
then figuring out how to incorporate learning to allow the robot
to do that task quickly, precisely, and with assurance. In short,
to demonstrate its skill.

Project 1: Pass Through a Doorway

The robot is moving along a hallway, looking for a doorway on one
side or the other. It detects the door, assures itself that the
door is open, then turns and drives directly and skilfully through
the doorway, stopping at a distinctive state on the other side.

This is particularly demanding for the wheelchair, since it has about
two inches of clearance on each side in a standard doorway. But the
ability to move through doorways with assurance, rather than stopping
and creeping through them, will make a big difference to the acceptance
of the wheelchair.

An important subtask is for the robot to estimate its own uncertainty in
sensing and motion, so it knows when it can move confidently, and when
it needs to slow down to avoid collisions.

A good way to start the project is by simulating doorways with the large
cardboard boxes we have in the robot lab.

Project 2: Using the Elevator

The wheelchair moves along the hallway to the elevator. It pauses
while the driver or a companion presses the elevator button, and
waits for the elevator to arrive. When the doors open, it waits
briefly to see if anyone is exiting, then sweeps smoothly into the
elevator and turns around to position itself by the elevator buttons.
It waits for the doors to open again, then moves smoothly and
confidently out of the elevator. (Here we are assuming that the next
stop is where we really want to get off.)

As with passing through the doorway, the goal is to move skilfully
and confidently without taking significant risks of collisions.

This is one of a large set of "semi-autonomous routines" that the
wheelchair might need, including negotiating a wheelchair ramp (see the
south door of Taylor Hall), or getting onto the lift on a bus.
Consider the design of a general-purpose language and toolkit of
utilities to support specification and refinement of these control laws.

With both projects, the selection and use of appropriate machine learning
or neural net learning methods will be essential.

Project 3: Pedestrian Avoidance

The robot moves down the hallway, which also contains some number of
pedestrians moving in both directions. For various densities of
pedestrians and rates of motion, devise control laws to make as much
progress as possible toward one's own goals while avoiding collisions.

E. Prassler, J. Scholz and P. Fiorini, Navigating a robotic wheelchair
in a railway station during rush hour.
International Journal of Robotics Research, 18(7): 711-727, 1999.

Topological Graphs: Node and Edge versus Place and Path

A traditional graph consists of nodes and edges. Graph search gives
a route, which is a sequence of nodes; the edges connecting adjacent
nodes are implicit.

The SSH topological map includes places and paths, as well as other
structures we will discuss elsewhere. A place is on one or more paths.
A path includes an ordered set of places. A route is an alternating
sequence of places and paths, with the starting place at the beginning
and the destination place at the end. The constraint within a route
is that each place in the route lies on both of the paths adjacent
to it in the route.

Compare the two representations (node-edge vs place-path)
in terms of the practical complexity of way-finding.

It seems likely that the asymptotic complexity is the same;
verify that this is true.

The place-path representation will have a higher branching factor
and shorter route length. How does this affect difficulty
of way-finding?
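To make the comparison concrete, here is a minimal sketch of
breadth-first way-finding over a place-path map. The dictionary
encoding and the function name are my own assumptions, not part of the
SSH definition; a route comes back as an alternating sequence of
places and paths.

```python
from collections import deque

def find_route(place_paths, path_places, start, goal):
    """Breadth-first way-finding over a place-path map.

    place_paths: dict mapping each place to the set of paths it is on.
    path_places: dict mapping each path to its ordered list of places.
    Returns a route as an alternating [place, path, place, ...] list,
    or None if the destination is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        route = frontier.popleft()
        if route[-1] == goal:
            return route
        # Branch on every (path, place) pair reachable from here: this
        # is the larger branching factor the question above asks about.
        for path in place_paths[route[-1]]:
            for nxt in path_places[path]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(route + [path, nxt])
    return None
```

By construction, every place in the returned route is on both of its
adjacent paths, so the route constraint holds without a separate check.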

Testing Hierarchical Topological Maps

The TOUR Model can scale up to very large maps because it can represent
hierarchical topological maps. This set of three collaborative projects
makes it possible to test a hypothesis about the structure of hierarchical
topological maps.

A number of cognitive psychologists have observed that expert way-finders
such as taxi-drivers often use a "skeleton" of streets through a city,
and solve way-finding problems by finding a route from the start-point
to the skeleton, and from the skeleton to the end-point, then connecting
the two through the skeleton. The hypothesis is that the presence of
this skeleton is explained by the interaction between a simple rule for
learning "boundary relations" in the topological map, and a simple
heuristic for using boundary relations to define subgoals during
way-finding.

Testing this hypothesis will require several significant parts,
working together.

Project 1: Evaluation Environment

To evaluate a hypothesis about the topological map, it is helpful to
have a model of a large-scale environment where we can simulate travel
and way-finding. You may select one of two alternative approaches:

Synthesize a road-map for a large imaginary city.

Find a database with the road-map for Austin, and translate it to
a form we can use. This is more compelling, but more difficult.
There are people who have done it for other purposes.

However the map is created, it has the same purpose, and the same
interface with the other projects:

Start from a given place.

Select a random place as a destination.

Find a route from the starting place to the destination.

If both start and destination places are in the cognitive map,
call the way-finding method created by one of the other projects
to find the route.

If either place is not in the cognitive map, use graph search
to find a route to the closest place in the cognitive map,
then use the cognitive map way-finder to connect them.

Simulate the route, giving an alternating sequence of views and actions
serving as input to the TOUR model, which allows the agent
to learn its own causal and topological map of the part of the
environment it experiences.

The TOUR model implementation is created by one of the other
projects.

In full generality, different distinctive states may have the
same view, but for this project, you can make views that
specify distinctive states uniquely.

Treat the destination as the new starting place, and repeat.
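A skeletal version of this loop might look like the following. All of
the names are placeholders: the cognitive map here is modeled as just
the set of places visited, standing in for the TOUR model built in
Project 2, and the full-map fallback is a plain graph search.

```python
import random
from collections import deque

def bfs_route(graph, start, goal):
    """Shortest route by BFS over a plain node-edge adjacency dict."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        route = frontier.popleft()
        if route[-1] == goal:
            return route
        for nxt in graph[route[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(route + [nxt])
    return None

def evaluation_loop(graph, n_trips, seed=0):
    """Repeatedly pick a random destination, find a route, and let the
    agent 'learn' what it experiences along the way."""
    rng = random.Random(seed)
    place = rng.choice(sorted(graph))
    cognitive_map = {place}
    trips = []
    for _ in range(n_trips):
        dest = rng.choice(sorted(graph))
        route = bfs_route(graph, place, dest)
        cognitive_map.update(route)   # the agent learns what it experiences
        trips.append(route)
        place = dest                  # destination becomes the new start
    return cognitive_map, trips
```

In the real project, the `cognitive_map.update` step would be replaced
by feeding the simulated view/action sequence to the TOUR model.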

Project 2: Implement the TOUR Model, Learn Boundary Relations

The rules developed by Kuipers and Remolina for the TOUR model are
written in a special-purpose logic programming language called
Algernon. Design a simple macro-language for rules in Lisp or C++, and
a simple set of object declarations for the objects in the TOUR model
(primarily views, actions, distinctive states, schemas, places, paths
and regions).

The program should take an alternating sequence of views and actions
as produced by the route simulator in Project 1, and create the linked
network of views, actions, distinctive states, schemas, places, paths
and regions that are the causal and topological maps.

Initially, you may assume that views uniquely specify distinctive states.
A version that handles perceptual ambiguity is significantly more
difficult, but important.

Boundary relations are relations between places and paths. Conceptually,
a path divides the world into three subsets: the places on the path,
the places to the right of the path, and the places to the left of the
path. As the agent travels along a route, it is sometimes easy to learn
the relations between certain places and certain paths. In particular,
one can put places into the right-side or left-side regions of
specific paths, based on travel patterns. Rules to do this are
included in the TOUR model.
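As a concrete illustration, here is one simplified travel-based rule.
This is my own sketch, not the TOUR model's actual rule set: after a
right turn off a path, places visited until the route returns to that
path are recorded as lying in its right-side region (mirror-image for
left turns).

```python
def learn_boundary_relations(route):
    """Simplified boundary-relation learner.

    route: list of (place, path, turn) triples, meaning the agent is at
    `place`, leaves along `path`, and `turn` is the turn it made onto
    that path ('left', 'right', or 'straight')."""
    relations = {}   # (path, 'left'/'right') -> set of places
    for i, (place, path, turn) in enumerate(route):
        if turn in ('left', 'right') and i > 0:
            prev_path = route[i - 1][1]          # the path turned off of
            side = relations.setdefault((prev_path, turn), set())
            for later_place, later_path, _ in route[i + 1:]:
                if later_path == prev_path:
                    break                        # back on the old path: stop
                side.add(later_place)
    return relations
```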

Project 3: Way-finding with Boundary Heuristics

Given a causal and topological map constructed by the TOUR model, and
a start place and destination place in that map, find a route from the
start to the destination place.

A place is on one or more paths. A path includes an ordered set of places.
Therefore, the topological map is a graph of places and paths. A route
is an alternating sequence of places and paths, with the starting place
at the beginning, the destination at the end, and each place on each
path it is adjacent to in the route.

Depending on the information in the cognitive map, there are a variety of
heuristics that can speed up the search. The boundary heuristic is

If there is a path such that the starting place is in the
right-side region of the path, and the destination place is in
the left-side region of the path (or vice versa),

then treat the path as a sub-goal in the way-finding search.
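In code, the heuristic might be sketched as follows, assuming a
hypothetical relation format that maps (path, side) pairs to the set
of places known to lie in that region:

```python
def boundary_subgoals(relations, start, dest):
    """Return the paths that are known to separate start from dest;
    each such path is a candidate sub-goal for the way-finding search.

    relations: dict mapping (path, 'left'/'right') to a set of places."""
    subgoals = []
    for path in sorted({p for p, _ in relations}):
        right = relations.get((path, 'right'), set())
        left = relations.get((path, 'left'), set())
        if (start in right and dest in left) or (start in left and dest in right):
            subgoals.append(path)
    return subgoals
```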

Putting the Projects Together

The hypothesis is that there will be a self-reinforcing pattern between
the paths with rich boundary relations and the way-finding heuristic that
sends new travel along those paths. Paths that appear in a lot of routes
will get a lot of boundary relations, and thus appear in more routes.
They will form the abstract skeleton of the environment.

Setting up the above evaluation environment, learning method, and
way-finding method will allow us to determine whether the skeleton
emerges from this cycle. If it does, we will also be able to study
the impact of different geography and different travel patterns on the
emergence of the skeleton.

If this is successful, it will be a publishable result.

Projects 1 and 2 could each be done and evaluated in isolation, but Project
3 would be very difficult to evaluate without Projects 1 and 2 working.

Exploring and Mapping 3D Environments

The SSH, out to the patchwork metrical map, is remarkably insensitive
to the dimensionality of the underlying space. The major impact of
moving in a higher-dimensional space is that the control laws may be
slightly more complex, and that the local metrical maps will require
more storage.

There are two different 3D environments of great importance that you
can study from this perspective:

in space, you have zero gravity and zero friction;

under water, you have zero gravity but very high damping friction.

The problems to be solved include:

Specify control laws for roll, pitch and yaw.

Specify control laws for stopping and docking.

How do we define and hill-climb to distinctive states in
a 3D place neighborhood?

How do we define and follow an edge?

How do we wander around a 2D face of a 3D object, looking for
a 1D edge?

How do we wander in the 3D environment, looking for an object?
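As a starting point for the roll/pitch/yaw control laws, here is a
one-axis PD controller sketch with an explicit environment damping
term. The gains, unit inertia, and function name are assumptions for
illustration; damping near zero approximates space, large damping
approximates water.

```python
def simulate_axis(angle, rate, target, kp, kd, damping, dt, steps):
    """Euler-integrate one attitude axis (roll, pitch, or yaw) under a
    PD control law.  `damping` models the environment: ~0 for space,
    large for underwater.  Unit moment of inertia is assumed."""
    for _ in range(steps):
        torque = kp * (target - angle) - kd * rate   # PD control law
        rate += (torque - damping * rate) * dt
        angle += rate * dt
    return angle
```

Note that in the high-damping case the controller can omit its own
derivative term (kd = 0) and still settle, which is one concrete way
the two environments change the control-law design.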

The undamped and overdamped environments are sufficiently different
that there would be room for two people to work on this project, sharing
insights into the common parts.

Voronoi Diagrams from Exploration

For a geometric figure in two dimensions, the Voronoi Diagram is the
set of points equidistant from two or more nearest points of the
figure. In a corridor environment, the Voronoi Diagram follows the
midline of the corridors. It is widely used in computational geometry,
and there are many algorithms for computing it.

Build a simulated robot to compute the Voronoi Diagram of an arbitrary
scene by exploring the environment.

The Voronoi Robot is circular, with center at (x,y) and radius r.

The Voronoi Robot (VR) moves by changing x, y and r to stay in
contact with obstacles.

If no contact, increase r until first contact.

With one point of contact, move (x,y) in a circular arc
about that point of contact until a second contact occurs.

With two points of contact, move (x,y) along the perpendicular
bisector of the chord, shrinking or growing r to maintain contact.

If a third point of contact appears, define a distinctive place.

Explore by moving toward all three chords defined by the points of
contact.

Keep track of choice points at the distinctive places to make sure
the Voronoi Robot explores the entire environment.

The trace of (x,y) values of the center of the Voronoi Robot is the
Voronoi Diagram.
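The two-contact phase is the geometric heart of the tracer: since both
contact points lie on the circle, the center is equidistant from them,
so it can slide along the perpendicular bisector of their chord. A
sketch of one such step (coordinate conventions and names are my own):

```python
import math

def bisector_step(center, c1, c2, step):
    """Move the VR center a distance `step` along the perpendicular
    bisector of the chord c1-c2, keeping the sense of travel away from
    the chord, then reset r so both contacts stay on the circle.
    All points are (x, y) tuples."""
    mx, my = (c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    bx, by = -dy, dx                      # chord rotated 90 degrees
    norm = math.hypot(bx, by)
    bx, by = bx / norm, by / norm
    sx, sy = center[0] - mx, center[1] - my
    if sx * bx + sy * by < 0:             # keep the current side of the chord
        bx, by = -bx, -by
    nx, ny = center[0] + step * bx, center[1] + step * by
    r = math.hypot(nx - c1[0], ny - c1[1])   # equal to the c2 distance
    return (nx, ny), r
```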

Once this works, generalize the Voronoi Robot in three ways:

Put upper and lower bounds on r.

Add a wall-following control law, in addition to the midline following
control law.

Provide noise-rejection, so that two contact points that are sufficiently
close together are treated as a single point.

Note that each of these three generalizations includes parameter values.

Explore the effects of different parameter values on the diagrams produced
by the generalized Voronoi Robots.

Training a Neural Network to Select Control Laws

Take a laser range-finder scan as input (or the local occupancy grid)
and train a selection mechanism given user-provided selections of the
appropriate control law. One issue is the relatively small number of
training instances provided. An opportunity is to treat the
activation levels of the output units as appropriateness measures,
which would then support fuzzy transitions. Can we use the same
method to determine when the current control law is terminating?
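The output-side of this idea can be sketched directly: a softmax over
the output activations gives the appropriateness measures, and a
threshold on the runner-up flags candidates for fuzzy transitions. The
network producing the activations is assumed trained elsewhere; names
and the threshold value are illustrative assumptions.

```python
import math

def appropriateness(logits):
    """Softmax over output-unit activations: one appropriateness
    measure per candidate control law, summing to 1."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def select_control_law(logits, laws, blend_threshold=0.2):
    """Pick the control law with the highest appropriateness; also
    report any runner-up above `blend_threshold` as a candidate for a
    fuzzy transition between laws."""
    ranked = sorted(zip(appropriateness(logits), laws), reverse=True)
    best = ranked[0][1]
    blends = [law for score, law in ranked[1:] if score >= blend_threshold]
    return best, blends
```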

Hazard Repulsion

Define a control strategy to avoid hazards while responding to a
specified control law. Assume for this project that hazards can be
sensed by the laser range-finder.

If the driver is commanding at the control level via the joystick,
deviate minimally from the commanded direction, as needed to avoid
collisions with the environment. Does this require recognizing the
driver's intentions, or can it be implemented as a simple potential
field controller?
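A minimal potential-field sketch of the joystick case, to make the
question concrete. The repulsion profile, gain, and influence distance
are assumptions for illustration; obstacle points would come from the
laser range-finder.

```python
import math

def deflected_heading(command, obstacles, gain=1.0, influence=1.5):
    """Start from the commanded heading (a unit (x, y) vector from the
    joystick) and add a repulsive term from each obstacle point within
    `influence` meters, weighted inversely by distance.  Obstacle
    points are (x, y) offsets from the robot.  Returns a unit vector."""
    fx, fy = command
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 0 < d < influence:
            w = gain * (1.0 / d - 1.0 / influence) / d
            fx -= w * ox / d   # push directly away from the obstacle
            fy -= w * oy / d
    n = math.hypot(fx, fy)
    return (fx / n, fy / n) if n > 1e-9 else (0.0, 0.0)
```

Note that this controller needs no model of the driver's intentions at
all, which is one answer to the question above; whether it deviates
"minimally" is exactly what an evaluation would have to test.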

If the driver is commanding at the causal or topological level, then
the wheelchair may detect an unexpected obstacle in the middle of a
place neighborhood or path segment. What is the minimal deviation
required to avoid the obstacle, and if possible to enable the robot
to complete the desired action? If it is not possible, what then?

Fox, et al, on obstacle avoidance

Arkin and/or Slack on potential fields

Hazard Detection

Devise a multi-modal strategy, combining laser range-finder, vision,
IR proximity sensor, sonar and others, for detecting hazards in the
way of commanded travel. These include mappable barriers like walls
and closed doors, unexpected obstacles like trash, and movable and
moving obstacles like pedestrians.

A particular problem is detecting drop-offs like curbs and stairwells.

Coping with Pedestrians

Devise a strategy for detecting moving obstacles (walking pedestrians
and moving cars) and avoiding collisions with them. Detect motion and
predict approximate trajectory. Move minimally to sidestep the
possibility of collision. Analyze motion strategies and customs of
pedestrians and cars, to be able to function effectively in a human
environment. Generalize to motion in a crowded hallway or transit
terminal. How should you respond when there is simply no room to
move?
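The "predict approximate trajectory" step can start very simply, e.g.
with a constant-velocity extrapolation and a closest-approach check.
Both function names and the sampling convention are assumptions for
illustration.

```python
import math

def predict_position(p0, p1, dt, horizon):
    """Constant-velocity sketch: given two successive observations of a
    pedestrian taken dt seconds apart, extrapolate its position
    `horizon` seconds past the second observation."""
    vx, vy = (p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt
    return (p1[0] + vx * horizon, p1[1] + vy * horizon)

def min_separation(robot_path, ped_path):
    """Minimum distance between time-aligned samples of the robot's
    planned path and the pedestrian's predicted path; a collision check
    would compare this against the combined clearance radius."""
    return min(math.hypot(rx - px, ry - py)
               for (rx, ry), (px, py) in zip(robot_path, ped_path))
```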

German wheelchair on railway platform

A Programming Language for Semi-Autonomous Routines

Devise, implement and demonstrate a language for explicitly
programming behavioral routines to handle intricate, localized motion
problems. These could include passing through a doorway, entering and
using an elevator, navigating up/down a wheelchair ramp, and
eventually parking itself on an automobile wheelchair rack.

The language should be simple and clear, to allow easy programming,
but then should have a mechanism for learning through practice to tune
its parameters and smooth its transitions. The generality of the
language is essential, not just a demonstration of one or two
routines.

The simplest routine is an interesting unguided wander around an
irregular space, avoiding collisions.

Mataric, wandering routine in subsumption architecture

Gomi, wheelchair stuff

Natural Languages for Commanding a Robot

An intelligent robotic assistant raises interesting problems of
autonomy and control. An assistant is valuable because it can use its
intelligence autonomously in service to the human's goals. The human
must maintain (and perceive) control over the robot's actions. With
experience, the human driver will gain confidence in the robot's
ability to perform autonomously in response to higher level commands.

There are three conceptual levels of command language.

Control: the driver provides control actions via joystick, which the robot follows, avoiding hazards and interpreting intentions if possible.

Action: the driver specifies an action to perform (Forward, Right, Left, Stop, etc). The robot selects and executes control laws appropriate to the environment, stopping at the next decision point for further instruction.

Goal: the driver specifies a goal to achieve (usually a named place). The robot devises a plan to reach that goal, perhaps discussing it with the driver, and executes the plan.
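The three levels suggest a simple dispatch structure. The robot
interface below (method names and the canned plan) is a stub invented
for illustration, not a real API.

```python
class StubRobot:
    """Minimal stand-in so the dispatcher below can be exercised."""
    def follow_joystick(self, velocities):
        return ('control', velocities)
    def run_action(self, action):
        return ('did', action)
    def plan_route(self, goal):
        return ['Forward', 'Right', 'Forward']   # canned plan for illustration

def handle_command(robot, level, payload):
    """Dispatch one driver command at the control, action, or goal level."""
    if level == 'control':
        return robot.follow_joystick(payload)    # hazard-checked velocities
    if level == 'action':
        return robot.run_action(payload)         # e.g. 'Forward', 'Left'
    if level == 'goal':
        plan = robot.plan_route(payload)         # payload: a named place
        return [robot.run_action(step) for step in plan]
    raise ValueError('unknown command level: %s' % level)
```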

Implement these languages for control of the wheelchair, designing a
suitable human interface to support the necessary human interaction
with the robot. Evaluate speech input for this application.

Analyze the generality of this command language framework for other
robot control tasks, including command of a teleoperated robot
explorer on Mars (30-60 minute communication delay).

Disambiguate Perceptual Aliasing

When traveling through an environment, some of the distinctive states
will have the same view. This is called perceptual aliasing,
and we want to solve it by using information from context (travel
history or topological neighbors) to determine unambiguously which
distinctive state the agent is at.

The problem of eliminating perceptual aliasing is one aspect of the
problem of learning the structure of a deterministic finite automaton
(DFA) from its sequence of observations. Stated in the robot
exploration context, the problem is to define a good strategy for
exploring unknown environments.
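One concrete form of context-based disambiguation is to replay the
recent travel history against the map's predictions, pruning
hypotheses that are inconsistent with what was actually observed. The
function name and the successor-function signature are assumptions for
illustration.

```python
def disambiguate(candidates, history, successor):
    """Keep only the distinctive-state hypotheses consistent with the
    observed (action, view) history.

    candidates: states sharing the current ambiguous view.
    successor(state, action): the (next_state, next_view) the
        topological map predicts, or None if it has no prediction.
    Returns the surviving hypotheses for the agent's current state."""
    for action, observed_view in history:
        next_candidates = []
        for state in candidates:
            prediction = successor(state, action)
            if prediction is not None and prediction[1] == observed_view:
                next_candidates.append(prediction[0])
        candidates = next_candidates
    return candidates
```

When exactly one candidate survives, the aliasing is resolved; when
several survive, the exploration strategy must choose actions that are
expected to distinguish them, which is the DFA-learning problem above.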