Carnegie Mellon
Mobile Robot Agents
Eduardo Camponogara
18-879, Special Topics in Systems and Control: Agents
Electrical & Computer Engineering
Report Goals
Goals:
- A study of the specifics of robotic agents.
- What makes robot agents different from agents in other domains, such as the web?
- An investigation of collaboration mechanisms for teams of robots.
Today's Outline
Topics: Collaboration, Agent Perception, Mapping, and Navigation Planning.
Readings:
- "Cooperative Mobile Robotics: Antecedents and Directions," 1997, by Uny Cao et al.
- "Sensor-Based Real-World Mapping and Navigation," 1987, by Elfes.
- "Using Occupancy Grids for Mobile Robot Perception and Navigation," 1989, by Elfes.
- "A Probabilistic Approach to Concurrent Mapping and Localization for Mobile Robots," 1998, by Thrun.
Multiple-Robot Systems
The motivations for the intense interest in designing systems of multiple robots:
- Tasks may be complex.
- A single robot is limited in the space it covers and perceives.
- Efficiency of scale: building simple robots is easier, cheaper, and more flexible.
- A faulty robot can be replaced.
(Figure: limited perception of a single robot; replacing a faulty robot.)
Cooperative Behavior
Given a task, a multiple-robot system displays cooperative behavior when the underlying collaboration mechanism makes the total utility increase. That is, the system's performance is higher when robot agents collaborate.
(Figure: non-cooperative vs. cooperative teams: same work, but less effort.)
Cooperative Behavior
Observation: most of the research has focused on cooperation mechanisms.
The design problem: given a) a team of robots, b) an environment, and c) a task, find a cooperation mechanism.
Research: along the axes, or elements, of the design space.
(Figure: a team of robots in an environment.)
The Axes of the Design Space
- Architecture Model: organization (centralized/decentralized), differentiation (homogeneous/heterogeneous), modeling of other agents.
- Resource Conflicts: space sharing (restricted/multiple paths; autonomous/centralized resolution).
- Cooperation Origin: innate (as in insects) or motivated (by utility).
- Learning: finding control parameters.
- Communications: through sensing (vision, radar) or explicit (wireless network).
Two Relevant Points
1) Does the scaling property of decentralization offset the coordinative advantage of centralized systems? Neither empirical nor theoretical work addressing this question in mobile robotics has been published yet.
2) Agent perception and localization are usually taken for granted in the software domain. In robotics, however, perception and localization define research sub-fields.
Distinguishing characteristic of robot agents: simulated results may be inconclusive without adequate modeling of error and uncertainty in perception and location.
Perception & Location in Robot Agents
To accomplish its task, the autonomous robot must plan ("Where am I?"). To conceive a plan, the autonomous robot needs a description of the "world" and should know its location.
- How does the robot agent represent its world?
- How does the agent map the unknown environment while accounting for uncertainty in perception and location?
These questions define the Mapping Problem.
Representing the World
The Occupancy Grid: the grid stores the probability p(x,y) that cell c(x,y) is occupied.
Applications: given the occupancy grid and landmarks, the agent can come up with a plan to accomplish its tasks (e.g., drop cans into a garbage bin).
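A minimal sketch of such a grid in Python; the grid size and the 0.5 "unknown" prior are illustrative assumptions, not from the report:

```python
import numpy as np

def make_occupancy_grid(width, height, prior=0.5):
    """Create a grid where each cell holds p(x, y), the probability
    that the cell is occupied; 0.5 encodes "unknown"."""
    return np.full((height, width), prior)

grid = make_occupancy_grid(10, 8)
grid[3, 5] = 0.9   # mark one cell as probably occupied
```

In this representation, planning reduces to searching over cells whose occupancy probability is below some threshold.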
Features of the Occupancy Grid
Traditional approaches to representing the world rely on the recovery and manipulation of geometric models.
Advantages of the occupancy grid:
- No need for prior knowledge of the environment.
- Incremental discovery procedure.
- Explicit handling of uncertainty.
- Ease of combining data from multiple sensors.
Sensing the Surroundings
Sensing Procedure: the robot agent a) senses its surroundings, b) processes the signals, and c) computes the occupancy estimate r(i), in {OCC, EMP, UNK}, of cell i.
Sensing Action: Pe is the probability that the cell is empty; Po is the probability that the cell is occupied.
(Figure: Pe and Po as functions of the distance R to the obstacle.)
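A sensor profile of this kind can be sketched as follows; the Gaussian peak of Po at the measured range and the decay of Pe toward it are illustrative shapes, not Elfes' exact sonar model:

```python
import math

def sensor_profile(d, R, sigma=0.2):
    """For a range reading R, return (Pe, Po) for a cell at distance d
    along the beam: cells well before R are probably empty, cells near
    R are probably occupied. The shapes are illustrative assumptions."""
    po = math.exp(-((d - R) ** 2) / (2.0 * (sigma * R) ** 2))
    pe = max(0.0, 1.0 - (d / R) ** 2) if d < R else 0.0
    return pe, po
```

At d = 0 the cell is almost surely empty (Pe = 1), while at d = R the occupied probability Po peaks at 1.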
Updating the Occupancy Grid
The robot computes the occupancy estimate r(i) of cell i at time t, where r(i) takes values in {OCC (occupied), EMP (empty), UNK (unknown)}.
We want to compute the probability that cell i is occupied at time t, p[C(i)=OCC | r(i)], given the observation r(i). Assuming that the process is Markovian in space and time, p[C(i)=OCC | r(i)] can be computed with Bayes' rule as follows:

p[C(i)=OCC | r(i)] = p[r(i) | C(i)=OCC] · p[C(i)=OCC] / p[r(i)]
                   = p[r(i) | C(i)=OCC] · p[C(i)=OCC] / Σ (over all s) p[r(i) | C(i)=s] · p[C(i)=s]
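For a two-state cell (OCC/EMP), the sum over s has two terms and the update can be sketched directly; the likelihood values in the example are hypothetical sensor-model numbers:

```python
def bayes_update(p_occ, lik_occ, lik_emp):
    """Return p[C=OCC | r] given the prior p[C=OCC] and the likelihoods
    p[r | C=OCC] and p[r | C=EMP]; the denominator sums over both states."""
    num = lik_occ * p_occ
    return num / (num + lik_emp * (1.0 - p_occ))

# A reading that favors "occupied" (hypothetical likelihoods 0.9 vs. 0.2)
p = bayes_update(0.5, 0.9, 0.2)   # ≈ 0.818
```

The posterior from one reading becomes the prior for the next, which is what makes the grid update incremental.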
An Instance of Occupancy Grid
(Figure: an example grid, showing the cell probabilities and the corresponding occupancy estimates.)
Weakness of the Updating Procedure
Reminder: map building is the problem of determining the location of the entities of interest relative to a global frame of reference. Example: determine obstacles relative to the Cartesian frame.
To determine the location of these entities, the robot agent needs to know its own location.
Weaknesses of the previous approach:
- It is sensitive to error/uncertainty in the agent's location.
- It does not account for past sensor readings.
Improving Quality of Occupancy Grids
New Approach: formulate the mapping (updating) problem as a maximum-likelihood estimation problem such that:
a) the locations of the landmarks are estimated,
b) the robot's position is estimated, and
c) all past sensor readings are considered.
Elementary Models:
- Robot Motion: given the current position and control input, what is the next position?
- Robot Perception: given the current map and the robot's position, what are the observations?
Elementary Models
Robot Motion:
- X denotes the robot's location in space; U denotes the control action.
- P(X' | X, U) is the probability that the robot is at position X' if it executed action U at location X.
Robot Perception:
- O denotes the landmark observation (e.g., an obstacle); M denotes the map of the environment (the occupancy grid).
- P(O | X, M) is the probability of making observation O, given that the robot is at location X and M is the map.
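The two conditional distributions can be sketched in one dimension; the Gaussian noise models and the landmark-list map are assumptions made for illustration:

```python
import math
import random

def sample_motion(x, u, noise=0.1, rng=random):
    """Sample X' from P(X' | X, U): the commanded displacement u plus
    Gaussian noise (an assumed 1-D noise model)."""
    return x + u + rng.gauss(0.0, noise)

def perception_likelihood(o, x, m, sigma=0.5):
    """P(O | X, M), up to normalization: Gaussian likelihood of observing
    range o to the nearest landmark in map m from position x."""
    expected = min(abs(lm - x) for lm in m)
    return math.exp(-((o - expected) ** 2) / (2.0 * sigma ** 2))
```

A motion model is most naturally a sampler (for prediction), while a perception model is most naturally an evaluator (for weighting hypotheses).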
The Data
The data is a sequence of control actions, u(t), and observations, o(t):
d = {o(1), u(1), …, o(n-1), u(n-1), o(n)}
The model is an HMM (Hidden Markov Model).
Hidden variables:
1) The agent does not know its location x(t) at time t.
2) It does not know the map m either.
Finding the Most Likely Map
Let:
- P(m|d) be the likelihood of map m given data d,
- P(d|m) be the likelihood of data d given map m,
- P(d) be the probability of observing data d,
- P(m) be the prior probability of map m.

The most likely map: m* = argmax P(m|d) = argmax P(d|m) · P(m) / P(d)

Problem Solution: the Expectation-Maximization (EM) algorithm for HMMs, together with some tricks, can compute m* efficiently.
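Since P(d) does not depend on m, the argmax can be taken over P(d|m) · P(m) alone; a brute-force sketch over a finite set of candidate maps (the likelihood and prior functions are caller-supplied, hypothetical interfaces):

```python
def most_likely_map(candidates, likelihood, prior):
    """m* = argmax_m P(d|m) * P(m); the constant P(d) is dropped."""
    return max(candidates, key=lambda m: likelihood(m) * prior(m))

# Tiny example with three candidate maps and made-up scores
maps = ["m1", "m2", "m3"]
lik = {"m1": 0.1, "m2": 0.5, "m3": 0.2}
best = most_likely_map(maps, lik.get, lambda m: 1.0 / 3.0)   # → "m2"
```

Real map spaces are far too large to enumerate, which is why EM is needed in practice.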
The Outline of the EM Algorithm
Step 1. Set t=0 and guess a map m(0).
Step 2. (E-step) Fix the model m(t) and estimate the probabilities.
Step 3. (M-step) Find the model m(t+1) of maximum likelihood.
Step 4. Set t=t+1 and go to Step 2.
It works like a steepest-descent gradient algorithm: estimate the gradient, then take a step.
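The E/M alternation can be illustrated with a toy 1-D problem (a sketch under strong simplifying assumptions, not Thrun's full algorithm): the robot starts at x = 0, each control is a displacement with assumed Gaussian noise, and each observation is the distance to a wall at unknown position m.

```python
def em_map_estimate(observations, controls, m0=0.0, iters=20,
                    motion_var=1.0, obs_var=1.0):
    """E-step: fix the map m and estimate the hidden poses x(t) as a
    variance-weighted blend of dead reckoning and the pose implied by
    the observation. M-step: fix the poses and choose the
    maximum-likelihood wall position (the sample mean)."""
    # Dead-reckoned poses from the controls alone (start pose x=0 is known).
    dead, s = [], 0.0
    for u in controls:
        s += u
        dead.append(s)
    w_motion = (1.0 / motion_var) / (1.0 / motion_var + 1.0 / obs_var)
    w_obs = 1.0 - w_motion
    m = m0
    for _ in range(iters):
        # E-step: posterior mean of each pose, given the current map m.
        x = [w_motion * d + w_obs * (m - o) for d, o in zip(dead, observations)]
        # M-step: maximum-likelihood wall position given the estimated poses.
        m = sum(xi + o for xi, o in zip(x, observations)) / len(x)
    return m

# Noise-free example: wall at 10, start at 0, unit steps.
m_hat = em_map_estimate([9.0, 8.0, 7.0], [1.0, 1.0, 1.0])   # ≈ 10.0
```

Each iteration is a contraction toward the fixed point, which mirrors the "estimate the gradient, take a step" picture above.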
Experiments
(Figures: the map built from raw data vs. the maximum-likelihood map; the occupancy grid from sonar data vs. the maximum-likelihood occupancy grid.)