THE CMU ROVER
HANS P. MORAVEC
CARNEGIE-MELLON UNIVERSITY
This draft: March 18, 1982
BACKGROUND
A new mobile robot being designed at the CMU Robotics Institute will continue
and extend the visual navigation research (the Cart Project) completed in 1981
at Stanford's AI Lab.
The project is funded by the Office of Naval Research under a contract
entitled Underwater Robots. The effort is interesting in its own right, and
complements projects involving underwater vehicles such as those at the
University of New Hampshire and the Naval Ocean Systems Center in San Diego.
These other efforts are interesting to us, but we feel they are incomplete.
Their current control systems are very simple, typically incorporating quite
small programs on one or a few microprocessors and implementing simple
dead-reckoning navigation or terrain-following behavior.
We have no experience or facilities for working with submersibles, but we do
have definitive experience with powerful control systems able to efficiently
navigate and otherwise intelligently deal with cluttered real world
environments. We plan to improve and expand on these techniques, making them
more reliable and computationally more efficient. Our testbed will, for
convenience, be a series of highly agile land rovers with vision and sonar as
their primary senses; the later ones will carry lightweight manipulators. Our
design calls for about a dozen onboard processors (about half of
them powerful 16 bit MC68000's) on the first rover for high speed local
sequencing, servo control and communication. Serious processing power,
primarily for vision, is to be provided at the other end of a remote-control
link by a VAX 11/780 - array processor combination. Though the fixed base
portion of our system is too big and power consumptive for a small autonomous
vehicle, we expect that the algorithms developed on it can be implemented in
future on compact specialized hardware.
Applying our dry land techniques to a submersible vehicle will require a
further effort, which we hope to make some time in the future in co-operation
with a group experienced with submarine robots (resulting in offspring
combining their bodies with our brains). Our land techniques are not as far
from the needs of an underwater traveller as they may appear. Though the
sensing modality in the latter case may be primarily sonar rather than
optical, the
three dimensional reasoning and planning that would be needed in an advanced
system is the same after the initial step of reducing the raw data to a rough
3D internal model. Only the sensory "front-end" would be different. The
difficult task of navigating very close to a cluttered seabed, or in an
intricate structure, is similar to the cluttered dry-land navigation problem,
though involving the third dimension to a greater extent.
STATUS
We are still in the system building stage in this project. Mechanical design
of the first rover is essentially complete, and many of the key mechanical
portions have been fabricated, and a structural framework exists. Electronic
design of the onboard processors is also essentially complete, and benchtop
prototypes are running. The required onboard low-level software has been
started, and portions of it are working in simulation.
DETAILS
The new rover may be the first of a series. Later ones may be equipped with
small lightweight manipulators being developed by Hari Asada and Takeo Kanade
in a parallel effort under the same contract. These arms are driven by
directly coupled low speed, high torque samarium-cobalt magnet motors. The
weight and power requirements will be further reduced by using the mobility of
the rover to substitute for the shoulder joint of the arm. Such a strategy
works best if the rover body is given a full three degrees of freedom (X, Y and
angle) in the plane of the floor. Conventional steering arrangements as in cars
give only two degrees at any instant.
Though our first robot will initially have no arm, we are constructing it
with the agility required later. We achieve this by mounting the chassis on
three independently steerable wheel assemblies. The control algorithm for this
arrangement at every instant orients the wheels so that lines through their
axles meet at a common point. Properly orchestrated, this design permits
unconstrained motion in any (2D) direction, and simultaneous independent
control of the robot's rotation about its own vertical axis. An unexpected
benefit of this agility is the availability of a "reducing gear" effect. By
turning about the vertical axis while moving forward the robot derives a
mechanical advantage for its motors. For a given motor speed, the faster the
rover spins, the slower it travels forward, and the steeper the slope it can
climb. (Visualization of this effect is left as an exercise for the reader.)
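The steering rule above can be sketched as a small kinematic computation:
point each wheel assembly along its own velocity vector, which is the body
velocity plus the contribution of spin about the vertical axis. This
automatically makes the perpendiculars through the axles meet at a common
point. The wheel positions below (an equilateral layout of assumed radius)
are illustrative guesses, not dimensions from the text.

```python
import math

# Hypothetical wheel-assembly positions (meters) in the robot frame;
# an equilateral arrangement at radius 0.25 m is assumed for illustration.
WHEELS = [(0.25 * math.cos(a), 0.25 * math.sin(a))
          for a in (math.pi / 2,
                    math.pi / 2 + 2 * math.pi / 3,
                    math.pi / 2 + 4 * math.pi / 3)]

def wheel_commands(vx, vy, omega):
    """Steering angle (rad) and rolling speed (m/s) for each assembly.

    Each wheel's ground velocity is the body velocity plus the rotational
    term: v_i = (vx, vy) + omega x p_i.  Orienting every wheel along its
    own v_i places a single instantaneous center of rotation where the
    axle lines meet.
    """
    cmds = []
    for (px, py) in WHEELS:
        wx = vx - omega * py          # omega x p, in the floor plane
        wy = vy + omega * px
        cmds.append((math.atan2(wy, wx), math.hypot(wx, wy)))
    return cmds
```

The reducing-gear effect shows up directly in these numbers: for a fixed
wheel rolling speed, commanding spin as well as translation leaves less of
each wheel's speed budget for forward travel.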
To permit low friction steering while the robot is stationary, each assembly
has two parallel wheels connected by a differential gear. The drive shaft of
the differential goes straight up into the body of the robot. A concentric
hollow shaft around this one connects to the housing of the differential.
Turning the inner shaft causes the wheels to roll forwards or backwards;
turning the outer one steers the assembly, with the two wheels rolling in a
little circle. The gear assemblies were manufactured for us by Summit Gear
Corp.
Each shaft is connected to a motor and an optical shaft encoder (Datametrics
K3). The two motors and two encoders are stacked pancake fashion on the wheel
assembly, speared by the shafts. There are no gears except for the ones in the
differential.
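The split between the two shafts can be illustrated with a toy model of one
assembly, assuming the differential divides the motion evenly between the
paired wheels; the wheel-separation figure is a guess, not from the text.

```python
def pair_wheel_speeds(drive, steer_rate, separation=0.08):
    """Ground speeds (m/s) of the two parallel wheels in one assembly.

    `drive` is the assembly's commanded rolling speed and `steer_rate`
    its steering rate (rad/s).  While steering, the two wheels travel on
    a little circle of radius separation/2, and the differential lets
    them roll at drive -/+ steer_rate * separation / 2.  The separation
    value is an illustrative assumption.
    """
    delta = steer_rate * separation / 2.0
    return drive - delta, drive + delta
```

Steering in place (`drive = 0`) makes the two wheels counter-rotate, which
is what permits low-friction steering while the robot is stationary.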
The motors are brushless with samarium-cobalt permanent magnet rotors and
three-phase windings (Inland Motors BM-3201). With the high energy magnet
material, this design has better performance when the coils are properly
sequenced than a conventional rotating coil motor. The coils for each motor are
energized by six power MOSFETs (Motorola MTP1224) mounted in the motor casing
and switched by six opto-isolators (to protect the controlling computers from
switching noise) whose LEDs are connected in bidirectional pairs in a delta
configuration, and lit by three logic signals connected to the vertices of the
delta.
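The proper sequencing of a three-phase winding can be sketched as the
standard six-step commutation table: in each 60-degree sector of rotor
electrical angle, one phase sources current, one sinks it, and one floats.
The drive-level encoding below is abstract; the rover's actual wiring maps
three logic lines through the delta of opto-isolator pairs onto six
transistor gates.

```python
# Six-step commutation for a three-phase brushless motor.  Levels are
# abstract: +1 = source, -1 = sink, 0 = floating.  One table row per
# 60-degree sector of rotor electrical angle.
COMMUTATION = [
    # (A,  B,  C)
    (+1, -1,  0),
    (+1,  0, -1),
    ( 0, +1, -1),
    (-1, +1,  0),
    (-1,  0, +1),
    ( 0, -1, +1),
]

def phase_drive(rotor_angle_deg):
    """Return the (A, B, C) drive pattern for a rotor electrical angle."""
    sector = int(rotor_angle_deg % 360) // 60
    return COMMUTATION[sector]
```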
The motor sequencing signals come directly from onboard microprocessors, one
for each motor. These are CMOS (Motorola MC146805 with Hitachi HM6116 RAMs) to
keep power consumption reasonable. Each processor compares the output of its
shaft encoder to a desired motion (supplied by yet another processor as a time
parameterized function) and energizes or de-energizes the appropriate motor
phase to keep the motor on track with no wasted power. Since each motor has a
potential power dissipation of 250 watts this is more than a nicety.
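The decision each motor processor makes can be sketched as a deadband
servo: energize a phase only while the tracking error is large enough to
matter, and otherwise de-energize to avoid wasting power. The deadband
width and the control-output names are illustrative assumptions.

```python
def servo_step(t, encoder_count, desired, deadband=2):
    """One control decision for a motor processor.

    `desired` is a time-parameterized function giving the target encoder
    count at time `t`.  The motor is energized only while the error
    exceeds a small deadband (an assumed value), so no power is
    dissipated when the motor is on track.
    """
    error = desired(t) - encoder_count
    if error > deadband:
        return "forward"    # energize the next phase forward
    if error < -deadband:
        return "reverse"    # energize the next phase backward
    return "off"            # on track: de-energize, save power
```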
The shaft encoder outputs and the torques from all the motors, as estimated
by the motor processors, are monitored by another processor, the simulator, a
Motorola MC68000, which maintains a dead-reckoned model of the robot's position
from instant to instant. The results of this simulation (which represent the
robot's best position estimate) are compared with the desired position,
produced by another processor (a 68000), the sequencer. The comparison takes
place in yet another processor (a 68000), the conductor, which orchestrates
the individual motor processors. The
conductor adjusts the rates and positions of the individual motors in an
attempt to bring the simulator in line with the sequencer, in what amounts to a
highly non-linear feedback loop.
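The core of such a dead-reckoned model is an integration of encoder-derived
body velocity into a world-frame pose. A first-order (Euler) update like the
one below is a simplification of whatever the onboard simulator actually
runs, offered only as a sketch.

```python
import math

def dead_reckon(pose, vx, vy, omega, dt):
    """Advance a dead-reckoned pose (x, y, theta) by one time step.

    (vx, vy) is the body-frame velocity inferred from the shaft encoders
    and omega the rotation rate; the body velocity is rotated into world
    coordinates before integrating.  Euler integration is an assumed
    simplification.
    """
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return (x + (c * vx - s * vy) * dt,
            y + (s * vx + c * vy) * dt,
            th + omega * dt)
```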
Other onboard processors are:
communications A 68000 which maintains an error corrected and checked packet
radio link with a large controlling computer (a VAX 11/780
helped out by an FPS 300 array processor, and a custom high
speed digitizer) which will do the heavy thinking. Programs
run in the sequencer are obtained over this link.
camera A 6805 which controls the pan, tilt and slide motors of the
onboard TV camera. This compact camera (Edo Western 1631)
broadcasts its image on a small 2W UHF transmitter (made by
3DBM Inc?). The signal is received remotely by a standard TV
tuner (we have several made by Motorola) whose video output is
digitized by a high bandwidth digitizer system and then read by
the remote VAX. There are tentative plans for a parallel
(minimal) vision system using a 68000 with about 256K of extra
memory onboard the rover, for small vision tasks when the rover
is out of communication with the base system.
sonar A 6805 which controls a number of Polaroid sonar ranging
devices around the body of the rover. These will be used to
maintain a rough navigation and bump avoidance model. Their
readings are available to the sequencer which will have
conditional and interrupt capabilities.
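A rough bump-avoidance model of the kind the sonar readings would support
can be sketched as a scan over a ring of range readings for directions with
adequate clearance. The ring size, threshold, and function name are all
illustrative assumptions, not details from the text.

```python
import math

def clear_headings(ranges, threshold=1.0):
    """Bearings (radians) with adequate sonar clearance.

    `ranges` is a ring of range readings (meters) from sonar units at
    evenly spaced bearings around the body.  A heading is considered
    clear if its reading exceeds `threshold` (an assumed value).  Such a
    summary could feed the sequencer's conditional capabilities.
    """
    n = len(ranges)
    return [2 * math.pi * i / n
            for i, r in enumerate(ranges) if r > threshold]
```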
Communication between processors is serial, via Harris CMOS UARTs. The
Conductor talks with the motor processors on a shared serial line and the
Sequencer communicates with the Sonar, Camera and any other peripheral
processors by a similar method.
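A checked packet on such a shared serial line might look like the toy frame
below: destination address, length, payload, and a one-byte checksum chosen
so a valid frame sums to zero. The format is invented for illustration; the
text says only that the links are checked and error corrected.

```python
def frame_packet(dest, payload):
    """Frame a message for a shared serial line (illustrative format):
    one destination byte, one length byte, the payload, and a checksum
    byte chosen so all bytes of a valid frame sum to 0 mod 256."""
    body = bytes([dest, len(payload)]) + bytes(payload)
    checksum = (-sum(body)) & 0xFF
    return body + bytes([checksum])

def verify_packet(frame):
    """A frame checks out when its bytes sum to 0 mod 256."""
    return sum(frame) & 0xFF == 0
```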
The rover is powered by six sealed lead-acid batteries (Globe gel-cell 12230)
with a total capacity of 60 amp hours at 24 volts. The motors are powered
directly from these; the rest of the circuitry derives its power indirectly
from them through switching DC/DC converters (Kepco RMD-24-A-24 and
Semiconductor Circuits U717262). Each 6805 processor draws about one eighth of
a watt, each 68000 board only one watt.
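The figures in the text give a rough sense of electronics endurance: 60 amp
hours at 24 volts is 1440 watt hours. The processor tally below (six of each
kind, from "about a dozen, about half 68000's") is an assumed mix, and motor
load, which varies hugely with activity, is ignored.

```python
# Rough endurance arithmetic from the figures in the text.
CAPACITY_WH = 60 * 24                    # 60 Ah at 24 V = 1440 Wh

# Assumed processor mix: six 6805s at 1/8 W, six 68000 boards at 1 W.
electronics_w = 6 * (1 / 8) + 6 * 1.0    # = 6.75 W

# Hours the electronics alone could run (motors idle) -- roughly 200 h.
hours_idle = CAPACITY_WH / electronics_w
```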
Physically the robot is approximately a cylinder a meter tall and 55
centimeters in diameter. It weighs about 100 kilograms. The maximum
acceleration is one quarter g, and the top speed is 10 kilometers/hour. With
appropriate onboard programming the motions should be very smooth. The great
steering flexibility will permit simulation of other steering systems such as
those of cars, tanks and boats by changes in programming.