The computing power required to perform general-purpose manipulation of
color video streams is too unwieldy to be worn in a backpack (although
I've constructed body-worn computers and other hardware to
facilitate very limited forms of reality mediation). Instead, a
system with good video-processing capability, such as
Cheops [16] or one or more SGI Reality Engines, may be used
remotely by establishing a full-duplex video communications channel
between the RM and the host computer(s).

Specifically, a
high-quality communications link (which I call
the `inbound channel') is used to
send the video from my cameras to the remote computer(s), while a
lower-quality communications link (the `outbound channel') is used to
carry the processed signal from the computer back
to my HMD. This apparatus is
depicted in a simple diagram
(Fig 2).

Figure 2: Simple implementation of a reality mediator
for use as a personal visual assistant.
The camera sends video to one or more computer systems
over a high-quality microwave communications link,
which I refer to as the `inbound channel'.
The computer system(s) send back the
processed image over a UHF communications link
which I refer to as the `outbound channel'.
Note the designations ``i'' for inbound (e.g. iTx
denotes inbound transmitter), and ``o'' for outbound.
`visual filter' refers to the process(es)
that mediate(s) the visual reality and possibly insert(s)
virtual objects into the reality stream.

Ideally both channels would be of high quality, but the
machine-vision algorithms were found to be much more susceptible to noise
than was my own vision (e.g. I could still find my way around in a
``noisy'' reality, and still interact with ``snowy'' virtual objects).

WearCam (e.g. Fig 1)
permits me to experience
any coordinate transformation that can be expressed
as a mapping from a 2D domain to a 2D range,
in real time (30 frames/sec = 60 fields/sec) in full color, because
a full-size remote computer (e.g. SGI Reality Engine) is used to perform
the coordinate transformations.
This apparatus allows me to experiment with various
computationally-generated coordinate
transformations both indoors and
outdoors, in a variety of different practical situations.
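To sketch what such a 2D-to-2D mapping involves computationally, the fragment below warps each output pixel by pulling it from a source coordinate supplied by an arbitrary inverse map. The function names and the mirror example are purely illustrative; the actual real-time implementation runs on the remote hardware, not in per-pixel array code like this.

```python
import numpy as np

def apply_coordinate_transform(frame, inverse_map):
    """Warp a color frame through an arbitrary 2D-to-2D mapping.

    frame: (H, W, 3) pixel array.  inverse_map: function taking the
    output raster coordinates (ys, xs) and returning the source
    coordinates to sample.  Iterating over the *output* raster and
    pulling from the source (inverse mapping) avoids holes in the
    result that a forward mapping would leave.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y, src_x = inverse_map(ys, xs)
    # Nearest-neighbour sampling, clamped to the frame boundary.
    src_y = np.clip(np.round(src_y).astype(int), 0, h - 1)
    src_x = np.clip(np.round(src_x).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# Example mapping: a horizontal mirror of the visual field.
def mirror(ys, xs):
    return ys, xs.max() - xs
```

Any mapping expressible this way — mirrors, rotations, fisheye-like warps — fits the same structure; only `inverse_map` changes.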
Examples of some useful coordinate transformations
appear in Fig 3.

Figure 3: Living in coordinate-transformed worlds:
Color video images are transmitted, coordinate-transformed,
and then received back at 30 frames per second -- the full
frame-rate of the VR4 display device.
(top)
This `visual filter' might allow a person with very poor
vision to read (the central portion of the visual field
is hyper-foveated, giving
a very high degree of magnification in this area),
yet still have good peripheral vision
(the demagnified periphery yields a wide
visual field of view).
(bottom) This `visual filter' might allow a person with a
scotoma (a blind or
dark spot in the visual field) to see more clearly,
once having learned the mapping.
The visual filter also provides edge enhancement
in addition to the coordinate transformation.
Note the distortion in the cobblestones on the ground
and the outdoor stone sculptures.
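A hyper-foveating filter like that of Fig 3 (top) can be sketched as a radial remapping that magnifies the centre of the field while demagnifying the periphery. The code below is an illustrative approximation, not the mapping actually used; `gamma` is an assumed tuning parameter.

```python
import numpy as np

def foveate(frame, gamma=2.0):
    """Hyper-foveating sketch: magnify the centre of the visual field
    and compress the periphery, keeping the full field of view.
    gamma > 1 sets the centre magnification; gamma = 1 is identity."""
    h, w = frame.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx)
    r_max = np.hypot(cy, cx)
    # Inverse mapping: each output pixel pulls from a source radius
    # that shrinks toward the centre (centre pixels get spread out),
    # while the outermost radius maps to itself.
    scale = (r / r_max) ** (gamma - 1.0)
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    return frame[src_y, src_x]
```

A scotoma-compensating filter (Fig 3, bottom) has the same inverse-mapping structure, but routes source coordinates around the blind region instead of rescaling them radially.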

Researchers at Johns Hopkins University have been experimenting
with the use of cameras and head-mounted displays for helping
the visually handicapped. Their approach has been to use the optics
of the cameras for magnification, together with the contrast
adjustments of the video display to increase apparent scene
contrast [17].
The real-time visual mappings
(Fig 3)
successfully implemented using the
apparatus of Fig 1 may be combined with these
other approaches, which rely on well-designed optics and on
contrast enhancement achieved by adjusting the analog circuits in the
video display itself.
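One simple way to realize the kind of edge enhancement mentioned in Fig 3 (bottom), alongside such analog contrast adjustments, is to subtract a discrete Laplacian from the image (an unsharp-mask step). This is a hedged sketch of the general technique, not the filter actually used; `strength` is an assumed parameter.

```python
import numpy as np

def edge_enhance(frame, strength=1.0):
    """Sharpen a frame by subtracting a 4-neighbour Laplacian,
    boosting intensity transitions at edges.  strength scales the
    effect; the border is left unmodified for simplicity."""
    f = frame.astype(float)
    lap = np.zeros_like(f)
    # Discrete Laplacian via shifted copies (interior pixels only).
    lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] +
                       f[1:-1, :-2] + f[1:-1, 2:] -
                       4.0 * f[1:-1, 1:-1])
    return np.clip(f - strength * lap, 0, 255).astype(frame.dtype)
```

Because the Laplacian is zero in flat regions, uniform areas pass through unchanged while edges are exaggerated — the same qualitative effect visible in the edge-enhanced panels of Fig 3.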