RSTA-MEP and the Linux Crewstation

Automatically detect the enemy in the dark and notify friendly units where he is.

How Fast Is Fast Enough?

The crewstation falls into the category of soft real time: the
system doesn't fail if something is late. Because this is a
human-in-the-loop test bed, with most of the time-critical components in
the embedded systems, it has to run only fast enough for the operator
to perform tasks. For this reason we're not using one of the real-time
Linux frameworks; we're overpowering the problem with brute-force
hardware. We have SCSI disks, enough memory to all but eliminate paging,
a GeForce4 graphics card for fast OpenGL and dual 2.4GHz
processors from Microway.

A limit did emerge for two parts of the system: video and pointing
the sensor. The live framing video is fed to the crewstation at
RS-170 rate, one field of scan lines every 1/60 of a second. These fields
had to be combined into frames and displayed fast enough to keep up and maintain a constant
rate. To do that, we made sure we had enough network bandwidth to ship the
video and plenty of CPU and graphics horsepower to
keep the displays refreshed. We then used the NVIDIA driver's ability to
sync to the vertical retrace of the monitor. With the monitor set to
refresh at 60Hz, we were there. (See the README.txt file supplied with
NVIDIA drivers; the current one is at download.nvidia.com/XFree86_40/1.0-4194/README.)

Pointing the sensor presents a similar challenge. The sensor must be
responsive enough to the grips that the operator can point it
without missing the target and overcompensating. Although the video is
largely a matter of bandwidth, pointing the sensor is a matter of
latency. Having a long message chain contributes to latency. For
instance, a button press or a slewing command starts off in the grip or
GUI process, goes to the control process to determine if that input is
valid and then moves to the translator to be converted into the EO message format. Next it goes across the
gigabit Ethernet to an embedded process that receives that message,
then moves
to the EO and the embedded systems code, then on to the
actuator and, finally, the results come back in the video stream. The combination of
dividing the translator process into two threads and compiling with
full optimizations (-Wall -ansi -O3 -ffast-math
-mpentiumpro) brought the latency down to an acceptable level.

We used the gprof profiler to see where the hot spots were in the
code. (See the info page for gprof.) Here, we ran into a problem with
profiling the video code: when we used X timers (XtAppAddTimeOut), no
timing data accrued in the profile. (Do the profiler and
XtAppAddTimeOut use the same signal and interfere with each other?)

Another optimization we discovered is for the video source to write
both the odd and even scan lines across the network with a single
write call instead of two separate, smaller ones.

Pluses, Pitfalls and Conclusions

Using Linux led to problems in a few places. For
instance, we couldn't find a vendor who supplied PCI Mezzanine
cards for the PowerPC with VxWorks drivers, PCI cards with Linux drivers
or who could handle the VI protocol. In the end, we had to drop Fibre Channel.

We did find, though, a couple of cases where Linux gave us an advantage on this
project. Because we booted from a hard drive, we didn't have to write
our system to EEPROM the way the embedded side did. When they made
that transition to EEPROM, their ability to debug was diminished.
Also, Linux provides core files to aid debugging, which
VxWorks doesn't. The Linux crewstation is more robust and delivers
better image quality than its predecessor. Finally, integration and
unit testing in our shop are easier on Linux, because commodity PCs are
more plentiful than embedded PowerPCs.
In the future, we expect that having the full power and flexibility
of the Linux, X and OpenGL environment will be valuable as we
add more modes and more devices to our prototype.