> Can anyone recommend a source of information on converting
> arbitrary-precision rationals to floating-point numbers? [...]

Thank you for all the replies. Here is a summary of good sources:

"How to Read Floating Point Numbers Accurately," William D Clinger
Canonical Knuth, Volume 2 "The Art of Computer Programming:
Seminumerical Algorithms"
"What Every Computer Scientist Should Know About Floating-Point
Arithmetic," David Goldberg
and the GNU GMP looks like a good place to try to find an example
implementation.

I haven't had time to give the above the full attention they deserve,
but I think I have a handle on how to produce the significand and
exponent of the floating-point number as arbitrary-precision integers
(conveniently in base 256 in my implementation). What's missing is
the conversion to the actual float. The Clinger paper makes use of a
mysteriously unexplained "make-float" function, for which there seems
to be no portable manifestation. Things like radix conversion are no
sweat, but bit twiddling is not something I've had a lot of experience
with. Is it typical in these situations to produce the IEEE 754 float
bit pattern directly? If so, do I have to worry not only about whether
that's the standard on the chosen platform, but also about things like
endianness?
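
[For what it's worth, there is one portable route that avoids bit
twiddling entirely: once the significand fits in 53 bits, standard C's
ldexp() can play the role of Clinger's "make-float". A minimal sketch,
assuming the caller has already rounded the big significand down to at
most 53 bits (the function name make_float is just illustrative):

    #include <math.h>
    #include <stdint.h>

    /* Build significand * 2^exponent as a double.  ldexp() scales by a
       power of two without touching the bit pattern directly; the cast
       is exact as long as the significand fits in 53 bits. */
    static double make_float(uint64_t significand, int exponent)
    {
        return ldexp((double)significand, exponent);
    }

For example, make_float(3, -2) yields 0.75, i.e. 3 * 2^-2. Correct
rounding of the arbitrary-precision value down to 53 bits is still the
caller's problem, which is the hard part Clinger's paper addresses.]
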

-thant
[Yes, one typically twiddles the bits directly, and you have to be aware
of endianness. Fortunately, these days the only popular formats are
big- and little-endian IEEE and maybe legacy IBM hex. -John]