claird at lairds.com (Cameron Laird) wrote in
news:vaip4ts74ajqd0 at corp.supernews.com:
> In article <mailman.1051286382.24604.python-list at python.org>,
> Tim Peters <tim.one at comcast.net> wrote:
>>[Fidtz]
> .
> .
> .
>>> While I think I can see the way to do this with various bit-grubbing
>>> techniques, the potential for error is massive, esp considering my
>>> non comp-sci background.
>> Post a link to a definition of this format, and I bet someone will
>> help.
>>> Does anyone know of a library in python or even (fairly) easily
>>> wrappable C that might let me specify the format in a more high
>>> level way?
>> Precise English will translate into Python easily enough <wink>.
> It's true that getting floating-point formats correct
> to the far-right bit is, in general, tedious; Fidtz
> shows good instincts in his caution about that prospect.
> The bad news is that, no, there are no widely-applicable
> parametrizations of the range of interesting floating-
> point formats. This remains an area of "craftwork".
> Learn to use a hand-chisel, or find someone who will do
> so for you.
Thanks for the info; I was worried that might be the situation!
The closest thing I have to a definition is this, from the AlphaBasic
programming manual: The reason for this is that floating point numbers
occupy six
bytes of storage. Of the 48 bits in use for each 6-byte variable, the
high order bit is the sign of the mantissa. The next 8 bits represent
the signed exponent in excess-128 notation, giving a range of
approximately 2.9*10^-39 through 1.7*10^38. The remaining 39 bits
contain the mantissa, which is normalized with an implied high-order bit
of one. This gives an effective 40-bit mantissa which results in an
accuracy of 11 significant digits.
I had a go this afternoon and managed to read the sign and something
approximating the exponent, though I will have to test the edge
conditions carefully :)
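For what it's worth, here is a rough Python sketch of what the manual seems to describe. The byte order and the representation of zero are assumptions (the manual excerpt doesn't state either), as is the convention that the normalized mantissa lies in [0.5, 1):

```python
def decode_alphabasic(raw):
    """Decode a 6-byte AlphaBasic float.

    Sketch only: assumes big-endian byte order and that an
    all-zero exponent field represents zero.
    """
    if len(raw) != 6:
        raise ValueError("expected exactly 6 bytes")
    bits = int.from_bytes(raw, "big")        # the 48 bits as one integer
    sign = (bits >> 47) & 1                  # high-order bit: sign of mantissa
    exponent = (bits >> 39) & 0xFF           # next 8 bits: excess-128 exponent
    mantissa_bits = bits & ((1 << 39) - 1)   # remaining 39 bits
    if exponent == 0:
        return 0.0                           # assumed encoding of zero
    # The implied high-order 1 bit gives an effective 40-bit mantissa,
    # interpreted here as a fraction in [0.5, 1).
    mantissa = (mantissa_bits | (1 << 39)) / float(1 << 40)
    value = mantissa * 2.0 ** (exponent - 128)
    return -value if sign else value
```

As a sanity check, this interpretation reproduces the manual's stated range: the largest exponent (255) gives values up to just under 2^127, about 1.7*10^38, and the smallest nonzero value is 0.5 * 2^-127, about 2.9*10^-39.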
Dom