I wonder if the following may represent a reasonable balance between
existing assumptions/practice/code and the benefits of a virtually
bounded reciprocal real number system:
1/0 == inf       ; exact sign-less 0 and its reciprocal.
1/0.0 == inf.0   ; inexact sign-less 0.0 and its reciprocal.
1/-0 == -inf     ; exact negatively signed 0 and its reciprocal.
1/-0.0 == -inf.0 ; inexact negatively signed 0.0 and its reciprocal.
1/+0 == +inf     ; exact positively signed 0 and its reciprocal.
1/+0.0 == +inf.0 ; inexact positively signed 0.0 and its reciprocal.
(where the sign-less infinities behave like NaNs, as their sign is ambiguous)
I realize I've taken liberties in designating values without decimal
points as exact, but did so only to enable their symbolic designation
where desired, and to preserve the correspondence between exact and
inexact designations (as, if -0 is considered exact, then presumably
so must -1/0 be).
One could then define an unsigned 0 as comparing = to the signed 0's,
preserving existing code practice, which typically compares a value
against a sign-less 0, i.e.:
(= 0 0.0 -0 -0.0) => #t
(= 0 0.0 +0 +0.0) => #t
(= -0 -0.0 +0 +0.0) => #f
While preserving the ability to define a relative relationship between
the respective 0 values:
(< 1/-0 -0 +0 1/+0) => #t
(<= 1/-0 1/-0.0 -0 -0.0 0 +0 +0.0 1/+0 1/+0.0) => #t
(= 1/0 1/0.0) => #t ; essentially nan's
(= 1/0 1/+0) => #f ; as inf (aka nan) != +inf
Correspondingly, the following seems desirable, although apparently
contentious:
1/0 == inf :: 1/inf == 0 :: 0/0 == 1/1 == inf/inf == 1
and (although most likely more relevant to SRFI 70):
x^y == 1,
as lim{|x|==|y|->0} x^y :: lim{|x|==|y|->0} (exp (* x (log y))) = 1.
It seems that the expression should converge to 1 about the limit of
0: although it may be argued that (log y) -> -inf, it diverges
exponentially more slowly than its co-operand approaches 0; therefore
lim{|x|==|y|->0} (* x (log y)) = 0, and
lim{|x|==|y|->0} (exp (* x (log y))) = (exp 0) = 1;
and although it can be argued that the result depends on the
operands' trajectories and rates, I see no valid argument to assume
that the operands will not approach that limit at equivalent rates
from equal distances, which will also typically yield the most useful
result, and tend not to introduce otherwise useless value
discontinuities and/or ambiguities.
I understand that all infinities are not strictly equivalent, but
when expressed as inexact values it seems preferable to consider
+-inf.0 equivalent to the bounds of the inexact number
representation; thereby +-inf.0 are simply treated as the greatest,
and +-0.0 the smallest, representable inexact magnitudes, while
+-1/0 and +-0 may be considered abstractions of exact
infinite-precision values if desired.
However, as this is not strictly compatible with many existing
floating-point implementations, efficiency may be a problem? (But I
do like its simplifying symmetry.)