* Loic Grenie [2007-03-10 19:23]:
> * Karim Belabas [2007-03-10 18:13]:
>> * Loic Grenie [2007-03-10 17:41]:
>>> I'm just wondering if there is a strong reason why 1.*I*I has an
>>> imaginary part (equal to 0., but it's there). For instance neither I*I*1.
>>> nor 1.*(I*I) has any imaginary part.
>>
>> 1) Remember that floating point arithmetic is not associative, and need not
>> be commutative.
>>
>> 2) Using the standard formulas for complex arithmetic
>>
>> (a+bI)(c+dI) = (a*c - b*d) + I (a*d + b*c)
>>
>> [ 4 multiplications ], we obtain:
>>
>> (0 + 1.*I) * (0 + I) = (0*0 - 1.*1) + I*(0*1 + 1.*0) = -1.
>>
>> and the imaginary part is indeed an exact 0.
>>
>> 3) Unfortunately we use Karatsuba (aka 3M) instead of standard formulas,
>> and compute
>>
>> (a+bI)(c+dI) = (a*c - b*d) + I ((a+b)*(c+d) - a*c - b*d)
>>
>> [ 3 multiplications ]. There the imaginary part becomes
>>
>> (0 + 1.)*(0 + 1) - (0*0) - (1.*1) = 0.,
>>
>> NOT an exact 0 !
>>
>> So, yes, this is expected.
>
> I've read the code and I knew it was expected. The question was: is there
> a \underline{strong} reason, meaning: is it reasonable to change the
> semantics to gain between 1 and 3% on make bench ?
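To see what this means concretely, here is a minimal standalone C sketch
(plain doubles, not PARI t_REALs; helper names are mine) tracing both
formulas on the example above. Doubles cannot distinguish an exact 0 from
an inexact 0., so the point is only the dataflow: in the 4M formula the
imaginary part is built from products involving exact zeros, which PARI
simplifies to an exact 0, while in 3M it is a difference of inexact
quantities, which PARI must round and hence returns as 0. rather than 0.

    #include <stdio.h>

    /* schoolbook 4M: (a+bI)(c+dI) = (ac - bd) + I(ad + bc) */
    static void mul4m(double a, double b, double c, double d,
                      double *re, double *im)
    {
        *re = a*c - b*d;
        *im = a*d + b*c;   /* 0*1 + 1.*0: products of exact zeros */
    }

    /* 3M: imag = (a+b)(c+d) - ac - bd, one multiplication fewer */
    static void mul3m(double a, double b, double c, double d,
                      double *re, double *im)
    {
        double ac = a*c, bd = b*d;
        *re = ac - bd;
        *im = (a + b)*(c + d) - ac - bd;  /* 1.*1 - 0 - 1.: inexact 0. */
    }

    int main(void)
    {
        double re, im;
        /* (0 + 1.*I) * (0 + I): a = 0, b = 1., c = 0, d = 1 */
        mul4m(0.0, 1.0, 0.0, 1.0, &re, &im);
        printf("4M: %g + %g*I\n", re, im);
        mul3m(0.0, 1.0, 0.0, 1.0, &re, &im);
        printf("3M: %g + %g*I\n", re, im);
        return 0;
    }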
<ASIDE>
make bench is not a sensible tool to measure efficiency. It's more of a
(quick, partial) coverage / regression test, as well as a nice
historical comparison point; see MACHINES.
From <http://pari.math.u-bordeaux.fr/benchs/timings-int.html> I gather
that multiplying "ordinary" integers (100 decimal digits) is about 7
times slower than adding them (more than 100 times slower for 1000
digits). Hence saving one multiplication in gmul(t_COMPLEX, t_COMPLEX)
at the expense of some additions essentially means a 25% speedup for
all complex arithmetic.
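Back-of-envelope, using those timings: writing M for the cost of a
multiplication and A for an addition, the schoolbook formula costs
4M + 2A and the 3M formula 3M + 5A. With M ~ 7A (100 digits) that is
30A versus 26A, a saving of about 13%; as M/A grows (~ 100 at 1000
digits) the saving approaches the full 25%.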
</ASIDE>
<ASIDE2>
One thing that *can* be said about mulcc() is that my 3M implementation
is unstable, possibly suffering from catastrophic cancellation. One
should implement all 4 possible Karatsuba formulas and choose the stable
one, depending on the signs.
</ASIDE2>
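To make the instability concrete, here is a small C illustration (doubles
again, example mine) where the 3M imaginary part cancels catastrophically
while the 4M one is exact:

    #include <stdio.h>

    int main(void)
    {
        /* (a + bI)(c + dI) with a = c = 1e8, b = 1, d = -1:
           the exact product is 1e16 + 1, imaginary part a*d + b*c = 0 */
        double a = 1e8, b = 1.0, c = 1e8, d = -1.0;

        /* 4M: both products are exactly representable, imag = 0 exactly */
        double im4 = a*d + b*c;                  /* -1e8 + 1e8 = 0 */

        /* 3M: (a+b)*(c+d) = 1e16 - 1 is NOT representable in a double
           (the spacing near 1e16 is 2), so it rounds to 1e16; subtracting
           the huge a*c then exposes the rounding error in full */
        double ac = a*c, bd = b*d;
        double im3 = (a + b)*(c + d) - ac - bd;  /* 1, instead of 0 */

        printf("4M imag = %g\n", im4);  /* 0 */
        printf("3M imag = %g\n", im3);  /* 1: completely wrong */
        return 0;
    }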
The actual result is indeed different, but we are not changing semantics:
an inexact computation returned an inexact result, which should be the
expected behaviour.
As Bill pointed out, it's rather I*I = -1 which changes semantics.
In fact it's worse than that: we only have 'de facto' semantics here (no
semantics, if you wish). It would make sense to declare a t_COMPLEX whose
imaginary part is an exact zero an invalid object, because it can be
coerced to a simpler object [ I'm not advocating this !!! ], just like a
t_FRAC with non-coprime components is invalid. But here, a single
function, mulcc(), decides on its own that -1 is nicer than -1 + 0*I.
Some other functions might not bother in analogous situations and return
-1 + 0*I. None of this can be deduced from any part of the documentation,
except the bits specifying PARI types, which state that operations yield
the "expected result".
So, I agree with Bill's fundamental objection [ return types should be
predictable ]. But this is a programmer's objection.
>From a "naive" point of view on the other hand, automatic type coercion
is nice, because users very often know a result belongs to a certain
domain, and strongly dislike getting something more complicated when an
"obvious" simplification is at hand. To some extent, 'simplify()' solves
the problem, but not many people use it. Also, almost nobody reads the
documentation, esp. the fine points about type specifications.
From a user interface point of view, whatever one decides, there will be
irritating behaviours, and the principle of least surprise is essential.
Say we revert to strict semantics and decide I*I = -1 + 0*I. I decide to
factor the norm of a Gaussian integer [ z * conj(z) ]; presto, I get a
factorization over Z[I], not what I wanted !
Any ideas ?
K.B.
P.S:
> After Bill's answer, my question has slightly changed though: is it
> important that it is 0.I instead of 0I ?
What do you mean ? If you suggest replacing '0.0' by an exact '0' then
this will break a huge amount of code [ see "What is zero?" somewhere in
the manual. I don't quite agree with the philosophy, but it's been there
from the start. ] In particular, the exponent of t_REAL zeros is often used
to estimate needed accuracy.
P.S2: In the case at hand, the matter is rather aesthetic. A much more
annoying situation is 'exact 0' x 'inexact scalar' = 'exact 0'.
This is only a minor simplification whenever it happens, yet it makes
library programming very irritating. E.g. one must use 'mp<fun>' instead
of '<fun>rr' because of the fringe case mpmul(gen_0, t_REAL) --> gen_0
[ in particular, we have "spurious" type checks and branches all over
the place ].
I have no ready-made solution for this, since returning an inexact 0 is
not natural. What should its exponent be ? Something like
  /* gmul(gen_0, t_REAL x): */
  signe(x)? real_0_bit(expo(x) - bit_accuracy(lg(x)))
          : rcopy(x);
[ i.e. something small compared to what we know of x ] ????
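For reference, the fragment folds into a compilable function as follows
(the function name is mine, purely illustrative; signe, expo, lg,
real_0_bit, bit_accuracy and rcopy are the existing library routines):

    #include <pari/pari.h>

    /* hypothetical: what gmul(gen_0, x) could return for a t_REAL x,
       instead of the exact gen_0 it returns today */
    GEN
    mul_exact0_real(GEN x)
    {
      if (signe(x)) /* x != 0: an inexact 0, below the known accuracy of x */
        return real_0_bit(expo(x) - bit_accuracy(lg(x)));
      return rcopy(x); /* x already an inexact zero: keep its exponent */
    }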
--
Karim Belabas Tel: (+33) (0)5 40 00 26 17
Universite Bordeaux 1 Fax: (+33) (0)5 40 00 69 50
351, cours de la Liberation http://www.math.u-bordeaux.fr/~belabas/
F-33405 Talence (France) http://pari.math.u-bordeaux.fr/ [PARI/GP]