I have to repeat what I have written many times already: the main reasons why I now generally do not respond to RJF's posts are
1) nothing new ever comes up
2) we are always arguing at cross purposes
I think I know why the first happens, but this time, out of politeness, I won't say. As for the second, I can only speculate. Perhaps our minds work so differently that even when we use the same words we invariably have different things in mind. Anyway, I don't have the time to go through RJF's latest reply line by line, so I will just consider a short sample which, I think, illustrates my point pretty clearly.
On 1 Jun 2013, at 02:05, Richard Fateman <fateman at cs.berkeley.edu> wrote:
>>
>> 1. Significance arithmetic, being a (faster) version of interval
>> arithmetic, can be useful when used by people who understand it, and
>> it is used quite reliably by many built-in Mathematica functions such
>> as NSolve (and also Solve).
>
> I doubt that I said this. I probably said (since I believe it to be the
> case) that internal to routines in Mathematica, the default significance arithmetic is bypassed, essentially by separately computing
> the precision of intermediate results and imposing that on the numbers in an iteration. In other words, these routines work IN SPITE OF
> significance arithmetic, not because of it.
A number of times Daniel Lichtblau explained to you how significance arithmetic is used in his implementation of numerical Groebner bases, which is a fundamental tool in NSolve. I believe it is also used in some aspects of Solve and Reduce as implemented by Adam Strzebonski (although it is possible that interval arithmetic is used instead, or possibly both are). Daniel has even pointed out that nobody has been able to implement a working version using fixed precision, although a whole book and at least one doctoral thesis have been written on this topic. You have known about this for a long time and never really tried to challenge Daniel's assertions (well, how could you? what do you know about this sort of stuff?). At most, you try to deny the importance of the Groebner basis approach itself, again, not based on any deep knowledge or experience.
>
>> 2. It is easy to switch to fixed precision arithmetic whenever one wants
>> with a simple usage of Block.
>
> If you switch to fixed precision, then you have to know what
> the precision should be fixed at. Numerical analysts have some techniques for doing this, and these techniques can be implemented
> in Mathematica, and have been, in some internal routines, by
> clever people. But Mathematica documentation would lead you to
> believe that Mathematica does the right thing by default. This
> is, in general, false. As you see by the example you quote, from me!
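
For concreteness, the Block idiom referred to in point 2 looks like this -- a minimal sketch using the documented system variables $MinPrecision and $MaxPrecision:

(* Pin the working precision to 50 digits: inside this Block,
   significance arithmetic can neither raise nor lower the
   precision of intermediate results. *)
Block[{$MinPrecision = 50, $MaxPrecision = 50},
 N[Sqrt[2], 50] + 10^-60
]

Outside the Block, the defaults ($MinPrecision = 0, $MaxPrecision = Infinity) are restored automatically, so the rest of the session still uses significance arithmetic.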
Significance arithmetic, when used properly, can accomplish most of the same things that interval arithmetic can, and it can do so automatically. Here is an example (due to Maxim Rytin): proving that an expression is non-zero.
N[Sum[E^(-(1/2 - k)^2/2) + E^(-k^2/2), {k, -12, 12}] - 2*Sqrt[2*Pi], 10]
3.955157333*10^-34
Accuracy[%]
43.4028
Here we see that the accuracy of the result is greater than 43, so the absolute error is less than 10^-43; therefore the result is mathematically proved to be different from zero. (Of course, this assumes that significance arithmetic works the way it is supposed to. Also, exactly the same thing can be achieved by using Interval, but much more slowly.)
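
For comparison, here is a sketch of the Interval version of the same check (this assumes, as documented, that N applied to an Interval with exact endpoints rounds them outward, producing a validated enclosure):

(* Exact expression, then a 50-digit validated enclosure of it. *)
expr = Sum[E^(-(1/2 - k)^2/2) + E^(-k^2/2), {k, -12, 12}] - 2*Sqrt[2*Pi]
IntervalMemberQ[N[Interval[expr], 50], 0]

If IntervalMemberQ returns False, zero lies outside the enclosure, and the expression is proved non-zero by the same logic as the Accuracy argument above.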
>
> The damage, however, is to the user who believes the Significance
> arithmetic when it does not deserve to be believed.
See the example above.
>
>> one: in non-standard analysis one has infinitely many "infinitely
>> small numbers" x such that n x < 1 for every positive integer n.
>
> OK, if I want to model that I should definitely not use significance
> arithmetic. The zero-ish number z below has the property that
> z==0
> Rationalize[z] is 0
> yet z<1 is false.
This is one of many examples of "cross purpose" arguing. I was not discussing implementing non-standard analysis at all. My point was that there is nothing logically more dubious about a finite "number" x such that x + 1 = 1 than there is about a positive "number" x such that n x < 1 for every positive integer n, or, alternatively, a "number" x such that x/n > 1 for every positive integer n. Mathematicians often use the word "number" when referring to objects belonging to some "extension" of the real line.
The use of such things is justified purely by their usefulness, and both non-standard analysis and significance arithmetic are useful. Also, everything that can be done using these tools can also be done without them, but at the cost of more work and a less intuitive approach.
In other words, everything I wrote concerned an analogy between two approaches: one in mathematics, the other in Mathematica. I can't imagine how you managed to misinterpret it, but I am certainly no longer surprised.
>
>> This is both
>> logically sound and very useful in practical proofs and computations.
>> Exactly the same is true of "fuzz balls".
>
> I don't see how this relates to Mathematica. There are many algebraic
> systems possible. Mathematica deals with just a few, like the Ring
> of integers, the Field of rational numbers, some modular stuff,
> Polynomials and Rational Functions, Matrices, floats as a model
> of reals, principally; complex numbers, and a bunch of other
> stuff including Log, Exp, graphics stuff, maps, whatever.
> Someone may have written a package for non-standard analysis, but
> if so, probably did not use software extended precision.
Again, the same thing. So I think this illustrates my point and is a good place to stop.
Andrzej Kozlowski