> On Sep 30, 8:33 am, Daniel Lichtblau <d... at wolfram.com> wrote:
>> jwmerr... at gmail.com wrote:
>> > Below is a definite integral that Mathematica does incorrectly.
>> > Thought someone might like to know:
>>
>> > In[62]:= Integrate[Cos[x]/(1 + x^2), {x, -\[Infinity], \[Infinity]}]
>>
>> > Out[62]= \[Pi]/E
>>
>> > [...]
>>
>> Pi/E is correct. For one thing, it agrees with NIntegrate. For another,
>> you can find and verify correctness of an antiderivative, observe it
>> crosses no branch cuts, and take limits at +-infinity to verify the
>> definite integral.
>>
>> Moreover I do not replicate your parametrized result.
>>
>> In[20]:= Integrate[Cos[a*x]/(1+x^2), {x,-Infinity,Infinity},
>> Assumptions -> Element[a,Reals]] // InputForm
>> Out[20]//InputForm= Pi/E^Abs[a]
>>
>> I got that result, or something equivalent, in every Mathematica version
>> I tried going back to 4. I may have missed some point releases. Also it
>> could be a timing-dependent problem, particularly if you are running
>> version 6 (where it seems to be much slower than other versions).
>
> What is a 'time-dependent problem' in this context?
>
> -- m
Before responding, let me mention that I was incorrect in my earlier remark
that the problem could not be replicated. It does appear in version 7.0.0.
This was shown to me off-line by the person who reported it as a bug in
that version. (Perhaps more embarrassing is that I had fixed it around 10
months ago, but had no recollection of doing so.) I suspect that when I ran
tests for my prior response, I either forgot 7.0.0, or was mistakenly using
7.0.1 when I thought I was testing 7.0.0.
As for what I mean by timing-dependent problems, there is a brief mention
in "Symbolic definite integration: methods and open issues", which can be
found here:
http://library.wolfram.com/infocenter/Conferences/5832/
I will quote from one of the notebooks:
----------------------
Some methods require intrinsically "slow" technology. For example,
refinement of conditions (which is sometimes essential in order that they
not blow up) may require some level of CAD support behind the scenes. Even
limit extraction for Newton-Leibniz methods can be slow. We are thus faced
with questions of when to apply such technology and how to prevent it from
causing many inputs to hang.
In regard to prevention of hanging in computationally intensive technology
noted above, we have found it quite necessary to place time constraints on
certain pieces of code. (Motivation: often they succeed. If they fail, so
be it, and we then try other things.) This gives rise to a new set of
problems. One is that asynchronous interrupt handling, required by
TimeConstrained, is imperfect and in rare cases will cause a kernel crash.
Another is that results now take on a platform dependent nature, and this
is seriously unsettling. A possible future direction that will alleviate
this: have potentially slow code stopped by some measure of operation
count rather than asynchronous interrupts.
----------------------
The gist is that on some machines a timing-dependent computation might run
to completion, while on others the same computation would be aborted by the
time constraint. This gives rise to behavior that varies with machine
speed. Results can also be state-dependent, in the sense that cached
partial results can affect the speed of a computation.
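To make this concrete, here is a schematic sketch (not the actual internal
code) of how a potentially slow subcomputation might be wrapped in a time
limit. The 0.01-second limit is chosen artificially small so that the
outcome depends on machine speed:

```
(* Schematic illustration only. Wrap a potentially slow step in
   TimeConstrained, returning $Failed if it does not finish in time;
   the caller would then fall back to other methods. *)
result = TimeConstrained[
   Integrate[Cos[x]/(1 + x^2), {x, -Infinity, Infinity}],
   0.01,
   $Failed]
```

On a fast machine the integral may complete within the limit and return
Pi/E; on a slower machine it returns $Failed and different code paths
would be tried, which is exactly the platform-dependent behavior described
above.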
Daniel Lichtblau
Wolfram Research