We've run our performance tests both on our own code and against other
open source implementations of the same functions. The results are presented
below to give you a rough idea of how they all compare.

Caution

You should exercise extreme caution when interpreting these results: relative
performance may vary by platform, and the tests use data chosen to give good
coverage of our code, which may skew the results
towards the corner cases. Finally, remember that different libraries make
different trade-offs between performance and numerical stability.

[2]
The performance here is dominated by a few cases where the parameters
grow very large: faster asymptotic expansions are available,
but they offer limited (or even frankly terrible) precision. The
same issue affects all of our Bessel function implementations,
but doesn't necessarily show in the current performance data.
More investigation is needed here.

All the results were measured on a 2.0 GHz Intel T5800 Core 2 Duo, 4GB RAM,
Windows Vista machine, with the test program compiled with Microsoft Visual
C++ 2009, and R-2.9.2 compiled in "standalone mode" with MinGW-4.3
(R-2.9.2 appears not to be buildable with Visual C++).

[1]
There are a small number of our test cases where the R library
fails to converge on a result: these tend to dominate the performance
result.

[2]
This result is somewhat misleading: for small values of the parameters
there is virtually no difference between the two libraries, but
for large values the Boost implementation is much
slower, albeit with much improved precision.

[3]
The R library appears to use a linear-search strategy that can
perform very badly in a small number of pathological cases, but
may or may not be more efficient in "typical" cases.

[4]
There are a small number of our test cases where the R library
fails to converge on a result: these tend to dominate the performance
result.

[5]
There are a small number of our test cases where the R library
fails to converge on a result: these tend to dominate the performance
result.
