The majority of the implementation notes are included with the documentation
of each function or distribution. The notes here are of a more general nature,
and reflect the general implementation philosophy used.

There will always be compromises to be made between speed and accuracy.
It may be possible to find faster methods, particularly for certain limited
ranges of arguments, but for most applications of math functions and distributions
we judge that speed is rarely as important as accuracy.

So our priority is accuracy.

To permit evaluation of the accuracy of the special functions, considerable
effort has gone into producing extremely accurate tables of test values.

(It also required much CPU effort - there was some danger of molten plastic
dripping from the bottom of JM's laptop, so instead PAB's dual-core desktop
was kept 50% busy for days calculating some tables of test values!)

For a specific RealType, say float or double, it may be possible to find
approximations for some functions that are simpler, and thus faster, but less
accurate (perhaps because there are no refining iterations, for example,
when calculating inverse functions).

If these prove accurate enough to be fit for their purpose, then
users may substitute their own custom specializations.

For example, there are approximations dating back to times when computation
was a lot more expensive.

In order to be accurate enough for as many real types as possible, constant
values are given to 50 decimal digits where available (though many sources proved
accurate only to near 64-bit double precision). Values are specified as long
double types by appending the suffix L, unless they are exactly representable, for
example integers, or binary fractions like 0.125. This avoids the risk of loss of
accuracy from conversion via double, the default type. Values are used after
static_cast<RealType>(1.2345L) to provide the appropriate RealType
for spot tests.
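
For example, a constant coded in this style might look like the following
sketch (the digit string is pi, truncated to 50 decimal places; the function
name is purely illustrative):

template <class RealType>
RealType example_pi()
{
   // 50-digit literal with the L suffix, narrowed to the working type:
   return static_cast<RealType>(3.14159265358979323846264338327950288419716939937510L);
}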

Functions that return constant values, like kurtosis for example, are written
as

static_cast<RealType>(-3)/5;

to provide the most accurate value that the compiler can compute for the
real type. (The numerator is cast to RealType, and the integer denominator
is then promoted exactly.)

So tests for one third, which is not exactly representable
in radix-two floating point, should use, for example:

static_cast<RealType>(1)/3;

If a function is very sensitive to changes in input, specifying an inexact
value as input (such as 0.1) can throw the result off by a noticeable amount:
0.1f is "wrong" by about 1e-7, for example (because 0.1 has no exact
binary representation). That is why exact binary values - halves, quarters,
eighths and so on - are used in test code, along with the occasional fraction
a/b where b is a power of two (in order to ensure that the value is an
exactly representable binary fraction).
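
The following sketch (names purely illustrative) shows the kind of exactly
representable arguments used:

template <class RealType>
RealType exact_test_arguments()
{
   RealType x1 = static_cast<RealType>(0.125);     // 1/8: an exact binary fraction
   RealType x2 = static_cast<RealType>(3) / 4;     // 0.75: numerator cast, then exact division
   RealType x3 = static_cast<RealType>(5) / 1024;  // denominator a power of two, so exact
   // By contrast, static_cast<RealType>(0.1) would silently carry
   // double's rounding error into RealType.
   return x1 + x2 + x3;
}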

Otherwise, when long double has more digits than the test data, no amount
of tweaking an epsilon-based tolerance will make the tests pass.

A common problem arises with tolerances that are suitable for implementations
like Microsoft VS.NET, where double and long double are the same size: tests
then fail on other systems where long double is more accurate than double. Check
first that the suffix L is present, and then that the tolerance is big enough.

A mathematical function is said to be defined at a point a = (a1, a2, ...)
if the limits as x = (x1, x2, ...) approaches a agree from all directions.
The defined value may be any number, or +infinity, or -infinity.

Put crudely, if the function goes to +infinity and then emerges 'round the back'
with -infinity, it is NOT defined.

Guideline 1

The library function which approximates a mathematical function shall
signal a domain error whenever evaluated with argument values for which
the mathematical function is undefined.

Guideline 2

The library function which approximates a mathematical function shall
signal a domain error whenever evaluated with argument values for which
the mathematical function obtains a non-real value.

If you enable exceptions but do NOT have a try & catch block, then the program
will terminate with an uncaught exception and probably abort. Therefore,
to get the benefit of helpful error messages, enabling all
exceptions and using try & catch is
recommended for all applications. However, for simplicity, this is not
done in most examples.
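
As a minimal sketch (the Students-t distribution is used here only because a
negative degrees-of-freedom parameter conveniently triggers a domain error):

#include <boost/math/distributions/students_t.hpp>
#include <iostream>
#include <stdexcept>

int main()
{
   try
   {  // Constructing with negative degrees of freedom raises a domain error,
      // which by default is thrown as std::domain_error:
      boost::math::students_t_distribution<double> dist(-1);
      std::cout << boost::math::quantile(dist, 0.5) << std::endl;
   }
   catch (const std::exception& e)
   {  // what() carries the helpful diagnostic message:
      std::cout << "Caught: " << e.what() << std::endl;
   }
   return 0;
}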

Functions that are not mathematically defined, like the mean of the Cauchy
distribution, fail to compile by default. A policy allows control of this.

If the policy is to permit undefined functions, then calling them throws
a domain error, by default. But the error policy can be set not to throw,
and to return NaN instead. For example, if

#define BOOST_MATH_DOMAIN_ERROR_POLICY ignore_error

appears before the first Boost include, then when the un-implemented function
is called, mean(cauchy<>()) will return std::numeric_limits<T>::quiet_NaN().
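
A minimal sketch putting this together (note that
BOOST_MATH_ASSERT_UNDEFINED_POLICY false is also needed, so that the
undefined function compiles at all):

// Both macros must appear before the first Boost.Math include:
#define BOOST_MATH_ASSERT_UNDEFINED_POLICY false
#define BOOST_MATH_DOMAIN_ERROR_POLICY ignore_error
#include <boost/math/distributions/cauchy.hpp>
#include <iostream>

int main()
{
   boost::math::cauchy_distribution<double> c;
   // The mathematically undefined mean now returns a quiet NaN
   // instead of throwing a domain error:
   std::cout << boost::math::mean(c) << std::endl; // prints nan
   return 0;
}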

Warning

If std::numeric_limits<T>::has_quiet_NaN is false (for example, when T
is a user-defined type), then an exception will always be thrown when a
domain error occurs. Catching exceptions is therefore strongly recommended.

Some functions and distributions are well defined with + or - infinity as
argument(s), but after some experiments with handling infinite arguments
as special cases, we concluded that it was generally more useful to forbid
this, and instead to return the result of domain_error.

Handling infinity as a special case is additionally complicated because, unlike
built-in types on most - but not all - platforms, not all user-defined types
are specialized to provide std::numeric_limits<RealType>::infinity(), in which
case zero is returned rather than any representation of infinity.

The rationale is that non-finiteness may occur because of an error or overflow
in the user's code, and it is more helpful for this to be diagnosed promptly
rather than for execution simply to continue. The special-case code also became
much more complicated, more error-prone, much more work to test, and much less
readable.

However, in a few cases, for example the normal distribution, where we felt
it obvious, we have permitted argument(s) to be infinity, provided infinity
is implemented for the RealType on that implementation.

Users who require special handling of infinity (or another specific value)
can, of course, always intercept this before calling a distribution or function
and return their own choice of value, or other behaviour. This will often
be simpler than trying to handle the aftermath of the error policy.
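
For example, a sketch of such an interception (my_cdf is a hypothetical
wrapper, not part of the library):

#include <boost/math/distributions/students_t.hpp>
#include <cmath>

// Intercept infinite arguments and supply the limiting value ourselves,
// only calling the library for finite input:
double my_cdf(const boost::math::students_t_distribution<double>& d, double x)
{
   if (std::isinf(x))
      return x > 0 ? 1.0 : 0.0; // CDF limits at +infinity and -infinity
   return boost::math::cdf(d, x);
}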

We have also tried to catch boundary cases where the mathematical specification
would result in division by zero or overflow, and to signal these similarly.
What happens at (and near) poles can be controlled through error
handling policies.
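
For example, a sketch of one such boundary case: the quantile of the normal
distribution at p = 0 is a pole at -infinity, which by default raises an
overflow_error (and so throws std::overflow_error); the policy below changes
that behaviour:

#define BOOST_MATH_OVERFLOW_ERROR_POLICY ignore_error
#include <boost/math/distributions/normal.hpp>
#include <iostream>

int main()
{
   boost::math::normal_distribution<double> n;
   // With ignore_error the pole returns an infinity instead of throwing:
   std::cout << boost::math::quantile(n, 0.0) << std::endl;
   return 0;
}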

Some further non-member accessor functions were considered, but we found that
these concepts are not defined (or their definition is too contentious)
for too many distributions to be generally applicable. Because they are non-member
functions, they can be added later if required.

Default parameters for the triangular distribution. We are uncertain about
the best default parameters. Some sources suggest that the standard triangular
distribution has lower = 0, mode = half and upper = 1. However, as an approximation
for the normal distribution, the most common usage, lower = -1, mode =
0 and upper = 1 would be more suitable.

Some of the special functions in this library are implemented via rational
approximations. These are either taken from the literature, or devised by
John Maddock using our
Remez code.

Rational rather than Polynomial approximations are used to ensure accuracy:
polynomial approximations are often wonderful up to a certain level of accuracy,
but then quite often fail to provide much greater accuracy no matter how
many more terms are added.

Our own approximations were devised either for added accuracy (to support
128-bit long doubles, for example), or because literature methods were unavailable
or under a non-BSL-compatible license. Our Remez code is known to produce good
agreement with literature results in fairly simple "toy" cases.
All approximations were checked for convergence and to ensure that they were
not ill-conditioned (the coefficients can give a theoretically good solution,
but the resulting rational function may be un-computable at fixed precision).
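
The resulting coefficient tables are evaluated with tools such as
evaluate_rational from boost/math/tools/rational.hpp; a minimal sketch with
made-up coefficients (the real tables are much longer):

#include <boost/math/tools/rational.hpp>
#include <iostream>

int main()
{
   // Hypothetical numerator and denominator coefficients, stored in
   // ascending order of power - purely for illustration:
   static const double num[3]   = { 1.0, 0.5,  0.1  };
   static const double denom[3] = { 1.0, 0.25, 0.05 };
   double z = 0.75;
   std::cout << boost::math::tools::evaluate_rational(num, denom, z) << std::endl;
   return 0;
}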

Recomputing using different Remez implementations may well produce differing
coefficients: the problem is well known to be ill-conditioned in general,
and our Remez implementation often found a broad and ill-defined minimum for
many of these approximations (of course, for simple "toy" examples
like approximating exp, the
minimum is well defined, and the coefficients should agree no matter whose
Remez implementation is used). This should not in general affect the validity
of the approximations: there is good literature supporting the idea that coefficients
can be "in error" without necessarily adversely affecting the result.
Note that "in error" has a special meaning in this context, see
"Approximate construction
of rational approximations and the effect of error autocorrection",
Grigori Litvinov, eprint arXiv:math/0101042. Therefore the coefficients
still need to be accurately calculated, even if they can be in error compared
to the "true" minimax solution.

A macro BOOST_DEFINE_MATH_CONSTANT in constants.hpp is used to provide high
accuracy constants to mathematical functions and distributions, since it
is important to provide values uniformly for the built-in float, double
and long double types, and for user-defined types like NTL::quad_float and
NTL::RR.

To permit calculations in this Math Toolkit and its tests (and elsewhere)
at about 100 decimal digits with the NTL::RR type, it is obviously necessary
to define constants to this accuracy.

However, some compilers do not accept decimal digit strings as long as this.
So the constant is split into two parts, with the first containing at least
long double precision, and the second zero if not needed or known. The third part
permits an exponent to be provided if necessary (use zero if none) - the
other two parameters may only contain decimal digits (and sign and decimal
point), and may NOT include an exponent like 1.234E99 (nor a trailing F or
L). The second digit string is only used if T is a user-defined type, when
the constant is converted to a long string literal and lexical_cast to
type T. (This is necessary because a numeric constant cannot be used, since
even a long double might not have enough digits.)
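
An illustrative invocation (the digit split shown is for illustration only;
the actual entries in constants.hpp may divide the digits differently):

// pi to 60 decimal places: two digit strings plus a zero exponent.
BOOST_DEFINE_MATH_CONSTANT(pi, 3.141592653589793238462643383279, 502884197169399375105820974944, 0)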

Note that it is necessary (if inconvenient) to specify the type explicitly.

So you cannot write

double p = boost::math::constants::pi<>(); // could not deduce template argument for 'T'

Neither can you write:

double p = boost::math::constants::pi;   // Context does not allow for disambiguation of overloaded function
double p = boost::math::constants::pi();  // Context does not allow for disambiguation of overloaded function
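
Instead, the template argument must be given explicitly:

double p = boost::math::constants::pi<double>(); // OK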

Reporting of errors by setting errno should be thread-safe already (otherwise
none of the std lib math functions would be thread-safe?). If you turn on
reporting of errors via exceptions, errno is left unused anyway.

Other than that, the code is intended to be thread-safe for
built-in real-number types: so float, double and long double
are all thread-safe.

For non-built-in types - NTL::RR for example - initialisation of the various
constants used in the implementation is potentially not
thread-safe. This is most undesirable, but it would be a significant challenge
to fix. Some compilers may offer the option of having static constants
initialised in a thread-safe manner (Comeau, and maybe others?); if that is
the case then the problem is solved. This is a topic of hot debate for the
next C++ standard revision, so hopefully all compilers will be required to do
the right thing here at some point.

We found a large number of sources of test data. We have assumed that these
are "known good" if they agree with the results
from our tests, and only consulted other sources for their 'vote'
in the case of serious disagreement. The accuracy, actual and claimed, varies
very widely. Only Wolfram Mathematica
functions provided higher accuracy than C++ double (64-bit floating-point),
and it was regarded as the most-trusted source by far.

It is also the only independent source found for the Weibull distribution;
unfortunately it appears to suffer from very poor accuracy in areas where
the underlying special function is known to be difficult to implement.

Note that SVGMath requires that the mml files are not
wrapped in an XHTML XML wrapper - this is added by Mathcast by default -
so one workaround is to copy an existing mml file and then edit it with Mathcast:
the existing format should then be preserved. This is a bug in the XML parser
used by SVGMath, of which the author is aware.

Note that, unlike the sample config file supplied with SVGMath, this does
not make use of the Mathematica 7 font, as it lacks sufficient Unicode information
to be used with either SVGMath or XEP "as is".

Also note that the SVG files in the repository are almost certainly Windows-specific
since they reference various Windows Fonts.

PNG files can be created from the SVGs using Batik
and a command such as:

PAB had to alter his command because the Lucida Sans Unicode font had a different
name. Changes are very likely to be required if you are not using Windows.

XZ authored his equations using the venerable LaTeX; JM converted these to
MathML using mxlatex.
This process is currently unreliable and required some manual intervention:
consequently LaTeX source is not considered a viable route for the automatic
production of SVG versions of equations.

Equations are embedded in the quickbook source using the equation
template defined in math.qbk. This outputs Docbook XML that looks like:

Graphs were produced in SVG format and then converted to PNGs using the
same process as the equations.

The programs /libs/math/doc/sf_and_dist/graphs/dist_graphs.cpp and
/libs/math/doc/sf_and_dist/graphs/sf_graphs.cpp
generate the SVGs directly using the [@http://code.google.com/soc/2007/boost/about.html
Google Summer of Code 2007] project of Jacob Voytko (whose work so far is
at .\boost-sandbox\SOC\2007\visualization).