The following table shows the peak errors (in units of epsilon) found on
various platforms with various floating point types, along with comparisons
to the GSL-1.9 and
Cephes libraries. Unless
otherwise specified, any floating point type that is narrower than the one
shown will have effectively
zero error.

The tests for these functions come in two parts: basic sanity checks use
spot values calculated using MathWorld's
online evaluator, while accuracy checks use high-precision test values
calculated at 1000-bit precision with NTL::RR
and this implementation. Note that the generic and type-specific versions
of these functions use differing implementations internally, so this gives
us reasonably independent test data. Using our test data to test other "known
good" implementations also provides an additional sanity check.
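
As a purely illustrative example (not the library's actual test harness), a
spot check of this kind can be reproduced in a few lines of user code, here
using the well-known value ζ(2) = π²/6 as the reference:

    #include <boost/math/special_functions/zeta.hpp>
    #include <boost/math/constants/constants.hpp>
    #include <cmath>
    #include <iostream>
    #include <limits>

    int main()
    {
       using boost::math::zeta;
       using boost::math::constants::pi;

       // zeta(2) has the closed form pi^2 / 6, which makes a convenient spot value.
       const double expected = pi<double>() * pi<double>() / 6;
       const double computed = zeta(2.0);

       std::cout.precision(std::numeric_limits<double>::max_digits10);
       std::cout << "zeta(2)        = " << computed << "\n"
                 << "pi^2 / 6       = " << expected << "\n"
                 << "error (in eps) = "
                 << std::fabs(computed - expected) /
                    (expected * std::numeric_limits<double>::epsilon())
                 << "\n";
    }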

All versions of these functions first use the usual reflection formulas to
make their arguments positive:
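The usual functional equation relating ζ(z) to ζ(1 - z), shown here for
reference, is

\[ \zeta(z) = 2^{z}\,\pi^{z-1}\sin\!\left(\frac{\pi z}{2}\right)\Gamma(1-z)\,\zeta(1-z), \]

so an argument z < 0 is reduced to an evaluation at the positive argument 1 - z.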

The generic versions of these functions are implemented using the series:
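As a concrete illustration of a series of this kind, the sketch below evaluates
ζ(z) via the Euler-transformed alternating series (convergent for all real
z ≠ 1); it is shown purely for illustration and is not necessarily the series
the library actually uses:

    #include <cmath>
    #include <limits>

    // Illustrative generic evaluation of zeta(s) via the Euler-transformed
    // alternating series:
    //   zeta(s) = 1/(1 - 2^(1-s)) * sum_{n>=0} 2^-(n+1) * sum_{k=0..n} (-1)^k C(n,k) (k+1)^-s
    // Works for any built-in floating point type T.
    template <class T>
    T zeta_series_sketch(T s, unsigned max_terms = 64)
    {
       T sum = 0;
       for (unsigned n = 0; n < max_terms; ++n)
       {
          T inner = 0;
          T binom = 1;                                  // C(n, 0)
          for (unsigned k = 0; k <= n; ++k)
          {
             T term = binom * std::pow(T(k + 1), -s);
             inner += (k & 1) ? -term : term;           // (-1)^k C(n,k) (k+1)^-s
             binom = binom * T(n - k) / T(k + 1);       // C(n, k) -> C(n, k+1)
          }
          T outer = std::ldexp(inner, -int(n + 1));     // inner / 2^(n+1)
          sum += outer;
          if (std::fabs(outer) < std::numeric_limits<T>::epsilon() * std::fabs(sum))
             break;                                     // outer terms decay roughly like 2^-n
       }
       return sum / (1 - std::pow(T(2), 1 - s));
    }

For example, zeta_series_sketch(2.0) should reproduce π²/6 to within a small
multiple of machine epsilon.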

When the significand (mantissa) size is recognised (currently for 53, 64
and 113-bit reals, plus single-precision 24-bit reals handled via promotion to
double), a series of rational approximations devised
by JM is used.
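
In practice an approximation of this kind is a pair of fixed coefficient
arrays evaluated with a helper such as boost::math::tools::evaluate_rational.
The coefficients P, Q and the wrapper R below are placeholders only, not the
actual JM approximation; the snippet just shows the mechanism:

    #include <boost/math/tools/rational.hpp>

    // Placeholder coefficients -- purely illustrative, NOT the real approximation.
    static const double P[3] = { 1.0, -0.5, 0.25 };
    static const double Q[3] = { 1.0,  0.1, 0.01 };

    // R(x) = (P[0] + P[1]*x + P[2]*x^2) / (Q[0] + Q[1]*x + Q[2]*x^2)
    inline double R(double x)
    {
       return boost::math::tools::evaluate_rational(P, Q, x);
    }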

For 0 < z < 1 the approximating form is:

For a rational approximation R(1-z) and a constant C.

For 1 < z < 4 the approximating form is:

For a rational approximation R(n-z), a constant C, and an integer n.

For z > 4 the approximating form is:

ζ(z) = 1 + e^(R(z - n))

For a rational approximation R(z-n) and an integer n. Note that the accuracy
required for R(z-n) is not full machine precision, but only an absolute error
of ε/R(0); this saves us quite a few digits when dealing with large z, especially
when ε is small.
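
To make the shape of that branch concrete, a sketch might look like the
following; R_placeholder and the shift n = 4 are invented stand-ins, not the
actual approximation or break points:

    #include <cmath>

    // Invented stand-in for the real rational approximation R(z - n):
    // chosen so that e^(R(z - n)) roughly mimics zeta(z) - 1 ~ 2^-z for large z.
    static double R_placeholder(double x)
    {
       return -(x + 4) * std::log(2.0);
    }

    // Sketch of the z > 4 branch described above: zeta(z) = 1 + e^(R(z - n)).
    // Because e^(R(z - n)) is tiny for large z, only modest absolute accuracy
    // in R is needed to obtain full relative accuracy in the final result.
    double zeta_large_z_sketch(double z)
    {
       const int n = 4;                        // hypothetical integer shift
       return 1 + std::exp(R_placeholder(z - n));
    }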

Finally, there are some special cases for integer arguments. There are closed
forms for negative or even integers:
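For reference, these are the standard Bernoulli-number identities

\[ \zeta(-n) = -\frac{B_{n+1}}{n+1} \quad (n \ge 1), \qquad \zeta(2n) = (-1)^{n+1}\frac{(2\pi)^{2n} B_{2n}}{2\,(2n)!} \quad (n \ge 1), \]

so in particular ζ(-2n) = 0 for every positive integer n.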

For positive odd integers we simply cache pre-computed values, as these
are of great benefit to some infinite series calculations.
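
As a rough illustration of the idea (the function odd_zeta_cache is
hypothetical, and here the cache is filled lazily by calling boost::math::zeta
itself, whereas the library stores pre-computed constants), such a cache might
be exposed like this:

    #include <array>
    #include <cstddef>
    #include <boost/math/special_functions/zeta.hpp>

    // cache[k] holds zeta(2k + 3), i.e. zeta(3), zeta(5), zeta(7), ...
    template <class T, std::size_t N = 32>
    const std::array<T, N>& odd_zeta_cache()
    {
       static const std::array<T, N> cache = []
       {
          std::array<T, N> c{};
          for (std::size_t k = 0; k < N; ++k)
             c[k] = boost::math::zeta(T(2 * k + 3));
          return c;
       }();
       return cache;
    }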