All operations support both Perl UV's (32-bit or 64-bit) and bignums. If you want high performance with big numbers (larger than Perl's native 32-bit or 64-bit size), you should install Math::Prime::Util::GMP and Math::BigInt::GMP. This will be a recurring theme throughout this documentation -- while all bignum operations are supported in pure Perl, most methods will be much slower than the C+GMP alternative.

The module is thread-safe and allows concurrency between Perl threads while still sharing a prime cache. It is not itself multi-threaded. See the Limitations section if you are using Win32 and threads in your program. Also note that Math::Pari is not thread-safe (and will crash as soon as it is loaded in threads), so if you use Math::BigInt::Pari rather than Math::BigInt::GMP or the default backend, things will go pear-shaped.

Two scripts are also included and installed by default:

primes.pl displays primes between start and end values or expressions, with many options for filtering (e.g. twin, safe, circular, good, lucky, etc.). Use --help to see all the options.

factor.pl operates similarly to the GNU factor program. It supports bigint and expression inputs.

Install Math::Prime::Util::GMP, as that will vastly increase the speed of many of the functions. This does require the GMP library be installed on your system, but the library increasingly comes pre-installed or is easily available through the OS vendor's package tool.

Install and use Math::BigInt::GMP or Math::BigInt::Pari, then add use bigint try => 'GMP,Pari' to your script, or use -Mbigint=lib,GMP on the command line. Large modular exponentiation is much faster using the GMP or Pari backends, as are the math and approximation functions when called with very large inputs.

Install Math::MPFR if you use the Ei, li, Zeta, or R functions. If that module can be loaded, these functions will run much faster on bignum inputs, and are able to provide higher accuracy.

I have run these functions on many versions of Perl. If you use bignums heavily, I recommend upgrading to Perl 5.14 or later, as there are some brittle behaviors on 5.12.4 and earlier with bignums. For example, the default BigInt backend in older versions of Perl will sometimes convert small results to doubles, resulting in corrupted output.

PRIMALITY TESTING

This module provides three functions for general primality testing, as well as numerous specialized functions. The three main functions are: "is_prob_prime" and "is_prime" for general use, and "is_provable_prime" for proofs. For inputs below 2^64 the functions are identical and fast deterministic testing is performed. That is, the results will always be correct and should take at most a few microseconds for any input. This is hundreds to thousands of times faster than other CPAN modules. For inputs larger than 2^64, an extra-strong BPSW test is used. See the "PRIMALITY TESTING NOTES" section for more discussion.

FUNCTIONS

is_prime

print "$n is prime" if is_prime($n);

Returns 0 if the number is composite, 1 if it is probably prime, and 2 if it is definitely prime. For numbers smaller than 2^64 it will only return 0 (composite) or 2 (definitely prime), as this range has been exhaustively tested and has no counterexamples. For larger numbers, an extra-strong BPSW test is used. If Math::Prime::Util::GMP is installed, some additional primality tests are also performed, and a quick attempt is made to perform a primality proof, so it will return 2 for many other inputs.

Also see the "is_prob_prime" function, which will never do additional tests, and the "is_provable_prime" function which will construct a proof that the input number is prime and returns 2 for almost all primes (at the expense of speed).

For native precision numbers (anything smaller than 2^64), all three functions are identical and use a deterministic set of tests (selected Miller-Rabin bases or BPSW). For larger inputs, both "is_prob_prime" and "is_prime" return probable prime results using the extra-strong Baillie-PSW test, which has had no counterexample found since it was published in 1980.

Sieving will be done if required. The algorithm used will depend on the range and whether a sieve result already exists. Possibilities include primality testing (for very small ranges), a Sieve of Eratosthenes using wheel factorization, or a segmented sieve.

next_prime

$n = next_prime($n);

Returns the next prime greater than the input number. The result will be a bigint if it can not be exactly represented in the native int type (larger than 4,294,967,291 in 32-bit Perl; larger than 18,446,744,073,709,551,557 in 64-bit).

prev_prime

$n = prev_prime($n);

Returns the prime preceding the input number (i.e. the largest prime that is strictly less than the input). 0 is returned if the input is 2 or lower.

forprimes

Given a block and either an end count or a start and end pair, calls the block for each prime in the range. Compared to getting a big array of primes and iterating through it, this is more memory efficient and perhaps more convenient. This will almost always be the fastest way to loop over a range of primes. Nesting and use in threads are allowed.
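There is no inline snippet here, so as a sketch of the semantics only, here is an illustrative Python version (the module itself is Perl with a C/segmented-sieve backend; this toy uses trial division and is only suitable for small ranges):

```python
# Illustrative sketch of forprimes: call a block for each prime in an
# inclusive range. One argument means the range 2..a, mirroring the Perl API.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def forprimes(block, a, b=None):
    if b is None:
        a, b = 2, a
    for n in range(max(a, 2), b + 1):
        if is_prime(n):
            block(n)

total = 0
def add(p):
    global total
    total += p

forprimes(add, 10)       # visits 2, 3, 5, 7
```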

forcomposites

forcomposites { say } 1000;
forcomposites { say } 2000,2020;

Given a block and either an end number or a start and end pair, calls the block for each composite in the inclusive range. The composites, OEIS A002808, are the numbers greater than 1 which are not prime: 4, 6, 8, 9, 10, 12, 14, 15, ....

foroddcomposites

Similar to "forcomposites", but skipping all even numbers. The odd composites, OEIS A071904, are the numbers greater than 1 which are not prime and not divisible by two: 9, 15, 21, 25, 27, 33, 35, ....

fordivisors

fordivisors { $prod *= $_ } $n;

Given a block and a non-negative number n, the block is called with $_ set to each divisor in sorted order. Also see "divisor_sum".
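A hedged sketch of the visiting order in Python (illustrative only, not the module's implementation): divisors are found in pairs (d, n/d) up to sqrt(n), then emitted in sorted order.

```python
# Illustrative sketch of fordivisors: visit every divisor of n, sorted.

def fordivisors(block, n):
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    for d in small + large[::-1]:
        block(d)

divs = []
fordivisors(divs.append, 12)
# divs is now [1, 2, 3, 4, 6, 12]
```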

forpart

Given a non-negative number n, the block is called with @_ set to the array of additive integer partitions. The operation is very similar to the forpart function in Pari/GP 2.6.x, though the ordering is different. The algorithm is ZS1 from Zoghbi and Stojmenović (1998), hence the ordering is identical to that of Integer::Partition. Use "partitions" to get just the count of unrestricted partitions.

An optional hash reference may be given to produce restricted partitions. Each value must be a non-negative integer. The allowable keys are:

n restrict to exactly this many values
amin all elements must be at least this value
amax all elements must be at most this value
nmin the array must have at least this many values
nmax the array must have at most this many values

Like forcomb and forperm, the partition return values are read-only. Any attempt to modify them will result in undefined behavior.
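The behavior, including the restriction keys, can be sketched in Python. This is a simple recursive generator, NOT the ZS1 algorithm the module uses, so the visiting order will differ; only the set of partitions produced matches.

```python
# Illustrative forpart: generate the additive integer partitions of n
# (each in non-increasing order) and apply the restrictions described above.

def partitions(total, amax=None, prefix=()):
    if total == 0:
        yield prefix
        return
    top = min(total, amax if amax is not None else total)
    for part in range(top, 0, -1):
        yield from partitions(total - part, part, prefix + (part,))

def forpart(block, n, restrict=None):
    r = restrict or {}
    for p in partitions(n, r.get('amax')):
        if 'n' in r and len(p) != r['n']:
            continue
        if 'nmin' in r and len(p) < r['nmin']:
            continue
        if 'nmax' in r and len(p) > r['nmax']:
            continue
        if 'amin' in r and p and min(p) < r['amin']:
            continue
        block(p)

count = 0
def bump(p):
    global count
    count += 1

forpart(bump, 6)    # p(6) = 11 unrestricted partitions
```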

forcomb

Given non-negative arguments n and k, the block is called with @_ set to the k element array of values from 0 to n-1 representing the combinations in lexicographical order. While the binomial function gives the total number, this function can be used to enumerate the choices.

Rather than give a data array as input, an integer is used for n. A convenient way to map to array elements is:

forcomb { say "@data[@_]" } @data, 3;

where the block maps the combination array @_ to array values, the argument for n is given the array since it will be evaluated as a scalar and hence give the size, and the argument for k is the desired size of the combinations.

Like forpart and forperm, the index return values are read-only. Any attempt to modify them will result in undefined behavior.

forperm

Given non-negative argument n, the block is called with @_ set to the k element array of values from 0 to n-1 representing permutations in lexicographical order. The total number of calls will be n!.

Rather than give a data array as input, an integer is used for n. A convenient way to map to array elements is:

forperm { say "@data[@_]" } @data;

where the block maps the permutation array @_ to array values, and the argument for n is given the array since it will be evaluated as a scalar and hence give the size.

Like forpart and forcomb, the index return values are read-only. Any attempt to modify them will result in undefined behavior.

prime_iterator

my $it = prime_iterator;
$sum += $it->() for 1..100000;

Returns a closure-style iterator. The start value defaults to the first prime (2) but an initial value may be given as an argument, which will result in the first value returned being the next prime greater than or equal to the argument. For example, this:

my $it = prime_iterator(200); say $it->(); say $it->();

will return 211 followed by 223, as those are the next primes >= 200. On each call, the iterator returns the current value and increments to the next prime.
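The closure behavior can be sketched in Python (illustrative only; the helper is_prime here is naive trial division, not the module's tests):

```python
# Illustrative prime_iterator: a closure that returns the current prime and
# advances. The first value returned is the next prime >= the start value.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def prime_iterator(start=2):
    state = {'n': max(start, 2)}
    def it():
        n = state['n']
        while not is_prime(n):
            n += 1
        state['n'] = n + 1       # advance past the value we return
        return n
    return it

it = prime_iterator(200)
# it() -> 211, then it() -> 223
```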

prime_iterator_object

Returns a Math::Prime::Util::PrimeIterator object. A shortcut that loads the package if needed, calls new, and returns the object. See the documentation for that package for details. This object has more features than the simple one above (e.g. the iterator is bi-directional), and also handles iterating across bigints.

prime_count

Returns the Prime Count function Pi(n), also called primepi in some math packages. When given two arguments, it returns the count of primes in the inclusive range. E.g. (13,17) returns 2, (14,17) and (13,16) return 1, and (14,16) returns 0.

The current implementation decides based on the ranges whether to use a segmented sieve with a fast bit count, or the extended LMO algorithm. The former is preferred for small sizes as well as small ranges. The latter is much faster for large ranges.

The segmented sieve is very memory efficient and is quite fast even with large base values. Its complexity is approximately O(sqrt(a) + (b-a)), where the first term is typically negligible below ~ 10^11. Memory use is proportional only to sqrt(a), with total memory use under 1MB for any base under 10^14.

The extended LMO method has complexity approximately O(b^(2/3)) + O(a^(2/3)), and also uses low memory. A calculation of Pi(10^14) completes in a few seconds, Pi(10^15) in well under a minute, and Pi(10^16) in about one minute. In contrast, even parallel primesieve would take over a week on a similar machine to determine Pi(10^16).
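The two-argument semantics can be sketched with a plain Sieve of Eratosthenes (illustrative only; this is neither the segmented sieve nor the extended LMO algorithm the module selects between):

```python
# Illustrative prime_count with the one- and two-argument forms described
# above, counting primes in the inclusive range [a, b].

def prime_count(a, b=None):
    if b is None:
        a, b = 2, a
    if b < 2:
        return 0
    is_comp = bytearray(b + 1)
    for p in range(2, int(b ** 0.5) + 1):
        if not is_comp[p]:
            for m in range(p * p, b + 1, p):
                is_comp[m] = 1
    return sum(1 for n in range(max(a, 2), b + 1) if not is_comp[n])
```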

prime_count_upper

prime_count_lower

Returns an upper or lower bound on the number of primes below the input number. These are analytical routines, so will take a fixed amount of time and no memory. The actual prime_count will always be equal to or between these numbers.

A common place these would be used is sizing an array to hold the first $n primes. It may be desirable to use a bit more memory than is necessary, to avoid calling prime_count.

These routines use verified tight limits for inputs below at least 2^35, and use either the Dusart (2010) bounds or the Axler (2014) bounds above that range. These bounds do not assume the Riemann Hypothesis. If the configuration option assume_rh has been set (it is off by default), then the Schoenfeld (1976) bounds are used for large values.

prime_count_approx

Returns an approximation to the prime_count function, without having to generate any primes. For values under 10^36 this uses the Riemann R function, which is quite accurate: an error of less than 0.0005% is typical for input values over 2^32, and decreases as the input gets larger. If Math::MPFR is installed, the Riemann R function is used for all values, and will be very fast. If not, then values of 10^36 and larger will use the approximation li(x) - li(sqrt(x))/2. While not as accurate as the Riemann R function, it still should have error less than 0.00000000000000001%.

A slightly faster but much less accurate answer can be obtained by averaging the upper and lower bounds.

twin_primes

Returns the lesser of twin primes between the lower and upper limits (inclusive), with a lower limit of 2 if none is given. This is OEIS A001359. Given a twin prime pair (p,q) with q = p + 2 and both p and q prime, this function uses p to represent the pair. Hence the bounds need to include p, and the returned list will have p but not q.

This works just like the "primes" function, though only the first primes of twin prime pairs are returned. Like that function, an array reference is returned.

twin_prime_count

Similar to prime count, but returns the count of twin primes (primes p where p+2 is also prime). Takes either a single number indicating a count from 2 to the argument, or two numbers indicating a range.

The primes being counted are the first value, so a range of (3,5) will return a count of two, because both 3 and 5 are counted as twin primes. A range of (12,13) will return a count of zero, because neither 12+2 nor 13+2 are prime. In contrast, primesieve requires all elements of a constellation to be within the range to be counted, so would return one for the first example (5 is not counted because its pair 7 is not in the range).

There is no useful formula known for this, unlike prime counts. We sieve for the answer, using some small table acceleration.
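The range semantics above (count p, not the pair) can be sketched with a sieve in Python (illustrative only; the module's sieve and table acceleration are not reproduced):

```python
# Illustrative twin_prime_count: count primes p in [a, b] with p+2 also
# prime. The counted element is p, so p+2 may lie outside the range.

def sieve_to(limit):
    is_comp = bytearray(limit + 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if not is_comp[p]:
            for m in range(p * p, limit + 1, p):
                is_comp[m] = 1
    return is_comp

def twin_prime_count(a, b=None):
    if b is None:
        a, b = 2, a
    is_comp = sieve_to(b + 2)          # need primality of p+2 as well
    return sum(1 for p in range(max(a, 2), b + 1)
               if not is_comp[p] and not is_comp[p + 2])
```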

twin_prime_count_approx

Returns an approximation to the twin prime count of n. This returns quickly and has a very small error for large values. The method used is conjecture B of Hardy and Littlewood 1922, as stated in Sebah and Gourdon 2002. For inputs under 10M, a correction factor is additionally applied to reduce the mean squared error.

nth_prime

say "The ten thousandth prime is ", nth_prime(10_000);

Returns the prime at index n in the array of prime numbers. Put another way, this returns the smallest p such that Pi(p) >= n.

For relatively small inputs (below 1 million or so), this does a sieve over a range containing the nth prime, then counts up to the number. This is fairly efficient in time and memory. For larger values, it creates a low-biased estimate using the inverse logarithmic integral, uses a fast prime count, then sieves in the small difference.

While this method is thousands of times faster than generating primes, and doesn't involve big tables of precomputed values, it still can take a fair amount of time for large inputs. Calculating the 10^12th prime takes about 1 second, the 10^13th prime takes under 10 seconds, and the 10^14th prime (3475385758524527) takes under 30 seconds. Think about whether a bound or approximation would be acceptable, as they can be computed analytically.

If the result is larger than a native integer size (32-bit or 64-bit), the result will take a very long time. A later version of Math::Prime::Util::GMP may include this functionality which would help for 32-bit machines.
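The sieve-and-count approach for small n can be sketched in Python, sizing the sieve with the classic Rosser upper bound n*(log n + log log n) for n >= 6 (an assumption of this sketch; the module's estimate differs):

```python
# Illustrative nth_prime for modest n: sieve a range guaranteed to contain
# the nth prime, then count up to it.
import math

def nth_prime(n):
    if n < 6:
        return [2, 3, 5, 7, 11][n - 1]
    limit = int(n * (math.log(n) + math.log(math.log(n)))) + 3
    is_comp = bytearray(limit + 1)
    count = 0
    for p in range(2, limit + 1):
        if not is_comp[p]:
            count += 1
            if count == n:
                return p
            for m in range(p * p, limit + 1, p):
                is_comp[m] = 1
```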

nth_prime_upper

nth_prime_lower

Returns an analytical upper or lower bound on the Nth prime. These are very fast as they do not need to sieve or search through primes or tables. An exact answer is returned for tiny values of n. The lower limit uses the Dusart 2010 bound for all n, while the upper bound uses one of the two Dusart 2010 bounds for n >= 178974, a Dusart 1999 bound for n >= 39017, and a simple bound of n * (logn + 0.6 * loglogn) for small n.
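The two simple closed forms named above can be sketched directly. Note the constants here are assumptions based on the text: the lower form n*(log n + log log n - 1) is the Dusart-style bound, and the upper form n*(log n + 0.6*log log n) is only the "simple bound" the module applies for small n (it is not valid for all n; the module switches to Dusart bounds above n = 39017).

```python
# Illustrative closed-form bounds on the nth prime, per the text above.
import math

def nth_prime_lower(n):
    l, ll = math.log(n), math.log(math.log(n))
    return n * (l + ll - 1)

def nth_prime_upper(n):
    # Simple small-n form only; not a valid bound for large n.
    l, ll = math.log(n), math.log(math.log(n))
    return n * (l + 0.6 * ll)
```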

nth_prime_approx

say "The one trillionth prime is ~ ", nth_prime_approx(10**12);

Returns an approximation to the nth_prime function, without having to generate any primes. For values where the nth prime is smaller than 2^64, an inverse Riemann R function is used. For larger values, uses the Cipolla 1902 approximation with up to 2nd order terms, plus a third order correction.

nth_twin_prime

Returns the Nth twin prime. This is done via sieving and counting, so is not very fast for large values.

nth_twin_prime_approx

Returns an approximation to the Nth twin prime. A curve fit is used for small inputs (under 1200), while for larger inputs a binary search is done on the approximate twin prime count.

is_pseudoprime

Takes a positive number n and one or more non-zero positive bases as input. Returns 1 if the input is a probable prime to each base, 0 if not. This is the simple Fermat primality test. With base 2, removing primes produces the sequence OEIS A001567.
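The Fermat test is one line of modular arithmetic; a Python sketch for odd n > 2 (the module's handling of edge cases like n = 2 is not reproduced):

```python
# Illustrative Fermat test: n is a probable prime to base a when
# a^(n-1) = 1 (mod n). All given bases must pass.

def is_pseudoprime(n, *bases):
    if n < 3:
        return 0
    return int(all(pow(a, n - 1, n) == 1 for a in bases))
```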

is_strong_pseudoprime

Takes a positive number n and one or more non-zero positive bases as input. Returns 1 if the input is a strong probable prime to each base, 0 if not.

If 0 is returned, then the number really is a composite. If 1 is returned, then it is either a prime or a strong pseudoprime to all the given bases. Given enough distinct bases, the chances become very, very high that the number is actually prime.

This is usually used in combination with other tests to make either stronger tests (e.g. the strong BPSW test) or deterministic results for numbers less than some verified limit (e.g. it has long been known that no more than three selected bases are required to give correct primality test results for any 32-bit number). Given the small chances of passing multiple bases, there are some math packages that just use multiple MR tests for primality testing.

Even inputs other than 2 will always return 0 (composite). While the algorithm does run with even input, most sources define it only on odd input. Returning composite for all non-2 even input makes the function match most other implementations including Math::Primality's is_strong_pseudoprime function.
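The strong test described above can be sketched in Python: write n-1 = d*2^s, then require a^d = 1 or a^(d*2^r) = -1 (mod n) for some 0 <= r < s, for every base.

```python
# Illustrative strong (Miller-Rabin) probable prime test.

def is_strong_pseudoprime(n, *bases):
    if n < 3 or n % 2 == 0:
        return int(n == 2)          # even inputs other than 2 are composite
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return 0                # composite witness found
    return 1
```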

is_lucas_pseudoprime

Takes a positive number as input, and returns 1 if the input is a standard Lucas probable prime using the Selfridge method of choosing D, P, and Q (some sources call this a Lucas-Selfridge pseudoprime). Removing primes, this produces the sequence OEIS A217120.

is_strong_lucas_pseudoprime

Takes a positive number as input, and returns 1 if the input is a strong Lucas probable prime using the Selfridge method of choosing D, P, and Q (some sources call this a strong Lucas-Selfridge pseudoprime). This is one half of the BPSW primality test (the Miller-Rabin strong pseudoprime test with base 2 being the other half). Removing primes, this produces the sequence OEIS A217255.

is_extra_strong_lucas_pseudoprime

Takes a positive number as input, and returns 1 if the input passes the extra strong Lucas test (as defined in Grantham 2000). This test has more stringent conditions than the strong Lucas test, and produces about 60% fewer pseudoprimes. Performance is typically 20-30% faster than the strong Lucas test.

The parameters are selected using the Baillie-OEIS method: increment P from 3 until jacobi(D,n) = -1. Removing primes, this produces the sequence OEIS A217719.

is_almost_extra_strong_lucas_pseudoprime

This is similar to the "is_extra_strong_lucas_pseudoprime" function, but does not calculate U, so is a little faster, but also weaker. With the current implementations, there is little reason to prefer this unless trying to reproduce specific results. The extra-strong implementation has been optimized to use similar features, removing most of the performance advantage.

An optional second argument (an integer between 1 and 256) indicates the increment amount for P parameter selection. The default value of 1 yields the parameter selection described in "is_extra_strong_lucas_pseudoprime", creating a pseudoprime sequence which is a superset of the latter's pseudoprime sequence OEIS A217719. A value of 2 yields the method used by Pari.

Because the U = 0 condition is ignored, this produces about 5% more pseudoprimes than the extra-strong Lucas test. However this is still only 66% of the number produced by the strong Lucas-Selfridge test. No BPSW counterexamples have been found with any of the Lucas tests described.

is_perrin_pseudoprime

Takes a positive number n as input and returns 1 if n divides P(n) where P(n) is the Perrin number of n. The Perrin sequence is defined by

P(0) = 3, P(1) = 0, P(2) = 2; P(n) = P(n-2) + P(n-3)

While pseudoprimes are relatively rare (the first two are 271441 and 904631), infinitely many exist. The pseudoprime sequence is OEIS A013998.

The implementation uses modular 3x3 matrix exponentiation, which is efficient but still quite slow compared to the other probable prime tests.
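The matrix approach can be sketched in Python: with M = [[0,1,0],[0,0,1],[1,1,0]], the state vector (P(k), P(k+1), P(k+2)) advances by one multiplication, so P(n) mod n is the first entry of M^n applied to the initial vector (3, 0, 2).

```python
# Illustrative Perrin test via modular 3x3 matrix exponentiation.

def mat_mul(A, B, m):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % m
             for j in range(3)] for i in range(3)]

def mat_pow(M, e, m):
    R = [[int(i == j) for j in range(3)] for i in range(3)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M, m)
        M = mat_mul(M, M, m)
        e >>= 1
    return R

def perrin_mod(n):
    """P(n) mod n."""
    M = mat_pow([[0, 1, 0], [0, 0, 1], [1, 1, 0]], n, n)
    # first row of M^n applied to the initial vector (3, 0, 2)
    return (3 * M[0][0] + 2 * M[0][2]) % n

def is_perrin_pseudoprime(n):
    return int(n > 1 and perrin_mod(n) == 0)
```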

is_frobenius_pseudoprime

Takes a positive number n as input, and two optional parameters a and b, and returns 1 if n is a Frobenius probable prime with respect to the polynomial x^2 - ax + b. Without the parameters, b = 2 and a is the least positive odd number such that (a^2-4b|n) = -1. This selection has no pseudoprimes below 2^64 and none known. In any case, the discriminant a^2-4b must not be a perfect square.

Some authors use the Fibonacci polynomial x^2-x-1 corresponding to (1,-1) as the default method for a Frobenius probable prime test. This creates a weaker test than most other parameter choices (e.g. over twenty times more pseudoprimes than (3,-5)), so is not used as the default here. With the (1,-1) parameters the pseudoprime sequence is OEIS A212424.

The Frobenius test is a stronger test than the Lucas test. Any Frobenius (a,b) pseudoprime is also a Lucas (a,b) pseudoprime but the converse is not true, as any Frobenius (a,b) pseudoprime is also a Fermat pseudoprime to the base |b|. We can see that with the default parameters this is similar to, but somewhat weaker than, the BPSW test used by this module (which uses the strong and extra-strong versions of the probable prime and Lucas tests respectively).

The performance cost is slightly more than 3 strong pseudoprime tests. Also see "is_frobenius_underwood_pseudoprime" which is an extremely efficient construction of a Frobenius test using good parameter selection, allowing it to run 1.5 to 2 times faster than the general Frobenius test.

is_frobenius_underwood_pseudoprime

Takes a positive number as input, and returns 1 if the input passes the efficient Frobenius test of Paul Underwood. This selects a parameter a as the least non-negative integer such that (a^2-4|n)=-1, then verifies that (x+2)^(n+1) = 2a + 5 mod (x^2-ax+1,n). This combines a Fermat and Lucas test with a cost of only slightly more than 2 strong pseudoprime tests. This makes it similar to, but faster than, a Frobenius test.

There are no known pseudoprimes to this test and extensive computation has shown no counterexamples under 2^50. This test also has no overlap with the BPSW test, making it a very effective method for adding additional certainty.

miller_rabin_random

Takes a positive number (n) as input and a positive number (k) of bases to use. Performs k Miller-Rabin tests using uniform random bases between 2 and n-2.

This should not be used in place of "is_prob_prime", "is_prime", or "is_provable_prime". Those functions will be faster and provide better results than running k Miller-Rabin tests. This function can be used if one wants more assurances for non-proven primes, such as for cryptographic uses where the size is large enough that proven primes are not desired.

is_prob_prime

Takes a positive number as input and returns back either 0 (composite), 2 (definitely prime), or 1 (probably prime).

For 64-bit input (native or bignum), this uses either a deterministic set of Miller-Rabin tests (1, 2, or 3 tests) or a strong BPSW test consisting of a single base-2 strong probable prime test followed by a strong Lucas test. This has been verified with Jan Feitsma's 2-PSP database to produce no false results for 64-bit inputs. Hence the result will always be 0 (composite) or 2 (prime).

For inputs larger than 2^64, an extra-strong Baillie-PSW primality test is performed (also called BPSW or BSW). This is a probabilistic test, so only 0 (composite) and 1 (probably prime) are returned. There is a possibility that composites may be returned marked prime, but since the test was published in 1980, not a single BPSW pseudoprime has been found, so it is extremely likely to be prime. While we believe (Pomerance 1984) that an infinite number of counterexamples exist, there is a weak conjecture (Martin) that none exist under 10000 digits.

is_bpsw_prime

Given a positive number input, returns 0 (composite), 2 (definitely prime), or 1 (probably prime), using the BPSW primality test (extra-strong variant). Normally either "is_prime" or "is_prob_prime" from Math::Prime::Util will suffice, but those functions do pre-tests to find easy composites. If you know these are not necessary, then calling "is_bpsw_prime" may save a small amount of time.

is_provable_prime

say "$n is definitely prime" if is_provable_prime($n) == 2;

Takes a positive number as input and returns back either 0 (composite), 2 (definitely prime), or 1 (probably prime). This gives it the same return values as "is_prime" and "is_prob_prime". Note that numbers below 2^64 are considered proven by the deterministic set of Miller-Rabin bases or the BPSW test. Both of these have been tested for all small (64-bit) composites and do not return false positives.

Using the Math::Prime::Util::GMP module is highly recommended for doing primality proofs, as it is much, much faster. The pure Perl code is just not fast for this type of operation, nor does it have the best algorithms. It should suffice for proofs of up to 40 digit primes, while the latest MPU::GMP works for primes of hundreds of digits (thousands with an optional larger polynomial set).

The pure Perl implementation uses theorem 5 of BLS75 (Brillhart, Lehmer, and Selfridge's 1975 paper), an improvement on the Pocklington-Lehmer test. This requires n-1 to be factored to (n/2)^(1/3). This is often fast, but as n gets larger, it takes exponentially longer to find factors.

Math::Prime::Util::GMP implements both the BLS75 theorem 5 test as well as ECPP (elliptic curve primality proving). It will typically try a quick n-1 proof before using ECPP. Certificates are available with either method. This results in proofs of 200-digit primes in under 1 second on average, and many hundreds of digits are possible. This makes it significantly faster than Pari 2.1.7's is_prime(n,1) which is the default for Math::Pari.

prime_certificate

Given a positive integer n as input, returns a primality certificate as a multi-line string. If we could not prove n prime, an empty string is returned (n may or may not be composite). This may be examined or given to "verify_prime" for verification. The latter function contains the description of the format.

is_provable_prime_with_cert

Given a positive integer as input, returns a two element array containing the result of "is_provable_prime" (0 = definitely composite, 1 = probably prime, 2 = definitely prime) and a primality certificate like that of "prime_certificate". The certificate will be an empty string if the first element is not 2.

verify_prime

Given a primality certificate, returns either 0 (not verified) or 1 (verified). Most computations are done using pure Perl with Math::BigInt, so you probably want to install and use Math::BigInt::GMP; ECPP certificates will also verify faster with Math::Prime::Util::GMP for its elliptic curve computations.

If the certificate is malformed, the routine will carp a warning in addition to returning 0. If the verbose option is set (see "prime_set_config") then if the validation fails, the reason for the failure is printed in addition to returning 0. If the verbose option is set to 2 or higher, then a message indicating success and the certificate type is also printed.

A certificate may have arbitrary text before the beginning (the primality routines from this module will not have any extra text, but this way verbose output from the prover can be safely stored in a certificate). The certificate begins with the line:

[MPU - Primality Certificate]

All lines in the certificate beginning with # are treated as comments and ignored, as are blank lines. A version number may follow, such as:

Version 1.0

For all inputs, base 10 is the default, but at any point this may be changed with a line like:

Base 16

where allowed bases are 10, 16, and 62. This module will only use base 10, so its routines will not output Base commands.

Next, we look for (using "100003" as an example):

Proof for:
N 100003

where the text Proof for: indicates we will read an N value. Skipping comments and blank lines, the next line should be "N " followed by the number.

After this, we read one or more blocks. Each block is a proof of the form:

If Q is prime, then N is prime.

Some of the blocks have more than one Q value associated with them, but most only have one. Each block has its own set of conditions which must be verified, and this can be done completely self-contained. That is, each block is independent of the other blocks and may be processed in any order. To be a complete proof, each block must successfully verify. The block types and their conditions are shown below.

Finally, when all blocks have been read and verified, we must ensure we can construct a proof tree from the set of blocks. The root of the tree is the initial N, and for each node (block), all Q values must either have a block using that value as its N or Q must be less than 2^64 and pass BPSW.

Some other certificate formats (e.g. Primo) use an ordered chain, where the first block must be for the initial N, a single Q is given which is the implied N for the next block, and so on. This simplifies validation implementation somewhat, and removes some redundant information from the certificate, but has no obvious way to add proof types such as Lucas or the various BLS75 theorems that use multiple factors. I decided that the most general solution was to have the certificate contain the set in any order, and let the verifier do the work of constructing the tree.

The blocks begin with the text "Type ..." where ... is the type. One or more values follow. The defined types are:

Small

Type Small
N 5791

N must be less than 2^64 and be prime (use BPSW or deterministic M-R).

BLS5

A more sophisticated n-1 proof using BLS theorem 5. This requires N-1 to be factored only to (N/2)^(1/3). While this looks much more complicated, it really isn't much more work. The biggest drawback is just that we have multiple Q values to chain rather than a single one. This block verifies if:

ECPP

An elliptic curve primality block, typically generated with an Atkin/Morain ECPP implementation, but this should be adequate for anything using the Atkin-Goldwasser-Kilian-Morain style certificates. Some basic elliptic curve math is needed for these. This block verifies if:

is_aks_prime

say "$n is definitely prime" if is_aks_prime($n);

Takes a positive number as input, and returns 1 if the input passes the Agrawal-Kayal-Saxena (AKS) primality test. This is a deterministic unconditional primality test which runs in polynomial time for general input.

While this is an important theoretical algorithm, and makes an interesting example, it is hard to overstate just how impractically slow it is in practice. It is not used for any purpose in non-theoretical work, as it is literally millions of times slower than other algorithms. From R.P. Brent, 2010: "AKS is not a practical algorithm. ECPP is much faster." We have ECPP, and indeed it is much faster.

This implementation includes the v6 improvements from Lenstra as well as further improvements from Bernstein and Voloch. It runs substantially faster than the original or v6 versions. The GMP implementation uses a binary segmentation method for modular polynomial multiplication (see Bernstein's 2007 Quartic paper), which reduces to a single scalar multiplication, at which GMP excels. Because of this, the GMP implementation is likely to be faster once the input is larger than 2^32.

is_mersenne_prime

say "2^607-1 (M607) is a Mersenne prime" if is_mersenne_prime(607);

Takes a positive number p as input and returns 1 if 2^p-1 is prime. Since an enormous effort has gone into testing these, a list of known Mersenne primes is used to accelerate this. Beyond the highest sequential Mersenne prime (currently 32,582,657) this performs pretesting followed by the Lucas-Lehmer test.

The Lucas-Lehmer test is a deterministic unconditional test that runs very fast compared to other primality methods for numbers of comparable size, and vastly faster than any known general-form primality proof methods. While this test is fast, the GMP implementation is not nearly as fast as specialized programs such as prime95. Additionally, since we use the table for "small" numbers, testing via this function call will only occur for numbers with over 9.8 million digits. At this size, tools such as prime95 are greatly preferred.

is_power

say "$n is a perfect square" if is_power($n, 2);
say "$n is a perfect cube" if is_power($n, 3);
say "$n is a ", is_power($n), "-th power";

Given a single positive integer input n, returns k if n = p^k for some integer p > 1, k > 1, and 0 otherwise. The k returned is the largest possible. This can be used in a boolean statement to determine if n is a perfect power.

If given two arguments n and k, returns 1 if n is a k-th power, and 0 otherwise. For example, if k=2 then this detects perfect squares. Setting k=0 gives behavior like the first case (the largest root is found and its value is returned).

If a third argument is present, it must be a scalar reference. If n is a k-th power, then this will be set to the k-th root of n. For example:

This corresponds to Pari/GP's ispower function with integer arguments.
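For native-size inputs, the two-argument test can be sketched with a binary-search integer k-th root (ikroot and my_is_power are hypothetical names, not module functions; the one-argument form would additionally loop k down from log2(n)):

```perl
use strict;
use warnings;

# Integer k-th root by binary search (small native n only).
sub ikroot {
    my ($n, $k) = @_;
    my ($lo, $hi) = (1, 1);
    $hi *= 2 while $hi ** $k < $n;
    while ($lo < $hi) {
        my $mid = int(($lo + $hi + 1) / 2);
        if ($mid ** $k <= $n) { $lo = $mid } else { $hi = $mid - 1 }
    }
    return $lo;
}

# Two-argument form: 1 if n is a perfect k-th power, else 0.
sub my_is_power {
    my ($n, $k) = @_;
    my $r = ikroot($n, $k);
    return $r ** $k == $n ? 1 : 0;
}

print my_is_power(27, 3), "\n";   # prints 1
```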

lucasu

say "Fibonacci($_) = ", lucasu(1,-1,$_) for 0..100;

Given integers P, Q, and the non-negative integer k, computes U_k for the Lucas sequence defined by P,Q. These include the Fibonacci numbers (1,-1), the Pell numbers (2,-1), the Jacobsthal numbers (1,-2), the Mersenne numbers (3,2), and more.

This corresponds to OpenPFGW's lucasU function and gmpy2's lucasu function.

lucasv

say "Lucas($_) = ", lucasv(1,-1,$_) for 0..100;

Given integers P, Q, and the non-negative integer k, computes V_k for the Lucas sequence defined by P,Q. These include the Lucas numbers (1,-1).

This corresponds to OpenPFGW's lucasV function and gmpy2's lucasv function.

lucas_sequence

my($U, $V, $Qk) = lucas_sequence($n, $P, $Q, $k)

Computes U_k, V_k, and Q_k for the Lucas sequence defined by P,Q, modulo n. The modular Lucas sequence is used in a number of primality tests and proofs. The following conditions must hold: |P| < n ; |Q| < n ; k >= 0 ; n >= 2.
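A minimal pure-Perl sketch follows directly from the defining recurrence X_{i+1} = P*X_i - Q*X_{i-1} in an O(k) loop (the module itself uses an O(log k) binary chain; lucas_uvq_mod is a hypothetical name, and only native sizes are handled):

```perl
use strict;
use warnings;

# (U_k, V_k, Q^k) mod n, with U_0=0, U_1=1, V_0=2, V_1=P.
sub lucas_uvq_mod {
    my ($n, $P, $Q, $k) = @_;
    return (0, 2 % $n, 1 % $n) if $k == 0;
    my ($Uprev, $U) = (0, 1);
    my ($Vprev, $V) = (2 % $n, $P % $n);
    my $Qk = $Q % $n;
    for (2 .. $k) {
        ($Uprev, $U) = ($U, ($P * $U - $Q * $Uprev) % $n);
        ($Vprev, $V) = ($V, ($P * $V - $Q * $Vprev) % $n);
        $Qk = ($Qk * $Q) % $n;
    }
    return ($U, $V, $Qk);
}

# P=1, Q=-1 gives Fibonacci/Lucas: U_10=55, V_10=123.
my ($U, $V, $Qk) = lucas_uvq_mod(1000, 1, -1, 10);
```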

gcd

Given a list of integers, returns the greatest common divisor. This is often used to test for coprimality.

lcm

Given a list of integers, returns the least common multiple. Note that we follow the semantics of Mathematica, Pari, and Perl 6 regarding:

gcdext

Given two integers x and y, returns u,v,d such that d = gcd(x,y) and u*x + v*y = d. This uses the extended Euclidean algorithm to compute the values satisfying Bézout's Identity.

This corresponds to Pari's gcdext function, which was renamed from bezout in Pari 2.6. The results will hence match bezout in Math::Pari.
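For native-size nonnegative inputs, the underlying algorithm can be sketched as a plain iterative extended Euclid (my_gcdext is a hypothetical name, not a module function):

```perl
use strict;
use warnings;

# Iterative extended Euclidean algorithm for nonnegative native ints.
# Invariant: $u * $x0 + $v * $y0 == $x at every step.
sub my_gcdext {
    my ($x, $y) = @_;
    my ($u, $v, $u1, $v1) = (1, 0, 0, 1);
    while ($y) {
        my $q = int($x / $y);
        ($x, $y)  = ($y,  $x - $q * $y);
        ($u, $u1) = ($u1, $u - $q * $u1);
        ($v, $v1) = ($v1, $v - $q * $v1);
    }
    return ($u, $v, $x);    # u*x0 + v*y0 == gcd(x0,y0)
}

my ($u, $v, $d) = my_gcdext(240, 46);   # d == 2, u*240 + v*46 == 2
```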

chinese

say chinese( [14,643], [254,419], [87,733] ); # 87041638

Solves a system of simultaneous congruences using the Chinese Remainder Theorem (with extension to non-coprime moduli). A list of [a,n] pairs is taken as input, each representing an equation x ≡ a mod n. If no solution exists, undef is returned. If a solution is returned, the modulus is equal to the lcm of all the given moduli (see "lcm"). In the standard case where all values of n are coprime, this is just their product. The n values must be positive integers, while the a values are integers.
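For native-size inputs, the pairwise folding behind this can be sketched with an extended gcd (egcd and crt are hypothetical helper names; the incompatible non-coprime case returns undef as described above):

```perl
use strict;
use warnings;

# Recursive extended gcd: returns (d, u, v) with u*x + v*y == d.
sub egcd {
    my ($x, $y) = @_;
    return ($x, 1, 0) if $y == 0;
    my ($d, $u, $v) = egcd($y, $x % $y);
    return ($d, $v, $u - int($x / $y) * $v);
}

# Fold [a,n] pairs together via the CRT.  Non-coprime moduli are
# allowed when the residues agree mod the gcd; returns (x, lcm).
sub crt {
    my @pairs = @_;
    my ($a, $n) = @{ shift @pairs };
    for my $pair (@pairs) {
        my ($b, $m) = @$pair;
        my ($d, $u) = egcd($n, $m);
        return undef if ($b - $a) % $d;    # incompatible congruences
        my $md  = $m / $d;
        my $t   = ($u * ((($b - $a) / $d) % $md)) % $md;
        my $lcm = $n * $md;
        ($a, $n) = (($a + $n * $t) % $lcm, $lcm);
    }
    return ($a, $n);
}

my ($x, $mod) = crt([2,3], [3,5]);   # x == 8, mod == 15
```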

vecsum

say "Totient sum 500,000: ", vecsum(euler_phi(0,500_000));

Returns the sum of all arguments, each of which must be an integer. This is similar to List::Util's sum0 function, but with a very important difference. List::Util turns all inputs into doubles and returns a double, which gives incorrect results with large integers. vecsum sums (signed) integers and returns the untruncated result. Processing is done on native integers where possible.

vecprod

say "Totient product 5,000: ", vecprod(euler_phi(1,5_000));

Returns the product of all arguments, each of which must be an integer. This is similar to List::Util's product function, but keeps all results as integers and automatically switches to bigints if needed.

vecmin

Returns the minimum of all arguments, each of which must be an integer. This is similar to List::Util's min function, but has a very important difference. List::Util turns all inputs into doubles and returns a double, which gives incorrect results with large integers. vecmin validates and compares all results as integers. The validation step will make it a little slower than List::Util's min, but it prevents accidental and unintentional use of floats.

vecmax

Returns the maximum of all arguments, each of which must be an integer. This is similar to List::Util's max function, but has a very important difference. List::Util turns all inputs into doubles and returns a double, which gives incorrect results with large integers. vecmax validates and compares all results as integers. The validation step will make it a little slower than List::Util's max, but it prevents accidental and unintentional use of floats.

vecreduce

Does a reduce operation via a left fold. Takes a block and a list as arguments. The block uses the special variables $a and $b, representing the accumulation and the next element respectively, with the result of the block becoming the new accumulation. No initial element is used, so undef will be returned for an empty list.

The interface is exactly the same as List::Util's reduce. This was done to increase portability and minimize confusion. See chapter 7 of Higher Order Perl (or many other references) for a discussion of reduce with empty or single-element lists. It is often a good idea to give an identity element as the first list argument.

While operations like vecmin, vecmax, vecsum, vecprod, etc. can be fairly easily done with this function, it will not be as efficient. There are a wide variety of other functions that can be easily made with reduce, making it a useful tool.

invmod

say "The inverse of 42 mod 2017 = ", invmod(42,2017);

Given two integers a and n, return the inverse of a modulo n. If not defined, undef is returned. If defined, then the return value multiplied by a equals 1 modulo n.

These results correspond to Pari's lift(Mod(1/a,n)). The semantics with respect to negative arguments match Pari. Notably, a negative n is negated, which differs from Math::BigInt, but in both cases the return value is still congruent to 1 modulo n as expected.

valuation

say "$n is divisible by 2 ", valuation($n,2), " times.";

Given integers n and k, returns the number of times n is divisible by k. This is a very limited version of the algebraic valuation, applied only to integers. This corresponds to Pari's valuation function. 0 is returned if n or k is one of the values -1, 0, or 1.
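The computation is just a division loop, as this sketch shows (my_valuation is a hypothetical name; the sign handling here is a simplification, and only native sizes are covered):

```perl
use strict;
use warnings;

# Count how many times k divides n.
sub my_valuation {
    my ($n, $k) = @_;
    return 0 if abs($n) <= 1 || abs($k) <= 1;   # -1, 0, 1 cases
    ($n, $k) = (abs($n), abs($k));
    my $v = 0;
    while ($n % $k == 0) { $n /= $k; $v++ }
    return $v;
}

print my_valuation(24, 2), "\n";   # 24 = 2^3 * 3, so prints 3
```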

hammingweight

Given an integer n, returns the binary Hamming weight of abs(n). This is also called the population count, and is the number of 1s in the binary representation. This corresponds to Pari's hammingweight function for t_INT arguments.

moebius

Returns μ(n), the Möbius function (also known as the Moebius, Mobius, or MoebiusMu function) for an integer input. This function is 1 if n = 1, 0 if n is not square-free (i.e. n has a repeated factor), and (-1)^t if n is a product of t distinct primes. This is an important function in prime number theory. Like SAGE, we define moebius(0) = 0 for convenience.

If called with two arguments, they define a range low to high, and the function returns an array with the value of the Möbius function for every n from low to high inclusive. Large values of high will result in a lot of memory use. The algorithm used for ranges is Deléglise and Rivat (1996) algorithm 4.1, which is a segmented version of Lioen and van de Lune (1994) algorithm 3.2.

The return values are read-only constants. This should almost never come up, but it means trying to modify aliased return values will cause an exception (modifying the returned scalar or array is fine).
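A small non-segmented Möbius sieve illustrates the idea behind the range version (the module's implementation is segmented as described above; moebius_to is a hypothetical name):

```perl
use strict;
use warnings;

# Sieve mu(0..$hi): flip the sign once per distinct prime factor,
# then zero out anything with a repeated (squared) factor.
sub moebius_to {
    my $hi = shift;
    my @mu        = (1) x ($hi + 1);
    my @composite = (0) x ($hi + 1);
    $mu[0] = 0;
    for my $p (2 .. $hi) {
        next if $composite[$p];            # only sieve with primes
        for (my $j = $p; $j <= $hi; $j += $p) {
            $composite[$j] = 1 if $j > $p;
            $mu[$j] = -$mu[$j];            # one more distinct prime
        }
        my $p2 = $p * $p;
        for (my $j = $p2; $j <= $hi; $j += $p2) {
            $mu[$j] = 0;                   # repeated factor
        }
    }
    return @mu;
}

my @mu = moebius_to(10);   # (0, 1, -1, -1, 0, -1, 1, -1, 0, 0, 1)
```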

mertens

say "Mertens(10M) = ", mertens(10_000_000); # = 1037

Returns M(n), the Mertens function for a non-negative integer input. This function is defined as sum(moebius(1..n)), but calculated more efficiently for large inputs. For example, computing Mertens(100M) takes:

The summation of individual terms via factoring is quite expensive in time, though uses O(1) space. Using the range version of moebius is much faster, but returns a 100M element array which, even though they are shared constants, is not good for memory at this size. In comparison, this function will generate the equivalent output via a sieving method that is relatively memory frugal and very fast. The current method is a simple n^1/2 version of Deléglise and Rivat (1996), which involves calculating all moebius values to n^1/2, which in turn will require prime sieving to n^1/4.

Various algorithms exist for this, using differing quantities of μ(n). The simplest way is to efficiently sum all n values. Benito and Varona (2008) show a clever and simple method that only requires n/3 values. Deléglise and Rivat (1996) describe a segmented method using only n^1/3 values. The current implementation does a simple non-segmented n^1/2 version of their method. Kuznetsov (2011) gives an alternate method that he indicates is even faster. Lastly, one of the advanced prime count algorithms could be theoretically used to create a faster solution.

euler_phi

say "The Euler totient of $n is ", euler_phi($n);

Returns φ(n), the Euler totient function (also called Euler's phi or phi function) for an integer value. This is an arithmetic function which counts the number of positive integers less than or equal to n that are relatively prime to n. Given the definition used, euler_phi will return 0 for all n < 1. This follows the logic used by SAGE. Mathematica and Pari return euler_phi(-n) for n < 0. Mathematica returns 0 for n = 0, Pari pre-2.6.2 raises an exception, and Pari 2.6.2 and newer returns 2.

If called with two arguments, they define a range low to high, and the function returns an array with the totient of every n from low to high inclusive.

jordan_totient

say "Jordan's totient J_$k($n) is ", jordan_totient($k, $n);

Returns Jordan's totient function for a given integer value. Jordan's totient is a generalization of Euler's totient, where jordan_totient(1,$n) == euler_phi($n). This counts the number of k-tuples less than or equal to n that form a coprime tuple with n. As with euler_phi, 0 is returned for all n < 1. This function can be used to generate some other useful functions, such as the Dedekind psi function, where psi(n) = J(2,n) / J(1,n).
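The multiplicative formula J_k(p^e) = p^(k*(e-1)) * (p^k - 1), taken over the prime powers of n, gives a direct sketch via trial division (my_jordan is a hypothetical name; native sizes only):

```perl
use strict;
use warnings;

# Jordan's totient from the factorization of n.
sub my_jordan {
    my ($k, $n) = @_;
    return 0 if $n < 1;
    my ($j, $d) = (1, 2);
    while ($d * $d <= $n) {
        if ($n % $d == 0) {
            my $e = 0;
            $e++, $n /= $d while $n % $d == 0;
            $j *= ($d ** $k - 1) * $d ** ($k * ($e - 1));
        }
        $d++;
    }
    $j *= ($n ** $k - 1) if $n > 1;   # remaining prime cofactor
    return $j;
}

print my_jordan(1, 10), "\n";   # equals euler_phi(10), prints 4
```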

exp_mangoldt

say "exp(lambda($_)) = ", exp_mangoldt($_) for 1 .. 100;

Returns exp(Λ(n)), the exponential of the Mangoldt function (also known as von Mangoldt's function) for an integer value. The Mangoldt function is equal to log p if n is prime or a power of a prime, and 0 otherwise. We return the exponential so all results are integers. Hence the return value for exp_mangoldt is:

p if n = p^m for some prime p and integer m >= 1
1 otherwise.

liouville

Returns λ(n), the Liouville function for a non-negative integer input. This is (-1) raised to Ω(n), the total number of prime factors counted with multiplicity.

chebyshev_theta

say chebyshev_theta(10000);

Returns θ(n), the first Chebyshev function for a non-negative integer input. This is the sum of the logarithm of each prime where p <= n. This is effectively:

my $s = 0; forprimes { $s += log($_) } $n; return $s;

chebyshev_psi

say chebyshev_psi(10000);

Returns ψ(n), the second Chebyshev function for a non-negative integer input. This is the sum of the logarithm of each prime power where p^k <= n for an integer k. An alternate but slower computation is as the summatory Mangoldt function, such as:

my $s = 0; for (1..$n) { $s += log(exp_mangoldt($_)) } return $s;

divisor_sum

This function takes a positive integer as input and returns the sum of its divisors, including 1 and itself. An optional second argument k may be given, which results in the sum of the k-th powers of the divisors being returned.

This is known as the sigma function (see Hardy and Wright section 16.7, or OEIS A000203). The API is identical to Pari/GP's sigma function. This function is useful for calculating things like aliquot sums, abundant numbers, perfect numbers, etc.

The second argument may also be a code reference, which is called for each divisor and the results are summed. This allows computation of other functions, but will be less efficient than using the numeric second argument. This corresponds to Pari/GP's sumdiv function.

For numeric second arguments (sigma computations), the result will be a bigint if necessary. For the code reference case, the user must take care to return bigints if overflow will be a concern.
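For small native inputs, the numeric form can be sketched by pairing divisors around sqrt(n) (my_divisor_sum is a hypothetical name; the module computes sigma from the factorization, which is much faster):

```perl
use strict;
use warnings;

# Naive sigma_k(n): sum d^k over every divisor d of n.
sub my_divisor_sum {
    my ($n, $k) = @_;
    $k = 1 unless defined $k;
    my $sum = 0;
    for my $d (1 .. int(sqrt($n))) {
        next if $n % $d;
        $sum += $d ** $k;               # divisor at or below sqrt(n)
        my $q = $n / $d;
        $sum += $q ** $k if $q != $d;   # paired divisor above sqrt(n)
    }
    return $sum;
}

print my_divisor_sum(28), "\n";   # 28 is perfect: sigma(28) = 56
```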

primorial

$prim = primorial(11); # 11# = 2*3*5*7*11 = 2310

Returns the primorial n# of the positive integer input, defined as the product of the prime numbers less than or equal to n. This is the OEIS series A034386: primorial numbers second definition.

primorial(0) == 1
primorial($n) == pn_primorial( prime_count($n) )

The result will be a Math::BigInt object if it is larger than the native bit size.

Be careful about which version (primorial or pn_primorial) matches the definition you want to use. Not all sources agree on the terminology, though they should give a clear definition of which of the two versions they mean. OEIS, Wikipedia, and Mathworld are all consistent, and these functions should match that terminology. This function should return the same result as the mpz_primorial_ui function added in GMP 5.1.

pn_primorial

$prim = pn_primorial(5); # p_5# = 2*3*5*7*11 = 2310

Returns the primorial number p_n# of the positive integer input, defined as the product of the first n prime numbers (compare to the factorial, which is the product of the first n natural numbers). This is the OEIS series A002110: primorial numbers first definition.

pn_primorial(0) == 1
pn_primorial($n) == primorial( nth_prime($n) )

The result will be a Math::BigInt object if it is larger than the native bit size.

consecutive_integer_lcm

$lcm = consecutive_integer_lcm($n);

Given an unsigned integer argument, returns the least common multiple of all integers from 1 to n. This can be done by manipulation of the primes up to n, resulting in much faster and memory-friendly results than using a factorial.

partitions

Calculates the partition function p(n) for a non-negative integer input. This is the number of ways of writing the integer n as a sum of positive integers, without restrictions. This corresponds to Pari's numbpart function and Mathematica's PartitionsP function. The values produced in order are OEIS series A000041.

This uses a combinatorial calculation, which means it will not be very fast compared to Pari, Mathematica, or FLINT which use the Rademacher formula using multi-precision floating point. In 10 seconds:

carmichael_lambda

Returns the Carmichael function (also called the reduced totient function, or Carmichael λ(n)) of a positive integer argument. It is the smallest positive integer m such that a^m = 1 mod n for every integer a coprime to n. This is OEIS series A002322.

kronecker

Returns the Kronecker symbol (a|n) for two integers. The possible return values with their meanings for odd prime n are:

0 a = 0 mod n
1 a is a quadratic residue mod n (a = x^2 mod n for some x)
-1 a is a quadratic non-residue mod n (no x gives a = x^2 mod n)

The Kronecker symbol extends the Jacobi symbol, which is defined only for positive odd n, to all integers n. The Jacobi symbol is itself an extension of the Legendre symbol, which is defined only for odd prime values of n. This corresponds to Pari's kronecker(a,n) function, Mathematica's KroneckerSymbol[n,m] function, and GMP's mpz_kronecker(a,n), mpz_jacobi(a,n), and mpz_legendre(a,n) functions.

factorial

Given positive integer argument n, returns the factorial of n, defined as the product of the integers 1 to n with the special case of factorial(0) = 1. This corresponds to Pari's factorial(n) and Mathematica's Factorial[n] functions.

binomial

Given integer arguments n and k, returns the binomial coefficient n*(n-1)*...*(n-k+1)/k!, also known as the choose function. Negative arguments use the Kronenburg extensions. This corresponds to Pari's binomial(n,k) function, Mathematica's Binomial[n,k] function, and GMP's mpz_bin_ui function.

For negative arguments, this matches Mathematica. Pari does not implement the n < 0, k <= n extension and instead returns 0 for this case. GMP's API does not allow negative k but otherwise matches. Math::BigInt does not implement any extensions, and the results for n < 0, k > 0 are undefined.

bernfrac

Returns the Bernoulli number B_n for an integer argument n, as a rational number represented by two Math::BigInt objects. B_1 = 1/2. This corresponds to Pari's bernfrac(n) and Mathematica's BernoulliB functions.

This currently uses the simple Brent-Harvey recurrence, so will not be nearly as fast as Pari or Mathematica which use high-precision values of Pi and Zeta. With Math::Prime::Util::GMP installed it is, however, faster than Math::Pari which uses an older algorithm.

bernreal

Returns the Bernoulli number B_n for an integer argument n, as a Math::BigFloat object using the default precision. An optional second argument may be given specifying the precision to be used.

stirling

Returns the Stirling numbers of either the first kind (default) or second kind (with a third argument of 2). It takes two non-negative integer arguments n and k. This corresponds to Pari's stirling(n,k,{type}) function and Mathematica's StirlingS1 / StirlingS2 functions.

Stirling numbers of the first kind are (-1)^(n-k) times the number of permutations of n symbols with exactly k cycles. Stirling numbers of the second kind are the number of ways to partition a set of n elements into k non-empty subsets.

znorder

$order = znorder(2, next_prime(10**16)-6);

Given two positive integers a and n, returns the multiplicative order of a modulo n. This is the smallest positive integer k such that a^k ≡ 1 mod n. Returns 1 if a = 1. Returns undef if a = 0 or if a and n are not coprime, since no value will result in 1 mod n. This corresponds to Pari's znorder(Mod(a,n)) function and Mathematica's MultiplicativeOrder[a,n] function.
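For tiny native inputs, the definition can be checked directly by repeated multiplication (my_znorder is a hypothetical name; the real implementation works from the factorization of the group order instead):

```perl
use strict;
use warnings;

# Smallest k with a^k == 1 mod n, or undef if none exists.
sub my_znorder {
    my ($a, $n) = @_;
    $a %= $n;
    return undef if $a == 0;
    my ($t, $k) = ($a, 1);
    while ($t != 1) {
        return undef if $k > $n;    # a and n cannot be coprime
        $t = ($t * $a) % $n;
        $k++;
    }
    return $k;
}

print my_znorder(3, 7), "\n";   # 3 is a primitive root mod 7: prints 6
```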

znprimroot

Given a positive integer n, returns the smallest primitive root of (Z/nZ)^*, or undef if no root exists. A root exists when euler_phi($n) == carmichael_lambda($n), which will be true for all prime n and some composites.

OEIS A033948 is a sequence of integers where the primitive root exists, while OEIS A046145 is a list of the smallest primitive roots, which is what this function produces.

znlog

$k = znlog($a, $g, $p)

Returns the integer k that solves the equation a = g^k mod p, or undef if no solution is found. This is the discrete logarithm problem.

The implementation for native integers first applies Silver-Pohlig-Hellman on the group order to possibly reduce the problem to a set of smaller problems. The subproblems are then solved using a relatively fast Shanks BSGS, as well as trial search and Pollard's Rho for discrete logarithms.

The PP implementation is less sophisticated, with only a memory-heavy BSGS being used.
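A baby-step giant-step sketch for prime p shows the memory/time trade-off: store m ≈ sqrt(p) baby steps, then walk giant steps through them (powmod and bsgs_log are hypothetical names; the inverse giant step assumes p prime so Fermat's little theorem applies):

```perl
use strict;
use warnings;

# Right-to-left binary modular exponentiation.
sub powmod {
    my ($b, $e, $n) = @_;
    my $r = 1;
    $b %= $n;
    while ($e) {
        $r = ($r * $b) % $n if $e & 1;
        $b = ($b * $b) % $n;
        $e >>= 1;
    }
    return $r;
}

# Solve a = g^k mod p by BSGS; returns k or undef.
sub bsgs_log {
    my ($a, $g, $p) = @_;
    my $m = int(sqrt($p)) + 1;
    my (%baby, $t);
    $t = 1;
    for my $j (0 .. $m - 1) {            # baby steps: g^j -> j
        $baby{$t} = $j unless exists $baby{$t};
        $t = ($t * $g) % $p;
    }
    my $giant = powmod($t, $p - 2, $p);  # g^-m via Fermat inverse
    my $gamma = $a % $p;
    for my $i (0 .. $m - 1) {            # giant steps: a * g^(-i*m)
        return $i * $m + $baby{$gamma} if exists $baby{$gamma};
        $gamma = ($gamma * $giant) % $p;
    }
    return undef;
}

print bsgs_log(75, 2, 101), "\n";   # 2^17 mod 101 == 75: prints 17
```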

legendre_phi

$phi = legendre_phi(1000000000, 41);

Given a non-negative integer n and a non-negative integer a, returns the Legendre phi function (also called Legendre's sum). This is the count of positive integers <= n which are not divisible by any of the first a primes.

RANDOM PRIMES

random_prime

Returns a pseudo-randomly selected prime that will be greater than or equal to the lower limit and less than or equal to the upper limit. If no lower limit is given, 2 is implied. Returns undef if no primes exist within the range.

The goal is to return a uniform distribution of the primes in the range, meaning for each prime in the range, the chances are equally likely that it will be seen. This removes from consideration such algorithms as PRIMEINC, which although efficient, gives very non-random output. It also implies that the returned numbers will not be evenly distributed, since the primes themselves are not evenly distributed. Stated differently, the random prime functions return a uniformly selected prime from the set of primes within the range. Hence given random_prime(1000), the numbers 2, 3, 487, 631, and 997 all have the same probability of being returned.

The configuration option use_primeinc can be set to override this and use the PRIMEINC algorithm for non-trivial sizes. This applies to all random prime functions. Never use this for crypto or if uniformly random primes are desired, but if you really don't care and just want any old prime in the range, setting this may make this run 2-4x faster.

For small numbers, a random index selection is done, which gives ideal uniformity and is very efficient with small inputs. For ranges larger than this ~16-bit threshold but within the native bit size, a Monte Carlo method is used (multiple calls to irand will be made if necessary). This also gives ideal uniformity and can be very fast for reasonably sized ranges. For even larger numbers, we partition the range, choose a random partition, then select a random prime from the partition. This gives some loss of uniformity but results in many fewer bits of randomness being consumed as well as being much faster.

If no irand function was set, then Bytes::Random::Secure is used with a non-blocking seed. This will create good quality random numbers, so there should be little reason to change unless one is generating long-term keys, where using the blocking random source may be preferred.

random_ndigit_prime

say "My 4-digit prime number is: ", random_ndigit_prime(4);

Selects a random n-digit prime, where the input is an integer number of digits. One of the primes within that range (e.g. 1000 - 9999 for 4-digits) will be uniformly selected using the irand function as described above.

If the number of digits is greater than or equal to the maximum native type, then the result will be returned as a BigInt. However, if the nobigint configuration option is on, then output will be restricted to native size numbers, and requests for more digits than natively supported will result in an error. For better performance with large bit sizes, install Math::Prime::Util::GMP.

random_nbit_prime

my $bigprime = random_nbit_prime(512);

Selects a random n-bit prime, where the input is an integer number of bits. A prime with the nth bit set will be uniformly selected, with randomness supplied via calls to the irand function as described above.

For bit sizes of 64 and lower, "random_prime" is used, which gives completely uniform results in this range. For sizes larger than 64, Algorithm 1 of Fouque and Tibouchi (2011) is used, wherein we select a random odd number for the lower bits, then loop selecting random upper bits until the result is prime. This allows a more uniform distribution than the general "random_prime" case while running slightly faster (in contrast, for large bit sizes "random_prime" selects a random upper partition then loops on the values within the partition, which very slightly skews the results towards smaller numbers).

The irand function is used for randomness, so all the discussion in "random_prime" about that applies here. The result will be a BigInt if the number of bits is greater than the native bit size. For better performance with large bit sizes, install Math::Prime::Util::GMP.

random_strong_prime

my $bigprime = random_strong_prime(512);

Constructs an n-bit strong prime using Gordon's algorithm. We consider a strong prime p to be one where

p is large. This function requires at least 128 bits.

p-1 has a large prime factor r.

p+1 has a large prime factor s

r-1 has a large prime factor t

Using a strong prime in cryptography guards against easy factoring with smoothness-based algorithms like Pollard's p-1. Rivest and Silverman (1999) present a case that using strong primes is unnecessary, and most modern cryptographic systems agree. First, the smoothness constraints do not hinder more modern factoring methods such as ECM. Second, modern factoring methods like GNFS are far faster than either method, making the point moot. Third, due to key size growth and advances in factoring and attacks, for practical purposes using large random primes offers security equivalent to strong primes.

Internally this additionally runs the BPSW probable prime test on every partial result, and constructs a primality certificate for the final result, which is verified. These provide additional checks that the resulting value has been properly constructed.

random_shawe_taylor_prime

my $bigprime = random_shawe_taylor_prime(8192);

Construct an n-bit provable prime, using the Shawe-Taylor algorithm in section C.6 of FIPS 186-4. This uses 512 bits of randomness and SHA-256 as the hash. This is a slightly simpler and older (1986) method than Maurer's 1999 construction. It is a bit faster than Maurer's method, and uses less system entropy for large sizes. The primary reason to use this rather than Maurer's method is to use the FIPS 186-4 algorithm.

Internally this additionally runs the BPSW probable prime test on every partial result, and constructs a primality certificate for the final result, which is verified. These provide additional checks that the resulting value has been properly constructed.

UTILITY FUNCTIONS

prime_precalc

prime_precalc( 1_000_000_000 );

Let the module prepare for fast operation up to a specific number. It is not necessary to call this, but it gives you more control over when memory is allocated and gives faster results for multiple calls in some cases. In the current implementation this will calculate a sieve for all numbers up to the specified number.

prime_memfree

prime_memfree;

Frees any extra memory the module may have allocated. Like with prime_precalc, it is not necessary to call this, but if you're done making calls, or want things cleaned up, you can use this. The object method might be a better choice for complicated uses.

Math::Prime::Util::MemFree->new

This is a more robust way of making sure any cached memory is freed, as it will be handled by the last MemFree object leaving scope. This means if your routines were inside an eval that died, things will still get cleaned up. If you call another function that uses a MemFree object, the cache will stay in place because you still have an object.

prime_get_config

my $cached_up_to = prime_get_config->{'precalc_to'};

Returns a reference to a hash of the current settings. The hash is a copy of the configuration, so changing it has no effect. The settings include:

verbose verbose level. 1 or more will result in extra output.
precalc_to primes up to this number are calculated
maxbits the maximum number of bits for native operations
xs 0 or 1, indicating the XS code is available
gmp 0 or 1, indicating GMP code is available
maxparam the largest value for most functions, without bigint
maxdigits the max digits in a number, without bigint
maxprime the largest representable prime, without bigint
maxprimeidx the index of maxprime, without bigint
assume_rh whether to assume the Riemann hypothesis (default 0)
use_primeinc allow the PRIMEINC random prime algorithm

prime_set_config

prime_set_config( assume_rh => 1 );

Allows setting of some parameters. Currently the only parameters are:

verbose The default setting of 0 will generate no extra output.
Setting to 1 or higher results in extra output. For
example, at setting 1 the AKS algorithm will indicate
the chosen r and s values. At setting 2 it will output
a sequence of dots indicating progress. Similarly, for
random_maurer_prime, setting 3 shows real time progress.
Factoring large numbers is another place where verbose
settings can give progress indications.
xs Allows turning off the XS code, forcing the Pure Perl
code to be used. Set to 0 to disable XS, set to 1 to
re-enable. You probably will never want to do this.
gmp Allows turning off the use of L<Math::Prime::Util::GMP>,
which means using Pure Perl code for big numbers. Set
to 0 to disable GMP, set to 1 to re-enable.
You probably will never want to do this.
assume_rh Allows functions to assume the Riemann hypothesis is
true if set to 1. This defaults to 0. Currently this
setting only impacts prime count lower and upper
bounds, but could later be applied to other areas such
as primality testing. A later version may also have a
way to indicate whether no RH, RH, GRH, or ERH is to
be assumed.
irand Takes a code ref to an irand function returning a
uniform number between 0 and 2**32-1. This will be
used for all random number generation in the module.
use_primeinc When generating random primes, allow the PRIMEINC algorithm
to be used. This can be 2-4x faster than the default
methods, but gives bad uniformity.

FACTORING FUNCTIONS

factor

Produces the prime factors of a positive number input, in numerical order. The product of the returned factors will be equal to the input. n = 1 will return an empty list, and n = 0 will return 0. This matches Pari.

In scalar context, returns Ω(n), the total number of prime factors (OEIS A001222). This corresponds to Pari's bigomega(n) function and Mathematica's PrimeOmega[n] function. This is the same result we would get by evaluating the returned array in scalar context.

The current algorithm does a little trial division, a check for perfect powers, followed by combinations of Pollard's Rho, SQUFOF, and Pollard's p-1. The combination is applied to each non-prime factor found.

Factoring bigints works with pure Perl, and can be very handy on 32-bit machines for numbers just over the 32-bit limit, but it can be very slow for "hard" numbers. Installing the Math::Prime::Util::GMP module will speed up bigint factoring a lot, and all future effort on large number factoring will be in that module. If you do not have that module for some reason, use the GMP or Pari version of bigint if possible (e.g. use bigint try => 'GMP,Pari'), which will run 2-3x faster (though still 100x slower than the real GMP code).

factor_exp

Produces pairs of prime factors and exponents in numerical factor order. This is more convenient for some algorithms. This is the same form that Mathematica's FactorInteger[n] and Pari/GP's factorint functions return. Note that Math::Pari transposes the Pari result matrix.

In scalar context, returns ω(n), the number of unique prime factors (OEIS A001221). This corresponds to Pari's omega(n) function and Mathematica's PrimeNu[n] function. This is the same result we would get by evaluating the returned array in scalar context.

The internals are identical to "factor", so all comments there apply. Just the way the factors are arranged is different.

divisors

my @divisors = divisors(30); # returns (1, 2, 3, 5, 6, 10, 15, 30)

Produces all the divisors of a positive number input, including 1 and the input number. The divisors are a power set of multiplications of the prime factors, returned as a uniqued sorted list. The result is identical to that of Pari's divisors and Mathematica's Divisors[n] functions.

In scalar context this returns the sigma0 function, the number of divisors (OEIS A000005). This is the same result as evaluating the array in scalar context.

trial_factor

my @factors = trial_factor($n);

Produces the prime factors of a positive number input. The factors will be in numerical order. For large inputs this will be very slow. Like all the specific-algorithm *_factor routines, this is not exported unless explicitly requested.

fermat_factor

my @factors = fermat_factor($n);

Produces factors, not necessarily prime, of the positive number input. The particular algorithm is Knuth's algorithm C. For small inputs this will be very fast, but it slows down quite rapidly as the number of digits increases. It is very fast for inputs with a factor close to the midpoint (e.g. a semiprime p*q where p and q are the same number of digits).

holf_factor

my @factors = holf_factor($n);

Produces factors, not necessarily prime, of the positive number input. An optional number of rounds can be given as a second parameter. It is possible the function will be unable to find a factor, in which case a single element, the input, is returned. This uses Hart's One Line Factorization with no premultiplier. It is an interesting alternative to Fermat's algorithm, and there are some inputs it can rapidly factor. Overall it has the same advantages and disadvantages as Fermat's method.
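Hart's One Line Factorization can be sketched briefly: for successive multipliers i, round sqrt(n*i) up to s and check whether s^2 mod n is a perfect square t^2, in which case gcd(s-t, n) may be a factor. An illustrative Python version with no premultiplier (not the module's implementation):

```python
from math import gcd, isqrt

def holf_factor(n, rounds=10000):
    """Hart's OLF sketch. Returns a nontrivial factor, or None."""
    for i in range(1, rounds + 1):
        s = isqrt(n * i)
        if s * s < n * i:
            s += 1                  # s = ceil(sqrt(n*i))
        m = (s * s) % n
        t = isqrt(m)
        if t * t == m:              # s^2 ≡ t^2 (mod n)
            g = gcd(s - t, n)
            if 1 < g < n:
                return g
    return None
```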

squfof_factor

my @factors = squfof_factor($n);

Produces factors, not necessarily prime, of the positive number input. An optional number of rounds can be given as a second parameter. It is possible the function will be unable to find a factor, in which case a single element, the input, is returned. This function typically runs very fast.

prho_factor

pbrent_factor

Produces factors, not necessarily prime, of the positive number input. An optional number of rounds can be given as a second parameter. These attempt to find a single factor using Pollard's Rho algorithm, either the original version or Brent's modified version. These are more specialized algorithms usually used for pre-factoring very large inputs, as they are very fast at finding small factors.
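The original Pollard Rho iteration can be sketched in a few lines of Python (Floyd cycle detection; Brent's variant batches the gcds and uses a different cycle finder). This is an illustration, not the module's code:

```python
from math import gcd

def prho_factor(n, x0=2, c=1, rounds=100000):
    """Pollard's Rho sketch with f(x) = x^2 + c mod n.
    Returns a nontrivial factor or None."""
    x = y = x0
    for _ in range(rounds):
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        g = gcd(abs(x - y), n)
        if g == n:
            return None                # cycle collapsed; retry with new c
        if g > 1:
            return g
    return None
```

Because the expected work is proportional to the square root of the smallest factor, it excels at pulling out small factors of big numbers, matching the pre-factoring use described above.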

pminus1_factor

Produces factors, not necessarily prime, of the positive number input. This is Pollard's p-1 method, using two stages with default smoothness settings of 1_000_000 for B1, and 10 * B1 for B2. This method can rapidly find a factor p of n where p-1 is smooth (it has no large factors).
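The p-1 idea can be shown with a single-stage Python sketch (the module uses two stages and a smarter exponent): raise a base to an exponent divisible by all small primes and prime powers, then take a gcd. If p-1 divides that exponent for some prime p of n, the gcd reveals p. The number 130091 = 13 * 10007 below is a hand-picked illustration (13-1 = 12 is smooth, 10007-1 = 2*5003 is not):

```python
from math import gcd

def pminus1_factor(n, B1=100000):
    """Pollard p-1 stage-1 sketch, base 2. Exponent is B1!, which is
    divisible by every prime power up to roughly B1."""
    a = 2
    for j in range(2, B1 + 1):
        a = pow(a, j, n)               # a = 2^(B1!) mod n when done
    g = gcd(a - 1, n)
    if 1 < g < n:
        return g
    return None
```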

ecm_factor

Produces factors, not necessarily prime, of the positive number input. This is the elliptic curve method using two stages.

MATHEMATICAL FUNCTIONS

ExponentialIntegral

my $Ei = ExponentialIntegral($x);

Given a non-zero floating point input x, this returns the real-valued exponential integral of x, defined as the integral of e^t/t dt from -infinity to x.

If the bignum module has been loaded, all inputs will be treated as if they were Math::BigFloat objects.

For non-BigInt/BigFloat objects, the result should be accurate to at least 14 digits.

For BigInt / BigFloat objects, we first check to see if Math::MPFR is available. If so, then it is used since it is very fast and has high accuracy. Accuracy when using MPFR will be equal to the accuracy() value of the input (or the default BigFloat accuracy, which is 40 by default).

MPFR is used for positive inputs only. If Math::MPFR is not available or the input is negative, then other methods are used: continued fractions (x < -1), rational Chebyshev approximation (-1 < x < 0), a convergent series (small positive x), or an asymptotic divergent series (large positive x). Accuracy should be at least 14 digits.
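The convergent series mentioned for small positive x is Ei(x) = γ + ln(x) + Σ x^k / (k·k!). A Python sketch (illustrative only; the module's accuracy handling is more involved):

```python
from math import log

EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def Ei_series(x, terms=60):
    """Convergent-series sketch for Ei(x), small positive x."""
    s = EULER_GAMMA + log(x)
    term = 1.0
    for k in range(1, terms + 1):
        term *= x / k              # term = x^k / k!
        s += term / k
    return s
```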

LogarithmicIntegral

my $li = LogarithmicIntegral($x);

Given a positive floating point input, returns the floating point logarithmic integral of x, defined as the integral of dt/ln t from 0 to x. If given a negative input, the function will croak. The function returns 0 at x = 0, and -infinity at x = 1.

This is often known as li(x). A related function is the offset logarithmic integral, sometimes known as Li(x), which avoids the singularity at 1. It may be defined as Li(x) = li(x) - li(2). Crandall and Pomerance use the term li0 for this function, and define li(x) = li0(x) - li0(2). Due to this terminology confusion, it is important to check which exact definition is being used.

If the bignum module has been loaded, all inputs will be treated as if they were Math::BigFloat objects.

For non-BigInt/BigFloat objects, the result should be accurate to at least 14 digits.

For BigInt / BigFloat objects, we first check to see if Math::MPFR is available. If so, then it is used, as it will return results much faster and can be more accurate. Accuracy when using MPFR will be equal to the accuracy() value of the input (or the default BigFloat accuracy, which is 40 by default).

MPFR is used for inputs greater than 1 only. If Math::MPFR is not installed or the input is less than 1, results will be calculated as Ei(ln x).

RiemannZeta

my $z = RiemannZeta($s);

Given a floating point input s where s >= 0, returns the floating point value of ζ(s)-1, where ζ(s) is the Riemann zeta function. One is subtracted to ensure maximum precision for large values of s. The zeta function is the sum from k=1 to infinity of 1 / k^s. This function only uses real arguments, so is basically the Euler Zeta function.

If the bignum module has been loaded, all inputs will be treated as if they were Math::BigFloat objects.

For non-BigInt/BigFloat objects, the result should be accurate to at least 14 digits. The XS code uses a rational Chebyshev approximation between 0.5 and 5, and a series for other values. The PP code uses an identical series for all values.
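The defining sum referred to above can be written directly in Python. This is the naive series only (the module's XS code uses a rational Chebyshev approximation and faster series, as noted):

```python
def riemann_zeta_minus_1(s, terms=100000):
    """Naive-series sketch of zeta(s) - 1 = sum_{k>=2} k^-s, for real s > 1.
    One is already subtracted, matching RiemannZeta's return convention."""
    return sum(k ** -s for k in range(2, terms + 2))
```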

For BigInt / BigFloat objects, we first check to see if the Math::MPFR module is installed. If so, then it is used, as it will return results much faster and can be more accurate. Accuracy when using MPFR will be equal to the accuracy() value of the input (or the default BigFloat accuracy, which is 40 by default).

If Math::MPFR is not installed, then results are calculated using either Borwein (1991) algorithm 2, or the basic series. Full input accuracy is attempted, but note that Math::BigFloat bug RT 43692 causes incorrect high-accuracy computations unless the fix is applied. It is also very slow. I highly recommend installing Math::MPFR for BigFloat computations.

RiemannR

my $r = RiemannR($x);

Given a positive non-zero floating point input, returns the floating point value of Riemann's R function. Riemann's R function gives a very close approximation to the prime counting function.

If the bignum module has been loaded, all inputs will be treated as if they were Math::BigFloat objects.

For non-BigInt/BigFloat objects, the result should be accurate to at least 14 digits.

For BigInt / BigFloat objects, we first check to see if the Math::MPFR module is installed. If so, then it is used, as it will return results much faster and can be more accurate. Accuracy when using MPFR will be equal to the accuracy() value of the input (or the default BigFloat accuracy, which is 40 by default). Accuracy without MPFR should be 35 digits.
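Riemann's R function has a short Gram-series form, R(x) = 1 + Σ (ln x)^k / (k·k!·ζ(k+1)), which makes the "very close approximation to the prime counting function" concrete. A rough Python sketch with a crude truncated zeta (illustrative; the module computes this with far better accuracy control):

```python
from math import log

def riemann_r(x, terms=60):
    """Gram-series sketch of Riemann's R function."""
    def zeta(s, n=10000):
        # crude truncated zeta; fine for this illustration
        return sum(j ** -s for j in range(1, n + 1))
    lx = log(x)
    s, term = 1.0, 1.0
    for k in range(1, terms + 1):
        term *= lx / k                    # term = (ln x)^k / k!
        s += term / (k * zeta(k + 1))
    return s
```

For example, riemann_r(10**6) is about 78527, very close to pi(10^6) = 78498.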

LambertW

Returns the principal branch of the Lambert W function of a real value. Given a value k, this solves for W in the equation k = W * e^W. The input must not be less than -1/e. This corresponds to Pari's lambertw function and Mathematica's LambertW function.
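Solving k = W·e^W numerically is a simple Newton iteration. A Python sketch for the principal branch (the starting guess here is an assumption for illustration, not the module's method):

```python
from math import exp, log

def lambert_w(k, iters=50):
    """Newton-iteration sketch for principal-branch W, k >= 0.
    Iterates w -= (w*e^w - k) / (e^w * (w + 1))."""
    w = log(1.0 + k)                  # crude starting guess (assumption)
    for _ in range(iters):
        ew = exp(w)
        w -= (w * ew - k) / (ew * (w + 1.0))
    return w
```

lambert_w(1.0) gives the omega constant, approximately 0.567143.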

Pi

With no arguments, returns the value of Pi as an NV. With a positive integer argument, returns the value of Pi with the requested number of digits (including the leading 3). The return value will be an NV if the number of digits fits in an NV (typically 15 or less), or a Math::BigFloat object otherwise.

PRIMALITY TESTING NOTES

Above 2^64, "is_prob_prime" performs an extra-strong BPSW test which is fast (a little less than the time to perform 3 Miller-Rabin tests) and has no known counterexamples. If you trust the primality testing done by Pari, Maple, SAGE, FLINT, etc., then this function should be appropriate for you. "is_prime" will do the same BPSW test as well as some additional testing, making it slightly more time consuming but less likely to produce a false result. This is a little more stringent than Mathematica. "is_provable_prime" constructs a primality proof. If a certificate is requested, then either BLS75 theorem 5 or ECPP is performed. Without a certificate, the method is implementation specific (currently it is identical, but later releases may use APRCL). With Math::Prime::Util::GMP installed, this is quite fast through 300 or so digits.

Math systems 30 years ago typically used Miller-Rabin tests with k bases (usually fixed bases, sometimes random) for primality testing, but these have generally been replaced by some form of BPSW as used in this module. See Pinch's 1993 paper for examples of why using k M-R tests leads to poor results. The three exceptions in common contemporary use I am aware of are:

libtommath

Uses the first k prime bases. This is problematic for cryptographic use, as there are known methods (e.g. Arnault 1994) for constructing counterexamples. The number of bases required to avoid false results is unreasonably high, hence performance is slow even if one ignores counterexamples. Unfortunately this is the multi-precision math library used for Perl 6 and at least one CPAN Crypto module.

GMP/MPIR

Uses a set of k static-random bases. The bases are randomly chosen using a PRNG that is seeded identically each call (the seed changes with each release). This offers a very slight advantage over using the first k prime bases, but not much. See, for example, Nicely's mpz_probab_prime_p pseudoprimes page.

Math::Pari (default Pari 2.1.7)

Pari 2.1.7 is the default version installed with the Math::Pari module. It uses 10 random M-R bases (the PRNG uses a fixed seed set at compile time); for example, it will indicate 9 is prime about 1 out of every 276k calls. Pari 2.3.0 was released in May 2006 and it, like all later releases through at least 2.6.1, uses BPSW / APRCL, after complaints of false results from using M-R tests.

Basically the problem is that it is just too easy to get counterexamples from running k M-R tests, forcing one to use a very large number of tests (at least 20) to avoid frequent false results. Using the BPSW test results in no known counterexamples after 30+ years and runs much faster. It can be enhanced with one or more random bases if one desires, and will still be much faster.

Using k fixed bases has another problem, which is that in any adversarial situation we can assume the inputs will be selected such that they are one of our counterexamples. Now we need absurdly large numbers of tests. This is like playing "pick my number" but the number is fixed forever at the start, the guesser gets to know everyone else's guesses and results, and can keep playing as long as they like. It's only valid if the players are completely oblivious to what is happening.
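The weakness being discussed is concrete: a single Miller-Rabin strong probable-prime test can be written in a dozen lines, and known composites pass it for particular bases. A Python sketch (not the module's code):

```python
def is_strong_probable_prime(n, a):
    """One Miller-Rabin strong probable-prime test to base a (sketch)."""
    if n < 3 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n-1 = d * 2^s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

# 2047 = 23 * 89 is composite yet passes base 2 (the smallest base-2
# strong pseudoprime), illustrating why fixed bases are risky.
```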

LIMITATIONS

Perl versions earlier than 5.8.0 have problems doing exact integer math. Some operations will flip signs, and many operations will convert intermediate or output results to doubles, which loses precision on 64-bit systems. This causes numerous functions to not work properly. The test suite will try to determine if your Perl is broken (this only applies to really old versions of Perl compiled for 64-bit when using numbers larger than ~ 2^49). The best solution is updating to a more recent Perl.

The module is thread-safe and should allow good concurrency on all platforms that support Perl threads except Win32. With Win32, either don't use threads or make sure prime_precalc is called before using primes, prime_count, or nth_prime with large inputs. This is only an issue if you use non-Cygwin Win32 and call these routines from within Perl threads.

Because the loop functions like "forprimes" use MULTICALL, there is some odd behavior with anonymous sub creation inside the block. This is shared with most XS modules that use MULTICALL, and is rarely seen because it is such an unusual use.

This can be worked around by using double braces for the function, e.g. forprimes {{ ... }} 50.

SEE ALSO

This section describes other CPAN modules available that have some feature overlap with this one. Also see the "REFERENCES" section. Please let me know if any of this information is inaccurate. Also note that just because a module doesn't match what I believe are the best set of features doesn't mean it isn't perfect for someone else.

I will use SoE to indicate the Sieve of Eratosthenes, and MPU to denote this module (Math::Prime::Util). Some quick alternatives I can recommend if you don't want to use MPU:

Math::Prime::FastSieve is the alternative module I use for basic functionality with small integers. It's fast and simple, and has a good set of features.

Math::Primality is the alternative module I use for primality testing on bigints. The downside is that it can be slow, and the functions other than primality tests are very slow.

Math::Pari if you want the kitchen sink and can install it and handle using it. There are still some functions it doesn't do well (e.g. prime count and nth_prime).

Math::Prime::XS has is_prime and primes functionality. There is no bigint support. The is_prime function uses well-written trial division, meaning it is very fast for small numbers, but terribly slow for large 64-bit numbers. MPU is similarly fast with small numbers, but becomes faster as the size increases. MPXS's prime sieve is an unoptimized non-segmented SoE which returns an array. Sieve bases larger than 10^7 start taking inordinately long and using a lot of memory (gigabytes beyond 10^10). E.g. primes(10**9, 10**9+1000) takes 36 seconds with MPXS, but only 0.0001 seconds with MPU.

Math::Prime::FastSieve supports primes, is_prime, next_prime, prev_prime, prime_count, and nth_prime. The caveat is that all functions only work within the sieved range, so are limited to about 10^10. It uses a fast SoE to generate the main sieve. The sieve is 2-3x slower than the base sieve for MPU, and is non-segmented so cannot be used for larger values. Since the functions work with the sieve, they are very fast. The fast bit-vector-lookup functionality can be replicated in MPU using prime_precalc but is not required.

Bit::Vector supports the primes and prime_count functionality in a somewhat similar way to Math::Prime::FastSieve. It is the slowest of all the XS sieves, and has the most memory use. It is faster than pure Perl code.

Crypt::Primes supports random_maurer_prime functionality. MPU has more options for random primes (n-digit, n-bit, ranged, and strong) in addition to Maurer's algorithm. MPU does not have the critical bug RT81858, and it has a more uniform distribution as well as returning a larger subset of primes (RT81871). MPU does not depend on Math::Pari, though it can run slowly for bigints unless the Math::BigInt::GMP or Math::BigInt::Pari modules are installed. Having Math::Prime::Util::GMP installed also helps MPU's performance. Crypt::Primes is hardcoded to use Crypt::Random, while MPU uses Bytes::Random::Secure and also allows plugging in a random function; this is more flexible, faster, has fewer dependencies, and uses a CSPRNG for security. MPU can return a primality certificate. What Crypt::Primes has that MPU does not is the ability to return a generator.

Math::Factor::XS calculates prime factors and factors, which correspond to the "factor" and "divisors" functions of MPU. These functions do not support bigints. Both are implemented with trial division, meaning they are very fast for really small values, but become very slow as the input gets larger (factoring 19 digit semiprimes is over 1000 times slower). The function count_prime_factors can be done in MPU using scalar factor($n). See the "EXAMPLES" section for a 2-line function replicating matches.

Math::Big version 1.12 includes primes functionality. The current code is only usable for very tiny inputs as it is incredibly slow and uses lots of memory. RT81986 has a patch to make it run much faster and use much less memory. Since it is in pure Perl it will still run quite slow compared to MPU.

Math::Big::Factors supports factorization using wheel factorization (smart trial division). It supports bigints. Unfortunately it is extremely slow on any input that isn't the product of just small factors. Even 7 digit inputs can take hundreds or thousands of times longer to factor than MPU or Math::Factor::XS. 19-digit semiprimes will take hours versus MPU's single milliseconds.

Math::Factoring is a placeholder module for bigint factoring. Version 0.02 only supports trial division (the Pollard-Rho method does not work).

Math::Prime::TiedArray allows random access to a tied primes array, almost identically to what MPU provides in Math::Prime::Util::PrimeArray. MPU has attempted to fix Math::Prime::TiedArray's shift bug (RT58151). MPU is typically much faster and will use less memory, but there are some cases where MP:TA is faster (MP:TA stores all entries up to the largest request, while MPU:PA stores only a window around the last request).

List::Gen is very interesting and includes a built-in primes iterator as well as a is_prime filter for arbitrary sequences. Unfortunately both are very slow.

Math::Primality supports is_prime, is_pseudoprime, is_strong_pseudoprime, is_strong_lucas_pseudoprime, next_prime, prev_prime, prime_count, and is_aks_prime functionality. This is a great little module that implements primality functionality. It was the first CPAN module to support the BPSW test. All inputs are processed using GMP, so it of course supports bigints. In fact, Math::Primality was made originally with bigints in mind, while MPU was originally targeted to native integers, but both have added better support for the other. The main differences are extra functionality (MPU has more functions) and performance. With native integer inputs, MPU is generally much faster, especially with "prime_count". For bigints, MPU is slower unless the Math::Prime::Util::GMP module is installed, in which case MPU is ~2x faster. Math::Primality also installs a primes.pl program, but it has much less functionality than the one included with MPU.

Math::NumSeq does not have a one-to-one mapping between functions in MPU, but it does offer a way to get many similar results such as primes, twin primes, Sophie-Germain primes, lucky primes, moebius, divisor count, factor count, Euler totient, primorials, etc. Math::NumSeq is set up for accessing these values in order rather than for arbitrary values, though a few sequences support random access. The primary advantage I see is the uniform access mechanism for a lot of sequences. For those methods that overlap, MPU is usually much faster. Importantly, most of the sequences in Math::NumSeq are limited to 32-bit indices.

"cr_combine" in Math::ModInt::ChineseRemainder is similar to MPU's "chinese", and in fact they use the same algorithm. The former module uses caching of moduli to speed up further operations. MPU does not do this. This would only be important for cases where the lcm is larger than a native int (noting that use in cryptography would always have large moduli).

Math::Pari supports a lot of features, with a great deal of overlap. In general, MPU will be faster for native 64-bit integers, while for bigints the answer varies (Pari will always be faster if Math::Prime::Util::GMP is not installed; with it, it varies by function). Note that Pari extends many of these functions to other spaces (Gaussian integers, complex numbers, vectors, matrices, polynomials, etc.) which are beyond the realm of this module. Some of the highlights:

isprime

The default Math::Pari is built with Pari 2.1.7. This uses 10 M-R tests with randomly chosen bases (fixed seed, but doesn't reset each invocation like GMP's is_probab_prime). This has a greater chance of false positives compared to the BPSW test -- some composites such as 9, 88831, 38503, etc. (OEIS A141768) have a surprisingly high chance of being indicated prime. Using isprime($n,1) will perform an n-1 proof, but this becomes unreasonably slow past 70 or so digits.

If Math::Pari is built using Pari 2.3.5 (this requires manual configuration) then the primality tests are completely different. Using ispseudoprime will perform a BPSW test and is quite a bit faster than the older test. isprime now does an APR-CL proof (fast, but no certificate).

Math::Primality uses a strong BPSW test, which is the standard BPSW test based on the 1980 paper. It has no known counterexamples (though like all these tests, we know some exist). Pari 2.3.5 (and through at least 2.6.2) uses an almost-extra-strong BPSW test for its ispseudoprime function. This is deterministic for native integers, and should be excellent for bigints, with a slightly lower chance of counterexamples than the traditional strong test. Math::Prime::Util uses the full extra-strong BPSW test, which has an even lower chance of counterexample. With Math::Prime::Util::GMP, is_prime adds 1 to 5 extra M-R tests using random bases, which further reduces the probability of a composite being allowed to pass.

primepi

Only available with version 2.3 of Pari. Similar to MPU's "prime_count" function in API, but uses a naive counting algorithm with its precalculated primes, so is not of practical use. Incidentally, Pari 2.6 (not usable from Perl) has fixed the pre-calculation requirement so it is more useful, but it is still thousands of times slower than MPU.

primes

Doesn't support ranges, requires bumping up the precalculated primes for larger numbers, which means knowing in advance the upper limit for primes. Support for numbers larger than 400M requires using Pari version 2.3.5. If that is used, sieving is about 2x faster than MPU, but doesn't support segmenting.

factorint

Similar to MPU's "factor_exp" though with a slightly different return. MPU offers "factor", which returns a flat list of prime factors, n = p1 * p2 * p3 * ... as (p1,p2,p3,...), and "factor_exp", which returns factor/exponent pairs, n = p1^e1 * p2^e2 * ... as ([p1,e1],[p2,e2],...). Pari/GP returns an array similar to the latter, while Math::Pari returns a transposed matrix: ([p1,p2,...],[e1,e2,...]). It is slower than MPU for all 64-bit inputs on an x86_64 platform, though it may be faster for large values on other platforms. With the newer Math::Prime::Util::GMP releases, bigint factoring is slightly faster on average in MPU.

eulerphi, moebius

Similar to MPU's "euler_phi" and "moebius". MPU is 2-20x faster for native integers. MPU also supports range inputs, which can be much more efficient. Without Math::Prime::Util::GMP installed, MPU is very slow with bigints. With it installed, it is about 2x slower than Math::Pari.

gcd, lcm, kronecker, znorder, znprimroot, znlog

Similar to MPU's "gcd", "lcm", "kronecker", "znorder", "znprimroot", and "znlog". Pari's znprimroot only returns the smallest root for prime powers. The behavior is undefined when the group is not cyclic (sometimes it throws an exception, sometimes it returns an incorrect answer, sometimes it hangs). MPU's "znprimroot" will always return the smallest root if it exists, and undef otherwise. Similarly, MPU's "znlog" will return the smallest x and work with non-primitive-root g, which is similar to Pari/GP 2.6, but not the older versions in Math::Pari. The performance of "znlog" is fairly good compared to older Pari/GP, but much worse than 2.6's new methods.

sigma

Similar to MPU's "divisor_sum". MPU is ~10x faster when the result fits in a native integer. Once things overflow it is fairly similar in performance. However, using Math::BigInt can slow things down quite a bit, so for best performance in these cases using a Math::GMP object is best.

numbpart, forpart

Similar to MPU's "partitions" and "forpart". These functions were introduced in Pari 2.3 and 2.6 respectively, hence are not in Math::Pari. numbpart produces identical results to partitions, but Pari is much faster. MPU's forpart is very similar to Pari's function, but produces a different ordering (MPU uses the standard anti-lexicographical order, Pari uses a size sort). Currently Pari is somewhat faster due to Perl function call overhead. When using restrictions, Pari has much better optimizations.

Overall, Math::Pari supports a huge variety of functionality and has a sophisticated and mature code base behind it (noting that the Pari library used is about 10 years old now). For native integers, typically Math::Pari will be slower than MPU. For bigints, Math::Pari may be superior and it rarely has any performance surprises. Some of the unique features MPU offers include super fast prime counts, nth_prime, ECPP primality proofs with certificates, approximations and limits for both, random primes, fast Mertens calculations, Chebyshev theta and psi functions, and the logarithmic integral and Riemann R functions. All with fairly minimal installation requirements.

PERFORMANCE

First, for those looking for the state of the art non-Perl solutions:

Primality testing

For general numbers smaller than 2000 or so digits, MPU is the fastest solution I am aware of (it is faster than Pari 2.7, PFGW, and FLINT). For very large inputs, PFGW is the fastest primality testing software I'm aware of. It has fast trial division, and is especially fast on many special forms. It does not have a BPSW test however, and there are quite a few counterexamples for a given base of its PRP test, so it is commonly used for fast filtering of large candidates. A test such as the BPSW test in this module is then recommended.

Primality proofs

For inputs over 1000 digits, Primo is the best primality proving tool available. Primo also does well below that size, but other good alternatives are David Cleaver's mpzaprcl, the APRCL from the modern Pari package, or the standalone ECPP from this module with the large polynomial set.

Factoring

yafu, msieve, and gmp-ecm are all good choices for large inputs. The factoring code in this module (and all other CPAN modules) is very limited compared to those.

Primes

primesieve and yafu are the fastest publicly available code I am aware of. Primesieve will additionally take advantage of multiple cores with excellent efficiency. Tomás Oliveira e Silva's private code may be faster for very large values, but isn't available for testing.

Note that the Sieve of Atkin is not faster than the Sieve of Eratosthenes when both are well implemented. The only Sieve of Atkin that is even competitive is Bernstein's super optimized primegen, which runs on par with the SoE in this module. The SoE's in Pari, yafu, and primesieve are all faster.

Prime Counts and Nth Prime

Outside of private research implementations doing prime counts for n > 2^64, this module should be close to state of the art in performance, and supports results up to 2^64. Further performance improvements are planned, as well as expansion to larger values.

The fastest solution for small inputs is a hybrid table/sieve method. This module does this for values below 60M. As the inputs get larger, either the tables have to grow exponentially or speed must be sacrificed. Hence this is not a good general solution for most uses.

Python's standard modules are very slow: MPMATH v0.17 primepi takes 169.5s and 25+ GB of RAM. SymPy 0.7.1 primepi takes 292.2s. However there are very fast solutions written by Robert William Hanks (included in the xt/ directory of this distribution): pure Python in 12.1s and NUMPY in 2.8s.

MPU is consistently the fastest solution, and performs the most stringent probable prime tests on bigints.

Math::Primality has a lot of overhead that makes it quite slow for native size integers. With bigints we finally see it work well.

Math::Pari built with 2.3.5 not only has a better primality test versus the default 2.1.7, but runs faster. It still has quite a bit of overhead with native size integers. Pari/GP 2.5.0 takes 11.3s, 16.9s, and 2.9s respectively for the tests above. MPU is still faster, but clearly the time for native integers is dominated by the calling overhead.

FACTORING

Factoring performance depends on the input, and the algorithm choices used are still being tuned. Math::Factor::XS is very fast when given input with only small factors, but it slows down rapidly as the smallest factor increases in size. For numbers larger than 32 bits, Math::Prime::Util can be 100x or more faster (a number with only very small factors will be nearly identical, while a semiprime may be 3000x faster). Math::Pari is much slower with native sized inputs, probably due to calling overhead. For bigints, the Math::Prime::Util::GMP module is needed or performance will be far worse than Math::Pari. With the GMP module, performance is pretty similar from 20 through 70 digits, with the caveat that the current MPU factoring uses more memory for 60+ digit numbers.

This slide presentation has a lot of data on 64-bit and GMP factoring performance I collected in 2009. Assuming you do not know anything about the inputs, trial division and optimized Fermat or Lehman work very well for small numbers (<= 10 digits), while native SQUFOF is typically the method of choice for 11-18 digits (I've seen claims that a lightweight QS can be faster for 15+ digits). Some form of Quadratic Sieve is usually used for inputs in the 19-100 digit range, and beyond that is the General Number Field Sieve. For serious factoring, I recommend looking at yafu, msieve, gmp-ecm, GGNFS, and Pari. The latest yafu should cover most uses, with GGNFS likely only providing a benefit for numbers large enough to warrant distributed processing.

PRIMALITY PROVING

The n-1 proving algorithm in Math::Prime::Util::GMP compares well to the version included in Pari. Both are pretty fast to about 60 digits, and work reasonably well to 80 or so before starting to take many minutes per number on a fast computer. Version 0.09 and newer of MPU::GMP contain an ECPP implementation that, while not state of the art compared to closed source solutions, works quite well. It averages less than a second for proving 200-digit primes including creating a certificate. Times below 200 digits are faster than Pari 2.3.5's APR-CL proof. For larger inputs the bottleneck is a limited set of discriminants, and time becomes more variable. There is a larger set of discriminants on github that help, with 300-digit primes taking ~5 seconds on average and typically under a minute for 500-digits. For primality proving with very large numbers, I recommend Primo.

"random_nbit_prime" is reasonably fast, and for most purposes should suffice. If good uniformity isn't important, the use_primeinc config option can be set and double the speed. For cryptographic purposes, one may want additional tests or a proven prime. Additional tests are quite cheap, as shown by the time for three extra M-R and a Frobenius test. At these bit sizes, the chances a composite number passes BPSW, three more M-R tests, and a Frobenius test is extraordinarily small.

"random_proven_prime" provides a randomly selected prime with an optional certificate, without specifying the particular method. Below 512 bits, using "is_provable_prime"("random_nbit_prime") is typically faster than Maurer's algorithm, but becomes quite slow as the bit size increases. This leaves the decision of the exact method of proving the result to the implementation.

"random_maurer_prime" constructs a provable prime. A primality test is run on each intermediate, and it also constructs a complete primality certificate which is verified at the end (and can be returned). While the result is uniformly distributed, only about 10% of the primes in the range are selected for output. This is a result of the FastPrime algorithm and is usually unimportant.

"random_shawe_taylor_prime" similarly constructs a provable prime. It uses a simpler construction method. The implementation uses a single large random seed followed by SHA-256 as specified by FIPS 186-4. As seen, it is a bit faster than the Maurer implementation.

"maurer" in Crypt::Primes times are included for comparison. It is pretty fast for small sizes but gets slow as the size increases. It does not perform any primality checks on the intermediate results or the final result (I highly recommended you run a primality test on the output). Additionally important for servers, "maurer" in Crypt::Primes uses excessive system entropy and can grind to a halt if /dev/random is exhausted (it can take days to return). The times above are on a machine running HAVEGED so never waits for entropy. Without this, the times would be much higher.

AUTHORS

Dana Jacobsen <dana@acm.org>

ACKNOWLEDGEMENTS

Eratosthenes of Cyrene provided the elegant and simple algorithm for finding primes.

Terje Mathisen, A.R. Quesada, and B. Van Pelt all had useful ideas which I used in my wheel sieve.

The SQUFOF implementation being used is a slight modification to the public domain racing version written by Ben Buhrow. Enhancements with ideas from Ben's later code as well as Jason Papadopoulos's public domain implementations are planned for a later version.

The LMO implementation is based on the 2003 preprint from Christian Bau, as well as the 2006 paper from Tomás Oliveira e Silva. I also want to thank Kim Walisch for the many discussions about prime counting.

Manuel Benito and Juan L. Varona, "Recursive formulas related to the summation of the Möbius function", The Open Mathematics Journal, v1, pp 25-34, 2007. Among many other things, shows a simple formula for computing the Mertens functions with only n/3 Möbius values (not as fast as Deléglise and Rivat, but really simple). http://www.unirioja.es/cu/jvarona/downloads/Benito-Varona-TOMATJ-Mertens.pdf

Henri Cohen, "A Course in Computational Algebraic Number Theory", Springer, 1996. Practical computational number theory from the team lead of Pari. Lots of explicit algorithms.

Marc Deléglise and Joël Rivat, "Computing the summation of the Möbius function", Experimental Mathematics, v5, n4, pp 291-295, 1996. Enhances the Möbius computation in Lioen/van de Lune, and gives a very efficient way to compute the Mertens function. http://projecteuclid.org/euclid.em/1047565447