A 1-dimensional non-rebinnable bin holding double elements, with scalable quantile operations defined upon it;
Using little main memory, it quickly computes approximate quantiles over very large data sequences, with or even without a-priori knowledge of the number of elements to be filled;
Conceptually a strongly lossily compressed multiset (or bag);
Guarantees to respect the worst case approximation error specified upon instance construction.
First see the package summary and javadoc tree view to get the broad picture.

Motivation and Problem:
Intended to help scale applications requiring quantile computation.
Quantile computation on very large data sequences is problematic, for the following reasons:
Computing quantiles requires sorting the data sequence.
To sort a data sequence the entire data sequence needs to be available.
Thus, data cannot be thrown away during filling (as done by static bins like StaticBin1D and MightyStaticBin1D).
It needs to be kept, either in main memory or on disk.
There is often not enough main memory available.
Thus, during filling data needs to be streamed onto disk.
Sorting disk resident data is prohibitively time consuming.
As a consequence, traditional methods either need very large memories (like DynamicBin1D) or time consuming disk based sorting.

This class proposes to efficiently solve the problem, at the expense of producing approximate rather than exact results.
It can deal with infinitely many elements without resorting to disk.
The main memory requirements are smaller than for any other known approximate technique by an order of magnitude.
They get even smaller if an upper limit on the maximum number of elements ever to be added is known a-priori.

Approximation error:
The approximation guarantees are parametrizable and explicit but probabilistic, and apply for arbitrary value distributions and arrival distributions of the data sequence.
In other words, this class guarantees to respect the worst case approximation error specified upon instance construction to a certain probability.
Of course, if it is specified that the approximation error should not exceed some number very close to zero,
this class will potentially consume just as much memory as any of the traditional exact techniques would do.
However, for errors larger than 10^-5, its memory requirements are modest, as shown by the table below.

Main memory requirements:
Given in megabytes, assuming a single element (double) takes 8 byte.
The number of elements that fit into a given number of megabytes is then MB*1024*1024/8.
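As a sketch of this arithmetic (assuming 8 bytes per double element; the class name is made up for illustration):

```java
public class MemoryCapacity {
    // Number of double elements that fit into the given number of megabytes,
    // assuming each element occupies 8 bytes.
    static long elementsPerMegabytes(long mb) {
        return mb * 1024L * 1024L / 8L;
    }
}
```

For example, one megabyte holds 131072 double elements.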

Parameters:

epsilon - the maximum allowed approximation error on quantiles; in [0.0,1.0].
To get exact rather than approximate quantiles, set epsilon=0.0.

delta - the probability allowed that the approximation error fails to be smaller than epsilon; in [0.0,1.0].
To avoid probabilistic answers, set delta=0.0.
For example, delta = 0.0001 is equivalent to a confidence of 99.99%.

quantiles - the number of quantiles to be computed; in [0,Integer.MAX_VALUE].

is N known? - specifies whether the exact size of the dataset over which quantiles are to be computed is known.

Nmax - the exact dataset size, if known. Otherwise, an upper limit on the dataset size. If no upper limit is known set to infinity (Long.MAX_VALUE).

Nmax=inf - we are sure that exactly (known) or less than (unknown) infinitely many elements will be added.
Nmax=10^6 - we are sure that exactly (known) or less than (unknown) 10^6 elements will be added.
Nmax=10^7 - we are sure that exactly (known) or less than (unknown) 10^7 elements will be added.
Nmax=10^8 - we are sure that exactly (known) or less than (unknown) 10^8 elements will be added.

Gurmeet Singh Manku, Sridhar Rajagopalan and Bruce G. Lindsay,
Approximate Medians and other Quantiles in One Pass and with Limited Memory,
Proc. of the 1998 ACM SIGMOD Int. Conf. on Management of Data.

The broad picture is as follows. Two concepts are used: Shrinking and Sampling.
Shrinking takes a data sequence, sorts it, and produces a shrunken data sequence by picking every k-th element and throwing away all the rest.
The shrunken data sequence is an approximation of the original data sequence.
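The shrinking step can be sketched as follows (a minimal illustration only, not the library's internal implementation; the class and method names are hypothetical):

```java
import java.util.Arrays;

public class Shrink {
    // Sorts the buffer and keeps one representative from each group of k
    // consecutive sorted elements (offset k/2 centers the pick), discarding
    // the rest. The result approximates the original distribution.
    static double[] shrink(double[] buffer, int k) {
        double[] sorted = buffer.clone();
        Arrays.sort(sorted);
        double[] out = new double[sorted.length / k];
        for (int i = 0; i < out.length; i++) {
            out[i] = sorted[i * k + k / 2]; // representative of group i
        }
        return out;
    }
}
```

For example, shrinking {5,1,4,2,3,6} with k=2 sorts to {1,2,3,4,5,6} and keeps {2,4,6}.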

Imagine a large data sequence (residing on disk or being generated in memory on the fly) and a main memory block of n=b*k elements (b is the number of buffers, k is the number of elements per buffer).
Fill elements from the data sequence into the block until it is full or the data sequence is exhausted.
When the block (or a subset of buffers) is full and the data sequence is not exhausted, apply shrinking to lossily compress a number of buffers into one single buffer.
Repeat these steps until all elements of the data sequence have been consumed.
Now the block is a shrunken approximation of the original data sequence.
Treating it as if it were the original data sequence, we can determine quantiles in main memory.
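Treating the block as the data sequence, a quantile can then be read off a sorted copy of it (again an illustrative sketch; index rounding conventions differ between implementations):

```java
import java.util.Arrays;

public class BlockQuantile {
    // Returns the phi-quantile of the block, treating all block elements
    // as equally weighted representatives of the original data sequence.
    static double quantile(double[] block, double phi) {
        double[] sorted = block.clone();
        Arrays.sort(sorted);
        int index = (int) Math.round(phi * (sorted.length - 1));
        return sorted[index];
    }
}
```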

Now, the whole thing boils down to one question: can we choose b and k (the number of buffers and the buffer size) such that b*k is minimized,
yet quantiles determined upon the block are guaranteed to be no more than some epsilon away from the true quantiles?
It turns out, we can. It also turns out that the required main memory block size n=b*k is usually moderate (see the table above).

The theme can be combined with random sampling to further reduce main memory requirements, at the expense of probabilistic guarantees.
Sampling filters the data sequence and feeds only selected elements to the algorithm outlined above.
Sampling is turned on or off, depending on the parametrization.
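Sampling can be sketched as a simple Bernoulli filter in front of the shrinking algorithm (illustrative only; the actual scheme in the cited paper adapts the sampling rate as elements arrive):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SampleFilter {
    // Passes each element through with probability `rate`, discarding the
    // rest; only the kept elements are fed to the quantile algorithm.
    static List<Double> sample(double[] data, double rate, long seed) {
        Random random = new Random(seed);
        List<Double> kept = new ArrayList<>();
        for (double v : data) {
            if (random.nextDouble() < rate) kept.add(v);
        }
        return kept;
    }
}
```

This is where the probabilistic guarantee comes from: the sampled subset only represents the full sequence up to some probability delta.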

This quick overview does not go into important details, such as assigning proper weights to buffers, how to choose subsets of buffers to shrink, etc.
For more information consult the paper cited above.

splitApproximately(DoubleArrayList percentages,
int k)
Divides (rebins) a copy of the receiver at the given percentage boundaries into bins and returns these bins, such that each bin approximately reflects the data elements of its range.

splitApproximately(IAxis axis,
int k)
Divides (rebins) a copy of the receiver at the given interval boundaries into bins and returns these bins, such that each bin approximately reflects the data elements of its range.

Constructs and returns an empty bin that, under the given constraints, minimizes the amount of memory needed.
Some applications exactly know in advance over how many elements quantiles are to be computed.
Provided with such information the main memory requirements of this class are small.
Other applications don't know in advance over how many elements quantiles are to be computed.
However, some of them can give an upper limit, which will reduce main memory requirements.
For example, if elements are selected from a database and filled into histograms, it is usually not known in advance how many elements are being filled, but one may know that at most S elements, the number of elements in the database, are filled.
A third type of application knows nothing at all about the number of elements to be filled;
from zero to infinitely many elements may actually be filled.
This method efficiently supports all three types of applications.

Parameters:

known_N - specifies whether the number of elements over which quantiles are to be computed is known or not.

N - if known_N==true, the number of elements over which quantiles are to be computed.
if known_N==false, the upper limit on the number of elements over which quantiles are to be computed.
In other words, the maximum number of elements ever to be added.
If such an upper limit is a-priori unknown, then set N = Long.MAX_VALUE.

epsilon - the approximation error which is guaranteed not to be exceeded (e.g. 0.001) (0 <= epsilon <= 1).
To get exact rather than approximate quantiles, set epsilon=0.0.

quantiles - the number of quantiles to be computed (e.g. 100) (quantiles >= 1).
If unknown in advance, set this number large, e.g. quantiles >= 10000.

hasSumOfLogarithms - Tells whether MightyStaticBin1D.sumOfLogarithms() can return meaningful results.
Set this parameter to false if measures of sum of logarithms, geometric mean and product are not required.

hasSumOfInversions - Tells whether MightyStaticBin1D.sumOfInversions() can return meaningful results.
Set this parameter to false if measures of sum of inversions, harmonic mean and sumOfPowers(-1) are not required.

maxOrderForSumOfPowers - The maximum order k for which MightyStaticBin1D.sumOfPowers(int) can return meaningful results.
Set this parameter to at least 3 if the skew is required, to at least 4 if the kurtosis is required.
In general, if moments are required set this parameter at least as large as the largest required moment.
This method always substitutes Math.max(2,maxOrderForSumOfPowers) for the parameter passed in.
Thus, sumOfPowers(0..2) always returns meaningful results.

quantiles

Returns the quantiles of the specified percentages.
For implementation reasons, this is considerably more efficient than calling quantile(double) several times.

Returns:

the quantiles.

sizeOfRange

public int sizeOfRange(double minElement,
double maxElement)

Returns how many elements are contained in the range [minElement,maxElement].
Does linear interpolation if one or both of the parameter elements are not contained.
Returns exact or approximate results, depending on the parametrization of this class or subclasses.

Parameters:

minElement - the minimum element to search for.

maxElement - the maximum element to search for.

Returns:

the number of elements in the range.
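A simplified stand-in for this behavior computes the range size from a linearly interpolated quantile inverse over a sorted copy of the data (the class itself works on its compressed summary instead; names here are hypothetical):

```java
import java.util.Arrays;

public class RangeSize {
    // Fraction of elements <= x, linearly interpolating between neighboring
    // sorted elements when x falls strictly between them.
    static double quantileInverse(double[] sorted, double x) {
        if (x < sorted[0]) return 0.0;
        if (x >= sorted[sorted.length - 1]) return 1.0;
        int i = Arrays.binarySearch(sorted, x);
        if (i < 0) i = -i - 2; // index of the largest element <= x
        double frac = (x - sorted[i]) / (sorted[i + 1] - sorted[i]);
        return (i + 1 + frac) / sorted.length;
    }

    // Approximate number of elements in [minElement, maxElement].
    static double sizeOfRange(double[] sorted, double minElement, double maxElement) {
        return sorted.length
            * (quantileInverse(sorted, maxElement) - quantileInverse(sorted, minElement));
    }
}
```

For sorted data {10,20,30,40}, sizeOfRange(15, 35) interpolates to 2.0, matching the two elements (20 and 30) actually inside the range.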

splitApproximately

Divides (rebins) a copy of the receiver at the given percentage boundaries into bins and returns these bins, such that each bin approximately reflects the data elements of its range.
The receiver is not physically rebinned (divided); it stays unaffected by this operation.
The returned bins are such that, had one filled elements into multiple bins
instead of one single all-encompassing bin, those multiple bins would have approximately the same statistical measures as the ones returned by this method.

The split(...) methods are particularly well suited for real-time interactive rebinning (the famous "scrolling slider" effect).

Passing equi-distant percentages like (0.0, 0.2, 0.4, 0.6, 0.8, 1.0) into this method will yield bins of an equi-depth histogram, i.e. a histogram with bin boundaries adjusted such that each bin contains the same number of elements, in this case 20% each.
Equi-depth histograms can be useful if, for example, not enough properties of the data to be captured are known a-priori to define reasonable bin boundaries (partitions).
For example, when guesses about minima and maxima are highly unreliable.
Or when chances are that by focusing too much on one particular area, other important areas and characteristics of a data set may be missed.
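An equi-depth split can be sketched by reading bin boundaries off the quantile function; here a sorted array stands in for the bin's (possibly approximate) quantile computation:

```java
import java.util.Arrays;

public class EquiDepth {
    // Boundary values that split the data into `bins` parts holding
    // (approximately) equal numbers of elements.
    static double[] boundaries(double[] data, int bins) {
        double[] sorted = data.clone();
        Arrays.sort(sorted);
        double[] bounds = new double[bins + 1];
        for (int i = 0; i <= bins; i++) {
            // i/bins-quantile, i.e. the value below which i/bins of the data lies
            int index = (int) Math.round((double) i * (sorted.length - 1) / bins);
            bounds[i] = sorted[index];
        }
        return bounds;
    }
}
```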

Implementation:

The receiver is divided into s = percentages.size()-1 intervals (bins).
For each interval I, its minimum and maximum elements are determined based upon quantile computation.
Further, each interval I is split into k equi-percent-distant subintervals (sub-bins).
In other words, an interval is split into subintervals such that each subinterval contains the same number of elements.

For each subinterval S, its minimum and maximum are determined, again, based upon quantile computation.
They yield an approximate arithmetic mean am = (min+max)/2 of the subinterval.
A subinterval is treated as if it contained only elements equal to the mean am.
Thus, if the subinterval contains, say, n elements, it is assumed to consist of n mean elements (am,am,...,am).
A subinterval's sum of elements, sum of squared elements, sum of inversions, etc. are then approximated using such a sequence of mean elements.

Finally, the statistics measures of an interval I are computed by summing up (integrating) the measures of its subintervals.

Accuracy:

The accuracy depends on the accuracy of quantile computation and on the number of subintervals per interval (the resolution).
Objects of this class compute exact or approximate quantiles, depending on the parameters used upon instance construction.
Objects of subclasses may always compute exact quantiles, as is the case for DynamicBin1D.
Most importantly for this class QuantileBin1D, a reasonably small epsilon (e.g. 0.01, perhaps 0.001) should be used upon instance construction.
The confidence parameter delta is less important; you may find delta=0.00001 appropriate.
The larger the resolution, the smaller the approximation error, up to some limit.
Integrating over only a few subintervals per interval will yield very crude approximations.
If the resolution is set to a reasonably large number, say 10..100, many small subintervals are integrated, resulting in more accurate results.
Note that for good accuracy, the number of quantiles computable with the given approximation guarantees should be specified sufficiently large upon instance construction.

Example: assume a bin holding N=100 elements is split at the percentage boundaries (0.0, 0.1, 0.2, 0.5, 0.9, 1.0), yielding 5 bins:

bin 0 ranges from [0%..10%) and holds the smallest 10% of the sorted elements.

bin 1 ranges from [10%..20%) and holds the next smallest 10% of the sorted elements.

bin 2 ranges from [20%..50%) and holds the next smallest 30% of the sorted elements.

bin 3 ranges from [50%..90%) and holds the next smallest 40% of the sorted elements.

bin 4 ranges from [90%..100%) and holds the largest 10% of the sorted elements.

The statistics measures for each bin are to be computed at a resolution of 2 subbins per bin.
Thus, the statistics measures of a bin are the integrated measures over 2 subbins, each containing the same amount of elements:

bin 0 has a subbin ranging from [ 0%.. 5%) and a subbin ranging from [ 5%..10%).

bin 1 has a subbin ranging from [10%..15%) and a subbin ranging from [15%..20%).

bin 2 has a subbin ranging from [20%..35%) and a subbin ranging from [35%..50%).

bin 3 has a subbin ranging from [50%..70%) and a subbin ranging from [70%..90%).

bin 4 has a subbin ranging from [90%..95%) and a subbin ranging from [95%..100%).

Let's concentrate on the subbins of bin 0. Each subbin spans 5% of the elements, i.e. 5 elements.

Assume the subbin A=[0%..5%) has a minimum of 300 and a maximum of 350 (0% of all elements are less than 300, 5% of all elements are less than 350).

Assume the subbin B=[5%..10%) has a minimum of 350 and a maximum of 550 (5% of all elements are less than 350, 10% of all elements are less than 550).

Finally, the statistics measures of bin 0 are computed by summing up (integrating) the measures of its subintervals:
Bin 0 has a size of N*(10%-0%)=10 elements (we knew that already), a sum of 1625+2250=3875, a sum of squares of 528125+1012500=1540625, a sum of inversions of 5/325+5/450 ≈ 0.0154+0.0111 = 0.0265, etc.
From these follow other measures such as mean=3875/10=387.5, rms = sqrt(1540625 / 10)=392.5, etc.
The other bins are computed analogously.
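The bin 0 arithmetic above can be reproduced in a few lines (values taken directly from the example; each subbin of n elements is treated as n copies of its mean am):

```java
public class SubbinApprox {
    // Approximates a subbin's sums by treating its n elements as n copies
    // of the mean am = (min + max) / 2.
    // Returns {sum, sum of squares, sum of inversions}.
    static double[] sums(double min, double max, int n) {
        double am = (min + max) / 2.0;
        return new double[] { n * am, n * am * am, n / am };
    }

    public static void main(String[] args) {
        double[] a = sums(300, 350, 5); // subbin A = [0%..5%),  am = 325
        double[] b = sums(350, 550, 5); // subbin B = [5%..10%), am = 450
        double sum = a[0] + b[0];           // 1625 + 2250 = 3875
        double sumSq = a[1] + b[1];         // 528125 + 1012500 = 1540625
        double mean = sum / 10;             // 387.5
        double rms = Math.sqrt(sumSq / 10); // about 392.5
        System.out.println(sum + " " + sumSq + " " + mean + " " + rms);
    }
}
```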

Parameters:

percentages - the percentage boundaries at which the receiver shall be split.

splitApproximately

Divides (rebins) a copy of the receiver at the given interval boundaries into bins and returns these bins, such that each bin approximately reflects the data elements of its range.
For each interval boundary of the axis (including -infinity and +infinity), computes the percentage (quantile inverse) of elements less than the boundary.
Then lets splitApproximately(DoubleArrayList,int) do the real work.