Classic Bloom filters generally require a priori knowledge of the data set
in order to allocate an appropriately sized bit array. This works well for
offline processing, but online processing typically involves unbounded data
streams. With enough data, a traditional Bloom filter "fills up", after
which it has a false-positive probability of 1.

Boom Filters are useful for situations where the size of the data set isn't
known ahead of time. For example, a Stable Bloom Filter can be used to
deduplicate events from an unbounded event stream with a specified upper
bound on false positives and minimal false negatives. Alternatively, an
Inverse Bloom Filter is ideal for deduplicating a stream where duplicate
events are relatively close together. This results in no false positives
and, depending on how close together duplicates are, a small probability of
false negatives. Scalable Bloom Filters place a tight upper bound on false
positives while avoiding false negatives but require allocating memory
proportional to the size of the data set. Counting Bloom Filters and Cuckoo
Filters are useful for cases which require adding and removing elements to and
from a set.

For large or unbounded data sets, calculating the exact cardinality is
impractical. HyperLogLog uses a fraction of the memory while providing an
accurate approximation. Similarly, Count-Min Sketch provides an efficient way
to estimate event frequency for data streams. TopK tracks the top-k most
frequent elements.

MinHash is a probabilistic algorithm which approximates the similarity
between two sets. This can be used to cluster or compare documents by
splitting the corpus into a bag of words. MinHash returns the approximated
similarity ratio of the two bags. The similarity is less accurate for very
small bags of words.
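As a concrete illustration of the idea (a self-contained sketch, not this package's MinHash API; the names below are invented for the example), each bag's signature is the minimum hash value under each of k salted hash functions, and the similarity estimate is the fraction of functions on which the two minima agree:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashWord hashes a word under one of k salted FNV hash functions.
func hashWord(word string, seed uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte{byte(seed), byte(seed >> 8), byte(seed >> 16), byte(seed >> 24)})
	h.Write([]byte(word))
	return h.Sum32()
}

// minhashSimilarity estimates the Jaccard similarity of two bags of words:
// for each of k hash functions, compare the minimum hash value over each
// bag; the fraction of functions whose minima agree approximates the
// similarity ratio. Small bags give noisier minima, hence less accuracy.
func minhashSimilarity(a, b []string, k int) float64 {
	matches := 0
	for i := 0; i < k; i++ {
		seed := uint32(i)
		minA, minB := ^uint32(0), ^uint32(0)
		for _, w := range a {
			if h := hashWord(w, seed); h < minA {
				minA = h
			}
		}
		for _, w := range b {
			if h := hashWord(w, seed); h < minB {
				minB = h
			}
		}
		if minA == minB {
			matches++
		}
	}
	return float64(matches) / float64(k)
}

func main() {
	doc1 := []string{"the", "quick", "brown", "fox"}
	doc2 := []string{"the", "quick", "brown", "dog"}
	fmt.Printf("similarity ≈ %.2f\n", minhashSimilarity(doc1, doc2, 128))
}
```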

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false positives but a zero probability of false
negatives.

Increment will increment the value in the specified bucket by the provided
delta. A bucket can be decremented by providing a negative delta. The value
is clamped to zero and the maximum bucket value. Returns itself to allow for
chaining.

A Count-Min Sketch (CMS) is a probabilistic data structure which
approximates the frequency of events in a data stream. Unlike a hash map, a
CMS uses sub-linear space at the expense of a configurable error factor.
Similar to Counting Bloom filters, items are hashed to a series of buckets,
whose counters are incremented. The frequency of an item is estimated by
taking the minimum of the item's respective counter values.

Count-Min Sketches are useful for counting the frequency of events in
massive data sets or unbounded streams online. In these situations, storing
the entire data set or allocating counters for every event in memory is
impractical. It may be possible for offline processing, but real-time
processing requires fast, space-efficient solutions like the CMS. For
approximating set cardinality, refer to the HyperLogLog.
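The hashing scheme described above can be sketched in a few lines. This is an illustrative toy, not the package's implementation; the names are invented for the example, and FNV with a per-row salt stands in for properly independent hash functions:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// cms is a minimal Count-Min Sketch: d rows of w counters. Each item is
// hashed once per row; Count returns the minimum of the item's counters,
// which may overestimate (hash collisions) but never underestimates.
type cms struct {
	d, w  int
	table [][]uint64
}

func newCMS(d, w int) *cms {
	t := make([][]uint64, d)
	for i := range t {
		t[i] = make([]uint64, w)
	}
	return &cms{d: d, w: w, table: t}
}

// index derives the bucket for data in the given row via a salted hash.
func (s *cms) index(row int, data []byte) int {
	h := fnv.New64a()
	h.Write([]byte{byte(row)})
	h.Write(data)
	return int(h.Sum64() % uint64(s.w))
}

func (s *cms) Add(data []byte) {
	for i := 0; i < s.d; i++ {
		s.table[i][s.index(i, data)]++
	}
}

func (s *cms) Count(data []byte) uint64 {
	min := ^uint64(0)
	for i := 0; i < s.d; i++ {
		if c := s.table[i][s.index(i, data)]; c < min {
			min = c
		}
	}
	return min
}

func main() {
	s := newCMS(4, 1<<10)
	for i := 0; i < 3; i++ {
		s.Add([]byte("login"))
	}
	s.Add([]byte("logout"))
	fmt.Println(s.Count([]byte("login")), s.Count([]byte("logout")))
}
```

The width w controls the overestimation error and the depth d controls the probability of hitting that error, which is the configurable trade-off mentioned above.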

ReadDataFrom reads a binary representation of the CMS data written by
WriteDataTo() from an io stream. It returns the number of bytes read and
any error encountered. If the serialized CMS configuration differs from the
receiver's, an error describing the expected parameters is returned.

A Counting Bloom Filter (CBF) provides a way to remove elements by using an
array of n-bit buckets. When an element is added, the respective buckets are
incremented. To remove an element, the respective buckets are decremented. A
query checks that each of the respective buckets are non-zero. Because CBFs
allow elements to be removed, they introduce a non-zero probability of false
negatives in addition to the possibility of false positives.

Counting Bloom Filters are useful for cases where elements are both added
and removed from the data set. Since they use n-bit buckets, CBFs use
roughly n-times more memory than traditional Bloom filters.
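The bucket mechanics can be sketched as follows. This is an illustrative toy, not the package's implementation; it fixes the bucket width at 8 bits and uses invented names:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// countingBloom is a minimal Counting Bloom Filter sketch with 8-bit
// buckets. Add increments the element's k buckets, Remove decrements
// them, and Test reports membership when all k buckets are non-zero.
type countingBloom struct {
	buckets []uint8
	k       int
}

func newCountingBloom(m, k int) *countingBloom {
	return &countingBloom{buckets: make([]uint8, m), k: k}
}

func (c *countingBloom) indexes(data []byte) []int {
	idx := make([]int, c.k)
	for i := range idx {
		h := fnv.New64a()
		h.Write([]byte{byte(i)}) // salt to derive k hash functions
		h.Write(data)
		idx[i] = int(h.Sum64() % uint64(len(c.buckets)))
	}
	return idx
}

func (c *countingBloom) Add(data []byte) {
	for _, i := range c.indexes(data) {
		if c.buckets[i] < 255 { // clamp at the maximum bucket value
			c.buckets[i]++
		}
	}
}

func (c *countingBloom) Remove(data []byte) {
	for _, i := range c.indexes(data) {
		if c.buckets[i] > 0 { // clamp at zero
			c.buckets[i]--
		}
	}
}

func (c *countingBloom) Test(data []byte) bool {
	for _, i := range c.indexes(data) {
		if c.buckets[i] == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newCountingBloom(1<<12, 3)
	f.Add([]byte("alice"))
	fmt.Println(f.Test([]byte("alice"))) // true
	f.Remove([]byte("alice"))
	fmt.Println(f.Test([]byte("alice"))) // false
}
```

The false negatives mentioned above arise when removing one element decrements buckets shared with another element, which this sketch makes easy to see.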

NewCountingBloomFilter creates a new Counting Bloom Filter optimized to
store n items with a specified target false-positive rate and bucket size.
If you don't know how many bits to use for buckets, use
NewDefaultCountingBloomFilter for a sensible default.

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false positives and false negatives.

A Cuckoo Filter is a Bloom filter variation which provides support for
removing elements without significantly degrading space and performance. It
works by using a cuckoo hashing scheme for inserting items. Instead of
storing the elements themselves, it stores their fingerprints which also
allows for item removal without false negatives (if you don't attempt to
remove an item not contained in the filter).
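The mechanics can be sketched as follows. This is an illustrative toy, not the package's code: it assumes a power-of-two bucket count so the XOR partner mapping is self-inverse, uses 1-byte fingerprints and invented names, and returns a bool from Add where the package returns an error:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

const (
	bucketSize = 4   // fingerprints per bucket
	maxKicks   = 500 // relocation attempts before declaring the filter full
)

// cuckooFilter stores 1-byte fingerprints in one of two candidate buckets.
// The alternate bucket is derived by XORing the current index with the
// hash of the fingerprint, so items can be relocated without rehashing.
type cuckooFilter struct {
	buckets [][]byte
	n       uint64 // must be a power of two so altIndex is an involution
}

func hash64(data []byte) uint64 {
	h := fnv.New64a()
	h.Write(data)
	return h.Sum64()
}

func newCuckooFilter(n uint64) *cuckooFilter {
	return &cuckooFilter{buckets: make([][]byte, n), n: n}
}

func (c *cuckooFilter) altIndex(i uint64, fp byte) uint64 {
	return (i ^ hash64([]byte{fp})) % c.n
}

func (c *cuckooFilter) Add(data []byte) bool {
	fp := byte(hash64(data))
	i1 := hash64(data) % c.n
	for _, i := range []uint64{i1, c.altIndex(i1, fp)} {
		if len(c.buckets[i]) < bucketSize {
			c.buckets[i] = append(c.buckets[i], fp)
			return true
		}
	}
	// Both buckets full: evict a random fingerprint and relocate it.
	i := i1
	for k := 0; k < maxKicks; k++ {
		j := rand.Intn(len(c.buckets[i]))
		fp, c.buckets[i][j] = c.buckets[i][j], fp
		i = c.altIndex(i, fp)
		if len(c.buckets[i]) < bucketSize {
			c.buckets[i] = append(c.buckets[i], fp)
			return true
		}
	}
	return false // filter is considered full
}

func (c *cuckooFilter) Test(data []byte) bool {
	fp := byte(hash64(data))
	i1 := hash64(data) % c.n
	for _, i := range []uint64{i1, c.altIndex(i1, fp)} {
		for _, f := range c.buckets[i] {
			if f == fp {
				return true
			}
		}
	}
	return false
}

// Remove deletes one matching fingerprint; removing an item that was never
// added can delete another item's fingerprint, causing false negatives.
func (c *cuckooFilter) Remove(data []byte) bool {
	fp := byte(hash64(data))
	i1 := hash64(data) % c.n
	for _, i := range []uint64{i1, c.altIndex(i1, fp)} {
		for j, f := range c.buckets[i] {
			if f == fp {
				c.buckets[i] = append(c.buckets[i][:j], c.buckets[i][j+1:]...)
				return true
			}
		}
	}
	return false
}

func main() {
	f := newCuckooFilter(1 << 10)
	f.Add([]byte("hello"))
	fmt.Println(f.Test([]byte("hello"))) // true
	f.Remove([]byte("hello"))
	fmt.Println(f.Test([]byte("hello"))) // false
}
```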

Add will add the data to the Cuckoo Filter. It returns an error if the
filter is full. When the filter is full, an existing item is evicted to
make room for the new one, which introduces the possibility of false
negatives. To avoid this, use Count and Capacity to check whether the
filter is full before adding an item.

TestAndAdd is equivalent to calling Test followed by Add. It returns true
if the data is a member, false if not. An error is returned if the filter
is full. When the filter is full, an existing item is evicted to make room
for the new one, which introduces the possibility of false negatives. To
avoid this, use Count and Capacity to check whether the filter is full
before adding an item.

A Deletable Bloom Filter compactly stores information on collisions when
inserting elements. This information is used to determine if elements are
deletable. This design enables false-negative-free deletions at a fraction
of the cost in memory consumption.

Deletable Bloom Filters are useful for cases which require removing elements
but cannot allow false negatives. This means they can be safely swapped in
place of traditional Bloom filters.
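The collision-tracking idea can be sketched like this. It is illustrative only, with invented names; the real structure packs the filter and the collision information into bit arrays rather than bool slices:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// deletableBloom keeps, alongside the m-bit filter, r collision flags, one
// per region of m/r bits. When an insert hits a bit that is already set,
// that bit's region is marked collided. A removal clears only bits in
// collision-free regions, so it can never erase another element's bits,
// which is what makes deletions free of false negatives.
type deletableBloom struct {
	bits      []bool
	collision []bool
	k         int
}

func newDeletableBloom(m, r, k int) *deletableBloom {
	return &deletableBloom{bits: make([]bool, m), collision: make([]bool, r), k: k}
}

func (d *deletableBloom) region(bit int) int {
	return bit * len(d.collision) / len(d.bits)
}

func (d *deletableBloom) indexes(data []byte) []int {
	idx := make([]int, d.k)
	for i := range idx {
		h := fnv.New64a()
		h.Write([]byte{byte(i)})
		h.Write(data)
		idx[i] = int(h.Sum64() % uint64(len(d.bits)))
	}
	return idx
}

func (d *deletableBloom) Add(data []byte) {
	seen := map[int]bool{} // ignore repeats within this element's own indexes
	for _, i := range d.indexes(data) {
		if d.bits[i] && !seen[i] {
			d.collision[d.region(i)] = true // bit shared with another element
		}
		d.bits[i] = true
		seen[i] = true
	}
}

func (d *deletableBloom) Remove(data []byte) {
	for _, i := range d.indexes(data) {
		if !d.collision[d.region(i)] {
			d.bits[i] = false // safe: no collision recorded in this region
		}
	}
}

func (d *deletableBloom) Test(data []byte) bool {
	for _, i := range d.indexes(data) {
		if !d.bits[i] {
			return false
		}
	}
	return true
}

func main() {
	f := newDeletableBloom(1<<12, 64, 3)
	f.Add([]byte("session-42"))
	f.Remove([]byte("session-42"))
	fmt.Println(f.Test([]byte("session-42"))) // false: fully deletable here
}
```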

NewDeletableBloomFilter creates a new DeletableBloomFilter optimized to
store n items with a specified target false-positive rate. The r value
determines the number of bits to use to store collision information. This
controls the deletability of an element. Refer to the paper for selecting an
optimal value.

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false positives but a zero probability of false
negatives.

type Filter interface {
	// Test will test for membership of the data and returns true if it is a
	// member, false if not.
	Test([]byte) bool

	// Add will add the data to the Bloom filter. It returns the filter to
	// allow for chaining.
	Add([]byte) Filter

	// TestAndAdd is equivalent to calling Test followed by Add. It returns
	// true if the data is a member, false if not.
	TestAndAdd([]byte) bool
}

Filter is a probabilistic data structure which is used to test the
membership of an element in a set.
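Code written against this interface works with any of the filter variants. Below, a hypothetical dedupe helper relies only on TestAndAdd; a toy exact, map-backed Filter stands in so the sketch is self-contained (in practice you would plug in one of the package's probabilistic filters):

```go
package main

import "fmt"

// Filter mirrors the interface above.
type Filter interface {
	Test([]byte) bool
	Add([]byte) Filter
	TestAndAdd([]byte) bool
}

// dedupe returns only the events the filter has not seen before. Depending
// on the Filter implementation this may drop a few unique events (false
// positives) or pass a few duplicates (false negatives).
func dedupe(f Filter, events [][]byte) [][]byte {
	var fresh [][]byte
	for _, e := range events {
		if !f.TestAndAdd(e) {
			fresh = append(fresh, e)
		}
	}
	return fresh
}

// mapFilter is a toy exact-membership implementation, used here only to
// keep the example runnable without external dependencies.
type mapFilter map[string]bool

func (m mapFilter) Test(b []byte) bool  { return m[string(b)] }
func (m mapFilter) Add(b []byte) Filter { m[string(b)] = true; return m }
func (m mapFilter) TestAndAdd(b []byte) bool {
	seen := m.Test(b)
	m.Add(b)
	return seen
}

func main() {
	events := [][]byte{[]byte("a"), []byte("b"), []byte("a")}
	fmt.Println(len(dedupe(mapFilter{}, events))) // 2 unique events
}
```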

HyperLogLog implements the HyperLogLog cardinality estimation algorithm as
described by Flajolet, Fusy, Gandouet, and Meunier in HyperLogLog: the
analysis of a near-optimal cardinality estimation algorithm:

HyperLogLog is a probabilistic algorithm which approximates the number of
distinct elements in a multiset. It works by hashing values and calculating
the maximum number of leading zeros in the binary representation of each
hash. If the maximum number of leading zeros is n, the estimated number of
distinct elements in the set is 2^n. To minimize variance, the multiset is
split into a configurable number of registers, the maximum number of
leading zeros is calculated within each register, and a harmonic mean is
used to combine the estimates.

For large or unbounded data sets, calculating the exact cardinality is
impractical. HyperLogLog uses a fraction of the memory while providing an
accurate approximation. For counting element frequency, refer to the
Count-Min Sketch.

ReadDataFrom reads a binary representation of the Hll data written by
WriteDataTo() from an io stream. It returns the number of bytes read and
any error encountered. If the serialized Hll configuration differs from
the receiver's, an error describing the expected parameters is returned.

The InverseBloomFilter may report a false negative but can never report a
false positive. That is, it may report that an item has not been seen when
it actually has, but it will never report an item as seen which it hasn't
come across. This behaves in a similar manner to a fixed-size hashmap which
does not handle conflicts.

An example use case is deduplicating events while processing a stream of
data. Ideally, duplicate events are relatively close together.
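The mechanism can be sketched as a hash-indexed array of the most recently seen items. This is illustrative only; the names are invented and the package's actual API differs:

```go
package main

import (
	"bytes"
	"fmt"
	"hash/fnv"
)

// inverseFilter is a minimal inverse Bloom filter sketch: a fixed-size
// array, indexed by hash, storing the most recent item seen at each slot.
// A colliding item silently overwrites the previous occupant, which is the
// source of false negatives; the exact byte comparison on lookup is what
// rules out false positives.
type inverseFilter struct {
	slots [][]byte
}

func newInverseFilter(capacity int) *inverseFilter {
	return &inverseFilter{slots: make([][]byte, capacity)}
}

// Observe reports whether data is currently stored in its slot, then
// stores it. It returns true only for data actually seen before (no false
// positives); it can return false for a previously seen item whose slot
// was overwritten by a collision (false negative).
func (f *inverseFilter) Observe(data []byte) bool {
	h := fnv.New64a()
	h.Write(data)
	i := h.Sum64() % uint64(len(f.slots))
	seen := f.slots[i] != nil && bytes.Equal(f.slots[i], data)
	f.slots[i] = append([]byte(nil), data...) // copy so we own the bytes
	return seen
}

func main() {
	f := newInverseFilter(1 << 16)
	fmt.Println(f.Observe([]byte("evt-1"))) // false: first sighting
	fmt.Println(f.Observe([]byte("evt-1"))) // true: duplicate
}
```

Because a duplicate is only caught while its first occurrence still occupies the slot, the filter works best when duplicates arrive close together, as noted above.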

ImportElementsFrom reads a binary representation of an InverseBloomFilter
(such as might have been written by WriteTo()) from an i/o stream and adds
each element to the filter using the Add() method (skipping empty
elements, if any). It returns the number of elements decoded.

ReadFrom reads a binary representation of an InverseBloomFilter (such as
might have been written by WriteTo()) from an i/o stream. ReadFrom
replaces the filter's internal array with the one read from the stream. It
returns the number of bytes read.

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false negatives but a zero probability of false
positives. That is, it may return false even though the data was added, but
it will never return true for data that hasn't been added.

This filter works by partitioning the M-sized bit array into k slices of
size m = M/k bits. Each hash function produces an index over m for its
respective slice. Thus, each element is described by exactly k bits, meaning
the distribution of false positives is uniform across all elements.

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false positives but a zero probability of false
negatives. Due to the way the filter is partitioned, the probability of
false positives is uniformly distributed across all elements.

A Scalable Bloom Filter dynamically adapts to the number of elements in the
data set while enforcing a tight upper bound on the false-positive rate.
This works by adding Bloom filters with geometrically decreasing
false-positive rates as filters become full. The tightening ratio, r,
controls the filter growth. The compounded false-positive probability over
the whole series converges to the target value, even for an infinite
series of filters.

Scalable Bloom Filters are useful for cases where the size of the data set
isn't known a priori and memory constraints aren't of particular concern.
For situations where memory is bounded, consider using Inverse or Stable
Bloom Filters.
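The tightening-ratio argument can be made concrete with a little arithmetic: if stage i of the series is allotted a false-positive budget of P0 * r^i, the union bound over all stages is a geometric series capped at P0 / (1 - r), no matter how many filters are added. A small sketch, assuming a hypothetical target P0 = 0.01 and the common ratio r = 0.5:

```go
package main

import "fmt"

// compoundedBound sums the per-stage false-positive budgets P0 * r^i for
// the given number of stages; the sum stays below P0 / (1 - r) even as
// the number of stages goes to infinity.
func compoundedBound(p0, r float64, stages int) float64 {
	total, stageP := 0.0, p0
	for i := 0; i < stages; i++ {
		total += stageP // union bound over the stages added so far
		stageP *= r     // tighten the next stage's false-positive rate
	}
	return total
}

func main() {
	p0, r := 0.01, 0.5
	for _, n := range []int{1, 4, 16, 64} {
		fmt.Printf("%2d stages: compounded fp <= %.6f (limit %.6f)\n",
			n, compoundedBound(p0, r, n), p0/(1-r))
	}
}
```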

NewScalableBloomFilter creates a new Scalable Bloom Filter with the
specified target false-positive rate and tightening ratio. Use
NewDefaultScalableBloomFilter if you don't want to calculate these
parameters.

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false positives but a zero probability of false
negatives.

A Stable Bloom Filter (SBF) continuously evicts stale information so that it
has room for more recent elements. Like traditional Bloom filters, an SBF
has a non-zero probability of false positives, which is controlled by
several parameters. Unlike the classic Bloom filter, an SBF has a tight
upper bound on the rate of false positives while introducing a non-zero rate
of false negatives. The false-positive rate of a classic Bloom filter
eventually reaches 1, after which all queries result in a false positive.
The stable-point property of an SBF means the false-positive rate
asymptotically approaches a configurable fixed constant. A classic Bloom
filter is actually a special case of SBF where the eviction rate is zero, so
this package provides support for them as well.

Stable Bloom Filters are useful for cases where the size of the data set
isn't known a priori, which is a requirement for traditional Bloom filters,
and memory is bounded. For example, an SBF can be used to deduplicate
events from an unbounded event stream with a specified upper bound on false
positives and minimal false negatives.
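The eviction mechanism can be sketched as follows. This is an illustrative toy with invented names and arbitrary parameters: before each insertion, p random cells are decremented, so stale entries gradually age out; it does not reproduce the paper's parameter derivations:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// stableFilter is a minimal Stable Bloom Filter sketch: cells hold values
// up to max = 2^d - 1 for d-bit cells. Every Add first decrements p random
// cells, evicting stale information, then sets the item's k cells to max.
// Test reports membership when all k cells are non-zero.
type stableFilter struct {
	cells []uint8
	k, p  int
	max   uint8
	rng   *rand.Rand
}

func newStableFilter(m, k, p int, d uint) *stableFilter {
	return &stableFilter{
		cells: make([]uint8, m),
		k:     k,
		p:     p,
		max:   uint8(1<<d - 1),
		rng:   rand.New(rand.NewSource(1)), // fixed seed for reproducibility
	}
}

func (s *stableFilter) indexes(data []byte) []int {
	idx := make([]int, s.k)
	for i := range idx {
		h := fnv.New64a()
		h.Write([]byte{byte(i)})
		h.Write(data)
		idx[i] = int(h.Sum64() % uint64(len(s.cells)))
	}
	return idx
}

func (s *stableFilter) Add(data []byte) {
	for i := 0; i < s.p; i++ { // evict: decrement p random cells
		j := s.rng.Intn(len(s.cells))
		if s.cells[j] > 0 {
			s.cells[j]--
		}
	}
	for _, j := range s.indexes(data) {
		s.cells[j] = s.max
	}
}

func (s *stableFilter) Test(data []byte) bool {
	for _, j := range s.indexes(data) {
		if s.cells[j] == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newStableFilter(1<<15, 3, 10, 3)
	for i := 0; i < 100000; i++ {
		f.Add([]byte(fmt.Sprintf("evt-%d", i)))
	}
	// Recent items still test positive; old items have typically aged out.
	fmt.Println(f.Test([]byte("evt-99999")), f.Test([]byte("evt-0")))
}
```

Setting p to zero disables eviction and recovers a classic (unstable) Bloom filter, mirroring the special case described above.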

NewDefaultStableBloomFilter creates a new Stable Bloom Filter with m 1-bit
cells, optimized for cases where there is no prior knowledge of the input
data stream, while maintaining the provided upper bound on the rate of
false positives.

NewStableBloomFilter creates a new Stable Bloom Filter with m cells and d
bits allocated per cell, optimized for the target false-positive rate. Use
NewDefaultStableBloomFilter if you don't want to calculate d.

NewUnstableBloomFilter creates a new special case of Stable Bloom Filter
which is a traditional Bloom filter with m bits and an optimal number of
hash functions for the target false-positive rate. Unlike the stable
variant, data is not evicted and a cell contains a maximum of 1 hash value.

StablePoint returns the limit of the expected fraction of zeros in the
Stable Bloom Filter when the number of iterations goes to infinity. When
this limit is reached, the Stable Bloom Filter is considered stable.

Test will test for membership of the data and returns true if it is a
member, false if not. This is a probabilistic test, meaning there is a
non-zero probability of false positives and false negatives.