
So, to start off, you need to be able to create a regular bloom filter that allows a finite number of elements with a maximum probability of a false positive. Adding these features to your basic filter is required before attempting to build a scalable implementation.

Before we try to control and optimize the probability, let's figure out what the probability is for a given bloom filter size.

First we split up the bitfield by how many hash functions we have (total number of bits / number of hash functions = bits per slice) to get k slices of bits, one per hash function, so every element is always described by k bits.
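As a minimal sketch of this slicing (not the paper's implementation; the class name, the SHA-256-based hash family, and the sizes are all illustrative assumptions):

```python
import hashlib

class PartitionedBloomFilter:
    """Sketch of a bloom filter whose bitfield is split into k equal
    slices, one slice per hash function."""

    def __init__(self, total_bits, k):
        self.k = k
        self.slice_bits = total_bits // k          # bits per slice
        self.bits = [0] * (self.slice_bits * k)

    def _positions(self, item):
        # One bit position inside each of the k slices.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            offset = int.from_bytes(h[:8], "big") % self.slice_bits
            yield i * self.slice_bits + offset

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # Every element is described by k bits, one per slice.
        return all(self.bits[pos] for pos in self._positions(item))
```

Because each hash function owns its own slice, inserting one element sets exactly one bit in each of the k slices.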

If you increase the number of slices or the number of bits per slice, the probability of false positives will decrease.

It also follows that as elements are added, more bits are set to 1, so false positives increase. We refer to this as the "fill ratio" of each slice.

When the filter holds a large amount of data, we can assume that the probability of false positives for this filter is the fill ratio raised to the number of slices (If we were to actually count the bits instead of using a ratio, this simplifies into a permutation with repetition problem).
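Numerically, that approximation is just the fill ratio raised to the kth power (the numbers below are illustrative, not from the paper):

```python
fill_ratio = 0.5   # fraction of bits set in each slice (illustrative)
slices = 8         # number of hash functions k (illustrative)

# A foreign element is a false positive only if all k of its bits
# happen to already be set, one per slice:
fp = fill_ratio ** slices
print(fp)   # 0.5 ** 8 == 0.00390625
```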

So, how do we pick a target probability of false positives for a bloom filter? We can modify the number of slices (which will affect the fill ratio).

To figure out how many slices we should have, we start off by figuring out the optimal fill ratio for a slice. Since the fill ratio is determined by the number of bits in a slice which are 1 versus the number of bits which are 0, each bit will remain unset after one insertion with probability (100% - (1 / bits in a slice)). Since we're going to have multiple items inserted, we have another permutation with repetition problem, and we expand things out to the expected fill ratio, which is (100% - ((100% - (1 / bits in a slice)) ^ "elements inserted")). Well, it turns out that this is very similar to another equation. In the paper, they relate the fill ratio to another equation so it fits nicely into a Taylor series: (1 - e^(-n/m)). After a bit of futzing with this, it turns out that the optimal fill ratio is always about 50%, regardless of any of the variables that you change.
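We can check the 50% claim numerically. The sketch below (with assumed example values for n, M, and the search range) holds the total number of bits fixed, sweeps the number of slices k, and looks at the fill ratio where the false-positive rate bottoms out:

```python
import math

n = 1000      # elements inserted (illustrative)
M = 20000     # total bits in the filter (illustrative)

def fp_rate(k):
    # k slices of M/k bits each; expected fill ratio per slice is
    # about 1 - e^(-n / (M/k)), and a false positive needs all k bits set.
    return (1 - math.exp(-n / (M / k))) ** k

best_k = min(range(1, 64), key=fp_rate)
fill_at_best = 1 - math.exp(-n / (M / best_k))
print(best_k, round(fill_at_best, 3))   # fill ratio lands near 0.5
```

Changing n or M moves the optimal k around, but the fill ratio at the optimum stays close to 50%.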

So, since the false-positive probability of a filter is the fill ratio raised to the number of slices, we can fill in 50% and get P = (50%)^k, or k = log_2(1/P). We can then use this function to compute the number of slices we should generate for a given filter in the list of filters for a scalable bloom filter.
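In code, that formula is a one-liner (the function name is mine, not from the paper):

```python
import math

def slices_for_error(p):
    """Number of hash slices k needed for false-positive probability p,
    assuming the optimal ~50% fill ratio: k = log2(1/p), rounded up."""
    return math.ceil(math.log2(1 / p))

print(slices_for_error(0.01))    # 7 slices for a 1% error rate
print(slices_for_error(0.001))   # 10 slices for a 0.1% error rate
```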

Edit: After writing this, I came across a mention of the "fifty-percent rule" while reading up on buddy-system-based dynamic memory allocation in TAoCP Vol 1, pp. 442-445, with much cleaner reasoning than fitting the curve to (1 - e^(-n/m)). Knuth also references a paper, "The fifty percent rule revisited", with a bit of background on the concept (pdf available here).

Old question, but it's in the top results from Google, and the accepted answer doesn't seem to answer the question. Here's a more concise answer.

An item is in the scalable bloom filter if any filter returns true. Hence, you can add filters without affecting membership queries for previous items.

To make sure you still have a worst-case false positive guarantee, new filters are added with false positive rates that decrease geometrically. For example, the first filter has false positive rate p, the second rp, the third r^2 p, and so on. The probability of a false positive over the whole scalable bloom filter is then bounded by the union bound: sum_{k>=0} r^k p = p/(1-r).
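A quick numerical check of that bound (p and r below are illustrative; the paper suggests a tightening ratio r in the 0.5-0.9 range):

```python
p, r = 0.01, 0.5   # first filter's error rate and tightening ratio (illustrative)

# Partial sum of the geometric series p + r*p + r^2*p + ...
partial = sum(p * r**k for k in range(64))

# Closed form of the union bound over all (potentially infinite) filters.
bound = p / (1 - r)

print(partial, bound)   # both approach 0.02
```

So even if the scalable filter keeps growing forever, the overall false-positive rate never exceeds p/(1-r).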