ImperialViolet

Smaller than Bloom filters (29 Apr 2011)

I've been looking at what would be needed in order to have a global view of CRLs in browsers. At the moment revocation has three problems: privacy (by asking the CA about the status of a certificate you're revealing to them what you're looking at), performance (OCSP checks take hundreds of milliseconds which adds up to thousands of milliseconds when subdomains are in play) and functionality (it doesn't work).

Having a global view of (nearly) all CRLs would mostly solve these issues. We could quickly determine that a certificate is good without leaking information and perform a hard-fail revocation check if we believe that a certificate has been revoked. That raises the question of how to get a global view of CRLs to the client. A Bloom filter would be the obvious choice since we can't accept false negatives (and a Bloom filter doesn't generate them) but we can cope with false positives (which it does produce).

But we really want to keep this structure small. Every extra KB has to be transferred to hundreds of millions of clients (including mobile devices) and takes up memory on each of those devices. We also have another trick to play: we don't care much about query time. For a normal Bloom filter, query time is very important but here, if it took several milliseconds, that would be ok.

Given those constraints we can look at data structures which are smaller and slower than a traditional Bloom filter.

An optimum Bloom filter takes up n·log₂(e)·log₂(1/p) bits (where p is the false positive probability and n is the number of inserted elements). You can't always hit that because you can't have a fraction of a hash function, but the theoretical lower bound is just n·log₂(1/p) bits, a factor of log₂(e) ≈ 1.44 smaller. Other structures get closer to this bound.
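
To put numbers on that, here's a quick back-of-the-envelope check in Python, using the 400K-certificate, 1-in-1024 example that comes up later in the post:

    import math

    n = 400_000   # revoked certificates (the example used later in the post)
    p = 1 / 1024  # false positive probability

    optimal_bloom = n * math.log2(math.e) * math.log2(1 / p)  # ~1.44*n*log2(1/p) bits
    lower_bound = n * math.log2(1 / p)                        # n*log2(1/p) bits

    print(optimal_bloom / 8e6)  # ~0.72 MB
    print(lower_bound / 8e6)    # ~0.5 MB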

First, there are compressed Bloom filters [PDF]. The idea here is that one builds a larger Bloom filter with fewer hash functions and compresses it. The paper discusses compressing for transport, but I'm also thinking about keeping it compressed in memory. An interesting result from the paper is that the optimum number of hash functions for a normal Bloom filter is actually the worst case for a compressed filter.

The optimum number of hash functions for a compressed Bloom is one. Normally that would make the Bloom too large to deal with uncompressed, but if you're not actually going to store the uncompressed version, that's less of an issue.

A Bloom filter with a single hash function requires 1/p bits per element. From that, it's obvious that the density of the filter is p. So a perfect entropy coder (and a range coder is basically perfect) should be able to compress that to just n·(1/p)·H(p) bits, where H is the binary entropy function. Now we can consider how close a compressed Bloom gets to the lower bound that we considered before.

A compressed Bloom takes (1/p)·H(p) bits per element and the lower bound is log₂(1/p). Here's a Wolfram Alpha plot of the expansion factor. The expansion factor goes from about 0.1 to 0.2 over a range of reasonable values of p. For p = 1/1024, it's 0.144. Remember that the expansion factor of a normal Bloom filter is 0.44, so that's quite a bit better.
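
A quick check of those numbers in Python, using nothing beyond the formulas above (the last line is the uncompressed size that the next paragraph worries about):

    import math

    def H(p):
        # binary entropy, in bits
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    n = 400_000
    p = 1 / 1024

    bits_per_element = H(p) / p     # single-hash Bloom, entropy coded
    lower_bound = math.log2(1 / p)  # 10 bits for p = 1/1024

    print(bits_per_element)                    # ~11.44
    print(bits_per_element / lower_bound - 1)  # ~0.144, the expansion factor
    print(n / p / 8e6)                         # ~51 MB uncompressed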

However, the uncompressed size, even if you're just streaming it, is a problem. For 400K revoked certificates with a 1 in 1024 false positive rate, you're looking at 51MB of uncompressed data. In order to process that you would have to call into the entropy coder 410 million times.

There's an obvious solution to this: split the Bloom into chunks and hash the input in order to find the correct chunk. That works fine, but there's also another answer which may be cleaner: Golomb Compressed Sets. (The idea was first mentioned in this paper, I believe.)

A GCS mirrors the structure of the compressed Bloom filter: we're hashing the elements into a space of size n/p. But, while a compressed Bloom filter treats this as a bitmap, a GCS treats it as a list of values. Since the values are the result of hashing, we can assume that they are uniformly distributed, sort them and build a list of differences. The differences will be geometrically distributed with parameter p. Golomb coding is the optimal encoding for geometrically distributed values: you divide each difference by 1/p, encode the quotient in unary, then encode the remainder in binary.
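
Here's a minimal sketch of building a GCS in Python. It assumes the elements are byte strings, uses SHA-1 purely as a stand-in for any uniform hash, and emits the output as a string of '0'/'1' characters rather than packing the bits. Because 1/p is a power of two, the Golomb code degenerates into a Rice code with a fixed-width remainder.

    import hashlib

    def build_gcs(elements, p_inv=1024):
        # Hash each element into [0, n/p).
        n = len(elements)
        values = sorted(
            int.from_bytes(hashlib.sha1(e).digest(), 'big') % (n * p_inv)
            for e in elements)

        r_bits = p_inv.bit_length() - 1  # log2(1/p); p_inv must be a power of two
        out = []
        last = 0
        for v in values:
            delta, last = v - last, v
            q, r = divmod(delta, p_inv)
            out.append('1' * q + '0')                     # quotient in unary
            out.append(format(r, '0{}b'.format(r_bits)))  # remainder in binary
        return ''.join(out)

On random data this should come out at a little over 11 bits per element for p = 1/1024, in line with the numbers above.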

A GCS is going to be slightly larger than a compressed Bloom filter because the Golomb coding uses a whole number of bits for each value. I don't have an equation for the overhead, but I found it to be between 0.1 and 0.2 bits per set element, which is pretty minimal. Working with a list of values is also an advantage: it means that the user can index the GCS to whatever degree they like. With a chunked, compressed Bloom filter the generator has to decide on the chunking and also deal with the inefficiencies resulting from imperfect balance.
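
And a matching, deliberately naive, linear-scan membership test against that bit string. It shows where the query time goes: you decode delta by delta until you reach or pass the target, and an index, if you want one, is just a list of offsets into the stream at whatever granularity you choose.

    import hashlib

    def gcs_contains(gcs_bits, element, n, p_inv=1024):
        # n is the number of elements the GCS was built from.
        target = int.from_bytes(hashlib.sha1(element).digest(), 'big') % (n * p_inv)
        r_bits = p_inv.bit_length() - 1
        i, value = 0, 0
        while i < len(gcs_bits):
            q = 0
            while gcs_bits[i] == '1':  # quotient in unary
                q += 1
                i += 1
            i += 1                     # skip the terminating '0'
            r = int(gcs_bits[i:i + r_bits], 2)
            i += r_bits
            value += q * p_inv + r
            if value == target:
                return True            # present, or a 1-in-1024 false positive
            if value > target:
                return False
        return False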

But what about that lower bound: can anything hit it? Yes, matrix filters (paper). The idea here is that you hash each element into a vector of n finite-field values. Then, from n such elements, you have an n by n matrix whose rows are linearly independent with very high probability. You then perform Gaussian elimination to solve M·b = 1 (where M is the matrix, b is a column vector of n unknowns and 1 is a column vector of n ones).

After this, b is your matrix filter. Given b, one can hash an element to generate a vector and multiply it by b. If the element's equation was one of the rows of M then the result has to be 1. Otherwise, the result is a random element of the field. If you use GF(2¹⁰) as the field, then the false positive probability is 1 in 1024.
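
Here's a sketch of the construction in Python. To keep the field arithmetic short it uses the prime field GF(1021) instead of GF(2¹⁰), so the false positive probability is about 1 in 1021, and the hash-to-row scheme is an illustrative choice rather than the one from the paper. Elements are byte strings again.

    import hashlib
    import random

    Q = 1021  # a prime near 1024; GF(2^10) needs more code

    def row(element, n):
        # Deterministically hash an element to a length-n equation over GF(Q).
        rnd = random.Random(hashlib.sha256(element).digest())
        return [rnd.randrange(Q) for _ in range(n)]

    def build_matrix_filter(elements):
        # Solve M*b = 1 by Gauss-Jordan elimination over GF(Q).
        n = len(elements)
        rows = [row(e, n) + [1] for e in elements]  # augmented matrix [M | 1]
        for col in range(n):
            # With very high probability a non-zero pivot exists; a real
            # implementation would re-hash and retry if it doesn't.
            piv = next(i for i in range(col, n) if rows[i][col] != 0)
            rows[col], rows[piv] = rows[piv], rows[col]
            inv = pow(rows[col][col], Q - 2, Q)  # modular inverse
            rows[col] = [x * inv % Q for x in rows[col]]
            for i in range(n):
                if i != col and rows[i][col]:
                    f = rows[i][col]
                    rows[i] = [(a - f * b) % Q for a, b in zip(rows[i], rows[col])]
        return [rw[n] for rw in rows]  # the solution vector b

    def probably_contains(b, element):
        # Dot product of the element's equation with b; 1 means "probably present".
        return sum(x * y for x, y in zip(row(element, len(b)), b)) % Q == 1

Note that b is exactly n field elements, i.e. about log₂(1021) ≈ 10 bits per set element, which is why this kind of structure can sit right at the lower bound.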

As an example, consider the case of a two element set. You would hash these elements into equations that define lines in the plane and the matrix filter is the intersection of those lines. Querying the filter means hashing the element to form a line and seeing whether it passes through that point.

Matrix filters can also be used to associate values with the elements. In this case, for an element in the set you'll always get the correct value and for anything else you'll get a random value. (So you don't get to know if the element was in the set in this case.)

Matrix filters hit the lower bound, but have some issues. First, in order to query the filter you have to read and process the whole filter, so you will have to chunk it. Second, if your false positive rate is not of the form 1/2ⁿ, then the field arithmetic is slower. But I'll deal with the most important problem, from my point of view, next.

Back to the motivation for a moment: we're looking to sync a global view of CRLs to browsers. The initial download is one thing, but we also have to push updates frequently (probably daily). That means that delta encoding is terribly important.

Delta compressing a compressed Bloom filter or a GCS is pretty obvious with a range coder in hand. Additions and deletions from the set are just additions and deletions of bits or values, respectively. But changes to a matrix filter mean rewriting everything. Consider the example of lines in a plane: changing one of the lines moves the intersection point in both dimensions.
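
To make the GCS side of that concrete, here's a sketch (assuming the generator keeps the sorted list of hashed values around) of what a single insertion touches: one gap is split into two, so one Golomb code in the stream is replaced by two new ones and everything else survives bit for bit. A deletion is just the reverse.

    import bisect

    def gcs_insert_delta(values, new_value):
        # values: the sorted hashed values behind an existing GCS.
        # Returns the one delta whose code is removed and the one or two
        # deltas whose codes replace it.
        i = bisect.bisect_left(values, new_value)
        prev = values[i - 1] if i > 0 else 0
        nxt = values[i] if i < len(values) else None
        removed = None if nxt is None else nxt - prev
        added = [new_value - prev] + ([nxt - new_value] if nxt is not None else [])
        values.insert(i, new_value)
        return removed, added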

So, in order to add elements you would either have to retransmit the chunks that they fall into or send an additional chunk (that would have to be checked every time). Deletions can't be realised without retransmitting a whole chunk so, over time, the useful fraction of elements in a chunk would fall. Since a GCS is only about 15% bigger, it wouldn't take long for the matrix filter to be worse off.

So, although matrix filters are beautiful, I think a GCS works best. I suspect that there's a deep reason why a structure which hits the lower size bound cannot be delta compressed, but I'm too dumb to figure it out if there is.

In the process of playing with this, I implemented everything to some degree. I'll throw the code on github at some point.