An approximate membership data structure is a randomized data structure for representing a set which supports membership queries. It allows a small false positive error rate but has no false negative errors. Such data structures were first introduced by Bloom~\cite{Bloom70} in the 1970s, and have since had numerous applications, mainly in distributed systems, database systems, and networks.

The algorithm of Bloom is quite effective: it can store a set $S$ of size $n$ using only $\approx 1.44\, n \log_2(1/\epsilon)$ bits while having false positive error $\epsilon$. This is within a constant factor of the entropy lower bound of $n \log_2(1/\epsilon)$ for storing such sets~\cite{CarterFlGiMaWe78}. Closing this gap is an important open problem, as Bloom filters are widely used in situations where storage is at a premium.
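To make the parameters concrete, here is a minimal sketch of a classic Bloom filter (not the construction from this work) using $m \approx 1.44\, n \log_2(1/\epsilon)$ bits and $k \approx \log_2(1/\epsilon)$ hash functions; the double-hashing scheme for deriving the $k$ positions is an illustrative choice:

```python
import hashlib
import math


class BloomFilter:
    """Classic Bloom filter: ~1.44 * n * log2(1/eps) bits for n items
    at false-positive rate eps; membership has no false negatives."""

    def __init__(self, n, epsilon):
        # m = n * log2(e) * log2(1/eps) bits (log2(e) ~ 1.44), k = log2(1/eps) hashes
        self.m = max(1, math.ceil(n * math.log2(math.e) * math.log2(1 / epsilon)))
        self.k = max(1, round(math.log2(1 / epsilon)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions from one SHA-256 digest via double hashing
        # (an assumption for this sketch, not mandated by the abstract).
        h = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big") | 1  # force odd step
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # True for every inserted item; may be True for others with prob. ~eps.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))
```

Every inserted element is always reported present; a query for an element outside the set returns a false positive only when all $k$ of its bit positions happen to be set.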

Bloom filters have another property: they are dynamic. That is, they support the iterative insertion of up to $n$ elements. In fact, if one removes this requirement, there exist static data structures which receive the entire set at once and can almost achieve the entropy lower bound~\cite{DietzfelbingerPa08,Porat09:bloom_static}; they require only $n \log_2(1/\epsilon)(1+o(1))$ bits.
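As a rough numerical illustration of this gap (the figures below are an example calculation, not from the abstract), consider $n = 10^6$ elements at $\epsilon = 0.01$:

```python
import math

n, eps = 10**6, 0.01

# Information-theoretic lower bound: n * log2(1/eps) bits.
entropy_bits = n * math.log2(1 / eps)

# Classic dynamic Bloom filter uses about 1.44x that.
bloom_bits = 1.44 * entropy_bits

# Static structures achieve entropy_bits * (1 + o(1)), i.e. essentially
# the lower bound, but must receive the whole set in advance.
print(f"entropy bound: {entropy_bits / 8 / 1024:.0f} KiB")
print(f"Bloom filter : {bloom_bits / 8 / 1024:.0f} KiB")
```

So a Bloom filter pays a fixed ~44% space overhead over the entropy bound, which the static constructions avoid.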

Our main result is a new lower bound for the memory requirements of any dynamic approximate membership data structure. We show that for any constant $\epsilon>0$, any such data structure which achieves false positive error rate of $\epsilon$ must use at least $C(\epsilon) \cdot n \log_2(1/\epsilon)$ memory bits, where $C(\epsilon)>1$ depends only on $\epsilon$. This shows that the entropy lower bound cannot be achieved by dynamic data structures for any constant error rate.
