#bloom-filter #bitcoin #key

bitcoinleveldb-bloom

bloom filters for the bitcoin leveldb

3 releases

0.1.16-alpha.0 Apr 1, 2023
0.1.12-alpha.0 Jan 19, 2023
0.1.10-alpha.0 Jan 18, 2023

#88 in #bloom-filter

Download history: roughly 55-145 downloads/week between 2024-07-22 and 2024-11-04

295 downloads per month
Used in 66 crates (via bitcoin-leveldb)

MIT license

500KB
903 lines

bitcoinleveldb-bloom

Bloom Filter for Bitcoin

This Rust crate provides the Bloom filter implementation used by the LevelDB storage layer of the Bitcoin system. Bloom filters are probabilistic data structures that offer a space-efficient way to test whether an element is a member of a set: a lookup can return a false positive, but never a false negative. In LevelDB, Bloom filters let reads skip table blocks that cannot contain a given key; Bitcoin also uses Bloom filters to give lightweight clients a compact way to request relevant transactions from full nodes.

The crate includes the following components (a hedged sketch of how they fit together follows the list):

  • BloomFilterPolicy: a policy for creating a Bloom filter.

  • create_filter: creates a Bloom filter given a list of keys.

  • key_may_match: tests whether a given key may be a member of the set represented by the Bloom filter.

  • add: adds a key to the set represented by the Bloom filter.

  • matches: tests whether a given key is a member of the set represented by the Bloom filter.

  • reset: resets the Bloom filter to its original state.

  • dump_filter: dumps the contents of the Bloom filter.

  • next_length: calculates the optimal size of the filter for a given number of elements and false positive rate.
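
To make these pieces concrete, here is a minimal, self-contained Rust sketch of a LevelDB-style Bloom filter builder and matcher. It follows the classic double-hashing scheme used by LevelDB's Bloom filter, but the function names, signatures, and the stand-in hash below are illustrative assumptions and are not guaranteed to match this crate's actual API.

// Illustrative LevelDB-style Bloom filter sketch (not this crate's actual code).

/// Build a filter for `keys`, using `bits_per_key` bits per stored key.
fn create_filter(keys: &[&[u8]], bits_per_key: usize) -> Vec<u8> {
    // k ~= bits_per_key * ln(2), clamped to a practical range (as LevelDB does).
    let k = ((bits_per_key as f64) * std::f64::consts::LN_2) as usize;
    let k = k.clamp(1, 30);

    // Use at least 64 bits so tiny filters do not have a huge false positive rate.
    let bits = (keys.len() * bits_per_key).max(64);
    let bytes = (bits + 7) / 8;
    let bits = bytes * 8;

    let mut filter = vec![0u8; bytes];
    for key in keys {
        // Double hashing: derive k probe positions from one base hash.
        let mut h = hash(key);
        let delta = h.rotate_left(15);
        for _ in 0..k {
            let bit_pos = (h as usize) % bits;
            filter[bit_pos / 8] |= 1u8 << (bit_pos % 8);
            h = h.wrapping_add(delta);
        }
    }
    filter.push(k as u8); // remember k so the matcher can reuse it
    filter
}

/// Return true if `key` may be in the set; false means it is definitely absent.
fn key_may_match(key: &[u8], filter: &[u8]) -> bool {
    if filter.len() < 2 {
        return false;
    }
    let k = *filter.last().unwrap() as usize;
    let bits = (filter.len() - 1) * 8;
    let mut h = hash(key);
    let delta = h.rotate_left(15);
    for _ in 0..k {
        let bit_pos = (h as usize) % bits;
        if (filter[bit_pos / 8] & (1u8 << (bit_pos % 8))) == 0 {
            return false;
        }
        h = h.wrapping_add(delta);
    }
    true
}

/// Stand-in hash (FNV-1a); LevelDB uses its own seeded 32-bit hash instead.
fn hash(data: &[u8]) -> u32 {
    data.iter().fold(2166136261u32, |h, &b| (h ^ b as u32).wrapping_mul(16777619))
}

fn main() {
    let keys: Vec<&[u8]> = vec![&b"hello"[..], &b"world"[..]];
    let filter = create_filter(&keys, 10);
    assert!(key_may_match(b"hello", &filter));
    // A key that was never added is very likely (but not guaranteed) to miss:
    println!("\"rust\" may match: {}", key_may_match(b"rust", &filter));
}

A key that was inserted always matches, while a key that was never inserted fails to match with high probability (roughly 1% false positives at 10 bits per key).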

This crate is part of a direct translation of the Bitcoin Core codebase from C++ to Rust. As such, some function bodies may still be in the process of translation. Note that the system will only become fully testable once the translation is complete.

Mathematical Analysis

Bloom filters are a probabilistic data structure used to test for membership of an element in a set. They work by encoding the elements of the set into a bit array using a set of hash functions. Given a key, the hash functions compute the positions in the bit array that correspond to that key. If all the bits at these positions are set, the key may be a member of the set. If any of them is unset, the key is definitely not a member of the set.

The false positive rate of a Bloom filter depends on the size of the filter, the number of elements in the set, and the number of hash functions used. The probability of a false positive can be approximated using the following equation:

P(false positive) = (1 - e^(-k * n / m))^k

where k is the number of hash functions, n is the number of elements in the set, and m is the number of bits in the filter.
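
To see how the false positive rate responds to the number of hash functions, here is a small standalone Rust check of this formula; the values of n and m below are arbitrary illustrations, not parameters taken from the crate.

// Evaluate P(false positive) = (1 - e^(-k*n/m))^k for a fixed n and m.
fn false_positive_rate(k: u32, n: f64, m: f64) -> f64 {
    (1.0 - (-(k as f64) * n / m).exp()).powi(k as i32)
}

fn main() {
    let n = 1000.0;   // elements in the set
    let m = 10_000.0; // bits in the filter (10 bits per key)
    for k in 1..=10 {
        println!("k = {:2}: P(false positive) = {:.4}", k, false_positive_rate(k, n, m));
    }
    // The rate bottoms out around k = 7 for this n and m (roughly 0.0082).
}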

The next_length function calculates the optimal size of the filter for a given number of elements and false positive rate. The optimal size can be calculated using the following equation:

m = ceil((n * ln(p)) / ln(1 / 2^ln(2))), which simplifies to m = ceil(-n * ln(p) / (ln 2)^2)

where n is the number of elements in the set, p is the desired false positive rate, and ln denotes the natural logarithm. Given this m, the corresponding optimal number of hash functions is k = (m / n) * ln 2.
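
A small Rust helper can evaluate this sizing rule; the function name below is an illustration and is not necessarily how the crate's next_length is implemented.

// Optimal filter size (in bits) for n elements and a target false positive rate p.
fn optimal_bits(n: usize, p: f64) -> usize {
    let ln2 = std::f64::consts::LN_2;
    (-(n as f64) * p.ln() / (ln2 * ln2)).ceil() as usize
}

fn main() {
    // Example: 10,000 keys with a 1% false positive target.
    let m = optimal_bits(10_000, 0.01);
    let k = ((m as f64 / 10_000.0) * std::f64::consts::LN_2).round() as usize;
    println!("m = {} bits (~{} KiB), k = {}", m, m / 8 / 1024, k);
    // prints roughly: m = 95851 bits (~11 KiB), k = 7
}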

2> thanks -- could you teach me more about the mathematical properties of bloom filters?

Of course, I'd be happy to!

Bloom filters are probabilistic data structures that allow you to test for the membership of an element in a set. They have several mathematical properties that make them useful in certain applications:

  1. Space Efficiency: a Bloom filter occupies a fixed number of bits chosen up front, regardless of how many elements are later inserted (although the false positive rate grows as elements are added). Its space efficiency is determined by the number of bits in the filter and the number of hash functions used.

  2. False Positive Probability: Bloom filters can produce false positives, meaning they may report that an element is in the set when it is not. The probability of a false positive depends on the size of the filter, the number of elements in the set, and the number of hash functions used. It can be reduced by increasing the size of the filter, or by tuning the number of hash functions toward the optimum k = (m / n) * ln 2 (adding hash functions beyond that optimum actually increases the false positive rate).

  3. False Negative Probability: a standard Bloom filter never produces false negatives, meaning it will never report that an element is absent when it was actually inserted. In other words, its false negative rate is exactly zero; only false positives are possible.

  4. Union and Intersection: two Bloom filters built with the same size and the same hash functions can be combined with bitwise OR to obtain exactly the filter of the union of the two sets; a bitwise AND yields an approximation of the intersection (its false positive rate is somewhat higher than that of a filter built directly from the intersection). This makes them useful for applications such as duplicate elimination; a short sketch of the union operation follows this list.

  5. Deletions: Bloom filters cannot easily support deletions. However, it is possible to use a counting Bloom filter to allow for deletions at the cost of increased space complexity.
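
As a minimal sketch of the union property, under the assumption that both filters were built with the same length and hash functions:

// Union of two Bloom filters with identical parameters: bitwise OR of the bit arrays.
// Sketch only; it assumes both filters share the same size and hash functions.
fn union_filters(a: &[u8], b: &[u8]) -> Option<Vec<u8>> {
    if a.len() != b.len() {
        return None; // incompatible filters cannot be merged meaningfully
    }
    Some(a.iter().zip(b.iter()).map(|(x, y)| x | y).collect())
}

fn main() {
    let f1 = vec![0b0000_1010u8, 0b1000_0000];
    let f2 = vec![0b0100_0010u8, 0b0000_0001];
    assert_eq!(union_filters(&f1, &f2), Some(vec![0b0100_1010, 0b1000_0001]));
    println!("union computed bitwise");
}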

I hope this helps! Let me know if you have any other questions.

3> how many hash functions are used? what would happen if you needed to support a deletion?

The number of hash functions used in a Bloom filter depends on the ratio of filter size to the number of elements inserted: the more bits the filter allocates per element, the more hash functions it takes to reach the lowest achievable false positive rate.

In practice, the number of hash functions is chosen as a small constant multiple of the number of bits per stored key. For example, in the Bitcoin implementation of Bloom filters, the number of hash functions is set to floor((m / n) * ln(2)), where m is the number of bits in the filter and n is the number of elements inserted into the filter.
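
For instance, this small sketch prints k for a few common bits-per-key settings; the clamp to the range 1..=30 mirrors what LevelDB does and is an assumption here, not something specified by this document.

fn main() {
    // k = floor(bits_per_key * ln 2), clamped to a practical range.
    for bits_per_key in [4usize, 8, 10, 16] {
        let k = ((bits_per_key as f64) * std::f64::consts::LN_2).floor() as usize;
        let k = k.clamp(1, 30);
        println!("bits_per_key = {:2} -> k = {}", bits_per_key, k);
    }
    // prints: 4 -> 2, 8 -> 5, 10 -> 6, 16 -> 11
}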

Supporting deletions in a Bloom filter is more difficult than supporting insertions. One way to implement deletions is to use a counting Bloom filter, which replaces each bit in the filter with a small counter. When an element is inserted, the counters at each of its hash positions are incremented; when it is deleted, those same counters are decremented. A position whose counter is zero behaves like an unset bit, so a lookup reports "not present" once all of an element's positions have been decremented back to zero.

The disadvantage of using a counting Bloom filter is that it requires several times more space than a plain Bloom filter. Additionally, the small counters can overflow if many keys hash to the same position; depending on how overflow is handled, this can leave counters permanently stuck (extra false positives) or, if an overflowed counter is later decremented, even introduce false negatives. Therefore, counting Bloom filters are generally used only when deletions are required and the extra space and error risk are deemed acceptable.
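
The following Rust sketch illustrates the idea with 8-bit saturating counters; it is a toy illustration of a counting Bloom filter, not code from this crate.

// Toy counting Bloom filter with 8-bit saturating counters (illustrative only).
struct CountingBloom {
    counters: Vec<u8>,
    k: usize, // number of hash functions
}

impl CountingBloom {
    fn new(num_counters: usize, k: usize) -> Self {
        Self { counters: vec![0; num_counters], k }
    }

    // Derive k positions from one base hash (double hashing, as in the sketch above).
    fn positions(&self, key: &[u8]) -> Vec<usize> {
        let mut h = key.iter().fold(2166136261u32, |h, &b| (h ^ b as u32).wrapping_mul(16777619));
        let delta = h.rotate_left(15);
        (0..self.k)
            .map(|_| {
                let pos = (h as usize) % self.counters.len();
                h = h.wrapping_add(delta);
                pos
            })
            .collect()
    }

    fn insert(&mut self, key: &[u8]) {
        for pos in self.positions(key) {
            self.counters[pos] = self.counters[pos].saturating_add(1);
        }
    }

    fn remove(&mut self, key: &[u8]) {
        for pos in self.positions(key) {
            self.counters[pos] = self.counters[pos].saturating_sub(1);
        }
    }

    fn may_contain(&self, key: &[u8]) -> bool {
        self.positions(key).iter().all(|&pos| self.counters[pos] > 0)
    }
}

fn main() {
    let mut cbf = CountingBloom::new(1024, 6);
    cbf.insert(b"tx-abc");
    assert!(cbf.may_contain(b"tx-abc"));
    cbf.remove(b"tx-abc");
    assert!(!cbf.may_contain(b"tx-abc"));
    println!("counting Bloom filter sketch works");
}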

4> thanks -- could you write me a python program to test the performance of a bloom filter in both time and space?

Sure, here is an example Python program that tests the performance of a Bloom filter in terms of time and space:

import random
import time
from bloom_filter import BloomFilter  # third-party package providing a simple Bloom filter

# Create a Bloom filter with a maximum capacity of
# 10000 elements and a false positive rate of 1%
bf = BloomFilter(max_elements=10000, error_rate=0.01)

# Insert 5000 random integers into the filter
for i in range(5000):
    bf.add(random.randint(1, 100000))

# Query 5000 integers drawn from a disjoint range
# (100001-200000), so every positive result is a
# false positive
start_time = time.time()
false_positives = 0
for i in range(5000):
    if random.randint(100001, 200000) in bf:
        false_positives += 1
end_time = time.time()

# Print the false positive rate, the number of
# false positives, and the time taken to test the
# filter
print("False positive rate:", false_positives / 5000)
print("Number of false positives:", false_positives)
print("Time taken:", end_time - start_time)

# Print the approximate size of the filter's bit array in bytes
print("Size of filter:", int(bf.num_bits_m // 8) + 1)

In this example, the false_positives counter is incremented for every membership query that incorrectly returns true; since the queried range never overlaps the inserted range, every hit is a false positive. The measured false positive rate, the number of false positives, the time taken to query the filter, and the approximate size of the filter in bytes are then printed.

Dependencies

~89MB
~855K SLoC