#lru-cache #cache #lru #capacity #items #flip

no-std fliplru

An LRU cache that shows the effectiveness of the cache capacity

7 releases

0.1.6 Jul 25, 2023
0.1.5 Jul 25, 2023

#102 in Caching


687 downloads per month

MIT license

17KB
176 lines

fliplru

fliplru is an LRU cache with some profiling built in to help tune the cache capacity.

The goals of this cache data structure are two-fold:

  1. The get API should be fast with low overhead as it is the main API of a cache.
  2. Expose some profiling metrics to help with deciding on the appropriate capacity.

There is a third goal, which is implementation-related: avoid using a linked list.

The implementation is based on two hashmaps that together provide the LRU functionality, so the total capacity of the cache is up to 2*cap items. The cap most recently used items are guaranteed to be in the cache; items ranked cap+1 through 2*cap may be in the cache, but this is not guaranteed.

The hashbrown map is used together with the two-map design to provide a fast get API.

A flips metric is exposed to help tune the cache capacity. Flips count the number of times the cache capacity is reached. When that happens, the cache is emptied and refilled in a way that does not affect performance (see benchmarks below). If the flip count is 0, the cache is oversized. If the flip count is very high, close to the number of accesses divided by the capacity, the cache is not being used effectively and the capacity has to be increased.
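That tuning rule can be expressed as a small helper. The function capacity_verdict below is a hypothetical illustration, not part of fliplru, and its threshold (flips within half of the worst case) is an arbitrary choice for the sketch; accesses / capacity approximates the flip count you would see if nothing were ever re-used.

```rust
// Hypothetical helper illustrating the tuning rule: compare the observed
// flip count against accesses / capacity, the approximate flip count of a
// cache that never serves a repeated item from the same generation.
fn capacity_verdict(flips: u64, accesses: u64, capacity: u64) -> &'static str {
    let worst_case = accesses / capacity;
    if flips == 0 {
        "cache may be oversized"
    } else if flips * 2 >= worst_case {
        // Flip count is close to the worst case: the cache is thrashing.
        "cache too small: increase capacity"
    } else {
        "capacity looks reasonable"
    }
}

fn main() {
    // Numbers from the examples below: 20 accesses, capacity 2, 8 flips.
    assert_eq!(capacity_verdict(8, 20, 2), "cache too small: increase capacity");
    // 20 accesses, capacity 5, 0 flips.
    assert_eq!(capacity_verdict(0, 20, 5), "cache may be oversized");
}
```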

The API is inspired by the lru crate by Jerome Froelich.


Find if cache capacity is too small for use case

Below is an example of a check to determine whether the cache has been configured with too little capacity. In this example the cache is accessed 20 times and the flip count is 8 for a cache capacity of 2. This indicates that no effective caching is occurring for this access pattern.

use fliplru::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
for i in 0..5 {
    cache.put(i, i);
}
for i in 0..20 {
    cache.get(&(i % 5));
}

assert_eq!(cache.get_flips(), 8);

Find if cache capacity is too large for use case

Below is an example of a check to determine whether the cache has been configured with more than enough capacity. In this example the cache is accessed 20 times and the flip count is 0 for a cache capacity of 5. This indicates the cache fully satisfies every access made.

use fliplru::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(5).unwrap());
for i in 0..5 {
    cache.put(i, i);
}
for i in 0..20 {
    cache.get(&(i % 5));
}

assert_eq!(cache.get_flips(), 0);

Status

It is a basic LRU cache with metrics to help with capacity tuning, and it provides a fast get API.

Benchmarks

The benchmarks have been inspired by HashLRU by Christian Mauduit.

Benchmark comparisons of the get API across various cache implementations, each configured with a capacity of 100000. This was run on a 2020 MacBook Air.

running 8 tests
test tests::bench_read_usize_builtin_hashmap    ... bench:          16 ns/iter (+/- 0)
test tests::bench_read_usize_caches             ... bench:          37 ns/iter (+/- 16)
test tests::bench_read_usize_fastlru            ... bench:          48 ns/iter (+/- 7)
test tests::bench_read_usize_fliplru            ... bench:           7 ns/iter (+/- 0)
test tests::bench_read_usize_hashlru_cache      ... bench:          10 ns/iter (+/- 1)
test tests::bench_read_usize_hashlru_sync_cache ... bench:          15 ns/iter (+/- 0)
test tests::bench_read_usize_lru                ... bench:          10 ns/iter (+/- 0)
test tests::bench_read_usize_lru_cache          ... bench:          38 ns/iter (+/- 0)

test result: ok. 0 passed; 0 failed; 0 ignored; 8 measured; 0 filtered out; finished in 14.09s

To run the benchmarks:

cd bench
rustup default nightly
cargo bench

Dependencies

~2MB
~25K SLoC