#lru-cache #lru #lazy-evaluation #linked-list #eviction

lazy-lru (no-std)

A least-recently-used (LRU) cache implementation with lazy eviction

4 releases

0.1.3 Aug 8, 2024
0.1.2 Apr 7, 2024
0.1.1 Apr 7, 2024
0.1.0 Mar 13, 2024

#103 in Caching

71,511 downloads per month
Used in 18 crates (2 directly)

MIT license

23KB
425 lines

Lazy LRU cache

Typically, an LRU cache is implemented using a combination of a hash map and a doubly linked list. The doubly linked list keeps track of the order in which the items were accessed: when an entry is accessed or inserted into the cache, its respective reference is moved to (or inserted at) the front of the linked list. As a result, the least recently used item is always at the back of the linked list and is evicted as soon as the cache size exceeds its designated capacity.
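
For reference, a minimal sketch of this eager design using the third-party lru crate (not part of this crate, shown only to illustrate the classic behavior; the API shown is that of recent lru versions):

    use std::num::NonZeroUsize;
    use lru::LruCache; // third-party crate implementing the eager design

    fn main() {
        let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
        cache.put("a", 1);
        cache.put("b", 2);
        // Reading "a" moves it to the front of the internal linked list,
        // which is why get takes &mut self here.
        assert_eq!(cache.get(&"a"), Some(&1));
        // Inserting a third entry immediately evicts the LRU entry ("b").
        cache.put("c", 3);
        assert_eq!(cache.get(&"b"), None);
    }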

This crate instead implements an alternative variant of LRU cache with lazy eviction:

  • Each entry maintains an associated ordinal value representing when the entry was last accessed.
  • The cache is allowed to grow to up to 2 times the specified capacity with no evictions; once it does, the excess entries are evicted in a single linear-time pass based on the LRU policy, resulting in amortized O(1) performance (see the sketch after this list).
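
A minimal single-threaded sketch of this scheme (an illustration of the idea only, not this crate's actual code; the LazyLru name and its fields are made up):

    use std::collections::HashMap;
    use std::hash::Hash;

    struct LazyLru<K, V> {
        capacity: usize,
        counter: u64,                  // monotonically increasing access ordinal
        entries: HashMap<K, (u64, V)>, // key -> (last-access ordinal, value)
    }

    impl<K: Hash + Eq, V> LazyLru<K, V> {
        fn new(capacity: usize) -> Self {
            Self { capacity, counter: 0, entries: HashMap::new() }
        }

        fn get(&mut self, key: &K) -> Option<&V> {
            self.counter += 1;
            let now = self.counter;
            self.entries.get_mut(key).map(|(ordinal, value)| {
                *ordinal = now; // record recency; no linked-list surgery needed
                &*value
            })
        }

        fn put(&mut self, key: K, value: V) {
            self.counter += 1;
            self.entries.insert(key, (self.counter, value));
            // Lazy eviction: nothing happens until the map holds 2x the
            // capacity; then the oldest half is dropped in one linear pass,
            // which amortizes to O(1) per insertion.
            if self.entries.len() >= 2 * self.capacity {
                let mut ordinals: Vec<u64> =
                    self.entries.values().map(|(ordinal, _)| *ordinal).collect();
                // Linear-time selection of the cutoff ordinal (quickselect).
                let index = self.entries.len() - self.capacity;
                let (_, cutoff, _) = ordinals.select_nth_unstable(index);
                let cutoff = *cutoff;
                self.entries.retain(|_, (ordinal, _)| *ordinal >= cutoff);
            }
        }
    }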

For workloads that can afford to hold up to 2 times the nominal capacity and can tolerate the amortized (occasionally linear-time) evictions, this results in better average performance, as shown by the benchmarks in this crate:

test bench_get_eager ... bench:      21,434 ns/iter (+/- 3,565)
test bench_get_lazy  ... bench:      16,514 ns/iter (+/- 385)
test bench_put_eager ... bench:      52,277 ns/iter (+/- 25,473)
test bench_put_lazy  ... bench:      33,117 ns/iter (+/- 5,057)

Additionally, with the eager implementation, lookups require a mutable reference (&mut self) in order to update the internal linked list. In a multi-threaded setting, this requires an exclusive write lock on the cache even on the read path, which can exacerbate lock contention. With lazy eviction, the ordinal values can be updated using atomic operations, allowing lookups to proceed under a shared (read) lock.
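
A rough sketch of that read path, with a hypothetical Entry/Cache pair standing in for this crate's internals (not its public API):

    use std::collections::HashMap;
    use std::sync::atomic::{AtomicU64, Ordering};

    struct Entry<V> {
        ordinal: AtomicU64, // last-access ordinal, updated atomically
        value: V,
    }

    struct Cache<V> {
        counter: AtomicU64, // global access counter
        entries: HashMap<String, Entry<V>>,
    }

    impl<V> Cache<V> {
        // Takes &self rather than &mut self: bumping the ordinal is a plain
        // atomic store, so concurrent readers holding a shared lock (e.g.
        // RwLock::read) can perform lookups without blocking each other.
        fn get(&self, key: &str) -> Option<&V> {
            let entry = self.entries.get(key)?;
            let now = self.counter.fetch_add(1, Ordering::Relaxed);
            entry.ordinal.store(now, Ordering::Relaxed);
            Some(&entry.value)
        }
    }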

Dependencies

~2MB
~27K SLoC