#cache #lru-cache #lru #concurrency #page #pseudolru

nightly no-std plru

Implementation of concurrent (lockless) pseudo-LRU cache replacement policy

2 releases

Uses old Rust 2015

0.1.1 May 19, 2017
0.1.0 Oct 18, 2016

#336 in Caching

28 downloads per month

MIT license

16KB
171 lines

plru

A concurrent implementation of pseudo-LRU cache tracking.

Migrated from TFS.


lib.rs:

An efficient, concurrent (lockless) implementation of the PLRU cache replacement policy.

Cache replacement

Cache replacement policies are used to determine which cache lines (buffers) should be replaced because they're no longer "hot" (likely to be used in the near future). A number of different approaches exist.

LRU is the cache replacement policy which replaces the cache line that was used the longest time ago.

PLRU is the name of a class of cache replacement policies derived from LRU. Instead of an exact measure, they use an approximate measure of which lines were recently used. This allows for much more efficient cache management (exact LRU bookkeeping is notoriously slow) at the cost of possibly worse replacement decisions. PLRU caches are used in many major CPUs to maintain CPU cache lines.

We use a specific variant of PLRU: BPLRU, or bit PLRU. Tree PLRU is an alternative, but it is not easy to make concurrent.

Usage

Runtime-defined number of cache lines

This can be achieved through plru::create() unless the no_std feature is enabled.
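A minimal sketch of that usage follows. plru::create() is named above; the touch() and replace() method names are assumed from the crate's documented API and may differ in your version.

```rust
// Sketch only: `plru::create()` is mentioned in the docs above;
// `touch()` and `replace()` are assumed method names.
extern crate plru;

fn main() {
    // Track 100 cache lines, allocated on the heap at runtime.
    let cache = plru::create(100);

    // Line 42 was just used, so mark it hot.
    cache.touch(42);

    // Ask the policy for a line to evict; hot lines are avoided.
    let victim = cache.replace();
    println!("replace cache line {}", victim);
}
```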

How fixed-size arrays are handled

Our implementation is generic over fixed-size arrays (or heap-allocated vectors) without relying on any external library. We abstract over AsRef<[AtomicU64]> in order to allow a wide variety of different forms of arrays.

For convenience, we define a bunch of type aliases for various sizes of caches.
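As an illustration, a fixed-size cache can be built directly from an array of atomics. This is a sketch under assumptions: Cache::new() and is_hot() are taken from the crate's documented API as I understand it, and the exact bulk counts behind the type aliases may differ.

```rust
// Sketch only: `Cache::new()` and `is_hot()` are assumed from the
// crate's documentation.
extern crate plru;

use std::sync::atomic::AtomicU64;

fn main() {
    // Two bulks = 2 x 64 = 128 cache lines, with no heap allocation.
    // Avoiding the heap is what makes the `no_std` configuration possible.
    let cache = plru::Cache::new(<[AtomicU64; 2]>::default());

    cache.touch(7);
    assert!(cache.is_hot(7));
}
```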

Implementation details

This implementation of BPLRU uses atomic integers to store the bit flags, and thus makes up a concurrent, entirely lockless cache manager with excellent performance.

The cache lines are organized into 64-bit integers storing the bit flags ("hot" and "cold"). Each of these integers is called a "bulk". Whenever a cache line is touched (used), its bit flag is set.

Finding the cache line to replace is simply done by finding an unset bit in one of the bulks. If all cache lines in the bulk are hot, they're flipped so that they're all cold. This process is called cache inflation.

In order to avoid double cache replacement, we use a counter that maximizes the time between two replacements being drawn from the same bulk.
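To make these mechanics concrete, here is a self-contained, simplified sketch of the scheme described above. It is not the crate's actual code (Bplru and its methods are illustrative names), and it glosses over atomic-ordering subtleties a production implementation would have to handle.

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

/// Illustrative, simplified BPLRU tracker; not the crate's internals.
struct Bplru {
    /// Each 64-bit bulk tracks 64 lines: a set bit means "hot".
    bulks: Vec<AtomicU64>,
    /// Rotates which bulk the next replacement scans, spreading
    /// concurrent replacements across different bulks.
    counter: AtomicUsize,
}

impl Bplru {
    fn new(bulks: usize) -> Bplru {
        Bplru {
            bulks: (0..bulks).map(|_| AtomicU64::new(0)).collect(),
            counter: AtomicUsize::new(0),
        }
    }

    /// Touch: mark line `n` hot by setting its bit in the owning bulk.
    fn touch(&self, n: usize) {
        self.bulks[n / 64].fetch_or(1u64 << (n % 64), Ordering::Relaxed);
    }

    /// Find a line to replace: scan one bulk for an unset (cold) bit.
    fn replace(&self) -> usize {
        let i = self.counter.fetch_add(1, Ordering::Relaxed) % self.bulks.len();
        let flags = self.bulks[i].load(Ordering::Relaxed);

        if flags == !0 {
            // Every line in this bulk is hot: "inflate" by flipping them
            // all to cold. (Simplified: a load followed by a store is not
            // one atomic step, so a real implementation must be more
            // careful here.)
            self.bulks[i].store(0, Ordering::Relaxed);
        }

        // Index of the first cold bit; after inflation this is bit 0.
        let cold = (!flags).trailing_zeros() as usize % 64;
        i * 64 + cold
    }
}

fn main() {
    let plru = Bplru::new(2); // tracks 128 lines
    plru.touch(5);
    let victim = plru.replace();
    assert_ne!(victim, 5); // line 5 is hot, so it won't be chosen
    println!("evict line {}", victim);
}
```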

The algorithms are described in detail in the documentation of each method.

In short...

In short, the algorithm lets a cache line hold its entry until its bulk is filled; upon cache replacement, every line in that bulk is then assumed to be inactive until it reclaims its place.

An analogy is parking lots. We only care about parking-lot space if the lot is full (if it isn't, we can just park our car, and we're good). When it's full, we need to make sure that the owner of every car has been there within the last N hours (or: since the lot filled up), to keep owners from occupying spaces forever. When new cars come in and a car has been there long enough, it has to be forcibly removed to open up space.

Now, think of each parking lot as a bulk. And each car as a cache line.

Code quality

✔ Well-tested

✔ Well-documented

✔ Examples

✔ Conforms to Rust's style guidelines

✔ Idiomatic Rust

✔ Production-quality

No runtime deps

Features