
lru-mem

An LRU cache implementation bounded by memory

9 releases

0.3.0 Jul 30, 2023
0.2.1 Jun 11, 2023
0.2.0 Jan 16, 2022
0.1.5 Dec 30, 2021

#342 in Data structures


1,844 downloads per month
Used in 2 crates

MIT/Apache

165KB
3K SLoC

lru-mem

An implementation of a memory-bounded LRU (least-recently-used) cache for Rust. It supports insertion, lookup, and removal in average-case O(1) time. Additional utility methods are provided, such as iterators, capacity management, and mutable access.

Note that the memory required for each entry is only an estimate, and some auxiliary structure is disregarded. The actual data structure can therefore occupy more memory than was assigned, although in most cases this overhead should not be excessive.
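To illustrate what such a per-entry estimate might cover, here is a minimal sketch (not the crate's actual implementation) that counts the stack size of the key and value plus the value's heap allocation, while deliberately ignoring auxiliary structure such as hash-table buckets:

```rust
use std::mem;

// Hypothetical size estimate for a (u128, String) entry: the inline
// (stack) size of key and value, plus the String's heap allocation.
// Auxiliary bookkeeping (buckets, recency-list pointers) is ignored,
// which is why such estimates can undershoot real memory usage.
fn entry_size_estimate(key: &u128, value: &String) -> usize {
    mem::size_of_val(key) + mem::size_of_val(value) + value.capacity()
}

fn main() {
    let key = 42u128;
    let value = String::from("hello");
    println!("{}", entry_size_estimate(&key, &value));
}
```

On a 64-bit target this yields 16 bytes for the `u128`, 24 bytes for the `String` header, and the heap bytes of the string itself.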

Motivating example

Imagine we are building a web server that sends large responses to clients. To reduce the load, the responses are split into sections, and the client is given a token to access each section individually. However, recomputing the sections on every request would put too much load on the server, so they need to be cached. An LRU cache is useful in this situation, since client requests for sections are likely to be temporally localized.

Now consider the situation where most responses are very small, but some may be large. A cache bounded by the number of entries would either have to be sized conservatively, holding fewer cached responses than memory would normally allow, or sized liberally, risking memory exhaustion if too many large responses have to be cached at once. To prevent this, the cache is designed with an upper bound on its memory usage instead of its number of elements.
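The eviction idea behind a memory-bounded LRU can be sketched with the standard library alone. The following is a simplified illustration under assumed size accounting, not the crate's implementation (which uses a hash map for O(1) lookup rather than a linear scan):

```rust
use std::collections::VecDeque;
use std::mem;

// A toy memory-bounded LRU: entries are kept in recency order
// (front = most recently used) and evicted from the back until the
// total estimated size fits within the budget.
struct TinyLru {
    max_size: usize,
    current: usize,
    entries: VecDeque<(u128, String)>,
}

impl TinyLru {
    fn new(max_size: usize) -> Self {
        TinyLru { max_size, current: 0, entries: VecDeque::new() }
    }

    // Assumed per-entry estimate: inline sizes plus the String's heap.
    fn entry_size(value: &String) -> usize {
        mem::size_of::<u128>() + mem::size_of::<String>() + value.capacity()
    }

    fn insert(&mut self, key: u128, value: String) {
        let size = Self::entry_size(&value);
        // Evict least-recently-used entries until the new entry fits.
        while self.current + size > self.max_size {
            match self.entries.pop_back() {
                Some((_, old)) => self.current -= Self::entry_size(&old),
                None => return, // entry is larger than the whole budget
            }
        }
        self.current += size;
        self.entries.push_front((key, value));
    }

    fn get(&mut self, key: u128) -> Option<&String> {
        // Move the accessed entry to the most-recently-used position.
        let pos = self.entries.iter().position(|(k, _)| *k == key)?;
        let entry = self.entries.remove(pos)?;
        self.entries.push_front(entry);
        self.entries.front().map(|(_, v)| v)
    }
}
```

Because eviction is driven by accumulated byte estimates rather than an entry count, one oversized response displaces as many small ones as needed, and the cache's footprint stays near the configured budget either way.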

The code below shows how the basic structure might look.

use lru_mem::LruCache;

struct WebServer {
    cache: LruCache<u128, Vec<String>>
}

fn random_token() -> u128 {
    // A cryptographically secure random token.
    42
}

fn generate_sections(input: String) -> Vec<String> {
    // A complicated set of sections that is highly variable in size.
    vec![input.clone(), input]
}

impl WebServer {
    fn new(max_size: usize) -> WebServer {
        // Create a new web server with a cache that holds at most max_size
        // bytes of elements.
        WebServer {
            cache: LruCache::new(max_size)
        }
    }

    fn on_query(&mut self, input: String) -> u128 {
        // Generate sections, store them in the cache, and return token.
        let token = random_token();
        let sections = generate_sections(input);
        self.cache.insert(token, sections)
            .expect("sections do not fit in the cache");

        token
    }

    fn on_section_request(&mut self, token: u128, index: usize)
            -> Option<&String> {
        // Lookup the token and get the section with given index.
        self.cache.get(&token).and_then(|s| s.get(index))
    }
}

For more details, check out the documentation.
