

An LRU cache implementation with constant-time operations and weighted semantics

5 releases (3 breaking)

0.4.0 Mar 2, 2021
0.3.0 Nov 3, 2020
0.2.1 Nov 2, 2020
0.2.0 Nov 2, 2020
0.1.0 Oct 27, 2020


Used in 2 crates (via cosmwasm-vm)

MIT license




Another LRU cache implementation in Rust. It has two main characteristics that differentiate it from other implementations:

  1. It is backed by a HashMap: it offers O(1) time complexity (amortized average) for common operations like:

    • get / get_mut
    • put / pop
    • peek / peek_mut
  2. It is a weighted cache: each key-value pair has a weight, and the capacity serves both as:

    • a limit to the number of elements
    • and as a limit to the total weight of its elements

    using the following formula:

    CLruCache::len + CLruCache::weight <= CLruCache::capacity

Even though most operations don't depend on the number of elements in the cache, CLruCache::put_with_weight has a special behavior: because it needs to make room for the new element, it removes as many least recently used elements as necessary. In the worst case, this requires fully emptying the cache. Additionally, if the weight of the new element is too big, the insertion can fail.

For the common case of an LRU cache whose elements don't have a weight, a default ZeroWeightScale is provided, unlocking some useful APIs that don't take an explicit weight, such as CLruCache::put used in the first example below.


Most of the API, documentation, examples and tests have been heavily inspired by the lru crate. I want to thank jeromefroe for his work, without which this crate would probably never have been released.

Differences with lru

The main differences are:

  • A smaller amount of unsafe code. Unsafe code is not bad in itself as long as it is thoroughly reviewed and understood, but it can be surprisingly hard to get right. Reducing the amount of unsafe code should hopefully reduce bugs and undefined behavior.
  • An API closer to the standard HashMap collection, which allows lookups with a Borrow-ed version of the key.


Below are simple examples of how to instantiate and use this LRU cache.

Using the default ZeroWeightScale:

use std::num::NonZeroUsize;
use clru::CLruCache;

let mut cache = CLruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("apple".to_string(), 3);
cache.put("banana".to_string(), 2);

assert_eq!(cache.get("apple"), Some(&3));
assert_eq!(cache.get("banana"), Some(&2));

assert_eq!(cache.put("banana".to_string(), 4), Some(2));
assert_eq!(cache.put("pear".to_string(), 5), None);

assert_eq!(cache.get("pear"), Some(&5));
assert_eq!(cache.get("banana"), Some(&4));

let v = cache.get_mut("banana").unwrap();
*v = 6;

assert_eq!(cache.get("banana"), Some(&6));

Using a custom WeightScale implementation:

use std::num::NonZeroUsize;
use clru::{CLruCache, CLruCacheConfig, WeightScale};

struct CustomScale;

impl WeightScale<String, &str> for CustomScale {
    fn weight(&self, _key: &String, value: &&str) -> usize {
        value.len()
    }
}

let mut cache = CLruCache::with_config(
    CLruCacheConfig::new(NonZeroUsize::new(6).unwrap()).with_scale(CustomScale),
);

assert_eq!(cache.put_with_weight("apple".to_string(), "red").unwrap(), None);
assert_eq!(
    cache.put_with_weight("apple".to_string(), "green").unwrap(),
    Some("red")
);
assert_eq!(cache.len(), 1);
assert_eq!(cache.get("apple"), Some(&"green"));


Each contribution is tested with the regular compiler, miri, and 4 flavors of sanitizer (address, memory, thread and leak). This should help catch bugs sooner rather than later.


  • improve documentation and add examples
  • figure out Send/Sync traits support

No runtime deps