#cache #lru-cache #lock-free #concurrency #lfu #shared-state

bin+lib cache-advisor

scan-resistant concurrent cache eviction manager

9 stable releases

1.0.16 Aug 5, 2023
1.0.15 Jul 11, 2023
1.0.14 Jun 25, 2023
1.0.13 Apr 29, 2023
1.0.11 Jun 12, 2022

#28 in Caching


321 downloads per month

MIT/Apache

34KB
620 lines

cache-advisor

docs

Tells you when to evict items from a cache. It should be able to sustain dozens of millions of accesses per second on modern server hardware without any blocking.

features

  • two-segment LRU that protects against cache pollution from single-hit items
  • 256 shards accessed via non-blocking flat combining
  • a local access buffer that must fill up before shared state is touched
  • compresses each item's cost to a u8; the compressed values converge to the true overall sum of costs over time while using far less memory for accounting
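The cost-compression idea in the last bullet can be sketched as follows. This is an illustrative toy, not the crate's actual code: it stores a cost as a u8 power-of-two exponent and rounds up probabilistically, so the expected decompressed value equals the true cost and summed costs converge over many accesses.

```rust
// Hypothetical sketch of power-of-two cost compression (NOT the crate's
// implementation). A cost is stored as a u8 exponent; rounding up to the
// next power of two happens with probability proportional to the remainder,
// so the expected decompressed value equals the true cost.

/// Compress `cost` to a u8 exponent. `coin` is a random value in [0, 1).
fn compress(cost: usize, coin: f64) -> u8 {
    if cost <= 1 {
        return 0; // this toy maps both cost 0 and cost 1 to 2^0
    }
    let floor_exp = (usize::BITS - 1 - cost.leading_zeros()) as u8;
    let low = 1usize << floor_exp;
    if cost == low {
        return floor_exp; // already an exact power of two
    }
    let high = low << 1;
    // round up with probability (cost - low) / (high - low), which makes
    // the expected decompressed value exactly `cost`
    let p = (cost - low) as f64 / (high - low) as f64;
    if coin < p {
        floor_exp + 1
    } else {
        floor_exp
    }
}

/// Decompress an exponent back to its power-of-two cost.
fn decompress(code: u8) -> usize {
    1usize << code
}

fn main() {
    // deterministic LCG for coins, so the demo needs no external crates
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    let mut total = 0.0;
    let n = 10_000;
    for _ in 0..n {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let coin = (state >> 11) as f64 / (1u64 << 53) as f64;
        total += decompress(compress(1500, coin)) as f64;
    }
    // the average decompressed cost drifts toward the true cost of 1500
    println!("average decompressed cost: {:.1}", total / n as f64);
}
```

Each stored cost is only ever 1024 or 2048 here, but the unbiased rounding means the accounting total stays close to the truth while each item needs just one byte.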

api

impl CacheAdvisor {
  /// Instantiates a new two-segment `CacheAdvisor` eviction manager.
  ///
  /// Choose an overall size and the percentage 0..=100 that should
  /// be devoted to the entry cache. 20% is a safe default.
  pub fn new(capacity: usize, entry_percent: u8) -> CacheAdvisor { .. }

  /// Mark items that are accessed with a certain cost.
  /// Returns the items that should be evicted and their associated costs.
  /// The returned costs are always a compressed power of two and may not
  /// be the exact cost that you set for an item. Over time it converges
  /// to a correct value, however.
  pub fn accessed(&mut self, id: u64, cost: usize) -> Vec<(u64, usize)> { .. }

  /// Similar to `accessed`, except that evicted items are collected into an
  /// internal vector that is reused across calls and returned as a slice.
  /// If the returned slice is huge and you would like to reclaim the underlying
  /// memory, call the `reset_internal_access_buffer` method. This can improve
  /// throughput by around 10% in some cases compared to the simpler `accessed`
  /// method above (which may need to copy items several times as the returned
  /// vector is expanded).
  pub fn accessed_reuse_buffer(&mut self, id: u64, cost: usize) -> &[(u64, usize)] { .. }

  /// Resets the internal access buffer, freeing any memory it may have been
  /// holding onto. Only call this in combination with `accessed_reuse_buffer`,
  /// when you want to release the memory that the internal buffer is consuming.
  /// You probably don't need it unless a previous slice returned by
  /// `accessed_reuse_buffer` was several thousand items long or more, which
  /// should be rare in workloads where most inserted items are similar in size.
  pub fn reset_internal_access_buffer(&mut self) { .. }
}
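The scan-resistant, two-segment structure from the features list can be sketched in miniature. The toy below (probation and protected queues with arbitrary capacities) is illustrative only, not the crate's implementation: new items enter a probationary segment and are promoted to the protected segment only on a second hit, so a scan of one-hit items churns only the probation queue.

```rust
use std::collections::VecDeque;

// Illustrative toy of a two-segment LRU (NOT the crate's implementation).
// Single-hit items live and die in `probation`; items accessed twice are
// promoted to `protected`, which a scan of cold items cannot pollute.
struct TwoSegmentLru {
    probation: VecDeque<u64>,
    protected: VecDeque<u64>,
    probation_cap: usize,
    protected_cap: usize,
}

impl TwoSegmentLru {
    fn new(probation_cap: usize, protected_cap: usize) -> Self {
        TwoSegmentLru {
            probation: VecDeque::new(),
            protected: VecDeque::new(),
            probation_cap,
            protected_cap,
        }
    }

    /// Record an access; returns any ids that should be evicted.
    fn accessed(&mut self, id: u64) -> Vec<u64> {
        let mut evicted = Vec::new();
        if let Some(pos) = self.protected.iter().position(|&x| x == id) {
            // already hot: move to the MRU end of the protected segment
            self.protected.remove(pos);
            self.protected.push_back(id);
        } else if let Some(pos) = self.probation.iter().position(|&x| x == id) {
            // second hit: promote out of probation
            self.probation.remove(pos);
            self.protected.push_back(id);
            if self.protected.len() > self.protected_cap {
                // demote the protected LRU item back to probation
                if let Some(demoted) = self.protected.pop_front() {
                    self.probation.push_back(demoted);
                }
            }
        } else {
            // first hit: enter probation only
            self.probation.push_back(id);
        }
        while self.probation.len() > self.probation_cap {
            if let Some(victim) = self.probation.pop_front() {
                evicted.push(victim);
            }
        }
        evicted
    }
}

fn main() {
    let mut lru = TwoSegmentLru::new(2, 2);
    lru.accessed(1);
    lru.accessed(1); // second hit promotes item 1 to protected

    // a scan of single-hit items churns only the probation queue
    let mut evicted = Vec::new();
    for id in 100..110 {
        evicted.extend(lru.accessed(id));
    }
    assert!(lru.protected.contains(&1)); // the hot item survived the scan
    println!("evicted during scan: {:?}", evicted);
}
```

The linear scans over the queues keep this sketch short; the real crate instead shards its state 256 ways and batches accesses through local buffers to hit its throughput targets.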

Dependencies

~155KB