
TinyUFO

In-memory cache implementation with TinyLFU as the admission policy and S3-FIFO as the eviction policy

3 unstable releases

0.2.0 May 10, 2024
0.1.1 Apr 18, 2024
0.1.0 Feb 27, 2024


Apache-2.0

41KB
769 lines

TinyUFO

TinyUFO is a fast and efficient in-memory cache. It adopts the state-of-the-art S3-FIFO and TinyLFU algorithms to achieve high throughput and a high hit ratio at the same time.

Usage

See docs

Performance Comparison

We compare TinyUFO with lru, the most commonly used cache algorithm, and moka, another great cache library that implements TinyLFU.

Hit Ratio

The table below shows the cache hit ratio of the compared algorithms at different cache sizes (as a fraction of total assets), under a Zipf distribution with exponent 1.

| cache size / total assets | TinyUFO | TinyUFO - LRU | TinyUFO - moka (TinyLFU) |
| ------------------------- | ------- | ------------- | ------------------------ |
| 0.5%                      | 45.26%  | +14.21pp      | -0.33pp                  |
| 1%                        | 52.35%  | +13.19pp      | +1.69pp                  |
| 5%                        | 68.89%  | +10.14pp      | +1.91pp                  |
| 10%                       | 75.98%  | +8.39pp       | +1.59pp                  |
| 25%                       | 85.34%  | +5.39pp       | +0.95pp                  |

Both TinyUFO and moka greatly improve the hit ratio over lru, and TinyUFO is the better of the two in this workload. This paper contains more thorough evaluations of S3-FIFO, the algorithm TinyUFO derives from, against many caching algorithms under a variety of workloads.
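To make the hit-ratio gains concrete, here is a deliberately simplified, single-threaded sketch of the S3-FIFO idea: new keys enter a small FIFO, keys accessed while there are promoted to a main FIFO, one-hit wonders are evicted but remembered in a ghost set so an early re-access goes straight to main. This is an illustration of the algorithm, not TinyUFO's actual implementation; all names and the 10% small-queue split are assumptions taken from the S3-FIFO paper's description.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Simplified S3-FIFO bookkeeping over u64 keys (values omitted).
struct S3Fifo {
    small: VecDeque<u64>,   // new entries land here (~10% of capacity)
    main: VecDeque<u64>,    // entries that proved useful
    ghost: HashSet<u64>,    // keys of recently evicted one-hit wonders
    freq: HashMap<u64, u8>, // tiny per-entry frequency counter, capped at 3
    small_cap: usize,
    main_cap: usize,
}

impl S3Fifo {
    fn new(cap: usize) -> Self {
        let small_cap = (cap / 10).max(1);
        S3Fifo {
            small: VecDeque::new(),
            main: VecDeque::new(),
            ghost: HashSet::new(),
            freq: HashMap::new(),
            small_cap,
            main_cap: cap - small_cap,
        }
    }

    fn contains(&self, key: u64) -> bool {
        self.freq.contains_key(&key)
    }

    // On a cache hit, just bump the (saturating) frequency counter.
    fn on_hit(&mut self, key: u64) {
        if let Some(f) = self.freq.get_mut(&key) {
            *f = (*f + 1).min(3);
        }
    }

    // On a miss, insert into small, or straight into main if the key
    // is in the ghost set (meaning it was evicted too early).
    fn insert(&mut self, key: u64) {
        if self.ghost.remove(&key) {
            self.evict_main_if_full();
            self.main.push_back(key);
        } else {
            self.evict_small_if_full();
            self.small.push_back(key);
        }
        self.freq.insert(key, 0);
    }

    fn evict_small_if_full(&mut self) {
        while self.small.len() >= self.small_cap {
            let key = self.small.pop_front().unwrap();
            if self.freq[&key] > 0 {
                // Accessed while in small: promote to main.
                self.evict_main_if_full();
                self.main.push_back(key);
            } else {
                // One-hit wonder: evict, but remember it in ghost.
                self.freq.remove(&key);
                self.ghost.insert(key);
            }
        }
    }

    fn evict_main_if_full(&mut self) {
        while self.main.len() >= self.main_cap {
            let key = self.main.pop_front().unwrap();
            let f = self.freq[&key];
            if f > 0 {
                // Second chance: decrement and reinsert at the tail.
                self.freq.insert(key, f - 1);
                self.main.push_back(key);
            } else {
                self.freq.remove(&key);
            }
        }
    }
}
```

The ghost set is what lifts the hit ratio over plain LRU/FIFO: a key that comes back shortly after eviction skips the probation queue entirely.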

Speed

The table below shows the number of operations performed per second by each cache library. The tests were performed using 8 threads on an x64 Linux desktop.

| Setup            | TinyUFO           | LRU             | moka             |
| ---------------- | ----------------- | --------------- | ---------------- |
| Pure read        | 148.7 million ops | 7.0 million ops | 14.1 million ops |
| Mixed read/write | 80.9 million ops  | 6.8 million ops | 16.6 million ops |

Because of TinyUFO's lock-free design, it greatly outperforms the others.
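The general idea behind lock-free reads can be sketched with per-entry atomic metadata: recording a hit is a compare-and-swap loop on a small counter, so many reader threads never contend on a mutex. This is a hypothetical illustration of the technique, not TinyUFO's actual code; `EntryMeta` and `record_hit` are made-up names, and the counter saturates at 3 like S3-FIFO's tiny frequency counter.

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Arc;
use std::thread;

// Hypothetical per-entry metadata for lock-free hit recording.
struct EntryMeta {
    freq: AtomicU8, // saturates at 3
}

impl EntryMeta {
    fn record_hit(&self) {
        let mut cur = self.freq.load(Ordering::Relaxed);
        // CAS loop: retry on contention, stop once the counter is capped.
        while cur < 3 {
            match self.freq.compare_exchange_weak(
                cur,
                cur + 1,
                Ordering::Relaxed,
                Ordering::Relaxed,
            ) {
                Ok(_) => return,
                Err(now) => cur = now,
            }
        }
    }
}

fn concurrent_hits() -> u8 {
    let meta = Arc::new(EntryMeta { freq: AtomicU8::new(0) });
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let m = Arc::clone(&meta);
            thread::spawn(move || {
                for _ in 0..100 {
                    m.record_hit();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // 800 hits from 8 threads, no mutex, counter capped at 3.
    meta.freq.load(Ordering::Relaxed)
}
```

In contrast, an LRU must move the accessed entry to the head of its list on every read, which requires exclusive access to shared pointers.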

Memory overhead

TinyUFO provides a compact mode that trades some raw read speed for better memory efficiency. Whether the saving is worth the trade-off depends on the actual asset sizes and the workload. For small in-memory assets, the saved memory means more items can be cached: at 10,000 zero-sized entries in the table below, compact mode cuts TinyUFO's overhead from roughly 229 to 77 bytes per entry.

The table below shows the memory allocated (in bytes) by each cache library under the same workload when storing zero-sized assets.

| cache size | TinyUFO   | TinyUFO compact | LRU       | moka      |
| ---------- | --------- | --------------- | --------- | --------- |
| 100        | 39,409    | 19,000          | 9,408     | 354,376   |
| 1000       | 236,053   | 86,352          | 128,512   | 535,888   |
| 10000      | 2,290,635 | 766,024         | 1,075,648 | 2,489,088 |

Dependencies

~2–7.5MB
~40K SLoC