sharded-thread


"Application tail latency is critical for services to meet their latency expectations. We have shown that the thread-per-core approach can reduce application tail latency of a key-value store by up to 71% compared to baseline Memcached running on commodity hardware and Linux."[^1]

Introduction

This library is mainly made for io-uring and monoio. There is no dependency on the runtime, though, so you should be able to use it with other runtimes, and also without io-uring.

The purpose of this library is to provide a performant way to send data between threads when the threads follow a thread-per-core architecture. Even though the aim is performance, keep in mind that this is core-to-core (or thread-to-thread) passing, which is inherently slow.
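As a loose mental model only, the pattern looks like this: each thread owns a receiver, and its peers push messages to it through the matching sender. The sketch below uses plain std channels, not this crate's API, and the thread roles and values are made up for illustration; sharded-thread replaces this kind of channel pair with its mesh.

```rust
// Minimal sketch of the passing pattern, using only std::sync::mpsc.
// Illustrative only: this is NOT sharded-thread's API.
use std::sync::mpsc;
use std::thread;

fn main() {
    // One channel per "core" thread: peers send into it, the owner drains it.
    let (tx_a, rx_a) = mpsc::channel::<u64>();
    let (tx_b, rx_b) = mpsc::channel::<u64>();

    let a = thread::spawn(move || {
        // Thread A forwards some work to thread B...
        for i in 0..4 {
            tx_b.send(i).unwrap();
        }
        drop(tx_b); // close B's inbox so its loop can finish
        // ...then drains whatever its peers sent back to it.
        for msg in rx_a {
            println!("A received {msg}");
        }
    });

    let b = thread::spawn(move || {
        for msg in rx_b {
            // Thread B answers back to thread A.
            tx_a.send(msg * 10).unwrap();
        }
        // tx_a is dropped here, which ends A's receive loop.
    });

    a.join().unwrap();
    b.join().unwrap();
}
```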

Thanks to Glommio for the inspiration.

Example

Originally, the library was made for the case where multiple threads listen on the same TcpStream and, depending on what is sent through the stream, you may want to change which thread handles it.

You can check some examples in the tests.
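For orientation, here is a minimal std-only sketch of that scenario, again not using this crate's API: an acceptor thread peeks at the first byte of each connection and hands the TcpStream off to one of two worker threads over plain channels. The port, worker names, and routing rule are made up for illustration.

```rust
// Hypothetical std-only sketch: route an accepted TcpStream to a different
// handling thread based on what the client sends first.
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;

fn worker(name: &'static str, rx: mpsc::Receiver<TcpStream>) {
    // Each worker owns the connections that the acceptor routes to it.
    for mut stream in rx {
        let mut buf = [0u8; 512];
        let n = stream.read(&mut buf).unwrap_or(0);
        println!("{name} read {n} bytes from {:?}", stream.peer_addr());
    }
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    let (tx_a, rx_a) = mpsc::channel();
    let (tx_b, rx_b) = mpsc::channel();
    thread::spawn(move || worker("worker-a", rx_a));
    thread::spawn(move || worker("worker-b", rx_b));

    for stream in listener.incoming() {
        let stream = stream?;
        // Peek at the first byte without consuming it, then pick a handler.
        let mut first = [0u8; 1];
        let route_to_b = stream
            .peek(&mut first)
            .map(|n| n == 1 && first[0] == b'B')
            .unwrap_or(false);
        // TcpStream is Send, so handing it to another thread is just a send().
        if route_to_b {
            tx_b.send(stream).expect("worker-b stopped");
        } else {
            tx_a.send(stream).expect("worker-a stopped");
        }
    }
    Ok(())
}
```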

Benchmarks

These benchmarks are only indicative: they run in GitHub Actions. You should run your own on the targeted hardware.

They show that sharded-thread, with its mesh based on utility.sharded_queue, is ~6% faster than a mesh built on flume.

Benchmark chart: Flume vs Sharded-thread for sharded-thread (Bencher)
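If you want a quick point of comparison on your own machine before wiring in the mesh, a rough cross-thread baseline for flume (the comparison point in the chart above) can be taken with std::time::Instant. This is an illustrative micro-benchmark sketch, not the harness behind the published Bencher numbers, and the message count is arbitrary.

```rust
// Rough flume-only baseline (illustrative only). Requires the `flume` crate.
use std::thread;
use std::time::Instant;

fn main() {
    const N: u64 = 1_000_000;
    let (tx, rx) = flume::unbounded::<u64>();

    // Consumer thread drains the channel until the sender is dropped.
    let consumer = thread::spawn(move || {
        let mut sum = 0u64;
        while let Ok(v) = rx.recv() {
            sum += v;
        }
        sum
    });

    let start = Instant::now();
    for i in 0..N {
        tx.send(i).unwrap();
    }
    drop(tx); // close the channel so the consumer's loop ends
    let sum = consumer.join().unwrap();
    let elapsed = start.elapsed();

    println!("{N} messages in {elapsed:?} (checksum {sum})");
}
```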

References

[^1]: The Impact of Thread-Per-Core Architecture on Application Tail Latency

License

Licensed under either of the Apache License, Version 2.0 or the MIT license, at your option.
