#uring #reactor #thread-per-core


A set of utilities to allow one to write thread-per-core applications.

1 unstable release

0.2.0-alpha Nov 6, 2020

#390 in Asynchronous

Apache-2.0 OR MIT

24K SLoC

C 14K SLoC // 0.1% comments
Rust 10K SLoC // 0.1% comments
Shell 85 SLoC // 0.2% comments
RPM Specfile 52 SLoC


ATTENTION If you are confused between this and Scipio: this project was previously called Scipio but, very unfortunately, had to be renamed due to a trademark dispute. This disclaimer will self-destruct as soon as people have had enough time to switch over. We would also like to clarify that this doesn't change our opinion on Scipio Africanus being such a great and underrated general, who deserved a lot better from Rome than what he got.


Join our Zulip community!

If you are interested in Glommio, consider joining our Zulip community. If the link above doesn't work, this could be due to the ongoing migration from the name Scipio to Glommio, in which case please try again later or shoot us a message on GitHub.

Come tell us about exciting applications you are building, ask for help, or really just chat with friends.

What is Glommio?

Glommio (pronounced glo-mee-jow or |glomjəʊ|) is a Cooperative Thread-per-Core crate for Rust & Linux based on io_uring. Like other Rust asynchronous crates, it allows one to write asynchronous code that takes advantage of Rust's async/await, but unlike its counterparts it doesn't use helper threads anywhere.

Using Glommio is not hard if you are familiar with Rust async. All you have to do is:

    use glommio::prelude::*;

    let handle = LocalExecutorBuilder::new()
        .spawn(|| async move {
            // your code here
        })
        .unwrap();
    handle.join().unwrap();
For more details, check out our docs page.
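To make the thread-per-core model concrete without depending on Glommio's API, here is a minimal std-only sketch: one OS thread per "core", each exclusively owning its own state, with results combined only at the end. (Glommio additionally pins executors to CPUs and runs async tasks on them; this sketch illustrates only the ownership model, and all names in it are illustrative.)

```rust
use std::thread;

fn main() {
    // One thread per "core"; each owns its data, so no locks are needed.
    let handles: Vec<_> = (0..4u64)
        .map(|core_id| {
            thread::spawn(move || {
                // Purely thread-local work: no shared mutable state.
                let local_sum: u64 = (0..1000u64).map(|x| x + core_id).sum();
                local_sum
            })
        })
        .collect();

    // Results cross thread boundaries only once, at join time.
    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("total = {}", total); // prints "total = 2004000"
}
```

Because each thread touches only its own data, the hot path needs neither atomics nor mutexes, which is the core performance argument for thread-per-core designs.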


Glommio is still considered an alpha release. The main reasons are:

  • The existing API is still evolving
  • There are still some uses of unsafe that can be avoided
  • There are features that are critical for a good thread-per-core system that are not implemented yet. The top two are:
    • communication channels between executors, so we can pass Send data.
    • a per-shard memory allocator.
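Until dedicated executor-to-executor channels exist, one common stopgap is a plain standard-library channel between the threads hosting each executor. This is a hypothetical pattern assumed here for illustration, not a Glommio API:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // A channel carrying Send data between two "executor" threads.
    let (tx, rx) = mpsc::channel::<u64>();

    let producer = thread::spawn(move || {
        for i in 0..5u64 {
            tx.send(i).unwrap(); // hand off Send data to the other side
        }
        // tx is dropped here, which closes the channel.
    });

    let consumer = thread::spawn(move || rx.iter().sum::<u64>());

    producer.join().unwrap();
    let sum = consumer.join().unwrap();
    println!("sum = {}", sum); // prints "sum = 10"
}
```

A blocking mpsc channel defeats the purpose inside an async executor (it would stall the reactor), which is exactly why first-class async channels between executors are on the critical-feature list above.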

Want to help bring us to production status sooner? PRs are welcome!


Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.


Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

