MIT license


norpc = not remote procedure call



Developing an async application is often a very difficult task, but building an async application as a set of microservices makes both design and implementation much easier.

gRPC is a great tool for microservices. You can use it for communication over a network, but it isn't a good choice when no networking is involved.

In that case, in-process microservices are the way to go. The services run on an async runtime and communicate with each other through in-memory async channels, which involve no serialization and are therefore much more efficient than gRPC. I believe in-process microservices are a revolution in designing local async applications.

However, defining microservices in Rust requires a lot of coding for each service, and most of it is boilerplate. It would be helpful if these tedious tasks were swept away by code generation.

tarpc is prior work in this area, but it is not the best framework for in-process microservices because it tries to support both in-process and networked microservices under the same abstraction. This isn't a good idea because both implementations become sub-optimal. In my opinion, networked microservices should use gRPC, and in-process microservices should use a framework dedicated to that specific purpose.

Also, tarpc doesn't use Tower's Service but defines a similar abstraction of its own, called Serve. This leads to reimplementing features like rate-limiting and timeouts, which could be realized by simply stacking Service decorators if tarpc depended on Tower. Since tarpc would need a huge rework to become Tower-based, there is a chance to implement my own framework from scratch. It can be much smaller and cleaner than tarpc because it only supports in-process microservices and can exploit the Tower ecosystem.


(Architecture diagram, screenshot)

  • Red: Code generated by norpc compiler
  • Cyan: Reusable components provided by norpc library
  • Yellow: Components from external libraries like Tokio and Tower

norpc utilizes the Tower ecosystem. The core of the Tower ecosystem is an abstraction called Service, which is essentially an async function from a Request to a Response. The ecosystem provides many decorators that add new behavior to an existing Service.

In the diagram, a client request comes in at the top-left of the stack and flows down to the bottom-right. The client and server are connected by an async channel driven by the Tokio runtime, so there is no serialization or copying overhead: the message just "moves".

Here is how code is generated for a simple service definition:

trait HelloWorld {
    fn hello(s: String) -> String;
}
For more detail, please read the Hello-World Example in the README.

Performance (Compared to tarpc)

The RPC overhead is 1.7x lower than tarpc's. With norpc, you can send more than 100k requests per second.

The benchmark program launches a no-op server and sends requests from the client. Criterion is used for the measurement.

noop request/1          time:   [8.9181 us 8.9571 us 9.0167 us]
noop request (tarpc)/1  time:   [15.476 us 15.514 us 15.554 us]


Akira Hayakawa (@akiradeveloper)

