#raft #distributed-systems #ha


A Rust implementation of the Raft consensus algorithm

8 releases (5 breaking)

0.5.0 Feb 20, 2019
0.4.1 Apr 23, 2019
0.4.0 Sep 21, 2018
0.3.1 Jul 12, 2018
0.0.0 Jun 17, 2015








Problem and Importance

When building a distributed system one principal goal is often to build in fault-tolerance. That is, if one particular node in a network goes down, or if there is a network partition, the entire cluster does not fall over. The cluster of nodes taking part in a distributed consensus protocol must come to agreement regarding values, and once that decision is reached, that choice is final.

Distributed Consensus Algorithms often take the form of a replicated state machine and log. Each state machine accepts inputs from its log, and represents the value(s) to be replicated, for example, a hash table. They allow a collection of machines to work as a coherent group that can survive the failures of some of its members.

Two well-known distributed consensus algorithms are Paxos and Raft. Paxos is used in systems like Google's Chubby, and Raft is used in systems like TiKV and etcd. Raft is generally seen as more understandable and simpler to implement than Paxos.


Raft replicates the state machine through logs. If you can ensure that all the machines have the same sequence of logs, then after applying all logs in order, every state machine will reach the same consistent state.
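As a toy illustration of that property (the types here are hypothetical, not part of this crate's API): applying the same log entries in the same order to two independent state machines leaves them in identical states.

```rust
use std::collections::HashMap;

// Hypothetical command type for a replicated key-value store.
#[derive(Clone, Debug)]
pub enum Command {
    Put(String, u64),
    Delete(String),
}

// A trivial state machine: a hash map driven by log entries.
#[derive(Default, Debug, PartialEq)]
pub struct KvStateMachine {
    pub data: HashMap<String, u64>,
}

impl KvStateMachine {
    fn apply(&mut self, cmd: &Command) {
        match cmd {
            Command::Put(k, v) => {
                self.data.insert(k.clone(), *v);
            }
            Command::Delete(k) => {
                self.data.remove(k);
            }
        }
    }
}

// Replay a log from scratch; any node doing this ends in the same state.
pub fn replay(log: &[Command]) -> KvStateMachine {
    let mut sm = KvStateMachine::default();
    for entry in log {
        sm.apply(entry);
    }
    sm
}

fn main() {
    // The same log, replicated to two nodes.
    let log = vec![
        Command::Put("x".into(), 1),
        Command::Put("y".into(), 2),
        Command::Delete("x".into()),
    ];

    // Identical entries in identical order yield identical states.
    let node_a = replay(&log);
    let node_b = replay(&log);
    assert_eq!(node_a, node_b);
    println!("consistent: {:?}", node_a.data);
}
```

This is the essence of the replicated state machine approach: consensus is only needed on the *order of log entries*; state convergence then follows from deterministic application.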

A complete Raft model contains 4 essential parts:

  1. Consensus Module, the core consensus algorithm module;

  2. Log, the place to keep the Raft logs;

  3. State Machine, the place to save the user data;

  4. Transport, the network layer for communication.

The design of the Raft crate

Note: This Raft implementation in Rust includes the core Consensus Module only, not the other parts. The core Consensus Module in the Raft crate is customizable, flexible, and resilient. You can directly use the Raft crate, but you will need to build your own Log, State Machine and Transport components.
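One possible shape for those user-supplied components is sketched below. These trait names and signatures are illustrative assumptions, not the crate's actual API (in the real crate, the storage side is driven through its own trait); the sketch only shows the kind of surface each component needs to expose.

```rust
// Illustrative only: these trait names and signatures are assumptions,
// not the raft crate's real API.

/// A log entry as the consensus module would hand it to us.
#[derive(Clone, Debug, PartialEq)]
pub struct Entry {
    pub index: u64,
    pub term: u64,
    pub data: Vec<u8>,
}

/// Log: durable storage for Raft entries.
pub trait Log {
    fn append(&mut self, entries: &[Entry]);
    /// Entries with index in the half-open range [low, high).
    fn entries(&self, low: u64, high: u64) -> Vec<Entry>;
    fn last_index(&self) -> u64;
}

/// State Machine: applies committed entries to user data.
pub trait StateMachine {
    fn apply(&mut self, entry: &Entry);
}

/// Transport: delivers serialized messages to peers.
pub trait Transport {
    fn send(&mut self, to: u64, msg: Vec<u8>);
}

/// An in-memory Log implementation, good enough for tests.
#[derive(Default)]
pub struct MemLog {
    entries: Vec<Entry>,
}

impl Log for MemLog {
    fn append(&mut self, entries: &[Entry]) {
        self.entries.extend_from_slice(entries);
    }
    fn entries(&self, low: u64, high: u64) -> Vec<Entry> {
        self.entries
            .iter()
            .filter(|e| e.index >= low && e.index < high)
            .cloned()
            .collect()
    }
    fn last_index(&self) -> u64 {
        self.entries.last().map_or(0, |e| e.index)
    }
}

fn main() {
    let mut log = MemLog::default();
    log.append(&[Entry { index: 1, term: 1, data: b"put x=1".to_vec() }]);
    assert_eq!(log.last_index(), 1);
}
```

Splitting the system along these lines is what makes the core module reusable: the consensus algorithm never touches disks or sockets directly, so you can swap in an in-memory log for tests and a real network transport in production.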

Developing the Raft crate

raft is intended to track the latest stable Rust toolchain, though you'll need to use nightly to simulate a full CI build with clippy.

Using rustup you can get started this way:

rustup component add clippy-preview
rustup component add rustfmt-preview

In order to have your PR merged, running the following must finish without error:

cargo test --all && \
cargo clippy --all -- -D clippy && \
cargo fmt --all -- --check

You may optionally want to install cargo-watch to allow for automated rebuilding while editing:

cargo watch -s "cargo check"

Modifying Protobufs

If the proto file eraftpb.proto is changed, run the following command to regenerate eraftpb.rs:

protoc proto/eraftpb.proto --rust_out=src

You can check Cargo.toml to find which version of protobuf-codegen is required.


We use Criterion for benchmarking.

Building an appropriate benchmarking suite is an ongoing effort. If you'd like to help out, please let us know!

You can run the benchmarks by installing gnuplot and then running:

cargo bench

You can check target/criterion/report/index.html for plots and charts relating to the benchmarks.

You can compare the performance between two branches:

git checkout master
cargo bench --bench benches -- --save-baseline master
git checkout other
cargo bench --bench benches -- --baseline master

This will report relative increases or decreases for each benchmark.


Thanks to etcd for providing the amazing Go implementation!

Projects using the Raft crate

  • TiKV, a distributed transactional key value database powered by Rust and Raft.

Links for Further Research

