3 unstable releases

| Version | Date |
|---|---|
| 0.1.1 | Feb 21, 2026 |
| 0.1.0 | Jan 24, 2026 |
| 0.0.1 | Jan 14, 2026 |
# Crema
A strongly consistent distributed cache built on Raft consensus and Moka local cache.
- Strong consistency — writes go through Raft consensus (linearizable)
- Multi-Raft sharding — horizontal scaling across independent Raft groups
- Gossip discovery — automatic peer discovery via memberlist (SWIM protocol)
- Pluggable storage — in-memory (default) or RocksDB for persistence
## Quick Start

```rust
use crema::{CacheConfig, DistributedCache};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = CacheConfig::new(1, "127.0.0.1:9000".parse()?)
        .with_max_capacity(100_000)
        .with_default_ttl(Duration::from_secs(3600));

    let cache = DistributedCache::new(config).await?;

    // Writes go through Raft consensus
    cache.put("user:123", "Alice").await?;

    // Fast local read (may be stale on followers)
    let val = cache.get(b"user:123").await;

    // Strongly consistent read (leader roundtrip)
    let val = cache.consistent_get(b"user:123").await?;

    cache.delete("user:123").await?;
    cache.shutdown().await;
    Ok(())
}
```
## Running Examples

```bash
# Single node
cargo run --example basic

# 3-node cluster (run each in a separate terminal)
RUST_LOG=info cargo run --example cluster -- 1
RUST_LOG=info cargo run --example cluster -- 2
RUST_LOG=info cargo run --example cluster -- 3

# Multi-Raft mode
RUST_LOG=info cargo run --example multiraft-cluster -- 1
```
## Cluster Setup
Add gossip-based discovery so nodes find each other automatically:
```rust
use crema::{CacheConfig, DistributedCache, MemberlistConfig, MemberlistDiscovery, PeerManagementConfig};

let memberlist_config = MemberlistConfig {
    enabled: true,
    bind_addr: Some("127.0.0.1:9100".parse()?),
    seed_addrs: vec!["127.0.0.1:9101".parse()?, "127.0.0.1:9102".parse()?],
    node_name: Some("node-1".into()),
    ..Default::default()
};

let peers = vec![(2, "127.0.0.1:9001".parse()?), (3, "127.0.0.1:9002".parse()?)];
let discovery = MemberlistDiscovery::new(1, "127.0.0.1:9000".parse()?, &memberlist_config, &peers);

let config = CacheConfig::new(1, "127.0.0.1:9000".parse()?)
    .with_seed_nodes(peers)
    .with_cluster_discovery(discovery);

let cache = DistributedCache::new(config).await?;
```
Writes to any node are automatically forwarded to the Raft leader.
## Multi-Raft Mode
For higher write throughput, shard the keyspace across multiple independent Raft groups:
```rust
use crema::MultiRaftCacheConfig;

let config = CacheConfig::new(1, "127.0.0.1:9000".parse()?)
    .with_cluster_discovery(discovery)
    .with_multiraft_config(MultiRaftCacheConfig {
        enabled: true,
        num_shards: 16, // 16 independent Raft groups
        shard_capacity: 100_000,
        auto_init_shards: true,
        leader_broadcast_debounce_ms: 200,
    });
```
Keys are routed via `xxhash64(key) % num_shards`. Each shard elects its own leader, distributing write load across the cluster.
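The routing scheme can be sketched in a few lines. This is an illustration, not Crema's internals: it uses the standard library's `DefaultHasher` (SipHash) in place of xxhash64, which would require an external crate, so the actual shard assignments differ from Crema's; the point is the hash-then-modulo shape and its determinism.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical helper: maps a key to one of `num_shards` Raft groups.
// Crema uses xxhash64 here; DefaultHasher stands in so this compiles
// with no extra dependencies.
fn shard_for_key(key: &[u8], num_shards: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % num_shards
}

fn main() {
    let num_shards = 16;
    let shard = shard_for_key(b"user:123", num_shards);

    // Routing is deterministic: the same key always lands on the same shard,
    // so every node agrees on which Raft group owns a key without coordination.
    assert_eq!(shard, shard_for_key(b"user:123", num_shards));
    assert!(shard < num_shards);
    println!("user:123 -> shard {shard}");
}
```

Because the mapping is a pure function of the key, any node can route a request locally; no lookup table has to be replicated.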
## Storage Backends

| Backend | Feature flag | Persistence | Recovery | Use case |
|---|---|---|---|---|
| Memory | (default) | No | Full replay | Dev, ephemeral caching |
| RocksDB | `rocksdb` | Yes | Instant | Production |
```rust
use crema::RaftStorageType;

let config = CacheConfig::new(1, "127.0.0.1:9000".parse()?)
    .with_raft_storage_type(RaftStorageType::RocksDb)
    .with_data_dir("./data/node1");
```
## Consistency Model

| Operation | Guarantee |
|---|---|
| `put()` / `delete()` | Linearizable (Raft consensus) |
| `get()` | Eventually consistent (local read) |
| `consistent_get()` | Linearizable (leader roundtrip) |
## Feature Flags

```toml
crema = "0.1.0"                                        # default (memory + memberlist)
crema = { version = "0.1.0", features = ["rocksdb"] }  # + persistent storage
crema = { version = "0.1.0", features = ["full"] }     # everything
```
## Development

```bash
cargo build --all-features       # Build
cargo nextest run --all-features # Test (720 tests)
cargo clippy -- -D warnings      # Lint
cargo fmt                        # Format
```
## License
Licensed under Apache 2.0 or MIT, at your option.