55 releases (14 breaking)

✓ Uses Rust 2018 edition

new 0.26.0 Aug 19, 2019
0.24.1 May 12, 2019
0.20.1 Mar 31, 2019
0.16.11 Dec 14, 2018
0.14.1 Sep 9, 2017

#2 in Database implementations


2,070 downloads per month
Used in 15 crates (13 directly)

MIT/Apache

445KB
10K SLoC

sled - it's all downhill from here!!!


An (alpha) modern embedded database. Doesn't your data deserve an (alpha) beautiful new home?

use sled::Db;

let tree = Db::open(path)?;

// set and get
tree.insert(k, v1)?;
assert_eq!(tree.get(&k)?, Some(v1));

// compare and swap
tree.cas(&k, Some(&v1), Some(v2))?;

// range queries
let mut iter = tree.range(k..);
assert_eq!(iter.next(), Some(Ok((k, v2))));
assert_eq!(iter.next(), None);

// deletion
tree.remove(&k)?;

// block until all operations are on-disk
tree.flush()?;

We also support reactive subscription semantics and merge operators!

features

  • API similar to a thread-safe BTreeMap<Vec<u8>, Vec<u8>>
  • fully atomic single-key operations, supports compare and swap
  • zero-copy reads
  • write batch support
  • subscription/watch semantics on key prefixes
  • multiple keyspace support
  • merge operators
  • forward and reverse iterators
  • a crash-safe monotonic ID generator capable of generating 75-125 million unique IDs per second
  • zstd compression (use the compression build feature)
  • cpu-scalable lock-free implementation
  • SSD-optimized log-structured storage
  • prefix encodes stored keys, reducing the storage cost of complex keys

architecture

lock-free tree on a lock-free pagecache on a lock-free log. the pagecache scatters partial page fragments across the log, rather than rewriting entire pages at a time as B+ trees for spinning disks historically have. on page reads, we concurrently scatter-gather reads across the log to materialize the page from its fragments. check out the architectural outlook for a more detailed overview of where we're at and where we see things going!

goals

  1. don't make the user think. the interface should be obvious.
  2. don't surprise users with performance traps.
  3. don't wake up operators. bring reliability techniques from academia into real-world practice.
  4. don't use so much electricity. our data structures should play to modern hardware's strengths.

plans

  • LSM tree-like write performance with traditional B+ tree-like read performance
  • MVCC, serializable transactions, and snapshots
  • forward-compatible binary format
  • concurrent snapshot delta generation and recovery
  • first-class programmatic access to replication stream
  • consensus protocol for PC/EC systems
  • pluggable conflict detection and resolution strategies for PA/EL systems
  • multiple collection types like tables, queues, Merkle trees, bloom filters, etc... unified under a single transactional and operational domain

fund feature development and get commercial support

Want to support the project, prioritize a specific feature, or get commercial help with using sled in your project? Ferrous Systems provides commercial support for sled, and can work with you to solve a wide variety of storage problems across the latency/throughput, consistency, and price/performance spectra. Get in touch!

Ferrous Systems

special thanks

Meili

Special thanks to Meili for providing engineering effort and other support to the sled project. They are building an event store backed by sled, and they offer a full-text search system that has been a valuable case study helping to focus the sled roadmap for the future.

Additional thanks to Arm, Works on Arm and Packet, who have generously donated a 96 core monster machine to assist with intensive concurrency testing of sled. Each second that sled does not crash while running your critical stateful workloads, you are encouraged to thank these wonderful organizations. Each time sled does crash and lose your data, blame Intel.

known issues, warnings

  • the on-disk format is going to change in non-forward-compatible ways before the 1.0.0 release! after that, we will always support forward migrations.
  • quite young, should be considered unstable for the time being
  • the C API is likely to change rapidly
  • the write path is not well optimized yet. the read path is essentially wait-free and zero-copy.

contribution welcome!

want to help advance the state of the art in open source embedded databases? check out CONTRIBUTING.md!

Dependencies

~2.1–3.5MB
~73K SLoC