Tonbo
Website | Rust Doc | Blog | Community
Introduction
Tonbo is an embedded, persistent database offering fast KV-like methods for conveniently writing and scanning type-safe structured data. Tonbo can be used to build data-intensive applications, including other types of databases.
Tonbo is implemented with a Log-Structured Merge Tree, constructed using Apache Arrow & Apache Parquet data blocks. Leveraging Arrow and Parquet, Tonbo supports:
- Pushdown of limit, predicate, and projection operators
- Zero-copy deserialization
- Various storage backends: OPFS, S3, etc. (to be supported in v0.2.0)
These features enhance the efficiency of queries on structured data.
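To make the LSM idea concrete, here is a deliberately minimal, std-only sketch of the general technique (not Tonbo's actual implementation, which stores runs as Parquet files and merges them via compaction): writes go to an in-memory sorted memtable, which is flushed to immutable sorted runs once full; reads consult the memtable first, then the runs from newest to oldest.

```rust
use std::collections::BTreeMap;

/// Toy LSM store: a mutable memtable plus immutable sorted runs.
/// This only illustrates the write/flush/read flow of an LSM tree.
struct ToyLsm {
    memtable: BTreeMap<String, String>,
    runs: Vec<Vec<(String, String)>>, // flushed sorted runs, newest last
    memtable_limit: usize,
}

impl ToyLsm {
    fn new(memtable_limit: usize) -> Self {
        Self { memtable: BTreeMap::new(), runs: Vec::new(), memtable_limit }
    }

    fn insert(&mut self, key: String, value: String) {
        self.memtable.insert(key, value);
        if self.memtable.len() >= self.memtable_limit {
            // flush: BTreeMap iterates in key order, so the run is sorted
            let run: Vec<_> = std::mem::take(&mut self.memtable).into_iter().collect();
            self.runs.push(run);
        }
    }

    fn get(&self, key: &str) -> Option<&str> {
        if let Some(v) = self.memtable.get(key) {
            return Some(v.as_str());
        }
        // the newest write wins, so scan runs in reverse flush order
        for run in self.runs.iter().rev() {
            if let Ok(i) = run.binary_search_by(|(k, _)| k.as_str().cmp(key)) {
                return Some(run[i].1.as_str());
            }
        }
        None
    }
}

fn main() {
    let mut lsm = ToyLsm::new(2);
    lsm.insert("a".into(), "1".into());
    lsm.insert("b".into(), "2".into()); // triggers a flush
    lsm.insert("a".into(), "3".into()); // newer value shadows the flushed one
    assert_eq!(lsm.get("a"), Some("3"));
    assert_eq!(lsm.get("b"), Some("2"));
    assert_eq!(lsm.get("c"), None);
}
```

Keeping the memtable sorted is what lets each flush produce an already-sorted run, so point reads in a run are a binary search rather than a scan.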
Tonbo is designed to integrate seamlessly with other Arrow analytical tools, such as DataFusion. For an example, refer to this preview (official support for DataFusion will be included in v0.2.0).
Note: Tonbo is currently unstable; API and file formats may change in upcoming minor versions. Please avoid using it in production and stay tuned for updates.
Example
```rust
use std::ops::Bound;

use futures_util::stream::StreamExt;
use tonbo::{executor::tokio::TokioExecutor, Projection, Record, DB};

/// Use the macro to define the schema of a column family, just like an ORM.
/// It provides a type-safe read & write API.
#[derive(Record, Debug)]
pub struct User {
    #[record(primary_key)]
    name: String,
    email: Option<String>,
    age: u8,
}

#[tokio::main]
async fn main() {
    // pluggable async runtime and I/O
    let db = DB::new("./db_path/users".into(), TokioExecutor::default())
        .await
        .unwrap();

    // insert with an owned value
    db.insert(User {
        name: "Alice".into(),
        email: Some("alice@gmail.com".into()),
        age: 22,
    })
    .await
    .unwrap();

    {
        // Tonbo supports transactions
        let txn = db.transaction().await;

        // get by primary key
        let name = "Alice".into();

        // get a zero-copy reference to the record, without any allocation
        let user = txn
            .get(
                &name,
                // Tonbo supports pushing down projection
                Projection::All,
            )
            .await
            .unwrap();
        assert!(user.is_some());
        assert_eq!(user.unwrap().get().age, Some(22));

        {
            let upper = "Blob".into();
            // range scan of users
            let mut scan = txn
                .scan((Bound::Included(&name), Bound::Excluded(&upper)))
                .await
                // Tonbo supports pushing down projection
                .projection(vec![1])
                .take()
                .await
                .unwrap();
            while let Some(entry) = scan.next().await.transpose().unwrap() {
                assert_eq!(
                    entry.value(),
                    Some(UserRef {
                        name: "Alice",
                        email: Some("alice@gmail.com"),
                        age: Some(22),
                    })
                );
            }
        }

        // commit the transaction
        txn.commit().await.unwrap();
    }
}
```
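The scan above takes a pair of `std::ops::Bound` values to describe a half-open key range. The same bound semantics can be seen independent of Tonbo with std's `BTreeMap::range`:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut users = BTreeMap::new();
    users.insert("Alice".to_string(), 22u8);
    users.insert("Blob".to_string(), 30u8);
    users.insert("Carol".to_string(), 41u8);

    let lower = "Alice".to_string();
    let upper = "Blob".to_string();

    // Included(&lower)..Excluded(&upper): keeps "Alice", drops "Blob" and "Carol"
    let names: Vec<&str> = users
        .range::<String, _>((Bound::Included(&lower), Bound::Excluded(&upper)))
        .map(|(name, _)| name.as_str())
        .collect();
    assert_eq!(names, vec!["Alice"]);
}
```

`Bound::Unbounded` on either side opens the range in that direction, which is how a full-table or prefix scan is expressed with the same API.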
Features
- Fully asynchronous API.
- Zero-copy rusty API ensuring safety with compile-time type and lifetime checks.
- Vendor-agnostic:
  - Various usage methods, async runtimes, and file systems:
    - Rust library.
    - Python library (via PyO3 & pydantic):
      - asyncio (via pyo3-asyncio).
    - JavaScript library:
      - WASM and OPFS.
    - Dynamic library with a C interface.
- Most lightweight implementation of Arrow / Parquet LSM trees:
  - Define the schema using just an Arrow schema and store data in Parquet files.
  - (Optimistic) transactions.
  - Leveled compaction strategy.
  - Push down filter, limit, and projection.
- Runtime schema definition (in the next release).
- SQL (via Apache DataFusion).
- Fusion storage across RAM, flash, SSD, and remote Object Storage Service (OSS) for each column family, balancing performance and cost efficiency per data block:
  - Remote storage (via Arrow object_store or Apache OpenDAL).
  - Distributed query and compaction.
- Blob storage (like BlobDB in RocksDB).
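"(Optimistic) transactions" means conflicts are detected at commit time rather than prevented with locks. A minimal, std-only illustration of the validate-at-commit idea (a generic sketch of optimistic concurrency control, not Tonbo's actual implementation): each key carries a version, a transaction records the versions it read, and commit aborts if any of them changed in the meantime.

```rust
use std::collections::HashMap;

/// Toy optimistic concurrency control over a versioned key-value map.
struct Store {
    data: HashMap<String, (u64, String)>, // key -> (version, value)
}

struct Txn {
    reads: Vec<(String, u64)>,     // key versions observed during the txn
    writes: Vec<(String, String)>, // writes buffered until commit
}

impl Store {
    fn new() -> Self {
        Self { data: HashMap::new() }
    }

    fn begin(&self) -> Txn {
        Txn { reads: Vec::new(), writes: Vec::new() }
    }

    fn read(&self, txn: &mut Txn, key: &str) -> Option<String> {
        let entry = self.data.get(key);
        // remember which version we saw (0 = key absent)
        txn.reads.push((key.to_string(), entry.map_or(0, |(v, _)| *v)));
        entry.map(|(_, value)| value.clone())
    }

    fn commit(&mut self, txn: Txn) -> Result<(), &'static str> {
        // validate: every version we read must still be current
        for (key, seen) in &txn.reads {
            let now = self.data.get(key).map_or(0, |(v, _)| *v);
            if now != *seen {
                return Err("write conflict, transaction aborted");
            }
        }
        // apply buffered writes, bumping versions
        for (key, value) in txn.writes {
            let version = self.data.get(&key).map_or(0, |(v, _)| *v) + 1;
            self.data.insert(key, (version, value));
        }
        Ok(())
    }
}

fn main() {
    let mut store = Store::new();

    // two transactions read the same key concurrently...
    let mut t1 = store.begin();
    let mut t2 = store.begin();
    store.read(&mut t1, "balance");
    store.read(&mut t2, "balance");
    t1.writes.push(("balance".into(), "100".into()));
    t2.writes.push(("balance".into(), "200".into()));

    // ...the first commit wins; the second detects the stale read and aborts
    assert!(store.commit(t1).is_ok());
    assert!(store.commit(t2).is_err());
}
```

The optimistic trade-off: no locks are held while the transaction runs, so readers never block, but under heavy write contention transactions may abort and need to be retried.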
Contributing to Tonbo
Follow the Contributing Guide to contribute. Feel free to ask questions or contact us on GitHub Discussions or in the issues.