rad: High-level Rust library for interfacing with RADOS
This library provides a type-safe, high-level Rust interface to RADOS, the Reliable Autonomic Distributed Object Store. It is built on the raw C bindings from ceph-rust.
Installation
Building and using this library requires a working installation of the Ceph librados development files. On systems with apt-get, they can be installed like so:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main"
sudo apt-get update
sudo apt-get install librados-dev
N.B. luminous is the current Ceph release. This library will not work correctly or as expected with older releases of Ceph/librados (Jewel and earlier); Kraken is fine.
For more information on installing Ceph packages, see the Ceph documentation.
Examples
Connecting to a cluster
The following shows how to connect to a RADOS cluster by providing a path to a ceph.conf file, a path to the client.admin keyring, and requesting to connect as the admin user. This API bears little resemblance to the bare-metal librados API, but it is easy to trace what's happening under the hood: ConnectionBuilder::with_user or ConnectionBuilder::new allocates a new rados_t, read_conf_file calls rados_conf_read_file, conf_set calls rados_conf_set, and connect calls rados_connect.
use rad::ConnectionBuilder;
let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;
The type returned from .connect() is a Cluster handle: a wrapper around a rados_t that guarantees rados_shutdown is called on the connection when the handle is dropped.
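For illustration, a minimal sketch of that drop behavior; the inner scope exists only to force the drop, and the configuration path is a placeholder:
use rad::ConnectionBuilder;
{
    let cluster = ConnectionBuilder::with_user("admin")?
        .read_conf_file("/etc/ceph.conf")?
        .connect()?;
    // Use `cluster` here: open pool contexts, read and write objects, etc.
} // `cluster` goes out of scope here, and the wrapper calls rados_shutdown on the underlying rados_t.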
Writing a file to a cluster with synchronous I/O
use std::fs::File;
use std::io::Read;
use rad::ConnectionBuilder;
let cluster = ConnectionBuilder::with_user("admin")?
.read_conf_file("/etc/ceph.conf")?
.conf_set("keyring", "/etc/ceph.client.admin.keyring")?
.connect()?;
// Read in bytes from some file to send to the cluster.
let mut file = File::open("/path/to/file")?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;
let pool = cluster.get_pool_context("rbd")?;
pool.write_full("object-name", &bytes)?;
// Our file is now in the cluster! We can check for its existence:
assert!(pool.exists("object-name")?);
// And we can also check that it contains the bytes we wrote to it.
let mut bytes_from_cluster = vec![0u8; bytes.len()];
let bytes_read = pool.read("object-name", &mut bytes_from_cluster, 0)?;
assert_eq!(bytes_read, bytes_from_cluster.len());
assert!(bytes_from_cluster == bytes);
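The third argument to pool.read is the byte offset to read from. As a small sketch (offset and length chosen arbitrarily here, and assuming the file written above was at least 48 bytes long), a partial read looks like this:
// Read 16 bytes of "object-name" starting at byte offset 32.
let mut chunk = vec![0u8; 16];
let bytes_read = pool.read("object-name", &mut chunk, 32)?;
assert_eq!(&chunk[..bytes_read], &bytes[32..32 + bytes_read]);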
Writing multiple objects to a cluster with asynchronous I/O and futures-rs
rad-rs also supports the librados AIO interface, using the futures crate. This example starts NUM_OBJECTS writes concurrently and then waits for them all to finish.
use futures::{stream, Future, Stream};
use rand::{Rng, SeedableRng, XorShiftRng};
use rad::ConnectionBuilder;
const NUM_OBJECTS: usize = 8;
let cluster = ConnectionBuilder::with_user("admin")?
.read_conf_file("/etc/ceph.conf")?
.conf_set("keyring", "/etc/ceph.client.admin.keyring")?
.connect()?;
let pool = cluster.get_pool_context("rbd")?;
stream::iter_ok((0..NUM_OBJECTS)
.map(|i| {
        let bytes: Vec<u8> = XorShiftRng::from_seed([i as u32 + 1, 2, 3, 4])
            .gen_iter::<u8>()
            .take(1 << 16)
            .collect();
let name = format!("object-{}", i);
pool.write_full_async(name, &bytes)
}))
.buffer_unordered(NUM_OBJECTS)
.collect()
.wait()?;
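As a small follow-up sketch, once the futures have all resolved, the writes can be checked with the synchronous calls shown earlier:
// Verify that every object written above now exists in the pool.
for i in 0..NUM_OBJECTS {
    let name = format!("object-{}", i);
    assert!(pool.exists(&name)?);
}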
Running tests
Integration tests against a demo cluster are provided. The test suite (which is admittedly a little bare at the moment) uses Docker and a container derived from the Ceph ceph/demo container to bring a small Ceph cluster online locally. A script is provided for launching the test suite:
./tests/run-all-tests.sh
Launching the test suite requires Docker to be installed.
License
This project is licensed under the Mozilla Public License, version 2.0.