#concurrency #lock-free

left-right

A concurrency primitive for high concurrency reads over a single-writer data structure

7 unstable releases (3 breaking)

0.11.2 Apr 7, 2021
0.11.0 Dec 17, 2020
0.10.0 Dec 11, 2020
0.9.2 Dec 6, 2020
0.1.0 Nov 22, 2020

#64 in Concurrency

1,243 downloads per month

MIT/Apache

67KB
780 lines


Left-right is a concurrency primitive for high concurrency reads over a single-writer data structure. The primitive keeps two copies of the backing data structure, one that is accessed by readers, and one that is accessed by the (single) writer. This enables all reads to proceed in parallel with minimal coordination, and shifts the coordination overhead to the writer. In the absence of writes, reads scale linearly with the number of cores.


lib.rs:

A concurrency primitive for high concurrency reads over a single-writer data structure.

The primitive keeps two copies of the backing data structure, one that is accessed by readers, and one that is accessed by the (single) writer. This enables all reads to proceed in parallel with minimal coordination, and shifts the coordination overhead to the writer. In the absence of writes, reads scale linearly with the number of cores.

When the writer wishes to expose new changes to the data structure (see [WriteHandle::publish]), it "flips" the two copies so that subsequent reads go to the old "write side", and future writes go to the old "read side". This process does cause two cache line invalidations for the readers, but does not stop them from making progress (i.e., reads are wait-free).

In order to keep both copies up to date, left-right keeps an operational log ("oplog") of all the modifications to the data structure, which it uses to bring the old read data up to date with the latest writes on a flip. Since there are two copies of the data, each oplog entry is applied twice: once to the write copy and again to the (stale) read copy.

Trade-offs

Few concurrency wins come for free, and this one is no exception. The drawbacks of this primitive are:

  • Increased memory use: since we keep two copies of the backing data structure, we are effectively doubling the memory use of the underlying data. With some clever de-duplication, this cost can be ameliorated to some degree, but it's something to be aware of. Furthermore, if writers only call publish infrequently despite adding many writes to the operational log, the operational log itself may grow quite large, which adds additional overhead.
  • Deterministic operations: as the entries in the operational log are applied twice, once to each copy of the data, it is essential that the operations are deterministic. If they are not, the two copies will no longer mirror one another, and will continue to diverge over time.
  • Single writer: left-right only supports a single writer. To have multiple writers, you need to ensure exclusive access to the [WriteHandle] through something like a Mutex.
  • Slow writes: Writes through left-right are slower than they would be directly against the backing data structure. This is both because they have to go through the operational log, and because they must each be applied twice.

How does it work?

Take a look at this YouTube video which goes through the basic concurrency algorithm, as well as the initial development of this library. Alternatively, there's a shorter (but also less complete) description in this talk.

At a glance, left-right is implemented using two regular Ts, an operational log, epoch counting, and some pointer magic. There is a single pointer through which all readers go. It points to a T that the readers access in order to read data. Each reader increments a local epoch counter when it accesses the pointer, and updates it again when it has finished the read. When a write occurs, the writer updates the other T (for which there are no readers), and also stores a copy of the change in a log. When [WriteHandle::publish] is called, the writer atomically swaps the reader pointer to point to the other T. It then waits for the epochs of all current readers to change, and then replays the operational log to bring the stale copy up to date.

The design resembles this left-right concurrency scheme from 2015, though I am not aware of any follow-up to that work.

How do I use it?

If you just want a data structure for fast reads, you likely want to use a crate that uses this crate, like evmap. If you want to develop such a crate yourself, here's what you do:

use left_right::{Absorb, ReadHandle, WriteHandle};

// First, define an operational log type.
// For most real-world use-cases, this will be an `enum`, but we'll keep it simple:
struct CounterAddOp(i32);

// Then, implement the unsafe `Absorb` trait for your data structure type,
// and provide the oplog type as the generic argument.
// You can read this as "`i32` can absorb changes of type `CounterAddOp`".
impl Absorb<CounterAddOp> for i32 {
    // See the documentation of `Absorb::absorb_first`.
    //
    // Essentially, this is where you define what applying
    // the oplog type to the datastructure does.
    fn absorb_first(&mut self, operation: &mut CounterAddOp, _: &Self) {
        *self += operation.0;
    }

    // See the documentation of `Absorb::absorb_second`.
    //
    // This may or may not be the same as `absorb_first`,
    // depending on whether or not you de-duplicate values
    // across the two copies of your data structure.
    fn absorb_second(&mut self, operation: CounterAddOp, _: &Self) {
        *self += operation.0;
    }

    // See the documentation of `Absorb::drop_first`.
    fn drop_first(self: Box<Self>) {}

    fn sync_with(&mut self, first: &Self) {
        *self = *first
    }
}

// Now, you can construct a new left-right over an instance of your data structure.
// This will give you a `WriteHandle` that accepts writes in the form of oplog entries,
// and a (cloneable) `ReadHandle` that gives you `&` access to the data structure.
let (write, read) = left_right::new::<i32, CounterAddOp>();

// You will likely want to embed these handles in your own types so that you can
// provide more ergonomic methods for performing operations on your type.
struct Counter(WriteHandle<i32, CounterAddOp>);
impl Counter {
    // The methods on your write handle type will likely all just add to the operational log.
    pub fn add(&mut self, i: i32) {
        self.0.append(CounterAddOp(i));
    }

    // You should also provide a method for exposing the results of any pending operations.
    //
    // Until this is called, any writes made since the last call to `publish` will not be
    // visible to readers. See `WriteHandle::publish` for more details. Make sure to call
    // this out in _your_ documentation as well, so that your users will be aware of this
    // "weird" behavior.
    pub fn publish(&mut self) {
        self.0.publish();
    }
}

// Similarly, for reads:
#[derive(Clone)]
struct CountReader(ReadHandle<i32>);
impl CountReader {
    pub fn get(&self) -> i32 {
        // The `ReadHandle` itself does not allow you to access the underlying data.
        // Instead, you must first "enter" the data structure. This is similar to
        // taking a `Mutex`, except that no lock is actually taken. When you enter,
        // you are given back a guard, which gives you shared access (through the
        // `Deref` trait) to the "read copy" of the data structure.
        //
        // Note that `enter` may yield `None`, which implies that the `WriteHandle`
        // was dropped, and took the backing data down with it.
        //
        // Note also that for as long as the guard lives, a writer that tries to
        // call `WriteHandle::publish` will be blocked from making progress.
        self.0.enter().map(|guard| *guard).unwrap_or(0)
    }
}

// These wrapper types are likely what you'll give out to your consumers.
let (mut w, r) = (Counter(write), CountReader(read));

// They can then use the type fairly ergonomically:
assert_eq!(r.get(), 0);
w.add(1);
// no call to publish, so read side remains the same:
assert_eq!(r.get(), 0);
w.publish();
assert_eq!(r.get(), 1);
drop(w);
// writer dropped data, so reads yield fallback value:
assert_eq!(r.get(), 0);

One additional noteworthy detail: much like with Mutex, RwLock, and RefCell from the standard library, the values you dereference out of a ReadGuard are tied to the lifetime of that ReadGuard. This can make it awkward to write ergonomic methods on the read handle that return references into the underlying data, and may tempt you to clone the data out or take a closure instead. Rather than do that, consider using [ReadGuard::map] and [ReadGuard::try_map], which (like RefCell's Ref::map) allow you to provide a guarded reference deeper into your data structure.

Dependencies

~130KB