
MutRingBuf


A simple lock-free SPSC FIFO ring buffer, with in-place mutability.

Should I use it?

If you are in search of a ring buffer to use in a production environment, take a look at one of these before returning here:

If you find any mistakes in this project, please open an issue; I'll be glad to take a look!

Performance

According to benchmarks, ringbuf should be a little faster than this crate when executing certain operations.

On the other hand, according to tests I've made myself using Instant, mutringbuf seems to be slightly faster.

I frankly don't know why, so my suggestion is to try both and decide, bearing in mind that, for typical producer-consumer use, ringbuf is certainly more stable and mature than this crate.
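As a rough illustration of what I mean by Instant-based tests, here is a minimal, self-contained timing helper; the helper name and the empty baseline closure are purely illustrative, and the push/pop round trip of either crate can be plugged into the closure to compare them on your machine.

use std::time::Instant;

// Illustrative helper: time `iters` executions of `op` and print ns per op.
fn time_op<F: FnMut()>(label: &str, iters: u32, mut op: F) {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    let nanos = start.elapsed().as_nanos() as f64;
    println!("{label}: {:.1} ns/op", nanos / f64::from(iters));
}

fn main() {
    // Replace the closure body with a push/pop round trip of either crate.
    time_op("baseline", 1_000_000, || {});
}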

What is the purpose of this crate?

I've written this crate to perform real-time computing over audio streams; you can find a (simple) meaningful example here. To run it, jump here.

Features

  • default: alloc
  • alloc: uses the alloc crate, enabling heap-allocated buffers
  • async: enables async/await support

Usage

A note about uninitialised items

This buffer can handle uninitialised items. They are produced either when the buffer is created with the new_zeroed methods, or when an initialised item is moved out of the buffer via ConsIter::pop or AsyncConsIter::pop.

As also stated on the ProdIter doc page, there are two ways to push an item into the buffer:

  • normal methods can be used only when the location in which we are pushing the item is initialised;
  • *_init methods must be used when that location is not initialised.

That's because normal methods implicitly drop the old value, which is a good thing if it is initialised, but a terrible one if it's not. To be more precise, dropping an uninitialised value results in UB, typically manifesting as a SIGSEGV.
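A hedged sketch of the difference (push and push_init are assumed method names following the *_init convention above; check the ProdIter docs for the exact signatures and return types, and see the next section for the constructors used here):

use mutringbuf::{LocalHeapRB, HeapSplit};

// Buffer built from an existing Vec: every slot starts out initialised,
// so the normal push methods may be used.
let buf = LocalHeapRB::from(vec![0u32; 8]);
let (mut prod, mut cons) = buf.split();
let _ = prod.push(1); // overwrites, and drops, the old initialised value

// Popping moves the value out, leaving that location uninitialised again.
let _ = cons.pop();

// Zeroed buffer: all locations are uninitialised, so the *_init variants
// must be used for the first write to each of them.
let zeroed: LocalHeapRB<u32> = unsafe { LocalHeapRB::new_zeroed(8) };
let (mut prod_z, _cons_z) = zeroed.split();
let _ = prod_z.push_init(1);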

Initialisation of buffer and iterators

First, a buffer has to be created.

Local buffers should be faster, due to the use of plain integers as indices, but obviously can't be used in a concurrent environment.

Stack-allocated buffers

use mutringbuf::{ConcurrentStackRB, LocalStackRB};
// buffers filled with default values
let concurrent_buf = ConcurrentStackRB::<usize, 10>::default();
let local_buf = LocalStackRB::<usize, 10>::default();
// buffers built from existing arrays
let concurrent_buf = ConcurrentStackRB::from([0; 10]);
let local_buf = LocalStackRB::from([0; 10]);
// buffers with uninitialised (zeroed) items
unsafe {
    let concurrent_buf = ConcurrentStackRB::<usize, 10>::new_zeroed();
    let local_buf = LocalStackRB::<usize, 10>::new_zeroed();
}

Heap-allocated buffer

use mutringbuf::{ConcurrentHeapRB, LocalHeapRB};
// buffers filled with default values
let concurrent_buf: ConcurrentHeapRB<usize> = ConcurrentHeapRB::default(10);
let local_buf: LocalHeapRB<usize> = LocalHeapRB::default(10);
// buffers built from existing vec
let concurrent_buf = ConcurrentHeapRB::from(vec![0; 10]);
let local_buf = LocalHeapRB::from(vec![0; 10]);
// buffers with uninitialised (zeroed) items
unsafe {
    let concurrent_buf: ConcurrentHeapRB<usize> = ConcurrentHeapRB::new_zeroed(10);
    let local_buf: LocalHeapRB<usize> = LocalHeapRB::new_zeroed(10);
}

Please note that the buffer reserves one location to synchronise the iterators.

Thus, a buffer of size SIZE can hold at most SIZE - 1 values!

Such a buffer can then be used in the following ways:

Sync immutable

The normal way to make use of a ring buffer: a producer inserts values that will eventually be taken by a consumer.

use mutringbuf::{LocalHeapRB, HeapSplit};
let buf = LocalHeapRB::from(vec![0; 10]);
let (mut prod, mut cons) = buf.split();
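A hedged sketch of a full push/pop round trip (push as an assumed ProdIter method name, and pop assumed to return an Option, as suggested above):

use mutringbuf::{LocalHeapRB, HeapSplit};

let buf = LocalHeapRB::from(vec![0usize; 10]);
let (mut prod, mut cons) = buf.split();

// One location is reserved for synchronisation, so at most 9 of the 10
// can hold pending values at any time.
for i in 0..9 {
    let _ = prod.push(i);
}

// The consumer moves values out in FIFO order.
while let Some(value) = cons.pop() {
    println!("{value}");
}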

Sync mutable

As in the immutable case, but a third iterator, work, stands between prod and cons.

This iterator mutates elements in place.

use mutringbuf::{LocalHeapRB, HeapSplit};
let buf = LocalHeapRB::from(vec![0; 10]);
let (mut prod, mut work, mut cons) = buf.split_mut();

Async immutable

use mutringbuf::LocalHeapRB;
let buf = LocalHeapRB::from(vec![0; 10]);
let (mut as_prod, mut as_cons) = buf.split_async();
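A hedged sketch of how the async pair might be driven; that push and pop are awaitable methods is an assumption drawn from the AsyncConsIter::pop mention above, so check the async iterators' docs and drive the future with the executor of your choice.

use mutringbuf::LocalHeapRB;

// Requires the `async` feature.
async fn round_trip() {
    let buf = LocalHeapRB::from(vec![0u32; 10]);
    let (mut as_prod, mut as_cons) = buf.split_async();

    as_prod.push(1).await;            // assumed: waits until a slot is free
    let _value = as_cons.pop().await; // assumed: waits until a value is ready
}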

Async mutable

use mutringbuf::LocalHeapRB;
let buf = LocalHeapRB::from(vec![0; 10]);
let (mut as_prod, mut as_work, mut as_cons) = buf.split_mut_async();

Iterators can also be wrapped in a Detached, or in an AsyncDetached, which indirectly pauses the consumer so that produced data can be explored back and forth.


Each iterator can then be passed to a thread to do its job. More information can be found on the respective doc pages:

Note that a buffer, no matter its type, lives as long as the last of its iterators.
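For instance, a hedged sketch of handing the two sync iterators to separate threads; it assumes that concurrent heap buffers split via the same HeapSplit trait and, as above, that push and an Option-returning pop exist.

use mutringbuf::{ConcurrentHeapRB, HeapSplit};
use std::thread;

// A concurrent buffer is required: local buffers cannot cross threads.
let buf: ConcurrentHeapRB<u32> = ConcurrentHeapRB::default(1024);
let (mut prod, mut cons) = buf.split();

let producer = thread::spawn(move || {
    for i in 0..100 {
        let _ = prod.push(i);
    }
});

let consumer = thread::spawn(move || {
    let mut received = 0;
    while received < 100 {
        if cons.pop().is_some() {
            received += 1;
        }
    }
});

producer.join().unwrap();
consumer.join().unwrap();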

Tests, benchmarks and examples

The Miri test can be found within the dedicated script.

The following commands must be run starting from the root of the crate.

Tests can be run with:

cargo test

Benchmarks can be run with:

RUSTFLAGS="--cfg bench" cargo +nightly bench

The CPAL example can be run with:

RUSTFLAGS="--cfg cpal" cargo run --example cpal

If you run into something like ALSA lib pcm_dsnoop.c:567:(snd_pcm_dsnoop_open) unable to open slave, please take a look here.

The async example can be run with:

cargo run --example simple_async --features async

Every other example can be run with:

cargo run --example `example_name`
