
Slice Ring Buffer


A ring buffer implementation that is optimized for working with slices. It serves much the same purpose as VecDeque, but adds the ability to index with negative values and to work with buffers allocated on the stack.

This crate has no consumer/producer logic, and is meant to be used as a raw data structure or a base for other data structures.

This crate is optimized for manipulating data in chunks with slices. If your algorithm instead indexes elements one at a time and only uses buffers whose size is a power of two, consider my crate bit_mask_ring_buf.

Installation

Add slice_ring_buf as a dependency in your Cargo.toml:

slice_ring_buf = "0.2"

Example

use slice_ring_buf::{SliceRB, SliceRbRef};

// Create a ring buffer with type u32. The data will be
// initialized with the default value (0 in this case).
let mut rb = SliceRB::<u32>::from_len(4);

// Memcpy data from a slice into the ring buffer at
// arbitrary `isize` indexes. Earlier data will not be
// copied if it would be overwritten by newer data,
// avoiding unnecessary memcpys. The correct placement
// of the newer data will still be preserved.
rb.write_latest(&[0, 2, 3, 4, 1], 0);
assert_eq!(rb[0], 1);
assert_eq!(rb[1], 2);
assert_eq!(rb[2], 3);
assert_eq!(rb[3], 4);

// Memcpy into slices at arbitrary `isize` indexes
// and length.
let mut read_buffer = [0u32; 7];
rb.read_into(&mut read_buffer, 2);
assert_eq!(read_buffer, [3, 4, 1, 2, 3, 4, 1]);

// Read/write by retrieving slices directly.
let (s1, s2) = rb.as_slices_len(1, 4);
assert_eq!(s1, &[2, 3, 4]);
assert_eq!(s2, &[1]);

// Read/write to buffer by indexing. Performance will be
// limited by the modulo (remainder) operation on an
// `isize` value.
rb[0] = 0;
rb[1] = 1;
rb[2] = 2;
rb[3] = 3;

// Wrap when reading/writing outside of bounds.
// Performance will be limited by the modulo (remainder)
// operation on an `isize` value.
assert_eq!(rb[-1], 3);
assert_eq!(rb[10], 2);

// Aligned/stack data may also be used.
let mut stack_data = [0u32, 1, 2, 3];
let mut rb_ref = SliceRbRef::new(&mut stack_data);
rb_ref[-4] = 5;
let (s1, s2) = rb_ref.as_slices_len(0, 3);
assert_eq!(s1, &[5, 1, 2]);
assert_eq!(s2, &[]);
