5 unstable releases

0.3.2 Oct 15, 2024
0.3.1 Oct 15, 2024
0.2.1 Apr 5, 2024
0.2.0 Mar 31, 2024
0.0.1 Mar 3, 2024

#120 in Concurrency

Download history: weekly downloads ranged from roughly 1,700 to 5,400 between late July and early November 2024

9,159 downloads per month

MIT license

23KB
421 lines

aarc

Quickstart

  • Arc / Weak: drop-in replacements for the standard library's Arc and Weak, but implemented with deferred reclamation semantics.
  • AtomicArc / AtomicWeak: variants of Arc and Weak with atomically updatable pointers, supporting standard atomic operations like load and compare_exchange.
  • Guard: a novel smart pointer that can be loaded from an AtomicArc or AtomicWeak, designed to reduce contention when multiple threads operate on the same atomic variable. It prevents deallocation but does not contribute to the reference counts; see the sketch just below this list. (This was renamed from Snapshot in an earlier version to reduce confusion.)
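
A minimal sketch of these types in action, modeled on the stack example below. It assumes AtomicArc::new takes the initial value directly and that Guard dereferences to the pointee; treat the exact signatures as illustrative rather than authoritative:

use aarc::{Arc, AsPtr, AtomicArc};

fn main() {
    // Assumption: AtomicArc::new wraps an initial value (strong count 1).
    let atomic = AtomicArc::new(53usize);

    // load returns an Option<Guard<T>>; the Guard keeps the value alive
    // without touching the reference counts, so hot readers stay cheap.
    let guard = atomic.load().unwrap();
    assert_eq!(*guard, 53);

    // Upgrade to a full Arc (+1 strong count) when the value must outlive
    // the guard, e.g. to store it in another data structure.
    let arc: Arc<usize> = Arc::from(&guard);
    assert_eq!(*arc, 53);

    // compare_exchange installs a new pointer only if the stored pointer
    // still equals the expected raw pointer.
    let new = Arc::new(54usize);
    assert!(atomic.compare_exchange(guard.as_ptr(), Some(&new)).is_ok());
}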

Motivation

Data structures built with Arc typically require locks for synchronization, as only the reference counts can be updated atomically, not the pointer itself or the contained data. While locks are often the right approach, lock-free data structures can have better theoretical and practical performance guarantees in highly contended settings.

Instead of protecting in-place updates with locks, an alternative approach is to perform copy-on-write updates by atomically installing pointers. To avoid use-after-free, mechanisms for safe memory reclamation (SMR) are typically employed (e.g. hazard pointers, epoch-based reclamation). aarc uses the blazingly fast algorithm provided by the fast-smr crate and builds on top of it, hiding the unsafety and providing convenient RAII semantics through reference-counted pointers.

Examples

Example 1: Treiber Stack

use std::ptr::null;
use aarc::{Arc, AsPtr, AtomicArc, Guard};

struct StackNode {
    val: usize,
    next: Option<Arc<Self>>,
}

struct Stack {
    top: AtomicArc<StackNode>,
}

impl Stack {
    fn push(&self, val: usize) {
        // Guard the current top so it cannot be freed while we read it.
        let mut top = self.top.load();
        loop {
            let top_ptr = top.as_ref().map_or(null(), AsPtr::as_ptr);
            // Copy-on-write: build a new node whose `next` is the observed top.
            let new_node = Arc::new(StackNode {
                val,
                next: top.as_ref().map(Arc::from),
            });
            // Install the new node only if the top is still the one we
            // observed; otherwise retry with the top returned by the failed CAS.
            match self.top.compare_exchange(top_ptr, Some(&new_node)) {
                Ok(()) => break,
                Err(before) => top = before,
            }
        }
    }
    fn pop(&self) -> Option<Guard<StackNode>> {
        let mut top = self.top.load();
        while let Some(top_node) = top.as_ref() {
            // Swing the top to the next node; on failure, retry with the
            // actual top returned by the failed CAS.
            match self
                .top
                .compare_exchange(top_node.as_ptr(), top_node.next.as_ref())
            {
                Ok(()) => return top,
                Err(actual_top) => top = actual_top,
            }
        }
        None
    }
}
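
A brief usage sketch for the stack above. It assumes AtomicArc implements Default as a null pointer and that Stack is consequently Sync, so it can be shared across scoped threads; this is illustrative, not part of the crate's documented API:

fn main() {
    // Assumption: AtomicArc::default() yields an empty (null) atomic pointer.
    let stack = Stack { top: AtomicArc::default() };

    // Push from several threads concurrently; no locks are involved.
    std::thread::scope(|s| {
        for t in 0..4usize {
            let stack = &stack;
            s.spawn(move || {
                for i in 0..1000 {
                    stack.push(t * 1000 + i);
                }
            });
        }
    });

    // Drain the stack; pop returns a Guard, which derefs to the node.
    let mut popped = 0;
    while let Some(node) = stack.pop() {
        let _val = node.val;
        popped += 1;
    }
    assert_eq!(popped, 4000);
}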

Roadmap

  • relax atomic orderings from SeqCst to Acq/Rel
  • add tagged pointers
  • add more tests and stabilize APIs

Resources

  1. Anderson, Daniel, et al. "Concurrent Deferred Reference Counting with Constant-Time Overhead."
  2. Anderson, Daniel, et al. "Turning Manual Concurrent Memory Reclamation into Automatic Reference Counting."

Dependencies

~22KB