4 releases (2 breaking)

0.2.0 Apr 17, 2024
0.1.0 Apr 17, 2024
0.0.2 Apr 14, 2024
0.0.1 Apr 14, 2024

#125 in Concurrency

Download history: from ~380 downloads/week (2024-04-12) to ~65,474 downloads/week (2024-06-14).

263,596 downloads per month
Used in 252 crates (2 directly)

Apache-2.0

105KB
2K SLoC

Scalable Delayed Dealloc


A scalable lock-free delayed memory reclaimer that deallocates virtual addresses only after ensuring that no potential readers remain.

Its delayed deallocation algorithm is a variant of epoch-based reclamation: retired memory chunks are stored in thread-local storage until certain criteria are met. The crossbeam_epoch crate offers very similar functionality; however, users may find this crate easier to use because the lifetime of a memory chunk is safely managed. For instance, sdd::AtomicOwned and sdd::Owned retire the contained instance when they are dropped, and sdd::AtomicShared and sdd::Shared retire the instance when the last strong reference is dropped.

Memory Overhead

Retired instances are stored in intrusive queues in thread-local storage; therefore, an additional 16 bytes for an Option<NonNull<dyn Collectible>> are allocated per retired instance.

Examples

This crate can be used without an unsafe block.

use sdd::{suspend, AtomicOwned, AtomicShared, Guard, Ptr, Shared, Tag};
use std::sync::atomic::Ordering::Relaxed;

// `atomic_shared` holds a strong reference to `17`.
let atomic_shared: AtomicShared<usize> = AtomicShared::new(17);

// `atomic_owned` owns `19`.
let atomic_owned: AtomicOwned<usize> = AtomicOwned::new(19);

// `guard` prevents the garbage collector from dropping reachable instances.
let guard = Guard::new();

// `ptr` cannot outlive `guard`.
let mut ptr: Ptr<usize> = atomic_shared.load(Relaxed, &guard);
assert_eq!(*ptr.as_ref().unwrap(), 17);

// `atomic_shared` can be tagged.
atomic_shared.update_tag_if(Tag::First, |p| p.tag() == Tag::None, Relaxed, Relaxed);

// `ptr` is not tagged, so CAS fails.
assert!(atomic_shared.compare_exchange(
    ptr,
    (Some(Shared::new(18)), Tag::First),
    Relaxed,
    Relaxed,
    &guard).is_err());

// `ptr` can be tagged.
ptr.set_tag(Tag::First);

// The ownership of the contained instance is transferred to the return value of CAS.
let prev: Shared<usize> = atomic_shared.compare_exchange(
    ptr,
    (Some(Shared::new(18)), Tag::Second),
    Relaxed,
    Relaxed,
    &guard).unwrap().0.unwrap();
assert_eq!(*prev, 17);

// `17` will be garbage-collected later.
drop(prev);

// `sdd::AtomicShared` can be converted into `sdd::Shared`.
let shared: Shared<usize> = atomic_shared.into_shared(Relaxed).unwrap();
assert_eq!(*shared, 18);

// `18` and `19` will be garbage-collected later.
drop(shared);
drop(atomic_owned);

// `17` is still valid as `guard` keeps the garbage collector from dropping it.
assert_eq!(*ptr.as_ref().unwrap(), 17);

// Execution of a closure can be deferred until all the current readers are gone.
guard.defer_execute(|| println!("deferred"));
drop(guard);

// If the thread is expected to lie dormant for a while, call `suspend()` to allow
// others to reclaim the memory.
suspend();

Performance

  • The average time taken to enter and exit a protected region: 2.0 nanoseconds on Apple M2.

Changelog

No runtime deps