#mutex #spin-lock #local-node #no-std #order

no-std clhlock

An implementation of the Craig and, independently, Magnusson, Landin, and Hagersten queue lock for mutual exclusion, referred to as the CLH lock

1 unstable release

0.1.0 Aug 28, 2024

#364 in Concurrency


138 downloads per month

MIT/Apache

98KB
1.5K SLoC

A simple and correct implementation of the CLH lock


The CLH lock is a list-based queuing lock that avoids network contention by having threads spin on locally accessible memory locations (a minimal sketch of the protocol follows the list below). The main properties of this mechanism are:

  • guarantees FIFO ordering of lock acquisitions;
  • spins on locally accessible flag variables only;
  • requires a small constant amount of space per lock;
  • works equally well (requiring only O(1) network transactions per lock acquisition) on machines with and without coherent caches; and
  • avoids the "handshake" runtime overhead between the lock holder and its successor during lock release.
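To make the mechanism concrete, here is a minimal, self-contained CLH sketch (illustrative only; it is not this crate's implementation or public API). Each thread appends a heap-allocated node to an implicit queue and spins on its predecessor's flag, a location no other waiter reads:

use std::hint;
use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};

struct Node {
    locked: AtomicBool,
}

struct ClhLock {
    // Tail of the implicit waiter queue.
    tail: AtomicPtr<Node>,
}

impl ClhLock {
    fn new() -> Self {
        // Seed the queue with a "released" node so the first locker
        // has a predecessor flag to observe.
        let dummy = Box::into_raw(Box::new(Node { locked: AtomicBool::new(false) }));
        Self { tail: AtomicPtr::new(dummy) }
    }

    // Enqueues a fresh node, spins on the predecessor's flag, and
    // returns our node so `unlock` can later release it.
    fn lock(&self) -> *mut Node {
        let node = Box::into_raw(Box::new(Node { locked: AtomicBool::new(true) }));
        // Atomically append ourselves; the old tail is our predecessor.
        let pred = self.tail.swap(node, Ordering::AcqRel);
        unsafe {
            // Local spinning: only this thread reads `pred.locked`.
            while (*pred).locked.load(Ordering::Acquire) {
                hint::spin_loop();
            }
            // The predecessor has released and never touches its node
            // again, so its allocation is now ours to reclaim.
            drop(Box::from_raw(pred));
        }
        node
    }

    fn unlock(&self, node: *mut Node) {
        // Hand the lock to the successor spinning on this flag. Our
        // node's memory is reclaimed by that successor (the final tail
        // node is leaked in this simplified sketch).
        unsafe { (*node).locked.store(false, Ordering::Release) };
    }
}

Note how the release is a single store with no handshake, and how node ownership migrates from each thread to its successor, which is exactly why this crate allocates nodes on the heap (see "Waiting queue node allocations" below).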

This algorithm was introduced independently by Craig and by Magnusson, Landin, and Hagersten in separate papers.

Spinlock use cases

It is worth noting that spinlocks are usually not what you want. The majority of use cases are well covered by OS-based mutexes such as std::sync::Mutex and parking_lot::Mutex. These implementations will notify the system that the waiting thread should be parked, freeing the processor to work on something else.

Spinlocks are only efficient in the few circumstances where the overhead of context switching or process rescheduling is greater than busy waiting for very short periods. Spinlocks can be useful inside operating-system kernels, on embedded systems, or even as a complement to other locking designs.

Install

Run the following Cargo command in your project directory:

cargo add clhlock

Or add an entry under the [dependencies] section in your Cargo.toml:

# Cargo.toml

[dependencies]
# Available features: `yield` and `thread_local`.
clhlock = { version = "0.1", features = ["yield", "thread_local"] }

Documentation

This project's documentation is hosted at docs.rs. Alternatively, you can build it locally with the following command:

RUSTDOCFLAGS="--cfg docsrs" cargo +nightly doc --all-features --open

Waiting queue node allocations

Queue nodes are allocated on the heap, and their ownership is transparently moved from the lock-holding thread to its successor. Allocating the nodes on the stack is not allowed, since the CLH lock protocol does not guarantee that a predecessor thread will still be live by the time a successor accesses its associated locking node. Locking operations require exclusive access to local node handles that own the heap allocations. Therefore, this crate requires linking against Rust's core alloc library.

Locking with a raw CLH spinlock

This implementation serves lock acquisitions in FIFO order. Raw locking APIs require exclusive access to a locally accessible handle to a heap-allocated queue node. This node handle is represented by the raw::MutexNode type. This implementation is no_std compatible. See the raw module for more information.

use std::sync::Arc;
use std::thread;

// `spins::Mutex` simply spins during contention.
use clhlock::raw::{spins::Mutex, MutexNode};

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
        // A queue node handle must be mutably accessible.
        let mut node = MutexNode::new();
        *c_mutex.lock(&mut node) = 10;
    })
    .join().expect("thread::spawn failed");

    // A queue node handle must be mutably accessible.
    let mut node = MutexNode::new();
    assert_eq!(*mutex.lock(&mut node), 10);
}

Thread local raw CLH spinlock nodes

Enables raw::Mutex locking APIs that operate over queue node handles stored in thread-local storage. These locking APIs require a static reference to a raw::LocalMutexNode key. Keys must be generated by the thread_local_node! macro. Thread-local nodes are not no_std compatible and can be enabled through the thread_local feature.

use std::sync::Arc;
use std::thread;

// `spins::Mutex` simply spins during contention.
use clhlock::raw::spins::Mutex;

// Requires `thread_local` feature.
clhlock::thread_local_node!(static NODE);

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);
    thread::spawn(move || {
        // Local node handles are provided by reference.
        // Critical section must be defined as a closure.
        c_mutex.lock_with_local(&NODE, |mut guard| *guard = 10);
    })
    .join().expect("thread::spawn failed");

    // Local node handles are provided by reference.
    // Critical section must be defined as a closure.
    assert_eq!(mutex.lock_with_local(&NODE, |guard| *guard), 10);
}

Features

This crate does not provide any default features. The features that can be enabled are:

yield

The yield feature requires linking to the standard library, so it is not suitable for no_std environments. With this feature enabled, instead of busy-waiting during lock acquisitions and releases, the implementation calls std::thread::yield_now, which cooperatively gives up a timeslice to the OS scheduler. This may cause a context switch, so you may not want to enable this feature if your intention is to actually do optimistic spinning. The default implementation calls core::hint::spin_loop, which simply busy-waits.
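As a rough sketch, the two waiting strategies differ only in what runs inside the wait loop (simplified for illustration; not this crate's actual internals):

use std::sync::atomic::{AtomicBool, Ordering};

// Default strategy: busy-wait with a spin-loop hint, staying on the CPU.
fn wait_spinning(flag: &AtomicBool) {
    while flag.load(Ordering::Acquire) {
        core::hint::spin_loop();
    }
}

// `yield` strategy: cooperatively give the timeslice back to the OS
// scheduler on every iteration (requires `std`).
fn wait_yielding(flag: &AtomicBool) {
    while flag.load(Ordering::Acquire) {
        std::thread::yield_now();
    }
}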

thread_local

The thread_local feature enables raw::Mutex locking APIs that operate over queue node handles stored in thread-local storage. These locking APIs require a static reference to raw::LocalMutexNode keys. Keys must be generated by the thread_local_node! macro. This feature is not no_std compatible.

Minimum Supported Rust Version (MSRV)

This crate is guaranteed to compile on a Minimum Supported Rust Version (MSRV) of 1.70.0 and above. The MSRV will not be changed without a minor version bump.
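If you want Cargo to enforce this MSRV in your own project, you can declare it via the rust-version field (a hypothetical downstream manifest, shown for illustration):

# Cargo.toml
[package]
name = "my-crate"        # hypothetical downstream crate
version = "0.1.0"
edition = "2021"
rust-version = "1.70"    # cargo refuses to build with older toolchains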

License

Licensed under either of the Apache License, Version 2.0 or the MIT license, at your option.

Contributing

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Code review

It is recommended to always use cargo-crev to verify the trustworthiness of each of your dependencies, including this one.

No runtime deps