#cache #lock-free #padding #atomic #shared-data #data-access

no-std cache-padded

Prevent false sharing by padding and aligning to the length of a cache line

5 stable releases

1.3.0 May 12, 2023
1.2.0 Dec 19, 2021
1.1.1 Jul 7, 2020
1.1.0 May 31, 2020
1.0.0 May 25, 2020

#571 in Concurrency

163,990 downloads per month
Used in 325 crates (15 directly)

Apache-2.0 OR MIT

11KB
71 lines

cache-padded (deprecated)

This crate is now deprecated in favor of crossbeam_utils::CachePadded.

Prevent false sharing by padding and aligning to the length of a cache line.

In concurrent programming, it is sometimes desirable to ensure that commonly accessed shared data is not all placed into the same cache line. Updating an atomic value invalidates the whole cache line it belongs to, which makes the next access to that cache line slower for other CPU cores. Use CachePadded to ensure that updating one piece of data doesn't invalidate other cached data.

Size and alignment

Cache lines are assumed to be N bytes long, depending on the architecture:

  • On x86-64, aarch64, and powerpc64, N = 128.
  • On arm, mips, mips64, and riscv64, N = 32.
  • On s390x, N = 256.
  • On all others, N = 64.

Note that N is just a reasonable guess and is not guaranteed to match the actual cache line length of the machine the program is running on.
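The per-target selection above could be expressed with cfg attributes. The following is an illustrative sketch mirroring the list, not the crate's actual source:

```rust
// Pick a cache-line length N per target architecture (sketch only,
// mirroring the table above; not the crate's actual source).
#[cfg(any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64"))]
const N: usize = 128;
#[cfg(any(target_arch = "arm", target_arch = "mips", target_arch = "mips64", target_arch = "riscv64"))]
const N: usize = 32;
#[cfg(target_arch = "s390x")]
const N: usize = 256;
#[cfg(not(any(
    target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64",
    target_arch = "arm", target_arch = "mips", target_arch = "mips64",
    target_arch = "riscv64", target_arch = "s390x",
)))]
const N: usize = 64;
```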

The size of CachePadded<T> is the smallest multiple of N bytes large enough to accommodate a value of type T.

The alignment of CachePadded<T> is the maximum of N bytes and the alignment of T.

Examples

Alignment and padding:

use cache_padded::CachePadded;

let array = [CachePadded::new(1i8), CachePadded::new(2i8)];
let addr1 = &*array[0] as *const i8 as usize;
let addr2 = &*array[1] as *const i8 as usize;

// N is at least 32 on every architecture listed above,
// so these assertions hold on all targets.
assert!(addr2 - addr1 >= 32);
assert_eq!(addr1 % 32, 0);
assert_eq!(addr2 % 32, 0);

When building a concurrent queue with a head and a tail index, it is wise to place indices in different cache lines so that concurrent threads pushing and popping elements don't invalidate each other's cache lines:

use cache_padded::CachePadded;
use std::sync::atomic::AtomicUsize;

struct Queue<T> {
    // Each index lives in its own cache line, so a producer bumping `tail`
    // doesn't invalidate a consumer's cached copy of `head`, and vice versa.
    head: CachePadded<AtomicUsize>,
    tail: CachePadded<AtomicUsize>,
    buffer: *mut T,
}

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

No runtime deps