8 releases

0.3.5 Jul 14, 2022
0.3.4 Jun 27, 2022
0.2.1 Sep 21, 2019
0.2.0 Jun 16, 2019

#120 in Memory management

7,629 downloads per month
Used in 18 crates (2 directly)

MIT license

17KB
237 lines

The thread's stack is a high-performance way to manage memory, but it cannot be used for large or dynamically sized allocations. What if the thread had a second stack suitable for that purpose?

We've had one, yes. What about second stack? ...Pippin, probably.

second-stack is an allocator for short-lived, potentially large values and slices. It is often faster than Vec for the same reason that using the thread's stack is usually faster than using the heap.

The internal representation is a thread-local stack that grows as necessary. Once the capacity stops needing to grow, the same allocation is re-used by many consumers, so the crate becomes more efficient as more libraries adopt it.
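
For example, here is a minimal sketch of that re-use (it assumes nothing beyond the buffer call documented below): the first call grows the thread-local stack, and a later call on the same thread, possibly from a different library, runs against the already-grown allocation.

use second_stack::buffer;

// The first call grows the thread-local stack to hold 10,000 u32s.
buffer(0..10_000u32, |items| {
    assert_eq!(items.len(), 10_000);
});

// A later call on the same thread (possibly from another library)
// re-uses that already-grown allocation instead of allocating again.
buffer((0..10_000u32).map(|n| n * 2), |items| {
    assert_eq!(items[10], 20);
});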

second-stack was originally developed for writing dynamic buffers in WebGL (e.g., procedurally generate some triangles/colors, write them to a buffer, and hand them off to the graphics card many times per frame without incurring the cost of many heap allocations). Over time, though, I found that needing a short-lived slice was common, and using second-stack all over the place allowed for the best memory re-use and performance.

There are two ways to use this API. The preferred way is to use the methods that delegate to a shared thread local (like buffer and uninit_slice). Using these methods ensures that multiple libraries efficiently re-use allocations without passing around context or exposing this implementation detail in their public APIs. Alternatively, you can use Stack::new() to create your own managed stack if you need more control.

Example using buffer:

use second_stack::buffer;

// Buffer fully consumes an iterator,
// writes each item to a slice on the second stack,
// and gives you mutable access to the slice.
// This API supports Drop.
buffer(0..1000, |items| {
    assert_eq!(items.len(), 1000);
    assert_eq!(items[19], 19);
});

Example using uninit_slice:

use second_stack::uninit_slice;

uninit_slice(100, |slice| {
    // Write to the 100-element slice here.
});
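
The body above is just a placeholder. A filled-in version might look like the following sketch, which assumes (as the name suggests) that the closure receives a slice of MaybeUninit elements that must be written before they are read; the element type is then inferred from the writes.

use second_stack::uninit_slice;

uninit_slice(100, |slice| {
    // Initialize every element before any of them are read.
    for (i, item) in slice.iter_mut().enumerate() {
        item.write(i as u32);
    }
    // All 100 elements now hold initialized u32 values.
});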

Example using Stack:

use second_stack::Stack;

let stack = Stack::new();
stack.buffer(std::iter::repeat(5).take(100), |slice| {
    // Same as second_stack::buffer, but uses an
    // owned stack instead of the thread-local one.
    // Not recommended unless you have a specific reason,
    // because this limits passive sharing.
});

Example placing a huge value:

use second_stack::uninit;

struct Huge {
    bytes: [u8; 4194304],
}

uninit::<Huge>(|huge| {
    // Do something with this very large value that would
    // cause a stack overflow if we had used the thread's stack.
});
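
Initializing the value is the subtle part: constructing a Huge on the thread's own stack and then moving it into place could itself overflow. Here is a hedged sketch of in-place initialization, reusing the Huge struct above and assuming that huge is a &mut MaybeUninit<Huge>:

use second_stack::uninit;

uninit::<Huge>(|huge| {
    unsafe {
        // Zero the whole allocation in place; an all-zero byte array
        // is a valid Huge, so assume_init_mut is sound afterwards.
        huge.as_mut_ptr().write_bytes(0, 1);
        let huge = huge.assume_init_mut();
        huge.bytes[0] = 42;
        assert_eq!(huge.bytes[0], 42);
    }
});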

FAQ

How is this different from a bump allocator like bumpalo?

Bump allocators like bumpalo are arena allocators designed for phase-oriented allocations, whereas second-stack is a stack.

This allows second-stack to:

  • Support Drop (see the sketch after this list)
  • Dynamically up-size the allocation as needed, rather than requiring the size to be known up front
  • Free and re-use memory earlier
  • Conveniently support "large local variables" without requiring the program to be architected around the arena model
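
To make the first two points concrete, here is a minimal sketch using only the buffer call shown earlier: the iterator's length is not declared up front, and the String items, which implement Drop, are dropped when the closure returns.

use second_stack::buffer;

// The slice grows to whatever length the iterator produces, and each
// String is dropped as soon as the closure returns, releasing that
// part of the stack for the next caller.
buffer((0..3).map(|i| format!("item-{i}")), |names| {
    assert_eq!(names.len(), 3);
    assert_eq!(names[1], "item-1");
});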

No runtime deps