#memory-safety #object-pool #heap-memory #memory #lock-free #memory-pool

syncpool

A thread-friendly library for recycling heavy, heap-based objects to reduce allocation and memory pressure

7 releases

0.1.6 Feb 2, 2021
0.1.5 Sep 14, 2019
0.1.2 Aug 29, 2019

#190 in Memory management


193 downloads per month
Used in 3 crates

MIT license

71KB
1K SLoC

SyncPool

SyncPool on crates.io SyncPool on docs.rs

A simple, thread-safe object pool for reusing heavy objects placed on the heap.

What this crate is for

Inspired by Go's sync.Pool module, this crate provides a multithreading-friendly library to recycle and reuse heavy, heap-based objects, so that overall allocation and memory pressure are reduced and performance is boosted.
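
A minimal sketch of that get/put cycle, using only the `SyncPool::with_size`, `get` and `put` calls that also appear in the example further below; the `Buffer` struct and the pool size are arbitrary choices for illustration:

use syncpool::prelude::*;

// illustrative element type; the only requirement is a `Default` implementation
#[derive(Default)]
struct Buffer {
    data: Vec<u8>,
}

fn main() {
    // pre-initialize a pool of boxed `Buffer` elements
    let mut pool: SyncPool<Buffer> = SyncPool::with_size(64);

    // check an element out; no fresh allocation is needed while the pool has spares
    let mut buf: Box<Buffer> = pool.get();
    buf.data.extend_from_slice(b"hello");

    // sanitize the written data (see the section below) and hand the element back
    buf.data.clear();
    pool.put(buf);
}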

What this crate is NOT for

There is no silver bullet when designing a multithreaded project; the programmer has to judge use cases on a case-by-case basis.

As shown by a few (hundred) benchmarks we have run, it is quite clear that the library can reliably beat the allocator when the following conditions hold:

  • The object is large enough that it makes sense for it to live on the heap.
  • The clean-up required to sanitize the written data before putting the element back into the pool is simple and fast to run (see the sketch after this list).
  • The estimate of the maximum number of elements simultaneously checked out during the program run is good enough, i.e. the parallelism is deterministic; otherwise, when the pool is starving (i.e. it doesn't have enough elements left to hand out), performance will suffer because new elements have to be created (and heap-allocated).
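
To make the second point concrete, here is a sketch of what a cheap clean-up can look like, again assuming only the `get`/`put` calls used in the example below; `Message`, its fields and the `recycle` helper are hypothetical:

use syncpool::prelude::*;

#[derive(Default)]
struct Message {
    id: usize,
    payload: Vec<u8>,
}

/// Hypothetical helper: sanitize a used element and return it to the pool.
fn recycle(pool: &mut SyncPool<Message>, mut msg: Box<Message>) {
    // cheap sanitation: reset the written fields, but keep the capacity that
    // `payload` already owns so the next user benefits from the prior allocation
    msg.id = 0;
    msg.payload.clear();

    // an expensive clean-up (deep copies, re-hashing, zeroing megabytes of data)
    // would eat into the time saved by skipping the allocation
    pool.put(msg);
}

fn main() {
    let mut pool: SyncPool<Message> = SyncPool::with_size(8);

    let mut msg = pool.get();
    msg.id = 7;
    msg.payload.extend_from_slice(b"data");

    recycle(&mut pool, msg);
}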

If your struct is nimble enough to live on the stack without blowing it, or if it isn't in the middle of your hottest code path, you most likely won't need this library to labor for you; allocators nowadays work quite marvelously, especially on the stack.

Example

extern crate syncpool;

use std::collections::HashMap;
use std::sync::mpsc::{self, SyncSender};
use std::thread;
use std::time::Duration;
use syncpool::prelude::*;

/// For simplicity and illustration, here we use the simplest but unsafe way to
/// define the shared pool: make it a `static mut`. Safer implementations exist,
/// but may require some detour depending on the business logic and project structure.
static mut POOL: Option<SyncPool<ComplexStruct>> = None;

/// Number of elements to produce in this test
const COUNT: usize = 128;

/// The complex data struct for illustration. Usually such a heavy element could also
/// contain other nested structs, and should almost always be placed on the heap. If
/// your struct is *not* heavy enough to justify living on the heap, you most likely won't
/// need this library -- the allocator will work better on the stack. The only requirement
/// for the struct is that it has to implement the `Default` trait, which can be derived
/// in most cases, or implemented easily.
#[derive(Default, Debug)]
struct ComplexStruct {
    id: usize,
    name: String,
    body: Vec<String>,
    flags: Vec<usize>,
    children: Vec<usize>,
    index: HashMap<usize, String>,
    rev_index: HashMap<String, usize>,
}

fn main() {
    // Must initialize the pool first
    unsafe { POOL.replace(SyncPool::with_size(COUNT / 2)); }

    // use a channel to create a concurrent pipeline.
    let (tx, rx) = mpsc::sync_channel(64);

    // data producer loop
    thread::spawn(move || {
        let producer = unsafe { POOL.as_mut().unwrap() };

        for i in 0..COUNT {
            // take a pre-initialized element from the pool; this call won't allocate,
            // since the boxed element has already been placed on the heap and we are
            // only reusing it here.
            let mut content: Box<ComplexStruct> = producer.get();
            content.id = i;
        
            // simulate the busy/heavy calculations we'd be doing during this period,
            // usually involving the `content` object.
            thread::sleep(Duration::from_nanos(32));
        
            // done with the work, send the result out.
            tx.send(content).unwrap_or_default();
        }
    });

    // data consumer logic
    let handler = thread::spawn(move || {
        let consumer = unsafe { POOL.as_mut().unwrap() };
    
        // `content` has the type `Box<ComplexStruct>`
        for content in rx {
            println!("Receiving struct with id: {}", content.id);
            consumer.put(content);
        }
    });

    // wait for the receiver to finish and print the result.
    handler.join().unwrap_or_default();

    println!("All done...");

}
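
As the comment above the `static mut` pool notes, safer setups exist at the cost of some detour. Below is a minimal sketch of one such detour, under the assumption that serializing pool access through a std `Mutex` is acceptable (which gives up part of the lock-free benefit) and that Rust 1.70+ is available for `std::sync::OnceLock`; `ComplexStruct` is trimmed to two fields for brevity:

use std::sync::{Mutex, OnceLock};
use syncpool::prelude::*;

#[derive(Default, Debug)]
struct ComplexStruct {
    id: usize,
    name: String,
}

// the pool lives behind a Mutex inside a OnceLock instead of a `static mut`
static POOL: OnceLock<Mutex<SyncPool<ComplexStruct>>> = OnceLock::new();

fn pool() -> &'static Mutex<SyncPool<ComplexStruct>> {
    POOL.get_or_init(|| Mutex::new(SyncPool::with_size(64)))
}

fn main() {
    // lock only long enough to take an element out
    let mut item = pool().lock().unwrap().get();
    item.id = 1;

    // ... heavy work with `item` happens here, outside the critical section ...

    // lock again only for the brief moment needed to return the element
    pool().lock().unwrap().put(item);
}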

You can find more complex (i.e. practical) use cases in the examples folder.

No runtime deps