#pipeline #iterator #thread #parallel #pool #parallel-processing #worker-thread

pipeliner

Provides a nice interface for parallel programming with iterators

3 releases (stable)

Uses old Rust 2015

1.0.1 Feb 12, 2020
1.0.0 Nov 18, 2018
0.1.1 Dec 5, 2016

#972 in Concurrency


92 downloads per month
Used in 2 crates

Apache-2.0

26KB
462 lines

Pipeliner

Pipeliner is a Rust library to help you create multithreaded work pipelines. You can choose how many threads each step of the pipeline uses to tune performance for I/O- or CPU-bound workloads.

The API docs contain code examples.

Comparison with Rayon

Rayon is another Rust library for parallel computation. If you're doing purely CPU-bound work, you may want to try that out to see if it offers better performance.

Pipeliner, IMHO, offers a simpler interface, which makes it easier to combine I/O-bound and CPU-bound steps in the same data pipeline. Usually in those cases your bottleneck is I/O, not the speed of your parallel execution library, so having a nice API may be preferable.
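
For a rough comparison, here is a minimal Rayon sketch of a purely CPU-bound parallel map (this assumes rayon is added as a dependency; the closure and numbers are only illustrative):

use rayon::prelude::*;

// Rayon spreads a CPU-bound map across its global thread pool.
let squares: Vec<u64> = (0..100u64).into_par_iter().map(|x| x * x).collect();
println!("computed {} squares", squares.len());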


lib.rs:

This crate provides a high-level framework for parallel processing.

Main features:

  • Accepts input lazily from an Iterator.
  • Performs work in a user-specified number of threads.
  • Returns all output via an Iterator.
  • Optionally buffers output.
  • Panics in your worker threads are propagated out of the output Iterator. (No silent loss of data; see the panic-propagation sketch after the examples below.)
  • No unsafe code.
// Import the Pipeline trait to give all Iterators and IntoIterators the 
// .with_threads() method:
use pipeliner::Pipeline;

for result in (0..100).with_threads(10).map(|x| x + 1) {
    println!("result: {}", result);
}

And, since the output is also an iterator, you can easily create a pipeline with a different number of threads for each step of the work:

use pipeliner::Pipeline;
// You might want a high number of threads for high-latency work:
let results = (0..100).with_threads(50).map(|x| {
    x + 1 // Let's pretend this is high latency. (ex: network access)
})
// But you might want lower thread usage for cpu-bound work:
.with_threads(4).out_buffer(100).map(|x| {
    x * x // ow my CPUs :p
}); 
for result in results {
    println!("result: {}", result);
}
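
Worker panics are not lost: they surface when you consume the output iterator. Below is a minimal sketch of that behavior; the panic message is made up, and catch_unwind is used here only to demonstrate that the panic reaches the consuming thread:

use pipeliner::Pipeline;

let caught = std::panic::catch_unwind(|| {
    // One worker panics; that panic is propagated out of the output
    // Iterator, so no data is silently dropped.
    for result in (0..10u32).with_threads(2).map(|x| {
        if x == 5 { panic!("worker failed on {}", x); }
        x * 2
    }) {
        println!("result: {}", result);
    }
});
assert!(caught.is_err()); // the worker's panic propagated to this thread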

Dependencies

~395KB