8 releases (breaking)

0.7.0 Nov 8, 2023
0.6.0 May 30, 2023
0.5.0 Aug 15, 2022
0.4.2 Jun 7, 2022
0.1.0 Apr 8, 2022

#87 in Concurrency


206 downloads per month
Used in 4 crates (3 directly)

MIT license

37KB
760 lines

Choir


Choir is a task orchestration framework. It helps you organize all of your CPU work in terms of tasks.

Example:

let mut choir = choir::Choir::new();
// Spin up one worker thread.
let _worker = choir.add_worker("worker");
// A dummy task: no body, useful purely as a dependency anchor.
let task1 = choir.spawn("foo").init_dummy().run();
let mut task2 = choir.spawn("bar").init(|_| { println!("bar"); });
// "bar" will not execute until "foo" has finished.
task2.depend_on(&task1);
task2.run().join();

Selling Pitch

What makes Choir elegant? Generally, when we need to encode the semantics of "wait for dependencies", we think of some sort of counter (perhaps an atomic) holding the number of dependencies. When it reaches zero (or one), we schedule the task for execution. In Choir, the internal data for a task (i.e. the functor itself!) is placed in an Arc. Whenever we are able to extract it from the Arc (which means no other dependencies remain), we move it to the scheduling queue. I think the Rust type system shows its best here.

Note: it turns out Arc doesn't fully support such "linear" usage as required here: it's impossible to control where the last reference gets destructed (without putting logic in drop()). For this reason, we introduce our own Linearc for internal use.
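The core idea can be sketched with plain std Arc (a simplified model, not Choir's actual Linearc-based internals — the Task type, on_dependency_done, and the Vec queue here are all hypothetical):

```rust
use std::sync::Arc;

// Sketch: each dependent holds an Arc clone of the task payload. When a
// dependency finishes, it drops its clone; whoever finds try_unwrap
// succeeding owns the functor and moves it to the scheduling queue.
struct Task {
    name: &'static str,
    body: Box<dyn FnOnce() + Send>,
}

fn on_dependency_done(handle: Arc<Task>, queue: &mut Vec<Task>) {
    // If this was the last outstanding reference, we own the task now.
    if let Ok(task) = Arc::try_unwrap(handle) {
        queue.push(task);
    }
    // Otherwise some other dependency still holds a reference; do nothing.
}

fn main() {
    let task = Arc::new(Task {
        name: "example",
        body: Box::new(|| println!("run!")),
    });
    let dep_a = Arc::clone(&task);
    let dep_b = task; // a second "dependency"

    let mut queue = Vec::new();
    on_dependency_done(dep_a, &mut queue); // not last: nothing queued
    on_dependency_done(dep_b, &mut queue); // last reference: queued
    assert_eq!(queue.len(), 1);
    (queue.pop().unwrap().body)();
}
```

Note how no counter appears anywhere: the Arc's strong count plays that role, and ownership of the functor transfers by move.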

You can also add or remove workers at any time to balance the load of a system that may be running other applications concurrently.

API

The general workflow is about creating tasks and setting up dependencies between them. There are a few different kinds of tasks:

  • single-run tasks, initialized with init() and represented as FnOnce()
  • dummy tasks, initialized with init_dummy(), and having no function body
  • multi-run tasks, executed for every index in a range, represented as Fn(SubIndex), and initialized with init_multi()
  • iteration tasks, executed for every item produced by an iterator, represented as Fn(T), and initialized with init_iter()
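The four kinds can be modeled with plain std trait objects (a stdlib-only sketch — TaskKind, execute, and SubIndex here are illustrative stand-ins, not Choir's real types):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Sketch: the four task kinds from the list above, modeled as the closure
// shapes they are represented by.
type SubIndex = u32;

enum TaskKind {
    Single(Box<dyn FnOnce()>),                               // init()
    Dummy,                                                   // init_dummy()
    Multi(std::ops::Range<SubIndex>, Box<dyn Fn(SubIndex)>), // init_multi()
    Iter(Box<dyn Iterator<Item = u32>>, Box<dyn Fn(u32)>),   // init_iter()
}

fn execute(task: TaskKind) {
    match task {
        TaskKind::Single(f) => f(),
        TaskKind::Dummy => {} // nothing to run; useful only as a dependency
        TaskKind::Multi(range, f) => range.for_each(|i| f(i)),
        TaskKind::Iter(iter, f) => iter.for_each(|x| f(x)),
    }
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    let l = log.clone();
    execute(TaskKind::Single(Box::new(move || l.borrow_mut().push(100))));
    execute(TaskKind::Dummy);
    let l = log.clone();
    execute(TaskKind::Multi(0..3, Box::new(move |i| l.borrow_mut().push(i))));
    let l = log.clone();
    execute(TaskKind::Iter(
        Box::new([7, 8].into_iter()),
        Box::new(move |x| l.borrow_mut().push(x)),
    ));
    assert_eq!(*log.borrow(), vec![100, 0, 1, 2, 7, 8]);
}
```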

Calling run() is done automatically on IdleTask::drop() if you don't call it explicitly. The IdleTask object also allows adding dependencies before the task is scheduled. A running task can itself be used as a dependency for others.

Note that all tasks are pre-empted at the Fn() execution boundary. Thus, for example, a long-running multi-task will be pre-empted by any incoming single-run tasks.
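The pre-emption model can be illustrated with a single-threaded sketch (a simplified model — the Work enum and the explicit queue are hypothetical, not Choir's scheduler): because the worker pops exactly one unit of work per loop iteration, a newly arrived single-run task can slot in between two sub-indices of a multi-task.

```rust
use std::collections::VecDeque;

// Sketch: each sub-index of a multi-task is a separate queue entry, so the
// worker naturally yields between them.
enum Work {
    Single(Box<dyn FnMut(&mut Vec<String>)>),
    MultiIndex(u32),
}

fn main() {
    let mut log = Vec::new();
    let mut queue: VecDeque<Work> = (0..3).map(Work::MultiIndex).collect();
    // A single-run task arrives while the multi-task is in flight:
    queue.insert(1, Work::Single(Box::new(|log| log.push("single".into()))));

    while let Some(work) = queue.pop_front() {
        match work {
            Work::Single(mut f) => f(&mut log),
            Work::MultiIndex(i) => log.push(format!("multi[{}]", i)),
        }
    }
    // The single task ran between multi[0] and multi[1].
    assert_eq!(log, ["multi[0]", "single", "multi[1]", "multi[2]"]);
}
```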

Users

Blade heavily relies on Choir for parallelizing asset loading. See the blade-asset talk at the Rust Gamedev meetup for details.

TODO:

Overhead

Machine: MBP 2016, 3.3 GHz Dual-Core Intel Core i7

  • functions spawn()+init() (optimized): 237ns
  • "steal" task: 61ns
  • empty "execute": 37ns
  • dummy "unblock": 78ns

Executing 100k empty tasks:

  • individually: 28ms (~280ns per task)
  • as a multi-task: 6ms (~60ns per task)

Profiling workflow example

With Tracy: add this line to the start of the benchmark:

let _ = profiling::tracy_client::Client::start();

Then run in command prompt:

cargo bench --features "profiling/profile-with-tracy"

Dependencies

~440KB