8 releases (breaking)

0.7.0 Nov 8, 2023
0.6.0 May 30, 2023
0.5.0 Aug 15, 2022
0.4.2 Jun 7, 2022
0.1.0 Apr 8, 2022

#139 in Concurrency

123 downloads per month
Used in 4 crates (3 directly)

MIT license

37KB
760 lines

Choir

Choir is a task orchestration framework. It helps you organize all of your CPU work in terms of tasks.

Example:

// Create the scheduler and attach one worker thread.
let mut choir = choir::Choir::new();
let _worker = choir.add_worker("worker");
// "foo" has no body of its own; it only serves as a dependency.
let task1 = choir.spawn("foo").init_dummy().run();
// "bar" runs its closure once "foo" has finished.
let mut task2 = choir.spawn("bar").init(|_| { println!("bar"); });
task2.depend_on(&task1);
task2.run().join();

Selling Pitch

What makes Choir elegant? Generally, when we need to encode the semantics of "wait for dependencies", we think of some sort of counter, maybe an atomic, holding the number of outstanding dependencies. When it reaches zero (or one), we schedule the task for execution. In Choir, the internal data for a task (i.e. the functor itself!) is placed in an Arc. Whenever we are able to extract it from the Arc (which means there are no remaining dependencies), we move it to a scheduling queue. I think the Rust type system shows its best here.

Note: it turns out Arc doesn't fully support the "linear" usage required here: it's impossible to control where the last reference gets destructed (without putting logic in drop()). For this reason, we introduce our own Linearc to be used internally.
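
A minimal sketch of the idea, using plain std::sync::Arc and hypothetical names (Choir's real internals use Linearc instead):

use std::sync::Arc;

// Hypothetical illustration, not Choir's actual code: each unfinished
// dependency of a task holds one clone of the Arc around its functor.
type Functor = Box<dyn FnOnce() + Send>;

// Called whenever one dependency completes and releases its clone.
fn dependency_finished(task: Arc<Functor>, ready_queue: &mut Vec<Functor>) {
    // try_unwrap succeeds only for the last remaining reference, i.e. once
    // every other dependency has already dropped its clone. The Arc's
    // reference count plays the role of the usual dependency counter.
    if let Ok(functor) = Arc::try_unwrap(task) {
        ready_queue.push(functor);
    }
}

This sketch also shows where plain Arc falls short: if two dependencies finish at nearly the same time, both try_unwrap calls can fail and the functor would be dropped without ever being scheduled, hence the custom Linearc.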

You can also add or remove workers at any time to balance the load on a system that may be running other applications at the same time.
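
For instance, a hedged sketch of dynamic balancing, assuming (as the _worker binding above suggests) that dropping the handle returned by add_worker() removes that worker:

let mut choir = choir::Choir::new();
let mut workers = Vec::new();

// Scale up while the machine has spare cores.
for i in 0..3 {
    workers.push(choir.add_worker(&format!("worker-{}", i)));
}

// ... spawn and run tasks ...

// Scale down again: dropping a handle is assumed to retire its worker.
workers.pop();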

API

The general workflow is about creating tasks and setting up dependencies between them. There are a few different kinds of tasks:

  • single-run tasks, initialized with init() and represented as FnOnce()
  • dummy tasks, initialized with init_dummy(), and having no function body
  • multi-run tasks, executed for every index in a range, represented as Fn(SubIndex), and initialized with init_multi()
  • iteration tasks, executed for every item produced by an iterator, represented as Fn(T), and initialized with init_iter()

Calling run() is done automatically on IdleTask::drop() if it isn't called explicitly. The IdleTask object also allows adding dependencies before the task is scheduled. A running task can itself be used as a dependency for other tasks.
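
For example, a small sketch reusing only the calls from the snippet above:

let mut choir = choir::Choir::new();
let _worker = choir.add_worker("worker");
let parent = choir.spawn("parent").init_dummy().run();
{
    // `child` is still an IdleTask here, so dependencies can be added.
    let mut child = choir.spawn("child").init(|_| println!("child"));
    child.depend_on(&parent);
    // No explicit run(): the task gets scheduled when `child` is dropped
    // at the end of this scope.
}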

Note that tasks can only be pre-empted at the Fn() execution boundary. Thus, for example, a long-running multi-run task will be pre-empted by any incoming single-run tasks between its sub-index invocations.

Users

Blade relies heavily on Choir for parallelizing asset loading. See the blade-asset talk at the Rust Gamedev meetup for details.

TODO:

Overhead

Machine: MBP 2016, 3.3 GHz Dual-Core Intel Core i7

  • functions spawn()+init() (optimized): 237ns
  • "steal" task: 61ns
  • empty "execute": 37ns
  • dummy "unblock": 78ns

Executing 100k empty tasks:

  • individually: 28ms
  • as a multi-task: 6ms

Profiling workflow example

With Tracy: Add this line to the start of the benchmark:

let _ = profiling::tracy_client::Client::start();

Then run in command prompt:

cargo bench --features "profiling/profile-with-tracy"

Dependencies

~440KB