cargo-needy and #[needy::requirements(...)]

Usage

1. Add requirements_ids to your dev-dependencies.

$ cargo add --dev requirements_ids

[!TIP] If you want to trace non-test functions, you will need to add it as a normal dependency.
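
For reference, the manifest entry that cargo add creates looks roughly like this (a sketch; the version is whatever cargo add resolves for you):

[dev-dependencies]
requirements_ids = "0.1"

# or, if you also want to trace non-test functions:
# [dependencies]
# requirements_ids = "0.1"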

2. Annotate your test functions with the corresponding requirement ids

A requirement id can be any valid identifier (see the identifier syntax in the FLS).

use needy::requirements;

// test a single requirement
#[requirements(REQ_001)]
#[test]
fn it_works() {
    assert_eq!(1 + 1, 2);
}

// test multiple requirements
#[requirements(REQ_001, REQ_002)]
#[test]
fn it_works_also() {
    assert_eq!(1 + 1, 2);
}
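
Non-test functions can be annotated in the same way once the crate is a normal dependency (see the tip in step 1). A minimal sketch; the function and REQ_003 are purely illustrative:

use needy::requirements;

// trace a non-test function
#[requirements(REQ_003)]
pub fn checked_add(a: u32, b: u32) -> Option<u32> {
    a.checked_add(b)
}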

3. Use cargo-needy to export all requirements as JSON

[!WARNING] At the moment cargo-needy requires a Rust nightly from 2024-09-26 or later, because of some rust-analyzer features we rely on.

This restriction will be lifted as soon as that rust-analyzer version becomes stable.

Until then, set your Rust toolchain to nightly:

$ # set nightly globally
$ rustup default nightly
$
$ # set nightly for the current directory
$ rustup override set nightly
$
$ # set nightly for the command only
$ cargo +nightly needy
$
$ # then install the rust-analyzer component and cargo-needy, and run it
$ rustup component add rust-analyzer
$ cargo install cargo-needy
$ cargo needy <path>

This outputs the collected requirement ids as JSON.

For example, for cargo-needy/tests/test-crate it looks like this:

$ cargo needy cargo-needy/tests/test-crate/
Running `cargo check --manifest-path cargo-needy/tests/test-crate/Cargo.toml --all-targets`
  took 45.6ms
Running `rust-analyzer lsif cargo-needy/tests/test-crate/`
  took 11.5s
Analyzing lsif output
  took 89.6ms
{"file":"file:///home/urhengulas/Documents/github.com/ferrocene/cargo-needy/cargo-needy/tests/test-crate/src/main.rs","function_name":"test_main","module":"test_crate","requirement_id":"REQ_001","span":{"start":{"line":10,"character":22},"end":{"line":10,"character":29}},"version":"v1"}
{"file":"file:///home/urhengulas/Documents/github.com/ferrocene/cargo-needy/cargo-needy/tests/test-crate/tests/it_works.rs","function_name":"it_works","module":"it_works","requirement_id":"REQ_002","span":{"start":{"line":0,"character":35},"end":{"line":0,"character":42}},"version":"v1"}
{"file":"file:///home/urhengulas/Documents/github.com/ferrocene/cargo-needy/cargo-needy/tests/test-crate/tests/it_works.rs","function_name":"it_works","module":"it_works","requirement_id":"REQ_003","span":{"start":{"line":0,"character":44},"end":{"line":0,"character":51}},"version":"v1"}

Benchmark

To help optimize the project, there is a benchmark crate that heavily uses the needy::requirements macro.

To get end-to-end timings you can use bench.py, which runs cargo-needy for multiple iterations via hyperfine.

$ python bench.py
Benchmark 1: cargo +nightly run --release -q -- tests/bench-crate/ 1> /dev/null
  Time (mean ± σ):     8288.6 ms ± 327.0 ms    [User: 8056.2 ms, System: 497.2 ms]
  Range (min … max):   7509.9 ms … 8674.0 ms    10 runs

$ # To get timings of the analysis stage, pass the `--bench-analyze` flag.
$ python bench.py --bench-analyze

$ # To benchmark more iterations, pass the `--more-runs` flag.
$ python bench.py --more-runs

Please take care to use a similar setup when comparing timing runs, and be aware that the measurements are noisy.
