6 releases

0.1.5 Apr 17, 2024
0.1.4 Aug 21, 2023

Apache-2.0 OR MIT

77KB
1.5K SLoC

rtest - Resource-based test framework

There are many unit-test frameworks in Rust. This framework focuses on integration testing, i.e. on testing external software that is not necessarily written in Rust.

rtest works with stateful resources. It uses macros to build an executable binary that handles your test filters and produces readable output.

Imagine you are hosting a webshop and want to verify that it works using integration tests.

struct Orderinfo{item: String}
struct Order{id: String}

const SHOP: &str = "http://shop.example.com";

#[rtest]
async fn place_order(info: Orderinfo) -> Result<Order, Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let id = client.post(format!("{}/v1/order/{}", SHOP, info.item)).send().await?.text().await?;
    Ok(Order{id})
}

#[rtest]
async fn check_order(order: Order) -> Result<Order, Box<dyn std::error::Error>> {
    let res = reqwest::get(format!("{}/v1/order/{}", SHOP, order.id)).await?;
    assert_ne!(res.status(), 404);
    Ok(order)
}

#[rtest]
async fn cancel_order(order: Order) -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let res = client.delete(format!("{}/v1/order/{}", SHOP, order.id)).send().await?;
    assert_eq!(res.status(), 200);
    Ok(())
}

#[tokio::main]
async fn main() {
    let water = Orderinfo{
        item: "water".to_string()
    };
    let pizza = Orderinfo{
        item: "pizza".to_string()
    };
    let context = rtest::Context::default().with_resource(water).with_resource(pizza);
    rtest::run!(context)
}

The test framework knows that in order to run the check_order test it first needs an Order, and the only way to generate such an Order is through the place_order test. cancel_order consumes the Order, making it unusable afterwards.

Yes, you can trick the framework by removing all tests that generate an Order; it will notice this at runtime and fail. Multiple routes may be valid to cover all functions, and in case of an error the route that was taken is dumped. Suppose checking an order of water fails: the framework might then create another Order with pizza, since it cannot verify deletion otherwise, so tests may be executed multiple times.
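The resolution described above can be pictured as a small scheduler over resource types: a test becomes runnable once a resource of its input type is available, and running it consumes that resource and makes its output available. The sketch below is only an illustration of that idea, not rtest's actual implementation; the TestCase struct and schedule function are invented for this example.

```rust
use std::collections::HashSet;

// Invented for this sketch: a test declares which resource type it consumes
// and which it produces on success.
#[derive(Clone)]
struct TestCase {
    name: &'static str,
    consumes: Option<&'static str>, // resource type required as input
    produces: Option<&'static str>, // resource type made available on success
}

// Repeatedly run the first pending test whose input resource is available.
fn schedule(tests: &[TestCase], initial: &[&str]) -> Vec<&'static str> {
    let mut available: HashSet<String> = initial.iter().map(|s| s.to_string()).collect();
    let mut pending: Vec<TestCase> = tests.to_vec();
    let mut order = Vec::new();
    while let Some(i) = pending
        .iter()
        .position(|t| t.consumes.map_or(true, |c| available.contains(c)))
    {
        let t = pending.remove(i);
        if let Some(c) = t.consumes {
            available.remove(c); // the input resource is consumed
        }
        if let Some(p) = t.produces {
            available.insert(p.to_string());
        }
        order.push(t.name);
    }
    order // tests still pending here have unsatisfiable inputs
}

fn main() {
    // Mirror the webshop example: only place_order can produce an Order.
    let tests = [
        TestCase { name: "check_order", consumes: Some("Order"), produces: Some("Order") },
        TestCase { name: "cancel_order", consumes: Some("Order"), produces: None },
        TestCase { name: "place_order", consumes: Some("Orderinfo"), produces: Some("Order") },
    ];
    let order = schedule(&tests, &["Orderinfo"]);
    println!("{:?}", order); // → ["place_order", "check_order", "cancel_order"]
}
```

Even though check_order and cancel_order are listed first, the scheduler runs place_order first because it is the only producer of an Order, and cancel_order last because it destroys the Order that check_order still needs.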

rtest Results:
[✓]
  [✓] delete_file
[✓] create
  [✓] create_file
  [✓] setup_fileinfo
[x] read
  [✓] read_metadata
  [x] test_that_should_fail
      --- Run: 1/1 ---
      Error: test failure: No such file or directory (os error 2)
      Logs:
        2024-04-17T13:56:48.078417Z  INFO filesystem: Wubba Lubba dub-dub
  [x] test_that_should_panic - 177ms
      --- Run: 1/1 ---
      Panic: 'Yes Rico, Kaboom'
      at rtest/examples/filesystem/main.rs:88
      Stacktrace:
         0: rust_begin_unwind
                   at /rustc/098d4fd74c078b12bfc2e9438a2a04bc18b393bc/library/std/src/panicking.rs:647:5
         1: core::panicking::panic_fmt
                   at /rustc/098d4fd74c078b12bfc2e9438a2a04bc18b393bc/library/core/src/panicking.rs:72:14
         2: filesystem::test_that_should_panic
                   at rtest/examples/filesystem/main.rs:88:5
      Logs:
        2024-04-17T13:56:47.900938Z  INFO filesystem: Kaboom?
Total Tests: 6. Total Runs: 7 Fails: 2
Failed

Features:

  • Allow any Input/Output Resources (up to 5)
  • Custom Errors
  • Custom Context (though rarely needed)
  • Reorder execution instead of depending on a random execution model
  • Multithread support
  • Async Support
  • Automatically create new Resources on demand
  • Capture Panics (needed for asserts)
  • Capture println
  • Capture logs
  • External Log Capturing: annotate a test with a custom string that is propagated to an adapter, which then does work in parallel to the test. If the test fails the logs are stored; if it succeeds the logs are dropped. E.g. specify a Kubernetes watcher that watches the Pod "foobar" during a test execution.
  • Markdown Output
  • Json Input/Output to persist runs, retry runs, and compare with previous runs
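The external log capturing feature can be pictured as an adapter that collects events in parallel with a test and keeps them only when the test fails. The following is a hypothetical sketch using a plain channel and thread; run_with_adapter and its shape are invented here and are not rtest's API.

```rust
use std::sync::mpsc;
use std::thread;

// Invented for this sketch: run a test while a collector thread drains log
// events in parallel. On success the logs are dropped; on failure they are
// returned so the report can include them.
fn run_with_adapter<F: FnOnce() -> bool>(
    test: F,
    events: mpsc::Receiver<String>,
) -> Option<Vec<String>> {
    // Collector runs alongside the test and buffers everything it observes.
    let collector = thread::spawn(move || events.iter().collect::<Vec<_>>());
    let passed = test();
    let logs = collector.join().unwrap();
    if passed { None } else { Some(logs) }
}

fn main() {
    // A failing test keeps the logs the adapter collected.
    let (tx, rx) = mpsc::channel();
    tx.send("pod foobar: restarting".to_string()).unwrap();
    drop(tx); // close the channel so the collector finishes
    let failing = run_with_adapter(|| false, rx);
    assert_eq!(failing, Some(vec!["pod foobar: restarting".to_string()]));

    // A passing test drops them.
    let (tx2, rx2) = mpsc::channel::<String>();
    drop(tx2);
    let passing = run_with_adapter(|| true, rx2);
    assert_eq!(passing, None);
}
```

In the Kubernetes example from the feature list, the sending side of the channel would be a watcher streaming events for the Pod "foobar" while the test runs.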

Dependencies

~4–17MB
~196K SLoC