#neural-network #tensorflow #inference #onnx

tract-core

Tiny, no-nonsense, self-contained TensorFlow and ONNX inference

141 releases

new 0.21.13 May 15, 2025
0.21.11 Mar 19, 2025
0.21.8 Dec 5, 2024
0.21.7 Sep 23, 2024
0.1.1 Nov 2, 2018

#40 in #tensorflow

Download history: roughly 3,300–7,800 downloads/week between 2025-01-24 and 2025-05-09

22,270 downloads per month
Used in 39 crates (10 directly)

MIT/Apache

2MB
54K SLoC

Rust 44K SLoC // 0.0% comments
Templ 9K SLoC // 0.1% comments
GNU Style Assembly 12 SLoC // 0.2% comments

Tract

Tiny, no-nonsense, self-contained, portable TensorFlow and ONNX inference.

Example

use tract_core::internal::*;

// build a simple model that just adds 3 to each input component
let mut model = TypedModel::default();

// declare a single input: a rank-1 f32 tensor of shape [3]
let input_fact = f32::fact(&[3]);
let input = model.add_source("input", input_fact).unwrap();

// add the constant 3 and wire an element-wise addition node
let three = model.add_const("three".to_string(), tensor1(&[3f32])).unwrap();
let add = model.wire_node("add".to_string(),
    tract_core::ops::math::add(),
    [input, three].as_ref()
    ).unwrap();

model.auto_outputs().unwrap();

// We build an execution plan. Default inputs and outputs are inferred from
// the model graph.
let plan = SimplePlan::new(&model).unwrap();

// run the computation.
let input = tensor1(&[1.0f32, 2.5, 5.0]);
let mut outputs = plan.run(tvec![input.into()]).unwrap();

// take the first and only output tensor
let tensor = outputs.pop().unwrap();

assert_eq!(tensor, tensor1(&[4.0f32, 5.5, 8.0]).into());

While creating a model from Rust code is useful for testing the library, real-life use cases will usually load a TensorFlow or ONNX model with the tract-tensorflow or tract-onnx crates, as sketched below.
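For instance, a minimal sketch of loading and running an ONNX model with the tract-onnx crate could look like the following (the "model.onnx" path and the 1x3x224x224 f32 input shape are illustrative assumptions, not something this crate prescribes):

use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // load the ONNX model, pin its input shape, then optimize it and
    // turn it into a runnable plan ("model.onnx" and the 1x3x224x224
    // f32 input are assumptions made for this sketch)
    let model = tract_onnx::onnx()
        .model_for_path("model.onnx")?
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        .into_runnable()?;

    // build a dummy all-zeros input tensor and run the plan
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let outputs = model.run(tvec!(input.into()))?;

    // inspect the first (and here only) output tensor
    println!("{:?}", outputs[0]);
    Ok(())
}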

Dependencies

~12–24MB
~370K SLoC