#tensor-flow #neural-networks

tract-core

Tiny, no-nonsense, self-contained TensorFlow and ONNX inference

97 releases

0.19.2 Jan 27, 2023
0.19.0-alpha.19 Dec 19, 2022
0.18.4 Nov 23, 2022
0.17.3 Jul 25, 2022
0.1.1 Nov 2, 2018

#229 in Machine learning

10,848 downloads per month
Used in 26 crates (6 directly)

MIT/Apache

1.5MB
33K SLoC

Tract

Tiny, no-nonsense, self-contained, portable TensorFlow and ONNX inference.

Example

use tract_core::internal::*;

fn main() {
    // Build a simple model that just adds 3 to each input component.
    let mut model = TypedModel::default();

    let input_fact = f32::fact(&[3]);
    let input = model.add_source("input", input_fact).unwrap();
    let three = model.add_const("three".to_string(), tensor1(&[3f32])).unwrap();
    let _add = model
        .wire_node("add".to_string(), tract_core::ops::math::add(), [input, three].as_ref())
        .unwrap();

    model.auto_outputs().unwrap();

    // Build an execution plan. Default inputs and outputs are inferred from
    // the model graph.
    let plan = SimplePlan::new(&model).unwrap();

    // Run the computation.
    let input = tensor1(&[1.0f32, 2.5, 5.0]);
    let mut outputs = plan.run(tvec![input.into()]).unwrap();

    // Take the first and only output tensor.
    let tensor = outputs.pop().unwrap();

    assert_eq!(tensor, tensor1(&[4.0f32, 5.5, 8.0]).into());
}
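In a real program, the typed model would usually be decluttered and optimized before execution. A minimal sketch of that variant, assuming the same model value as above and tract-core's into_optimized / into_runnable entry points:

// Declutter and optimize the graph, then turn it into a runnable plan
// (equivalent to building a SimplePlan over the optimized model).
let runnable = model.into_optimized().unwrap().into_runnable().unwrap();
let mut outputs = runnable.run(tvec![tensor1(&[1.0f32, 2.5, 5.0]).into()]).unwrap();
assert_eq!(outputs.pop().unwrap(), tensor1(&[4.0f32, 5.5, 8.0]).into());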

While creating a model from Rust code is useful for testing the library, real-life use cases will usually load a TensorFlow or ONNX model with the tract-tensorflow or tract-onnx crates.
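
For illustration, a sketch of that flow with the tract-onnx crate, following its documented loading pattern; the file name "model.onnx" and the 1x3x224x224 f32 input shape are placeholder assumptions:

use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Load the ONNX model from disk (placeholder path).
    let model = tract_onnx::onnx()
        .model_for_path("model.onnx")?
        // Declare the input type and shape (placeholder values).
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        // Declutter/optimize the graph and turn it into a runnable plan.
        .into_optimized()?
        .into_runnable()?;

    // Run it on a dummy all-zero input tensor.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let outputs = model.run(tvec!(input.into()))?;
    println!("{:?}", outputs[0]);
    Ok(())
}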

Dependencies

~9.5MB
~196K SLoC