#tensor-flow #neural-networks #onnx

tract-tensorflow

Tiny, no-nonsense, self-contained TensorFlow and ONNX inference

99 releases

0.19.7 Feb 24, 2023
0.19.2 Jan 27, 2023
0.19.0-alpha.19 Dec 19, 2022
0.18.4 Nov 23, 2022
0.1.1 Nov 2, 2018

#10 in #tensor-flow

Download history: weekly downloads, Dec 2022 – Mar 2023

622 downloads per month
Used in 3 crates

MIT/Apache

1.5MB
14K SLoC

Tract TensorFlow module

Tiny, no-nonsense, self-contained, portable inference.
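
To follow along with the example below, the crate needs to be declared as a dependency. A minimal manifest fragment, using the latest version listed on this page (0.19.7):

```toml
[dependencies]
tract-tensorflow = "0.19.7"
```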

Example

# extern crate tract_tensorflow;
# fn main() {
use tract_tensorflow::prelude::*;

// build a simple model that just adds 3 to each input component
let tf = tensorflow();
let mut model = tf.model_for_path("tests/models/plus3.pb").unwrap();

// set input type and shape, then optimize the network.
model.set_input_fact(0, f32::fact(&[3]).into()).unwrap();
let model = model.into_optimized().unwrap();

// we build an execution plan. default input and output are inferred from
// the model graph
let plan = SimplePlan::new(&model).unwrap();

// run the computation.
let input = tensor1(&[1.0f32, 2.5, 5.0]);
let mut outputs = plan.run(tvec!(input.into())).unwrap();

// take the first and only output tensor
let mut tensor = outputs.pop().unwrap();

assert_eq!(tensor, rctensor1(&[4.0f32, 5.5, 8.0]));
# }
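
For clarity, the `plus3.pb` graph used above does nothing more than add 3 to every element of its input. A plain-Rust sketch of that same computation (no tract involved; `plus3` is a hypothetical stand-in for the model, purely to illustrate the expected numbers):

```rust
// Hypothetical stand-in for the plus3.pb graph: adds 3.0 to each element.
fn plus3(input: &[f32]) -> Vec<f32> {
    input.iter().map(|x| x + 3.0).collect()
}

fn main() {
    // same input as the tract example above
    let out = plus3(&[1.0f32, 2.5, 5.0]);
    assert_eq!(out, vec![4.0f32, 5.5, 8.0]);
    println!("{:?}", out);
}
```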

Dependencies

~11–20MB
~416K SLoC