TF deploy / Rust

A tiny TensorFlow inference-only executor.

Why?

TensorFlow is a big beast. It is designed to be as efficient as possible when training neural network models on big platforms, with as much support for custom hardware as practical.

Performing inference, that is, running trained models, sometimes needs to happen on small-ish devices such as mobile phones or single-board computers. Cross-compiling TensorFlow for these platforms can be a daunting task, and it produces huge libraries.

TFDeploy is a TensorFlow-compatible inference library. It loads a TensorFlow frozen model from the regular protobuf format and runs data through it.
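
A minimal sketch of that workflow follows. The entry points used here (`for_path`, `node_id_by_name`, `run`), the model path, and the node names are assumptions recalled from the crate's examples, not a verified API surface; check the crate documentation before relying on them.

```rust
extern crate ndarray;
extern crate tfdeploy;

fn main() {
    // Load a frozen TensorFlow model from its protobuf serialization.
    // (Assumed entry point; see the crate docs for the actual one.)
    let model = tfdeploy::for_path("model.pb").unwrap();

    // TensorFlow graphs address nodes by name. Resolve the (placeholder)
    // input and output node names to internal ids.
    let input_id = model.node_id_by_name("input").unwrap();
    let output_id = model.node_id_by_name("output").unwrap();

    // Feed a 1-D f32 tensor into the input node and collect the outputs.
    let input = ndarray::arr1(&[1.0f32, 2.5, 5.0]);
    let outputs = model.run(vec![(input_id, input.into())], output_id).unwrap();
    println!("{:?}", outputs);
}
```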

Status

TFDeploy is still very far from supporting arbitrary models, but it can already run Google Inception v3 and Snips hotword models, and missing operators are easy to add; the sketch below shows roughly what an operator amounts to.
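
The `Op` trait in this sketch is defined locally for exposition only and is not TFDeploy's actual operator interface; the real one works on the crate's tensor types rather than plain vectors. It shows the general shape of the task: one struct per operator plus an evaluation routine over the input tensors.

```rust
// Exposition-only trait: NOT TFDeploy's real operator interface.
trait Op {
    // Evaluate the operator on its input tensors (flattened here to
    // Vec<f32> for simplicity), producing its output tensors.
    fn eval(&self, inputs: Vec<Vec<f32>>) -> Result<Vec<Vec<f32>>, String>;
}

// A hypothetical element-wise ReLU operator.
struct Relu;

impl Op for Relu {
    fn eval(&self, inputs: Vec<Vec<f32>>) -> Result<Vec<Vec<f32>>, String> {
        let input = inputs
            .into_iter()
            .next()
            .ok_or_else(|| "Relu expects exactly one input".to_string())?;
        // Apply max(0, x) element-wise.
        Ok(vec![input.into_iter().map(|x| x.max(0.0)).collect()])
    }
}

fn main() {
    let out = Relu.eval(vec![vec![-1.0, 2.0, -3.0]]).unwrap();
    assert_eq!(out, vec![vec![0.0, 2.0, 0.0]]);
}
```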

Roadmap

One important guiding cross-cutting concern: this library must cross-compile as easily as practical to small-ish devices (think $30 boards).
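
For instance, building for a Raspberry Pi-class ARM board should stay as simple as a standard Rust cross-compilation (assuming a suitable cross-linker is configured for the target):

```sh
rustup target add armv7-unknown-linux-gnueabihf
cargo build --release --target armv7-unknown-linux-gnueabihf
```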

License

Note: files in the protos/tensorflow directory are copied from the TensorFlow project and are not covered by the following license statement.

Apache 2.0/MIT

All original work is licensed under either the Apache License, Version 2.0, or the MIT license, at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
