
enfer_core

Core inference library for EdgeInfer, which runs small AI models on resource-constrained edge devices.

Releases:

0.1.2 (Nov 29, 2023)
0.1.1 (Nov 27, 2023)
0.1.0 (Nov 24, 2023)

License: MIT


Edge Inference Core

EdgeInfer enables efficient edge intelligence by running small AI models, such as embedding models in ONNX format, on resource-constrained devices (Android, iOS, or MCUs) for real-time decision-making.

Usage:

Add to your Cargo.toml:

[dependencies]
edgeinfer = "0.1.1"

In your main.rs:

// Load the ONNX model and tokenizer definition from disk.
let model = std::fs::read("model/model.onnx").unwrap();
let tokenizer_data = std::fs::read("model/tokenizer.json").unwrap();

// Initialize the semantic engine, then embed a string.
let semantic = init_semantic(model, tokenizer_data).unwrap();
let embedding = semantic.embed("hello world").unwrap();
assert_eq!(embedding.len(), 128); // this model produces 128-dimensional vectors
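
The embedding comes back as a plain vector of floats, so results can be compared on-device with no extra dependencies. A minimal sketch, assuming the elements are f32; the cosine_similarity helper below is illustrative and not part of the crate:

// Cosine similarity between two embedding vectors of equal length.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|y| y * y).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

// e.g. compare two texts embedded with the engine above:
// let score = cosine_similarity(&embedding_a, &embedding_b);

Higher scores mean the two texts are semantically closer, which is enough for simple on-device ranking or matching.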

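For the MCU case mentioned above, where there may be no filesystem, the model and tokenizer can instead be compiled into the binary. A hedged sketch, assuming init_semantic accepts owned byte buffers exactly as in the fs::read example; the paths are illustrative:

// Embed the model artifacts in the executable at compile time.
// Paths are relative to this source file and purely illustrative.
static MODEL_BYTES: &[u8] = include_bytes!("../model/model.onnx");
static TOKENIZER_BYTES: &[u8] = include_bytes!("../model/tokenizer.json");

let semantic = init_semantic(MODEL_BYTES.to_vec(), TOKENIZER_BYTES.to_vec()).unwrap();
let embedding = semantic.embed("hello world").unwrap();

This trades binary size for zero runtime I/O, which suits read-only or diskless deployments.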