#machine-learning #cpu-gpu #artificial-intelligence #ai

ort

A safe Rust wrapper for ONNX Runtime 1.20 - Optimize and accelerate machine learning inference & training

39 releases (23 stable)

2.0.0-rc.9 Nov 21, 2024
2.0.0-rc.6 Sep 10, 2024
2.0.0-rc.4 Jul 7, 2024
2.0.0-rc.1 Mar 28, 2024
1.13.1 Nov 27, 2022

#2 in Machine learning


84,269 downloads per month
Used in 65 crates (43 directly)

MIT/Apache

640KB
13K SLoC



ort is an (unofficial) ONNX Runtime 1.20 wrapper for Rust based on the now-inactive onnxruntime-rs crate. ONNX Runtime accelerates ML inference and training on both CPU & GPU.
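A minimal sketch of what inference with ort's 2.0 API looks like. The model path, input name, and tensor shape below are placeholders for illustration, and exact builder method names may differ between release candidates:

```rust
use ort::{GraphOptimizationLevel, Session};

fn main() -> ort::Result<()> {
    // Build a session from an ONNX model on disk (path is a placeholder).
    let session = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_intra_threads(4)?
        .commit_from_file("model.onnx")?;

    // A dummy f32 input; the name "input" and the NCHW shape depend on your model.
    let input = ndarray::Array4::<f32>::zeros((1, 3, 224, 224));
    let outputs = session.run(ort::inputs!["input" => input.view()]?)?;

    // Read the first output back as an f32 tensor view.
    let output = outputs[0].try_extract_tensor::<f32>()?;
    println!("output shape: {:?}", output.shape());
    Ok(())
}
```

GPU acceleration is enabled separately by registering an execution provider (e.g. CUDA or TensorRT) on the session builder via the corresponding crate features.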

📖 Documentation

🤔 Support

💖 Projects using ort

Open a PR to add your project here 🌟

  • Twitter uses ort to serve homepage recommendations to hundreds of millions of users.
  • Bloop uses ort to power their semantic code search feature.
  • edge-transformers uses ort for accelerated transformer model inference at the edge.
  • Ortex uses ort for safe ONNX Runtime bindings in Elixir.
  • Supabase uses ort to remove cold starts for their edge functions.
  • Lantern uses ort to provide embedding model inference inside Postgres.
  • Magika uses ort for content type detection.
  • sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using ort.

🌠 Sponsor ort


Dependencies

~2–11MB
~130K SLoC