#machine-learning #ai #cpu-gpu

ort

A safe Rust wrapper for ONNX Runtime 1.18 - Optimize and accelerate machine learning inference & training

34 releases (stable)

2.0.0-rc.4 Jul 7, 2024
2.0.0-rc.2 Apr 27, 2024
2.0.0-rc.1 Mar 28, 2024
2.0.0-alpha.4 Dec 28, 2023
1.13.1 Nov 27, 2022

#1 in Machine learning


57,357 downloads per month
Used in 47 crates (31 directly)

MIT/Apache

580KB
11K SLoC



ort is an (unofficial) ONNX Runtime 1.18 wrapper for Rust, based on the now-inactive onnxruntime-rs crate. ONNX Runtime accelerates ML inference and training on both CPU & GPU.
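As a rough sketch of what using the crate looks like, the following loads a model and runs inference with the 2.0 release-candidate API (the `ndarray` feature enabled). The file name `model.onnx` and the input/output tensor names and shapes are placeholder assumptions; they depend on your model.

```rust
// Hedged sketch of ort 2.0-rc usage; model path, tensor names,
// and shapes below are illustrative assumptions, not fixed by ort.
use ort::{GraphOptimizationLevel, Session};

fn main() -> ort::Result<()> {
    // Build a session from an ONNX model file with full graph optimization.
    let session = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_intra_threads(4)?
        .commit_from_file("model.onnx")?;

    // Prepare a dummy input tensor (shape/name depend on the model).
    let input = ndarray::Array4::<f32>::zeros((1, 3, 224, 224));
    let outputs = session.run(ort::inputs!["input" => input.view()]?)?;

    // Extract the first output as an f32 tensor and inspect its shape.
    let predictions = outputs["output"].try_extract_tensor::<f32>()?;
    println!("{:?}", predictions.shape());
    Ok(())
}
```

Execution providers for GPU acceleration (CUDA, TensorRT, DirectML, etc.) can be registered on the session builder; with none registered, inference falls back to the CPU provider.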

πŸ“– Documentation

πŸ€” Support

πŸ’– Projects using ort

Open a PR to add your project here 🌟

  • Twitter uses ort to serve homepage recommendations to hundreds of millions of users.
  • Bloop uses ort to power their semantic code search feature.
  • edge-transformers uses ort for accelerated transformer model inference at the edge.
  • Ortex uses ort for safe ONNX Runtime bindings in Elixir.
  • Supabase uses ort to remove cold starts for their edge functions.
  • Lantern uses ort to provide embedding model inference inside Postgres.
  • Magika uses ort for content type detection.

🌠 Sponsor ort


Dependencies

~2–14MB
~181K SLoC