#machine-learning #ai #cpu-gpu

ort

A safe Rust wrapper for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing

31 releases (stable)

2.0.0-rc.1 Mar 28, 2024
2.0.0-alpha.4 Dec 28, 2023
2.0.0-alpha.2 Nov 28, 2023
1.16.3 Nov 12, 2023
1.13.1 Nov 27, 2022

#3 in Machine learning

Download history: roughly 2,000–5,900 downloads per week, January–April 2024.

22,022 downloads per month
Used in 35 crates (25 directly)

MIT/Apache

490KB
10K SLoC



ort is an (unofficial) Rust wrapper for ONNX Runtime 1.17, based on the now-inactive onnxruntime-rs. ONNX Runtime accelerates ML inference on both CPU and GPU.
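For orientation, here is a minimal sketch of what inference with ort roughly looks like, based on the 2.0 pre-release API. The model path `model.onnx`, the `input`/`output` tensor names, and the input shape are placeholders, and exact builder/extraction method names differ between pre-releases (e.g. older alphas use `with_model_from_file` instead of `commit_from_file`), so consult the documentation linked below for your version:

```rust
// Minimal, hypothetical inference sketch for ort 2.0 pre-releases.
// Assumed Cargo deps: ort = "2.0.0-rc.1" (ndarray feature enabled), ndarray = "0.15".
use ort::{GraphOptimizationLevel, Session};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load and optimize an ONNX model from disk ("model.onnx" is a placeholder).
    let session = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .commit_from_file("model.onnx")?;

    // Feed a dummy NCHW f32 tensor and run the model
    // (the "input" name and shape are placeholders for your model's signature).
    let input = ndarray::Array4::<f32>::zeros((1, 3, 224, 224));
    let outputs = session.run(ort::inputs!["input" => input.view()]?)?;

    // Extract an output as an f32 tensor view ("output" is a placeholder name).
    let output = outputs["output"].try_extract_tensor::<f32>()?;
    println!("output shape: {:?}", output.shape());
    Ok(())
}
```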

📖 Documentation

🤔 Support

💖 Projects using ort

Open a PR to add your project here 🌟

  • Twitter uses ort to serve homepage recommendations to hundreds of millions of users.
  • Bloop uses ort to power their semantic code search feature.
  • pyke Diffusers uses ort for efficient Stable Diffusion image generation on both CPUs & GPUs.
  • edge-transformers uses ort for accelerated transformer model inference at the edge.
  • Ortex uses ort for safe ONNX Runtime bindings in Elixir.

🌠 Sponsor ort


Dependencies

~2–13MB
~147K SLoC