#artificial-intelligence #machine-learning #onnx-runtime


Unsafe Rust bindings for ONNX Runtime 1.23 - Optimize and Accelerate Machine Learning Inferencing

16 releases

Uses the Rust 2024 edition

2.0.0-rc.11 Jan 7, 2026
2.0.0-rc.10 Jun 1, 2025
2.0.0-rc.9 Nov 21, 2024
2.0.0-rc.4 Jul 7, 2024
2.0.0-alpha.2 Nov 28, 2023

#591 in Machine learning


469,480 downloads per month
Used in 277 crates (14 directly)

MIT/Apache

220KB
4.5K SLoC



ort is a Rust interface for performing hardware-accelerated inference & training on machine learning models in the Open Neural Network Exchange (ONNX) format.

Originally based on the now-inactive onnxruntime-rs crate, ort is primarily a wrapper around Microsoft's ONNX Runtime library, but it also supports other pure-Rust runtimes.

Backed by ONNX Runtime, ort is fast - and it supports almost any hardware accelerator you can think of. Even so, it's lightweight enough to run on your users' devices.

When you need to deploy a PyTorch/TensorFlow/Keras/scikit-learn/PaddlePaddle model either on-device or in the datacenter, ort has you covered.
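As a rough sketch of what that looks like in practice - assuming the 2.0 release-candidate API, whose module paths and signatures have shifted between release candidates, and where `model.onnx`, the input name `input`, and the 1×3×224×224 shape are placeholders for your own model:

```rust
use ort::session::Session;
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Build a session from an ONNX model on disk ("model.onnx" is a placeholder path).
    let mut session = Session::builder()?.commit_from_file("model.onnx")?;

    // A dummy NCHW float input; the name "input" and the shape must match your model.
    let input = Tensor::<f32>::from_array((
        [1_usize, 3, 224, 224],
        vec![0.0_f32; 3 * 224 * 224],
    ))?;

    // Run inference and inspect the outputs.
    let outputs = session.run(ort::inputs!["input" => input])?;
    println!("model produced {} output(s)", outputs.len());
    Ok(())
}
```

This assumes `ort` in your `Cargo.toml` at a 2.0 release candidate; by default the crate links against a prebuilt ONNX Runtime binary, so no separate installation is needed. Consult the documentation linked below for the exact API of the version you pin.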

📖 Documentation

🤔 Support

🌠 Sponsor ort


💖 FOSS projects using ort

Open a PR to add your project here 🌟

  • Koharu uses ort to detect, OCR, and inpaint manga pages.
  • BoquilaHUB uses ort for local AI deployment in biodiversity conservation efforts.
  • Magika uses ort for content type detection.
  • Text Embeddings Inference (TEI) uses ort to deliver high-performance ONNX runtime inference for text embedding models.
  • sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using ort.
  • CamTrap Detector uses ort to detect animals, humans and vehicles in trail camera imagery.
  • oar-ocr is a comprehensive OCR library, built in Rust with ort for efficient inference.
  • retto uses ort for reliable, fast ONNX inference of PaddleOCR models on Desktop and WASM platforms.
  • Ahnlich uses ort to power their AI proxy for semantic search applications.
  • Valentinus uses ort to provide embedding model inference inside LMDB.
  • edge-transformers uses ort for accelerated transformer model inference at the edge.
  • FastEmbed-rs uses ort for generating vector embeddings and reranking locally.
  • Ortex uses ort for safe ONNX Runtime bindings in Elixir.
  • SilentKeys uses ort for fast, on-device real-time dictation with NVIDIA Parakeet and Silero VAD.

Dependencies

~0–1.9MB
~26K SLoC