30 releases (14 breaking) · Rust 2024 edition

| Version | Released |
|---|---|
| 0.21.0-pre.3 | Apr 9, 2026 |
| 0.21.0-pre.1 | Feb 11, 2026 |
| 0.20.1 | Jan 23, 2026 |
| 0.20.0-pre.6 | Dec 18, 2025 |
| 0.8.0 | Jul 25, 2023 |

#205 in Machine learning · 1,637 downloads per month · Used in 14 crates (11 directly)

1.5MB · 31K SLoC
## Overview
burn-onnx converts ONNX models to native Burn Rust code, allowing you to run models from PyTorch,
TensorFlow, and other frameworks on any Burn backend - from WebAssembly to CUDA.
Key features:
- Generates readable, modifiable Rust source code from ONNX models
- Produces `burnpack` weight files for efficient loading
- Works with any Burn backend (CPU, GPU, WebGPU, embedded)
- Supports both `std` and `no_std` environments
- Full opset compliance: all supported operators work across ONNX opset versions 1 through 24
- Graph simplification (enabled by default): attention coalescing, constant folding, constant shape propagation, idempotent-op elimination, identity-element elimination, CSE, dead code elimination, and permute-reshape detection
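To give a flavor of one of these passes, here is a minimal, self-contained sketch of constant folding on a toy expression tree. This is illustrative only and is not burn-onnx's internal graph representation:

```rust
// Toy expression tree: constants, named inputs, and Add/Mul nodes.
#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Const(f64),
    Input(String),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Constant folding: recursively fold children, then collapse any
// operator whose operands are both constants into a single constant.
fn fold(e: Expr) -> Expr {
    use Expr::*;
    match e {
        Add(a, b) => match (fold(*a), fold(*b)) {
            (Const(x), Const(y)) => Const(x + y),
            (a, b) => Add(Box::new(a), Box::new(b)),
        },
        Mul(a, b) => match (fold(*a), fold(*b)) {
            (Const(x), Const(y)) => Const(x * y),
            (a, b) => Mul(Box::new(a), Box::new(b)),
        },
        other => other,
    }
}

fn main() {
    // (2 + 3) * x folds to 5 * x.
    let e = Expr::Mul(
        Box::new(Expr::Add(
            Box::new(Expr::Const(2.0)),
            Box::new(Expr::Const(3.0)),
        )),
        Box::new(Expr::Input("x".into())),
    );
    println!("{:?}", fold(e));
}
```

The real passes operate on tensor-valued ONNX nodes, but the principle is the same: subgraphs whose inputs are all known at import time are evaluated once and replaced by their result.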
## Quick Start

Add to your `Cargo.toml`:

```toml
[build-dependencies]
burn-onnx = "0.21"
```
In your `build.rs`:

```rust
use burn_onnx::ModelGen;

fn main() {
    ModelGen::new()
        .input("src/model/my_model.onnx")
        .out_dir("model/")
        .run_from_script();
}
```
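If a project contains several ONNX models, the generator can typically be pointed at each one by chaining `.input()` calls. A sketch, assuming the same builder API as above (the file names here are placeholders):

```rust
use burn_onnx::ModelGen;

fn main() {
    // Each .input() call registers another ONNX file to convert;
    // each produces its own generated Rust module under out_dir.
    ModelGen::new()
        .input("src/model/encoder.onnx")
        .input("src/model/decoder.onnx")
        .out_dir("model/")
        .run_from_script();
}
```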
Include the generated code in `src/model/mod.rs`:

```rust
pub mod my_model {
    include!(concat!(env!("OUT_DIR"), "/model/my_model.rs"));
}
```
Then use the model:

```rust
use burn::backend::NdArray;
use crate::model::my_model::Model;

let model: Model<NdArray<f32>> = Model::default();
let output = model.forward(input_tensor);
```
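Putting it together, a runnable shape for the call site also needs a device and an input tensor. A hedged sketch, assuming an image model with a 1×3×224×224 input; the shape, rank, and backend are assumptions to adapt to your model:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;
use crate::model::my_model::Model;

fn main() {
    let device = Default::default();
    // Default::default() loads the weights generated at build time.
    let model: Model<NdArray<f32>> = Model::default();
    // The input shape is model-specific; 1x3x224x224 is a common image input.
    let input = Tensor::<NdArray<f32>, 4>::zeros([1, 3, 224, 224], &device);
    let output = model.forward(input);
    println!("{:?}", output.dims());
}
```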
For detailed usage instructions, see the ONNX Import Guide in the Burn Book.
## Examples
| Example | Description |
|---|---|
| onnx-inference | Basic ONNX model inference |
| image-classification-web | WebAssembly/WebGPU image classifier |
## Model Validation
We validate burn-onnx against 26 real-world models spanning image classification, object detection, depth estimation, NLP, speech, and generative AI. Each model check verifies the full pipeline: ONNX import, Rust codegen, weight loading, and numerical accuracy against ONNX Runtime reference outputs.
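A numerical-accuracy check of this kind usually amounts to an element-wise comparison against the reference outputs within absolute and relative tolerances. A minimal, self-contained sketch (the tolerance values and data are illustrative, not the project's actual test harness):

```rust
// Compare model output against a reference, element-wise, in the spirit
// of numpy's allclose: |a - b| <= atol + rtol * |b| for every element.
fn allclose(a: &[f32], b: &[f32], atol: f32, rtol: f32) -> bool {
    a.len() == b.len()
        && a.iter()
            .zip(b)
            .all(|(x, y)| (x - y).abs() <= atol + rtol * y.abs())
}

fn main() {
    let burn_output = [0.10001_f32, 0.89999];
    let reference = [0.1_f32, 0.9]; // e.g. from ONNX Runtime
    assert!(allclose(&burn_output, &reference, 1e-4, 1e-4));
    println!("outputs match within tolerance");
}
```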
## Supported Operators
See the Supported ONNX Operators table for the complete list of supported operators.
## Contributing
We welcome contributions! Please read the Contributing Guidelines before opening a PR, and the Development Guide for architecture and implementation details.
For questions and discussions, join us on Discord.
## License
Licensed under either of Apache License, Version 2.0 or MIT license at your option.
## Dependencies

~100–145MB · ~2.5M SLoC