Burn Backend Dispatch

A multi-backend dispatch that forwards tensor operations to the appropriate backend.
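The dispatch idea can be sketched with a plain enum that forwards each operation to a backend-specific implementation. This is illustrative only, with made-up names; it is not this crate's actual API:

```rust
// Minimal sketch of enum-based backend dispatch (hypothetical names,
// not the crate's real types).
#[derive(Debug, Clone, Copy)]
enum Backend {
    Cpu,
    Cuda,
}

// The dispatch layer matches on the active backend and forwards the call.
fn add(backend: Backend, a: f32, b: f32) -> f32 {
    match backend {
        Backend::Cpu => cpu_add(a, b),
        Backend::Cuda => cuda_add(a, b),
    }
}

fn cpu_add(a: f32, b: f32) -> f32 {
    a + b
}

// Stand-in: a real CUDA implementation would launch a kernel instead.
fn cuda_add(a: f32, b: f32) -> f32 {
    a + b
}

fn main() {
    assert_eq!(add(Backend::Cpu, 1.0, 2.0), 3.0);
    assert_eq!(add(Backend::Cuda, 1.0, 2.0), 3.0);
    println!("dispatch ok");
}
```

In the real crate, which backends are compiled in (and therefore dispatchable) is decided by the cargo features listed below.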
Available Backends
The dispatch backend supports the following variants, each enabled via cargo features:
| Backend | Feature | Description |
|---|---|---|
| Cpu | `cpu` | Rust CPU backend (MLIR + LLVM) |
| Cuda | `cuda` | NVIDIA CUDA backend |
| Metal | `metal` | Apple Metal backend via wgpu (MSL) |
| Rocm | `rocm` | AMD ROCm backend |
| Vulkan | `vulkan` | Vulkan backend via wgpu (SPIR-V) |
| Wgpu | `webgpu` | WebGPU backend via wgpu (WGSL) |
| NdArray | `ndarray` | Pure Rust CPU backend using ndarray |
| LibTorch | `tch` | LibTorch backend via tch |
| Autodiff | `autodiff` | Autodiff-enabled backend (used in combination with any of the backends above) |
Note: WGPU-based backends (metal, vulkan, webgpu) are mutually exclusive.
All other backends can be combined freely.
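As a sketch, combining a compute backend with autodiff might look like this in `Cargo.toml`. The crate name and version below are placeholders; substitute the actual crate name and version from crates.io:

```toml
[dependencies]
# Placeholder crate name/version: replace with the real ones from crates.io.
# cuda and autodiff can be combined; autodiff wraps the selected backend.
burn-multi-backend = { version = "0.21.0-pre.3", features = ["cuda", "autodiff"] }
```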
WGPU Backend Exclusivity
The WGPU-based backends (metal, vulkan, webgpu) are mutually exclusive because the current automatic compilation can only select one wgpu target at a time.
Enable only one of these features in your Cargo.toml:
- `metal`
- `vulkan`
- `webgpu`
If multiple WGPU features are enabled, the build script will emit a warning and disable all WGPU backends to prevent unintended behavior.
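For example, a `Cargo.toml` that selects only the Vulkan backend could look like this (the crate name and version are placeholders; use the actual ones from crates.io):

```toml
[dependencies]
# Enable exactly one WGPU-based feature (here: vulkan). Combining it with
# metal or webgpu triggers the build-script warning and disables all three.
burn-multi-backend = { version = "0.21.0-pre.3", features = ["vulkan"] }
```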