Burn Tensor Library
This library provides multiple tensor implementations hidden behind an easy-to-use API that supports reverse-mode automatic differentiation.
- Flexible ✨
- CPU + GPU 🙏
- Multi-Threads 🚀
- Intuitive Usage 😌
- No Global State 🚫
- Multiple Backends 🦾
- Reverse Mode Autodiff 🔥
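To illustrate the "multiple backends behind one API" idea, here is a minimal, self-contained sketch (hypothetical names, not burn-tensor's actual API): user code is written against a backend trait, and each backend supplies its own tensor representation.

```rust
// Hypothetical sketch of the backend-trait pattern: operations are
// defined on a trait, so the same generic code runs on any backend.
trait Backend {
    type Tensor;
    fn from_vec(data: Vec<f32>) -> Self::Tensor;
    fn add(a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor;
    fn to_vec(t: &Self::Tensor) -> Vec<f32>;
}

// A toy CPU backend backed by a plain Vec<f32>.
struct CpuBackend;
impl Backend for CpuBackend {
    type Tensor = Vec<f32>;
    fn from_vec(data: Vec<f32>) -> Vec<f32> { data }
    fn add(a: &Vec<f32>, b: &Vec<f32>) -> Vec<f32> {
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }
    fn to_vec(t: &Vec<f32>) -> Vec<f32> { t.clone() }
}

// Generic user code: parameterized by the backend, no global state.
fn sum_pair<B: Backend>(a: Vec<f32>, b: Vec<f32>) -> Vec<f32> {
    let (a, b) = (B::from_vec(a), B::from_vec(b));
    B::to_vec(&B::add(&a, &b))
}

fn main() {
    let out = sum_pair::<CpuBackend>(vec![1.0, 2.0], vec![3.0, 4.0]);
    println!("{:?}", out); // [4.0, 6.0]
}
```

Swapping in a GPU or autodiff backend then only changes the type parameter, not the calling code.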
For now, the following backends are implemented, and more are planned.

- PyTorch, using tch-rs
- 100% Rust backend, using ndarray
- WGPU backend
- Candle backend

Planned:

- TensorFlow, using tensorflow-rust
- CuDNN, using RustCUDA
Automatic differentiation is implemented as just another tensor backend, without any global state. This is possible because we keep track of the order in which each operation has been executed, and the tape is only created when calculating the gradients. To do so, each operation creates a new node that holds references to its parent nodes. Creating the tape then requires only a simple and efficient graph traversal algorithm.
```rust
let x = ADTensor::from_tensor(x_ndarray);
let y = ADTensor::from_tensor(y_ndarray);

let z = x.matmul(&y);

let grads = z.backward();

let x_grad = x.grad(&grads);
let y_grad = y.grad(&grads);
```
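The node-and-parents idea described above can be sketched as follows (hypothetical types, not burn-tensor's internals): each operation records a node holding references to its parents plus a rule for propagating gradients, and `backward` walks this graph. For simplicity the walk below is a plain recursive descent, which is correct for tree-shaped graphs; a real implementation visits nodes in topological order so shared nodes are processed once, after all their gradients have accumulated.

```rust
use std::rc::Rc;

// Hypothetical autodiff node: value, parent references, and a rule that
// turns the incoming gradient into gradient contributions for parents.
struct Node {
    id: usize,
    value: f32,
    parents: Vec<Rc<Node>>,
    backward_rule: Box<dyn Fn(f32) -> Vec<f32>>,
}

fn leaf(id: usize, value: f32) -> Rc<Node> {
    Rc::new(Node { id, value, parents: vec![], backward_rule: Box::new(|_| vec![]) })
}

fn mul(id: usize, a: &Rc<Node>, b: &Rc<Node>) -> Rc<Node> {
    let (av, bv) = (a.value, b.value);
    Rc::new(Node {
        id,
        value: av * bv,
        parents: vec![a.clone(), b.clone()],
        // d(a*b)/da = b, d(a*b)/db = a
        backward_rule: Box::new(move |g| vec![g * bv, g * av]),
    })
}

// Propagate gradients from the output back through the graph,
// accumulating them per node id.
fn backward(out: &Rc<Node>, grads: &mut Vec<f32>) {
    fn visit(n: &Rc<Node>, grads: &mut Vec<f32>) {
        let parent_grads = (n.backward_rule)(grads[n.id]);
        for (p, g) in n.parents.iter().zip(parent_grads) {
            grads[p.id] += g;
            visit(p, grads);
        }
    }
    grads[out.id] = 1.0;
    visit(out, grads);
}

fn main() {
    let x = leaf(0, 3.0);
    let y = leaf(1, 4.0);
    let z = mul(2, &x, &y); // z = x * y
    let mut grads = vec![0.0; 3];
    backward(&z, &mut grads);
    println!("dz/dx = {}, dz/dy = {}", grads[0], grads[1]); // 4 and 3
}
```

Because nodes only hold references to their parents, no global tape has to exist while the forward pass runs; the traversal is deferred until gradients are requested.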
To run with CUDA, set the `TORCH_CUDA_VERSION` environment variable (for example, `TORCH_CUDA_VERSION=cu113`).
This crate can be used on its own, without the rest of the Burn stack, and with only selected backends enabled for smaller binaries.
This crate can be used without the standard library (`#![no_std]`) with `alloc` by disabling the default `std` feature.

Feature flags:

- `std` - enables the standard library.
- `burn-tensor-testgen` - enables test macros for generating tensor tests.
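A dependency declaration for `no_std` use might look like the following (the version number is illustrative; the feature names are those listed above):

```toml
# Disable default features to turn off `std` for no_std builds.
[dependencies]
burn-tensor = { version = "0.10", default-features = false }
```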