6 releases

| Version | Date |
|---|---|
| 0.2.2 | Sep 18, 2023 |
| 0.2.1 | Sep 11, 2023 |
| 0.2.0 | Aug 30, 2023 |
| 0.1.2 | Aug 21, 2023 |
#206 in Machine learning · 1,316 downloads per month · Used in 9 crates (7 directly) · 710KB, 17K SLoC
candle

Minimalist ML framework for Rust
```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    let a = Tensor::arange(0f32, 6f32, &Device::Cpu)?.reshape((2, 3))?;
    let b = Tensor::arange(0f32, 12f32, &Device::Cpu)?.reshape((3, 4))?;
    // (2, 3) x (3, 4) matrix multiplication yields a (2, 4) tensor.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```
Features
- Simple syntax (looks and feels like PyTorch)
- CPU and CUDA backends (and M1 support)
- Enables serverless deployments: small and fast (CPU) binaries
- Model training
- Distributed computing (NCCL)
- Models out of the box (Llama, Whisper, Falcon, ...)
FAQ
- Why Candle?
Candle stems from the need to reduce binary size in order to make serverless inference possible: by keeping the whole engine small, it avoids the very large footprint of a full framework like PyTorch.

It also removes Python from production workloads. Python can add real overhead in more complex workflows, and the GIL is a notorious source of headaches.

Finally, Rust is cool, and a lot of the HF ecosystem already has Rust crates, such as safetensors and tokenizers.
Dependencies: ~4.5–7MB, ~128K SLoC