5 releases

| Version | Date |
|---|---|
| 0.10.2 (new) | Apr 1, 2026 |
| 0.10.1 | Apr 1, 2026 |
| 0.10.0 | Mar 31, 2026 |
| 0.9.2 | Jan 24, 2026 |
| 0.9.2-alpha.2 | Dec 16, 2025 |
- #2401 in Machine learning
- 96 downloads per month
- Used in 6 crates (2 directly)
- 1.5MB, 33K SLoC
Candle Flash Attention v3 Layer

A Flash Attention v3 layer for NVIDIA Hopper GPUs (the sm90a architecture) and the candle framework.
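As a rough sketch of how such a layer might be pulled into a project: the crate name `candle-flash-attn-v3` and the companion `candle-core` dependency below are assumptions, not confirmed by this page; only the version number comes from the release table above.

```toml
# Hypothetical Cargo.toml fragment.
# Crate names are assumed for illustration; the version is taken
# from the release list on this page.
[dependencies]
candle-core = "0.10"              # assumed core crate of the candle framework
candle-flash-attn-v3 = "0.10.2"   # assumed name for this layer's crate
```

Note that, per the description above, building and running this layer would require an NVIDIA Hopper GPU (sm90a) and a CUDA toolchain; it is not a pure-Rust dependency.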
Dependencies: ~25MB, ~524K SLoC