#tensor #machine-learning #blas

candle-flash-attn

Flash attention layer for the candle ML framework
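Since this crate is a CUDA-backed add-on to candle, a minimal sketch of pulling it into a project may help; note that the exact version pins and feature flags below are illustrative assumptions based on the releases listed on this page, not a verified configuration:

```toml
# Cargo.toml — sketch only; versions here mirror the release list on
# this page and the "cuda" feature name is an assumption to verify
# against the candle documentation.
[dependencies]
candle-core = { version = "0.8", features = ["cuda"] }
candle-flash-attn = "0.8"
```

In code, the crate exposes a `candle_flash_attn::flash_attn(q, k, v, softmax_scale, causal)` entry point (check the docs.rs API reference for the exact signature) that serves as a fused, memory-efficient replacement for naive scaled-dot-product attention on CUDA tensors.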

27 releases

0.9.0-alpha.4 Apr 16, 2025
0.8.4 Mar 15, 2025
0.8.1 Dec 7, 2024
0.8.0 Nov 12, 2024
0.3.1 Nov 12, 2023

#1213 in Machine learning

Download history (weekly, 2024-12-27 through 2025-04-11): 51, 141, 125, 28, 11, 77, 50, 192, 135, 114, 24, 226, 967, 46, 225, 285

1,544 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
27K SLoC


Dependencies

~14–21MB
~370K SLoC