#tensor #blas #machine-learning

candle-flash-attn

Flash attention layer for the candle ML framework

30 releases

0.9.1 (new)  May 1, 2025
0.8.4  Mar 15, 2025
0.8.1  Dec 7, 2024
0.8.0  Nov 12, 2024
0.3.1  Nov 12, 2023

#983 in Machine learning


1,145 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
28K SLoC

Dependencies

~15–21MB
~375K SLoC