#machine-learning #tensor #blas

candle-flash-attn

Flash attention layer for the candle ML framework
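Below is a minimal usage sketch, assuming the crate exposes a `flash_attn(q, k, v, softmax_scale, causal)` entry point over candle tensors (query/key/value in half precision on a CUDA device); consult the crate docs for the exact signature, expected tensor layout, and required CUDA feature flags.

```rust
// Minimal sketch: calling candle-flash-attn from candle code.
// Assumes `flash_attn(q, k, v, softmax_scale, causal)` with tensors shaped
// (batch, seq_len, num_heads, head_dim) in f16 on a CUDA device.
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::new_cuda(0)?;
    let (b, seq, heads, head_dim) = (1, 128, 8, 64);

    // Random query/key/value tensors, cast to half precision.
    let q = Tensor::randn(0f32, 1.0, (b, seq, heads, head_dim), &device)?.to_dtype(DType::F16)?;
    let k = Tensor::randn(0f32, 1.0, (b, seq, heads, head_dim), &device)?.to_dtype(DType::F16)?;
    let v = Tensor::randn(0f32, 1.0, (b, seq, heads, head_dim), &device)?.to_dtype(DType::F16)?;

    // Scaled dot-product attention with a causal mask, computed by the
    // flash-attention CUDA kernel instead of a naive softmax(QK^T)V.
    let softmax_scale = 1.0 / (head_dim as f32).sqrt();
    let out = candle_flash_attn::flash_attn(&q, &k, &v, softmax_scale, true)?;
    println!("output shape: {:?}", out.shape());
    Ok(())
}
```

Note that the crate builds the flash-attention CUDA kernels at compile time, so it is only usable with candle's CUDA backend on a supported NVIDIA GPU.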

21 releases (7 breaking)

0.8.2 Jan 7, 2025 (new)
0.8.1 Dec 7, 2024
0.8.0 Nov 12, 2024
0.6.0 Jun 29, 2024
0.3.1 Nov 12, 2023

#618 in Machine learning

Download history: weekly download counts, 2024-09-22 to 2025-01-05

326 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
27K SLoC


Dependencies

~38MB
~1M SLoC