#machine-learning #tensor #blas

candle-flash-attn

Flash attention layer for the candle ML framework
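The crate's public surface is essentially a single kernel wrapper used by the candle examples. Below is a minimal, hedged sketch of how a call might look, assuming the crate exposes `flash_attn(q, k, v, softmax_scale, causal)` operating on half-precision CUDA tensors of shape `(batch, seq_len, num_heads, head_dim)`; the shapes, scale, and causal flag here are illustrative assumptions, not taken from this listing.

```rust
// Sketch only: flash attention via candle-flash-attn on a CUDA device.
// Assumes candle-core and candle-flash-attn as dependencies, plus a CUDA-capable GPU.
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::new_cuda(0)?;
    let (b, s, h, d) = (1, 128, 8, 64); // illustrative batch, seq len, heads, head dim

    // Random query/key/value tensors, converted to f16 as the kernel expects half precision.
    let q = Tensor::randn(0f32, 1.0, (b, s, h, d), &device)?.to_dtype(DType::F16)?;
    let k = Tensor::randn(0f32, 1.0, (b, s, h, d), &device)?.to_dtype(DType::F16)?;
    let v = Tensor::randn(0f32, 1.0, (b, s, h, d), &device)?.to_dtype(DType::F16)?;

    // Softmax scale is conventionally 1/sqrt(head_dim); `true` requests a causal mask.
    let scale = 1.0 / (d as f32).sqrt();
    let out = candle_flash_attn::flash_attn(&q, &k, &v, scale, true)?;
    println!("output shape: {:?}", out.shape());
    Ok(())
}
```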

20 releases (7 breaking)

0.8.1 Dec 7, 2024
0.8.0 Nov 12, 2024
0.7.2 Sep 29, 2024
0.6.0 Jun 29, 2024
0.3.1 Nov 12, 2023

#622 in Machine learning

Download history (weekly downloads, 2024-08-19 through 2024-12-02)

350 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
26K SLoC


Dependencies

~33MB
~756K SLoC