#machine-learning #tensor #blas

candle-flash-attn

Flash attention layer for the candle ML framework
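As a rough orientation, the crate exposes a `flash_attn` function that fuses the attention computation into a single CUDA kernel. A minimal sketch of calling it, assuming an NVIDIA GPU, f16 tensors, and the `(batch, seq_len, num_heads, head_dim)` layout the kernel expects (this is an illustrative example, not runnable without CUDA hardware):

```rust
// Sketch: calling candle-flash-attn on CUDA tensors.
// Requires the `candle-core` and `candle-flash-attn` crates and an
// NVIDIA GPU; the flash attention kernels only run on CUDA.
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::new_cuda(0)?;
    let (b, s, h, d) = (1, 128, 8, 64);
    // Random query/key/value tensors, converted to f16 as the kernel requires.
    let q = Tensor::randn(0f32, 1.0, (b, s, h, d), &device)?.to_dtype(DType::F16)?;
    let k = Tensor::randn(0f32, 1.0, (b, s, h, d), &device)?.to_dtype(DType::F16)?;
    let v = Tensor::randn(0f32, 1.0, (b, s, h, d), &device)?.to_dtype(DType::F16)?;
    let softmax_scale = 1.0 / (d as f32).sqrt();
    // `causal = true` applies a causal mask, as in decoder-only transformers.
    let out = candle_flash_attn::flash_attn(&q, &k, &v, softmax_scale, true)?;
    println!("{:?}", out.shape()); // output has the same shape as q
    Ok(())
}
```

The fused kernel avoids materializing the full attention matrix, which is what makes it faster and more memory-efficient than a naive softmax(QKᵀ)V implementation at long sequence lengths.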

13 releases (4 breaking)

0.5.0 May 3, 2024
0.4.1 Feb 28, 2024
0.3.3 Jan 31, 2024
0.3.2 Dec 20, 2023
0.3.1 Nov 12, 2023

#611 in Machine learning

Download history (weekly): 47 @ 2024-01-22, 14 @ 2024-01-29, 38 @ 2024-02-19, 328 @ 2024-02-26, 22 @ 2024-03-04, 42 @ 2024-03-11, 12 @ 2024-03-18, 10 @ 2024-03-25, 44 @ 2024-04-01, 4 @ 2024-04-08, 3 @ 2024-04-15, 23 @ 2024-04-22, 150 @ 2024-04-29, 16 @ 2024-05-06

193 downloads per month
Used in 7 crates (4 directly)

MIT/Apache

2MB
25K SLoC

Dependencies

~21MB
~526K SLoC