#machine-learning #tensor #blas

candle-flash-attn

Flash attention layer for the candle ML framework
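A minimal usage sketch, assuming the crate's `flash_attn(q, k, v, softmax_scale, causal)` entry point and candle-core tensors in half precision on a CUDA device; the exact shape and dtype requirements shown in the comments are assumptions, not documented on this page.

```rust
// Hedged sketch: calling candle-flash-attn from candle-core.
// Assumes (batch, seq_len, heads, head_dim) f16 tensors on a CUDA device.
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::new_cuda(0)?;
    let (batch, seq_len, heads, head_dim) = (1, 128, 8, 64);

    // Random query/key/value tensors in half precision (flash attention
    // kernels generally require f16 or bf16 and a CUDA device).
    let q = Tensor::randn(0f32, 1.0, (batch, seq_len, heads, head_dim), &device)?
        .to_dtype(DType::F16)?;
    let k = q.clone();
    let v = q.clone();

    // Softmax scale is conventionally 1/sqrt(head_dim); `true` requests a causal mask.
    let scale = 1.0 / (head_dim as f32).sqrt();
    let out = candle_flash_attn::flash_attn(&q, &k, &v, scale, true)?;
    println!("attention output shape: {:?}", out.shape());
    Ok(())
}
```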

19 releases (7 breaking)

0.8.0 Nov 12, 2024
0.6.0 Jun 29, 2024
0.4.1 Feb 28, 2024
0.3.2 Dec 20, 2023
0.3.1 Nov 12, 2023

#627 in Machine learning

Download history (weekly, 2024-07-22 through 2024-11-04): roughly 10–80 downloads/week, with a spike to 232/week and 421/week in mid-to-late September 2024.

173 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
26K SLoC

Dependencies

~33MB
~751K SLoC