#machine-learning #simd #vectorized #function #mathml #ml-model

rten-vecmath

SIMD vectorized implementations of various math functions used in ML models

8 breaking releases

0.11.0 Jul 5, 2024
0.9.0 May 16, 2024
0.6.0 Mar 31, 2024
0.1.0 Dec 31, 2023

#453 in Machine learning

850 downloads per month
Used in 6 crates (via rten)

MIT/Apache

93KB
2K SLoC

rten-vecmath

This crate provides portable SIMD types that abstract over SIMD intrinsics on different architectures. Unlike std::simd, it works on stable Rust. It also provides functionality to detect the available instructions at runtime and dispatch to the optimal implementation.
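
As a rough illustration of the runtime detect-and-dispatch idea (not rten-vecmath's actual API), the sketch below uses the standard library's is_x86_feature_detected! macro to select an AVX2 code path when it is available and fall back to a portable scalar loop otherwise:

```rust
// Illustrative only: a minimal runtime-dispatch pattern using std::arch,
// not rten-vecmath's actual API.
fn sum_f32(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safety: we just checked that AVX2 is available on this CPU.
            return unsafe { sum_f32_avx2(xs) };
        }
    }
    sum_f32_scalar(xs)
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_f32_avx2(xs: &[f32]) -> f32 {
    // A real implementation would use _mm256_* intrinsics here; the scalar
    // loop keeps this sketch short and obviously correct.
    sum_f32_scalar(xs)
}

fn sum_f32_scalar(xs: &[f32]) -> f32 {
    xs.iter().sum()
}
```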

This crate also contains SIMD-vectorized versions of math functions such as exp, erf, tanh, and softmax that are performance-critical in machine-learning models.
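
For context, this is what a scalar, numerically stable softmax over a slice looks like. The sketch is illustrative only and is not the crate's implementation; the crate's SIMD versions replace per-element loops like this with vectorized code and fast approximations of exp:

```rust
// A scalar, numerically stable softmax over a slice. Illustrative only.
fn softmax(xs: &mut [f32]) {
    // Subtract the maximum before exponentiating to avoid overflow.
    let max = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let mut sum = 0.0;
    for x in xs.iter_mut() {
        *x = (*x - max).exp();
        sum += *x;
    }
    for x in xs.iter_mut() {
        *x /= sum;
    }
}
```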


lib.rs:

SIMD-vectorized implementations of various math functions that are commonly used in neural networks.

For each function in this library there are multiple variants, which typically include:

  • A version that operates on scalars.
  • A version that reads values from an input slice and writes to the corresponding position in an equal-length output slice. These have a vec_ prefix.
  • A version that reads values from a mutable input slice and writes the computed values back in-place. These have a vec_ prefix and _in_place suffix.

All variants use the same underlying implementation and should have the same accuracy.
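
As a sketch of the naming convention only, and assuming a hypothetical exp function rather than the crate's actual exports, the three variant shapes look like this:

```rust
// Hypothetical definitions illustrating the naming convention; consult the
// crate documentation for the functions that actually exist.

/// Scalar variant: one value in, one value out.
fn exp(x: f32) -> f32 {
    x.exp()
}

/// Slice variant (`vec_` prefix): reads from `input` and writes to the
/// corresponding positions in the equal-length `output` slice.
fn vec_exp(input: &[f32], output: &mut [f32]) {
    assert_eq!(input.len(), output.len());
    for (x, y) in input.iter().zip(output.iter_mut()) {
        *y = exp(*x);
    }
}

/// In-place variant (`vec_` prefix and `_in_place` suffix): overwrites
/// `data` with the computed values.
fn vec_exp_in_place(data: &mut [f32]) {
    for x in data.iter_mut() {
        *x = exp(*x);
    }
}
```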

See the source code for comments on accuracy.

Dependencies