#machine-learning #simd #vectorized #ml-model #mathml #function #exp

rten-vecmath

SIMD vectorized implementations of various math functions used in ML models

12 releases (breaking)

0.14.1 Nov 16, 2024
0.11.0 Jul 5, 2024
0.6.0 Mar 31, 2024
0.1.0 Dec 31, 2023

#626 in Machine learning

Download history: weekly download chart, 2024-08-19 through 2024-12-02

2,334 downloads per month
Used in 8 crates (via rten)

MIT/Apache

95KB
2K SLoC

rten-vecmath

This crate provides portable SIMD types that abstract over SIMD intrinsics on different architectures. Unlike std::simd, this works on stable Rust. There is also functionality to detect the SIMD instruction sets available at runtime and dispatch to the optimal implementation.
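To illustrate what runtime detection and dispatch looks like in general, here is a minimal sketch using only the standard library's feature-detection macro; it is not this crate's API, whose detection and dispatch machinery may look different:

```rust
// Generic sketch of runtime feature detection and dispatch; rten-vecmath's
// own detection/dispatch API may differ from this.

/// Doubles every element, using an AVX2 path when the CPU supports it.
pub fn double_all(xs: &mut [f32]) {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // SAFETY: AVX2 availability was just verified at runtime.
            unsafe { double_all_avx2(xs) };
            return;
        }
    }
    double_all_scalar(xs);
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn double_all_avx2(xs: &mut [f32]) {
    // With the target feature enabled, the compiler is free to
    // auto-vectorize this loop using AVX2 instructions.
    for x in xs.iter_mut() {
        *x *= 2.0;
    }
}

fn double_all_scalar(xs: &mut [f32]) {
    for x in xs.iter_mut() {
        *x *= 2.0;
    }
}
```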

This crate also contains SIMD-vectorized versions of math functions such as exp, erf, tanh and softmax that are performance-critical in machine-learning models.


lib.rs:

SIMD-vectorized implementations of various math functions that are commonly used in neural networks.

For each function in this library there are multiple variants, which typically include:

  • A version that operates on scalars
  • A version that reads values from an input slice and writes to the corresponding position in an equal-length output slice. These have a vec_ prefix.
  • A version that reads values from a mutable input slice and writes the computed values back in-place. These have a vec_ prefix and _in_place suffix.

All variants use the same underlying implementation and should have the same accuracy.
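As a rough illustration of that pattern, the sketch below uses a plain scalar exponential in place of the crate's SIMD kernels; the real functions follow this naming convention, but their exact signatures may differ, so check the crate docs for your version:

```rust
// Illustrative stand-ins for the scalar, `vec_`, and `vec_*_in_place`
// variants described above, all built on one underlying implementation.

/// Scalar variant: one value in, one value out.
fn exp_scalar(x: f32) -> f32 {
    x.exp()
}

/// `vec_` variant: reads from `xs` and writes to the equal-length `out`.
fn vec_exp(xs: &[f32], out: &mut [f32]) {
    assert_eq!(xs.len(), out.len());
    for (x, y) in xs.iter().zip(out.iter_mut()) {
        *y = exp_scalar(*x);
    }
}

/// `vec_` + `_in_place` variant: overwrites the input with the results.
fn vec_exp_in_place(xs: &mut [f32]) {
    for x in xs.iter_mut() {
        *x = exp_scalar(*x);
    }
}

fn main() {
    let xs = [0.0f32, 0.5, 1.0];

    let mut out = [0.0f32; 3];
    vec_exp(&xs, &mut out);

    let mut ys = xs;
    vec_exp_in_place(&mut ys);

    // Because both variants share one implementation, the results match.
    assert_eq!(out, ys);
}
```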

See the source code for comments on accuracy.

Dependencies