rten-vecmath
This crate contains SIMD-vectorized kernels ("vectorized math") for various operations used in machine learning models. This includes:
- Math functions such as exp, erf and tanh
- Activation functions such as gelu
- Normalization functions such as softmax and mean-variance normalization
- Reduction functions such as sums and sums of squares
SIMD operations are implemented using portable SIMD types from the rten-simd crate.
SIMD-vectorized implementations of operations used in neural networks.
These implementations are used as kernels for operations in the rten crate.
Constructing and dispatching operations
The operations are implemented by structs which implement the SIMD operation traits from rten-simd. To apply an operation to data, first construct the operation using the struct from this crate, then use a dispatch method from the `SimdOp` or `SimdUnaryOp` traits to execute the operation using the preferred SIMD instruction set.
In-place versus mutating operations
Some operations support either updating data in place or reading input from one slice and writing to another. For unary operations this is controlled by dispatching with either `map` or `map_mut`. For other operations this is handled by exposing different constructors for the in-place and mutating cases, such as `Softmax::new` and `Softmax::new_mut`.
Examples
Applying a vectorized unary function
```rust
use std::mem::MaybeUninit;
use rten_simd::dispatch::SimdUnaryOp;
use rten_vecmath::Erf;

// Apply the error function to each element of `data`.
let mut data = [1., 0.5, 2.0];
let erf_op = Erf {};
erf_op.map_mut(&mut data);

// Apply the error function to each element of `src`, writing to `dest`.
let src = [1., 0.5, 2.0];
let mut dest = [MaybeUninit::uninit(); 3];
erf_op.map(&src, &mut dest);
```
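For intuition about what `Erf` computes, the error function can be approximated in plain scalar Rust with the classic Abramowitz–Stegun rational polynomial. This is a sketch for illustration only; the constants come from that standard approximation, not from this crate's SIMD kernel:

```rust
/// Scalar approximation of erf(x) in the Abramowitz & Stegun style
/// (max absolute error around 2.5e-5). Illustrative only; this is
/// not the polynomial used by rten-vecmath's vectorized kernel.
fn erf_scalar(x: f32) -> f32 {
    // erf is odd: erf(-x) = -erf(x), so work with |x| and restore the sign.
    let sign = if x < 0.0 { -1.0 } else { 1.0 };
    let x = x.abs();
    // Rational approximation: erf(x) ≈ 1 - p(t) * exp(-x^2), t = 1/(1 + 0.47047 x).
    let t = 1.0 / (1.0 + 0.47047 * x);
    let poly = t * (0.3480242 + t * (-0.0958798 + t * 0.7478556));
    sign * (1.0 - poly * (-x * x).exp())
}

fn main() {
    // erf(1.0) is approximately 0.8427.
    println!("{:.4}", erf_scalar(1.0));
}
```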
Applying softmax in place
```rust
use rten_simd::dispatch::SimdOp;
use rten_vecmath::Softmax;

let mut data = [1., 0.5, 2.0];
Softmax::new_mut(&mut data).dispatch();
```
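The result can be checked against a straightforward scalar reference. This is a sketch of the softmax math, not the crate's implementation; subtracting the maximum before exponentiating is the usual trick for numerical stability:

```rust
/// Scalar reference for softmax: exp(x_i - max) / sum_j exp(x_j - max).
/// Subtracting the max keeps exp() from overflowing. Illustrative only;
/// not the SIMD kernel from rten-vecmath.
fn softmax_scalar(data: &mut [f32]) {
    let max = data.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let mut sum = 0.0;
    for x in data.iter_mut() {
        *x = (*x - max).exp();
        sum += *x;
    }
    for x in data.iter_mut() {
        *x /= sum;
    }
}

fn main() {
    let mut data = [1.0_f32, 0.5, 2.0];
    softmax_scalar(&mut data);
    // The outputs are positive and sum to 1.
    println!("{:?}", data);
}
```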
Applying softmax with separate input and output buffers
This example reads data from an input and writes to an uninitialized output buffer. The softmax operation returns the initialized slice.
```rust
use rten_simd::dispatch::SimdOp;
use rten_vecmath::Softmax;

let data = [1., 0.5, 2.0];
let mut output = Vec::with_capacity(data.len());
let output_uninit = &mut output.spare_capacity_mut()[..data.len()];
let output_init = Softmax::new(&data, output_uninit).dispatch();

// Safety: The softmax operation initialized all output elements.
let init_len = output_init.len();
unsafe { output.set_len(init_len) };
```
Computing the sum of a list of floats
```rust
use rten_simd::dispatch::SimdOp;
use rten_vecmath::Sum;

let data = [1., 0.5, 2.0];
let sum = Sum::new(&data).dispatch();
```
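For small inputs this agrees exactly with a plain scalar sum. Note that a SIMD sum may reassociate additions, so on large inputs the results can differ from a sequential sum by floating-point rounding error. The function below is an illustrative scalar reference, not the crate's kernel:

```rust
/// Scalar reference sum. A SIMD sum reorders additions across lanes, so
/// its result can differ from this by rounding error on large inputs.
/// Illustrative only; not the rten-vecmath kernel.
fn sum_scalar(data: &[f32]) -> f32 {
    data.iter().copied().fold(0.0, |acc, x| acc + x)
}

fn main() {
    let data = [1.0_f32, 0.5, 2.0];
    // 1.0 + 0.5 + 2.0 = 3.5, which is exact in f32.
    println!("{}", sum_scalar(&data));
}
```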