| Version | Release date |
|---------|--------------|
| 0.2.3 | Sep 20, 2019 |
| 0.2.2 | Dec 8, 2018 |
| 0.2.1 | Nov 27, 2018 |
| 0.1.14 | Oct 29, 2017 |
| 0.1.4 | Mar 31, 2016 |
General matrix multiplication for f32, f64 matrices. Operates on matrices with general layout (they can use arbitrary row and column stride).
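To make the "general layout" convention concrete, here is a naive reference implementation of the operation in safe Rust (a sketch only, not the crate's API: the crate's `sgemm`/`dgemm` take raw pointers and `isize` strides, which may be negative). Element (i, j) of a matrix lives at offset `i * rs + j * cs`:

```rust
// Naive reference gemm over matrices with arbitrary row/column strides:
// C = alpha * A * B + beta * C, where A is m×k, B is k×n, C is m×n,
// and element (i, j) of each matrix is stored at index i * rs + j * cs.
fn gemm_ref(
    m: usize, k: usize, n: usize,
    alpha: f32,
    a: &[f32], rsa: usize, csa: usize,
    b: &[f32], rsb: usize, csb: usize,
    beta: f32,
    c: &mut [f32], rsc: usize, csc: usize,
) {
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0;
            for p in 0..k {
                acc += a[i * rsa + p * csa] * b[p * rsb + j * csb];
            }
            let cij = &mut c[i * rsc + j * csc];
            // Note: C is always written, even when k == 0 (then C becomes beta * C).
            *cij = alpha * acc + beta * *cij;
        }
    }
}
```

With this convention, `rsa = k, csa = 1` describes a row-major A and `rsa = 1, csa = m` a column-major one; any other consistent stride pair is equally valid.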
Please read the API documentation on docs.rs.
This crate uses the same macro/microkernel approach to matrix multiplication as the BLIS project.
We presently provide a few good microkernels, both portable ones and x86-64-specific ones, and only one operation: the general matrix-matrix multiplication (“gemm”).
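As a rough illustration of the microkernel idea, here is a hedged plain-Rust sketch (not the crate's actual kernel code, and with illustrative 4×4 register-block sizes): the innermost kernel computes a fixed MR×NR block of C from packed panels of A and B, accumulating in a small local array that the compiler can keep in registers.

```rust
// Illustrative register-block dimensions (the crate's real kernels use
// target-specific sizes such as 8×8).
const MR: usize = 4;
const NR: usize = 4;

/// Compute C += A_panel * B_panel for one MR×NR block, where the A panel is
/// packed MR elements per k-step and the B panel NR elements per k-step.
fn microkernel(k: usize, a: &[f32], b: &[f32], c: &mut [f32; MR * NR]) {
    // Local accumulator for the MR×NR block.
    let mut ab = [0.0f32; MR * NR];
    for p in 0..k {
        let ap = &a[p * MR..p * MR + MR];
        let bp = &b[p * NR..p * NR + NR];
        for i in 0..MR {
            for j in 0..NR {
                ab[i * NR + j] += ap[i] * bp[j];
            }
        }
    }
    // Write the accumulated block back out.
    for (cij, abij) in c.iter_mut().zip(&ab) {
        *cij += *abij;
    }
}
```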
- Code clarity and maintainability
- Portability and stable Rust
- Performance: provide target-specific microkernels when it is beneficial
- Testing: test and benchmark all microkernels on diverse inputs
- Small code footprint and fast compilation
- We are not reimplementing BLAS.
- Update rawpointer dependency to 0.2
- Minor changes to inlining for `-Ctarget-cpu=native` use (not recommended; prefer the automatic runtime feature detection).
- Minor improvements to kernel masking (#42, #41) by @bluss and @SuperFluffy
New dgemm avx and fma kernels implemented by R. Janis Goldschmidt (@SuperFluffy), with fast cases for both row and column major output.
Benchmark improvements: Using fma instructions reduces execution time on dgemm benchmarks by 25-35% compared with the avx kernel, see issue #35
Using the avx dgemm kernel reduces execution time on dgemm benchmarks by 5-7% compared with the previous version's autovectorized kernel.
New fma adaption of the sgemm avx kernel by R. Janis Goldschmidt (@SuperFluffy).
Benchmark improvement: Using fma instructions reduces execution time on sgemm benchmarks by 10-15% compared with the avx kernel, see issue #35
More flexible kernel selection: each kernel now sets all of its parameters individually, the fallback (plain Rust) kernels can be tuned for performance as well, and feature detection has moved out of the gemm loop.
Benchmark improvement: Reduces execution time on various benchmarks by 1-2% in the avx kernels, see #37.
Improved testing to cover input/output strides of more diversity.
Improve matrix packing by taking better advantage of contiguous inputs.
Benchmark improvement: execution time for a 64×64 problem where the inputs are either both row major or both column major changed by -5% for sgemm and -1% for dgemm. (#26)
The sgemm avx kernel now handles column major output arrays just as it does row major arrays.
Benchmark improvement: execution time for 32×32 problem where output is column major changed by -11%. (#27)
Use runtime feature detection on x86 and x86-64 platforms, to enable AVX-specific microkernels at runtime if available on the currently executing configuration.
This means no special compiler flags are needed to enable native instruction performance!
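A minimal sketch of what such runtime dispatch can look like in stable Rust (hypothetical function names, not the crate's internals), using `is_x86_feature_detected!` to pick an AVX-enabled code path only when the running CPU supports it:

```rust
/// Portable fallback, compiled for the baseline target.
fn dot_fallback(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Same body, but compiled with AVX enabled for this one function only,
/// so LLVM may autovectorize it with wider registers.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx")]
unsafe fn dot_avx(a: &[f32], b: &[f32]) -> f32 {
    dot_fallback(a, b)
}

/// Dispatch once, at the call boundary, based on runtime CPU detection.
pub fn dot(a: &[f32], b: &[f32]) -> f32 {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        if is_x86_feature_detected!("avx") {
            // Safe: we just verified the CPU supports AVX.
            return unsafe { dot_avx(a, b) };
        }
    }
    dot_fallback(a, b)
}
```

The key point is that no `-Ctarget-cpu`/`-Ctarget-feature` compiler flags are needed: the detection happens once at runtime, outside the hot loop.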
Implement a specialized 8×8 sgemm (f32) AVX microkernel; this speeds up matrix multiplication by another 25%.
Use std::alloc for allocation of aligned packing buffers
We now require Rust 1.28 as the minimum version
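A sketch of what aligned allocation with `std::alloc` can look like (the helper name is hypothetical, not the crate's code): `Layout::from_size_align` lets you request a buffer whose address is aligned for SIMD loads, e.g. 32 bytes for AVX.

```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};

/// Run `f` with a freshly allocated, `align`-byte-aligned buffer of `len` f32s,
/// then free it. `len` must be nonzero (zero-sized `alloc` is undefined) and
/// `align` must be a power of two.
fn with_aligned_f32_buffer<R>(len: usize, align: usize, f: impl FnOnce(*mut f32) -> R) -> R {
    assert!(len > 0);
    let layout = Layout::from_size_align(len * std::mem::size_of::<f32>(), align)
        .expect("invalid layout");
    unsafe {
        let ptr = alloc(layout);
        if ptr.is_null() {
            handle_alloc_error(layout);
        }
        let result = f(ptr as *mut f32);
        dealloc(ptr, layout);
        result
    }
}
```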
- Fix bug where the result matrix C was not updated in the case of an M × K by K × N matrix multiplication where K was zero; in that scenario the output C could be left uninitialized or with incorrect values. By @jturner314 (PR #21)
- Avoid an unused code warning
- Pick 8x8 sgemm (f32) kernel when AVX target feature is enabled (with Rust 1.14 or later, no effect otherwise).
- Use rawpointer, a µcrate with raw pointer methods taken from this project.
- Internal cleanup with retained performance
- Adjust sgemm (f32) kernel to optimize better on recent Rust.
- Update doc links to docs.rs
- Work around an optimization regression in Rust nightly (1.12-ish) (#9)
- Improved docs
- Reduce overhead slightly for small matrix multiplication problems by using only one allocation call for both packing buffers.
- Disable manual loop unrolling in debug mode (quicker debug builds)
- Update sgemm to use a 4x8 microkernel (“still in simplistic rust”), which improves throughput by 10%.
- Prepare support for aligned packed buffers
- Update dgemm to use a 8x4 microkernel, still in simplistic rust, which improves throughput by 10-20% when using AVX.
- Silence some debug prints
- Major performance improvement for sgemm and dgemm (20-30% when using AVX). Since it all depends on what the optimizer does, I'd love to get issue reports that report good or bad performance.
- Made the kernel masking generic, which is a cleaner design
- Minor improvement in the kernel