8 releases (5 breaking)

| Version | Date |
|---|---|
| 0.6.0 | Feb 21, 2021 |
| 0.5.1 | Aug 15, 2020 |
| 0.5.0 | Jul 26, 2020 |
| 0.4.0 | Oct 15, 2019 |
| 0.1.1 | Feb 14, 2019 |
# Reductive

## Training of optimized product quantizers
Training of optimized product quantizers requires a LAPACK implementation. For this reason, training of the `OPQ` and `GaussianOPQ` quantizers is gated behind the `opq-train` feature. `opq-train` is enabled automatically when a BLAS/LAPACK implementation is selected. The supported implementations are:
- OpenBLAS (feature: `openblas`)
- Netlib (feature: `netlib`)
- Intel MKL (feature: `intel-mkl`)
A backend can be selected as follows:

```toml
[dependencies]
reductive = { version = "0.3", features = ["openblas"] }
```
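To give a feel for what these quantizers compute, the snippet below is a minimal, self-contained sketch of the product-quantization idea: a vector is split into subspaces, and each sub-vector is encoded as the index of its nearest centroid in a per-subspace codebook. The names here (`dist2`, `quantize`) are illustrative only, not reductive's API:

```rust
// Illustrative sketch of product quantization; NOT reductive's API.

/// Squared Euclidean distance between two equal-length slices.
fn dist2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b.iter()).map(|(x, y)| (x - y) * (x - y)).sum()
}

/// Quantize `v` against one codebook per subspace; returns one code per subspace.
fn quantize(v: &[f32], codebooks: &[Vec<Vec<f32>>]) -> Vec<usize> {
    let sub_len = v.len() / codebooks.len();
    codebooks
        .iter()
        .enumerate()
        .map(|(i, book)| {
            // Sub-vector for the i-th subspace.
            let sub = &v[i * sub_len..(i + 1) * sub_len];
            // Index of the nearest centroid in this subspace's codebook.
            (0..book.len())
                .min_by(|&a, &b| {
                    dist2(sub, &book[a])
                        .partial_cmp(&dist2(sub, &book[b]))
                        .unwrap()
                })
                .unwrap()
        })
        .collect()
}

fn main() {
    // Two subspaces of length 2, each with a two-centroid codebook.
    let codebooks = vec![
        vec![vec![0.0, 0.0], vec![1.0, 1.0]],
        vec![vec![0.0, 1.0], vec![1.0, 0.0]],
    ];
    let codes = quantize(&[0.9, 1.1, 0.1, 0.9], &codebooks);
    println!("{:?}", codes); // [1, 0]
}
```

An *optimized* product quantizer additionally learns a rotation of the input space before this encoding step, which is where the LAPACK requirement above comes from.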
## Running tests

To run all tests, specify the BLAS/LAPACK implementation:

```shell
$ cargo test --verbose --features "openblas"
```
## Multi-threaded OpenBLAS

`reductive` uses Rayon to parallelize quantizer training. However, multi-threaded OpenBLAS is known to conflict with application threading. If you use OpenBLAS, ensure that its threading is disabled, for instance by setting the number of threads to 1:

```shell
$ export OPENBLAS_NUM_THREADS=1
```
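As a rough illustration of the kind of data parallelism involved, the sketch below splits a batch of vectors across threads for nearest-centroid assignment. It uses plain std threads so it stays dependency-free; it is not reductive's (Rayon-based) code, and all names are hypothetical:

```rust
use std::sync::Arc;
use std::thread;

// Sketch of dividing quantization work over threads, analogous in spirit to
// how a Rayon-parallelized trainer splits a batch across a thread pool.
// NOT reductive's code; assumes `n_threads >= 1` and non-empty `data`.

/// Index of the centroid nearest to `v`.
fn nearest(v: &[f32], centroids: &[Vec<f32>]) -> usize {
    let d = |c: &[f32]| -> f32 {
        v.iter().zip(c.iter()).map(|(x, y)| (x - y) * (x - y)).sum()
    };
    (0..centroids.len())
        .min_by(|&a, &b| d(&centroids[a]).partial_cmp(&d(&centroids[b])).unwrap())
        .unwrap()
}

/// Assign every vector in `data` to its nearest centroid, processing
/// `n_threads` chunks of the batch concurrently.
fn assign_parallel(
    data: Vec<Vec<f32>>,
    centroids: Vec<Vec<f32>>,
    n_threads: usize,
) -> Vec<usize> {
    let chunk = (data.len() + n_threads - 1) / n_threads;
    let centroids = Arc::new(centroids);
    let handles: Vec<_> = data
        .chunks(chunk)
        .map(|part| {
            let part = part.to_vec();
            let cents = Arc::clone(&centroids);
            thread::spawn(move || {
                part.iter().map(|v| nearest(v, &cents)).collect::<Vec<usize>>()
            })
        })
        .collect();
    // Chunk order is preserved, so the result lines up with `data`.
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    let data = vec![vec![0.0], vec![1.0], vec![0.1], vec![0.9]];
    let centroids = vec![vec![0.0], vec![1.0]];
    println!("{:?}", assign_parallel(data, centroids, 2)); // [0, 1, 0, 1]
}
```

Because OpenBLAS may spin up its own threads inside each of these worker threads, pinning `OPENBLAS_NUM_THREADS=1` as shown above avoids oversubscription.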