Reductive
Training of optimized product quantizers
Training of optimized product quantizers requires a LAPACK implementation. For
this reason, training of the Opq and GaussianOpq quantizers is feature-gated
by the opq-train feature. This feature must be enabled if you want to use
Opq or GaussianOpq:
[dependencies]
reductive = { version = "0.7", features = ["opq-train"] }
This also requires adding a dependency on a crate that links a LAPACK
implementation, such as accelerate-src, intel-mkl-src, openblas-src, or
netlib-src.
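For example, a Cargo.toml that uses Intel MKL as the LAPACK backend might look
like the following sketch (the intel-mkl-src version is a placeholder; pin
whichever version fits your project, or substitute another LAPACK-providing
crate):
[dependencies]
reductive = { version = "0.7", features = ["opq-train"] }
# Placeholder: any crate that links a LAPACK implementation works here.
intel-mkl-src = "*"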
Running tests
Linux
You can run all tests on Linux, including tests for optimized product
quantizers, using the intel-mkl-test feature:
$ cargo test --features intel-mkl-test
macOS
All tests can be run on macOS with the accelerate-test feature:
$ cargo test --features accelerate-test
Multi-threaded OpenBLAS
reductive uses Rayon to parallelize quantizer training. However,
multi-threaded OpenBLAS is known to conflict with application threading. If
you use OpenBLAS, ensure that its threading is disabled, for instance by
setting the number of threads to 1:
$ export OPENBLAS_NUM_THREADS=1
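The variable can also be set for a single invocation instead of being
exported, for example when running the test suite against an OpenBLAS-backed
build (assumes OpenBLAS is the linked BLAS/LAPACK backend):
$ OPENBLAS_NUM_THREADS=1 cargo test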