
Reductive

Training of optimized product quantizers

Training of optimized product quantizers requires a LAPACK implementation. For this reason, training of the OPQ and GaussianOPQ quantizers is feature-gated by the opq-train feature. This feature must be enabled if you want to use OPQ or GaussianOPQ:

[dependencies]
reductive = { version = "0.7", features = ["opq-train"] }

You must also add a crate that links a LAPACK implementation as a dependency, e.g. accelerate-src, intel-mkl-src, openblas-src, or netlib-src.
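
For example, with Intel MKL as the LAPACK provider, the dependency section could look as follows (the intel-mkl-src version below is illustrative; check crates.io for a current release):

[dependencies]
reductive = { version = "0.7", features = ["opq-train"] }
# Illustrative version; use a current intel-mkl-src release.
intel-mkl-src = "0.6"

Since nothing in your code refers to the -src crate directly, you may also need use intel_mkl_src as _; (or extern crate intel_mkl_src;) in your crate root to ensure that the LAPACK library is actually linked.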

Running tests

Linux

You can run all tests on Linux, including tests for optimized product quantizers, using the intel-mkl-test feature:

$ cargo test --features intel-mkl-test

macOS

All tests can be run on macOS with the accelerate-test feature:

$ cargo test --features accelerate-test

Multi-threaded OpenBLAS

reductive uses Rayon to parallelize quantizer training. However, multi-threaded OpenBLAS is known to conflict with application threading. If you use OpenBLAS, ensure that its threading is disabled, for instance by setting the number of threads to 1:

$ export OPENBLAS_NUM_THREADS=1
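
The variable can also be set for a single command, for instance when running the test suite against an OpenBLAS-backed build:

$ OPENBLAS_NUM_THREADS=1 cargo test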
