8 releases (5 breaking)

0.6.0 Feb 21, 2021
0.5.1 Aug 15, 2020
0.5.0 Jul 26, 2020
0.4.0 Oct 15, 2019
0.1.1 Feb 14, 2019

Used in 9 crates (4 directly)

Apache-2.0

73KB
1.5K SLoC

Reductive

Training of optimized product quantizers

Training of optimized product quantizers requires a LAPACK implementation. For this reason, training of the OPQ and GaussianOPQ quantizers is feature-gated by the opq-train feature. opq-train is automatically enabled by selecting a BLAS/LAPACK implementation. The supported implementations are:

  • OpenBLAS (feature: openblas)
  • Netlib (feature: netlib)
  • Intel MKL (feature: intel-mkl)

A backend can be selected as follows:

[dependencies]
reductive = { version = "0.3", features = ["openblas"] }
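For background, the core idea of product quantization can be sketched as follows. This is a purely illustrative example, not the reductive API: a vector is split into subvectors, and each subvector is encoded as the index of its nearest centroid in a per-subspace codebook. All names and values below are hypothetical.

```rust
// Conceptual product-quantization sketch (not the reductive API).

/// Index of the centroid closest to `sub` by squared Euclidean distance.
fn nearest(centroids: &[Vec<f32>], sub: &[f32]) -> usize {
    centroids
        .iter()
        .enumerate()
        .map(|(i, c)| {
            let d: f32 = c.iter().zip(sub).map(|(a, b)| (a - b).powi(2)).sum();
            (i, d)
        })
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .unwrap()
        .0
}

/// Encode `v` with one codebook per subspace; the result is one
/// centroid index per subvector.
fn pq_encode(codebooks: &[Vec<Vec<f32>>], v: &[f32]) -> Vec<usize> {
    let sub_len = v.len() / codebooks.len();
    codebooks
        .iter()
        .zip(v.chunks(sub_len))
        .map(|(cb, sub)| nearest(cb, sub))
        .collect()
}

fn main() {
    // Two subspaces of length 2, two centroids each (made-up values).
    let codebooks = vec![
        vec![vec![0.0, 0.0], vec![1.0, 1.0]],
        vec![vec![0.0, 1.0], vec![1.0, 0.0]],
    ];
    let v = [0.9, 1.1, 0.1, 0.9];
    println!("{:?}", pq_encode(&codebooks, &v)); // prints: [1, 0]
}
```

An *optimized* product quantizer additionally learns a rotation of the input space before splitting, which is where the LAPACK dependency comes in.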

Running tests

To run all tests, specify the BLAS/LAPACK implementation:

$ cargo test --verbose --features "openblas"

Multi-threaded OpenBLAS

reductive uses Rayon to parallelize quantizer training. However, multi-threaded OpenBLAS is known to conflict with application threading. If you use OpenBLAS, ensure that its threading is disabled, for instance by setting the number of threads to 1:

$ export OPENBLAS_NUM_THREADS=1
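The variable can also be limited to a single command by prefixing the invocation instead of exporting it; this is standard POSIX shell behavior. In the sketch below, `printenv` stands in for the actual command you would run (e.g. `cargo test --features "openblas"`):

```shell
# Set the thread count for this command's environment only; the
# variable does not persist in the shell session.
OPENBLAS_NUM_THREADS=1 printenv OPENBLAS_NUM_THREADS   # prints: 1
```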

Dependencies

~3.5MB
~68K SLoC