In One Sentence
You trained an SVM using libSVM and now want the highest possible performance during (real-time) classification, for example in games or VR.
Highlights
- loads almost all libSVM types (C-SVC, ν-SVC, ε-SVR, ν-SVR) and kernels (linear, poly, RBF and sigmoid)
- produces practically the same classification results as libSVM
- optimized for SIMD and can be mixed seamlessly with Rayon
- written in 100% safe Rust
- allocation-free during classification for dense SVMs
- 2.5x - 14x faster than libSVM for dense SVMs
- extremely low classification times for small models (e.g., 128 support vectors, 16 dense attributes, linear kernel: ~500 ns)
- successfully used in Unity and VR projects (Windows & Android)
Usage
Train with libSVM (e.g., using the tool `svm-train`), then classify with `ffsvm-rust`.
From Rust:
```rust
use ffsvm::*;

fn main() -> Result<(), Error> {
    // Replace `SAMPLE_MODEL` with a `&str` containing your model.
    let svm = DenseSVM::try_from(SAMPLE_MODEL)?;

    let mut fv = FeatureVector::from(&svm);
    let features = fv.features();
    features[0] = 0.55838;
    features[1] = -0.157895;
    features[2] = 0.581292;
    features[3] = -0.221184;

    svm.predict_value(&mut fv)?;
    assert_eq!(fv.label(), Label::Class(42));
    Ok(())
}
```
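In a real application the model text usually comes from disk rather than a string constant. A minimal sketch of that, where the file name `my_model.libsvm` is purely illustrative and stands for whatever `svm-train` wrote for you:

```rust
use ffsvm::*;

fn main() {
    // "my_model.libsvm" is an illustrative path; point it at a model
    // file written by libSVM's `svm-train`.
    let model_text = std::fs::read_to_string("my_model.libsvm")
        .expect("could not read model file");

    // `DenseSVM` parses the same text format that `svm-train` writes.
    let _svm = DenseSVM::try_from(model_text.as_str())
        .expect("could not parse model");
}
```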
Status
- December 14, 2024: After 7+ years, finally ported to stable Rust. 🎉🎉🎉
- March 10, 2023: Reactivated for latest Rust nightly.
- June 7, 2019: Gave up on 'no `unsafe`', but gained runtime SIMD selection.
- March 10, 2019: As soon as we can move away from nightly we'll go beta.
- Aug 5, 2018: Still in alpha, but finally on crates.io.
- May 27, 2018: We're in alpha. Successfully used internally on Windows, Mac, Android and Linux on various machines and devices. Once SIMD stabilizes and we can cross-compile to WASM we'll move to beta.
- December 16, 2017: We're in pre-alpha. It will probably not even work on your machine.
Performance
All performance numbers are reported for the `DenseSVM`. We also support `SparseSVM`s, which are slower for "mostly dense" models and faster for "mostly sparse" models (and generally on the performance level of libSVM).
Tips
- Compile your project with `target-cpu=native` for a massive speed boost (e.g., see our `.cargo/config.toml` for how to easily do that in your project). Note that, due to how Rust works, this only applies to applications (or dynamic FFI libraries), not to library crates wrapping us.
- For an x-fold performance increase, create a number of `Problem` structures and process them with Rayon's `par_iter`, as sketched below.
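A minimal sketch of that pattern, assuming `rayon` is a dependency; `classify_batch` and its inputs are illustrative, each inner `Vec<f32>` is expected to have as many entries as the model has attributes, and the per-sample state uses the `FeatureVector` type from the usage example above:

```rust
use ffsvm::*;
use rayon::prelude::*;

// Classify many samples in parallel by giving each work item its own
// feature vector and letting Rayon spread the work over threads.
fn classify_batch(svm: &DenseSVM, samples: &[Vec<f32>]) -> Vec<Label> {
    samples
        .par_iter()
        .map(|sample| {
            // One feature vector per sample; the shared `svm` is only read.
            let mut fv = FeatureVector::from(svm);
            let features = fv.features();
            for (i, value) in sample.iter().enumerate() {
                features[i] = *value;
            }
            svm.predict_value(&mut fv).expect("classification failed");
            fv.label()
        })
        .collect()
}
```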
FAQ