#machine-learning #data-science #pca #dimension-reduction #missing-values

ppca

Rust implementation of the Probabilistic Principal Component Analysis model

4 releases (2 breaking)

0.5.0 Aug 10, 2024
0.4.1 Mar 13, 2023
0.4.0 Jan 23, 2023
0.3.1 Dec 16, 2022


Custom license

105KB
2K SLoC

Probabilistic Principal Component Analysis (PPCA) model


This project implements a PPCA model in Rust, exposed to Python using PyO3 and maturin.

Installing

This package is available on PyPI!

pip install ppca-rs

And you can also use it natively in Rust:

cargo add ppca

Why use PPCA?

Glad you asked!

  • PPCA is a simple extension of PCA (principal component analysis), but is generally more robust to train (the generative model is sketched after this list).
  • PPCA is a proper statistical model. It doesn't spit out only the mean: you get standard deviations, covariances, and all the goodies that come from the realm of probability and statistics.
  • The PPCA model can handle missing values. If there is data missing from your dataset, it can extrapolate it with reasonable values and even give you a confidence interval.
  • Training converges quickly and always tends to the global maximum. No hyperparameters to dabble with and no local maxima.
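
For reference, this is the standard PPCA generative model (Tipping & Bishop's formulation; the notation here is illustrative, not taken from this crate's API): each observed sample $x \in \mathbb{R}^d$ is generated from a lower-dimensional latent state $z \in \mathbb{R}^q$ by

$$ x = W z + \mu + \epsilon, \qquad z \sim \mathcal{N}(0, I_q), \qquad \epsilon \sim \mathcal{N}(0, \sigma^2 I_d), $$

so that, marginally, $x \sim \mathcal{N}(\mu, W W^\top + \sigma^2 I_d)$. Classical PCA is recovered in the limit $\sigma^2 \to 0$.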

Why use ppca-rs?

That's an easy one!

  • It's written in Rust, with only a bit of Python glue on top. You can expect performance in the same league as C code.
  • It uses rayon to parallelize computations evenly across as many CPUs as you have.
  • It also uses fancy Linear Algebra Trickery Technology to reduce computational complexity in key bottlenecks (one classic trick of this kind is sketched below).
  • Battle-tested at Vio.com with some ridiculously huge datasets.
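
To give a flavor of the Trickery (this is a standard PPCA identity, not necessarily the exact optimization this crate uses): the $d \times d$ model covariance $C = W W^\top + \sigma^2 I_d$ never has to be inverted directly. By the Woodbury matrix identity,

$$ C^{-1} = \sigma^{-2} \left( I_d - W M^{-1} W^\top \right), \qquad M = W^\top W + \sigma^2 I_q, $$

so only the small $q \times q$ matrix $M$ needs to be inverted, with $q \ll d$.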

Quick example

import numpy as np
from ppca_rs import Dataset, PPCATrainer, PPCAModel

samples: np.ndarray

# Create your dataset from a rank-2 np.ndarray, where each row is a sample.
# Use non-finite values (`inf`s and `nan`s) to signal masked (missing) values.
dataset = Dataset(samples)

# Train the model (convenient edition!):
model: PPCAModel = PPCATrainer(dataset).train(state_size=10, n_iters=10)


# And now, here is a free sample of what you can do:

# Extrapolates the missing values with the most probable values:
extrapolated: Dataset = model.extrapolate(dataset)

# Smooths (removes noise from) samples and fills in missing values:
filtered: Dataset = model.filter_extrapolate(dataset)

# ... and go back to numpy:
extrapolated_np = extrapolated.numpy()
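
The quick example assumes `samples` already exists. Here is a minimal, hypothetical way to fabricate such an array with missing entries, using plain numpy; the only ppca-rs-specific fact it relies on is that non-finite values mark masked entries:

import numpy as np

rng = np.random.default_rng(42)
samples = rng.standard_normal((500, 40))            # 500 samples of dimension 40
samples[rng.random(samples.shape) < 0.2] = np.nan   # mask out ~20% of the entries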

Juicy extras!

  • Tired of the linear? We have support for PPCA mixture models (sketched below). Make the most of your data with clustering and dimensionality reduction in a single tool!
  • Support for adapting DataFrames using either pandas or polars. Never juggle those dfs in your code again.
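
For the curious: a PPCA mixture (in the standard Tipping & Bishop formulation; notation illustrative) models the data as a weighted combination of $K$ PPCA components,

$$ p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\!\left(x \mid \mu_k,\; W_k W_k^\top + \sigma_k^2 I_d\right), \qquad \sum_{k} \pi_k = 1, $$

so each component performs its own dimensionality reduction, while the posterior over components gives you a soft clustering of the samples.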

Building from source

Prerequisites

You will need Rust, which can be installed locally (i.e., without sudo), and you will also need maturin, which can be installed with

pip install maturin

pipenv is also a good idea if you are going to mess around with it locally. At the very least, you need a venv set up; otherwise, maturin will complain at you.

Installing it locally

Check the Makefile for the available commands (or just type make). To install it locally, do

make install    # optional: i=python.version (e.g., `i=3.9`)

Messing around and testing

To mess around, inside a virtual environment (a Pipfile is provided for the pipenv lovers), do

maturin develop  # use the flag --release to unlock superspeed!

This will install the package locally as is from source.
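
As a minimal smoke test that the local build is importable (this relies only on the ppca_rs names already used in the quick example above):

python -c "from ppca_rs import Dataset, PPCATrainer; print('ppca_rs imports fine')"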

How do I use this stuff?

See the examples in the examples folder. Also, all functions are type-hinted and commented. If you are using Pylance or mypy, it should be easy to navigate.

Is it faster than the pure Python implementation you made?

You bet!

Dependencies

~7MB
~145K SLoC