russell_lab

Matrix-vector laboratory including linear algebra tools

MIT license


Russell Lab - Matrix-vector laboratory including linear algebra tools

This crate is part of Russell - Rust Scientific Library

Introduction

This crate implements several functions to perform linear algebra computations; it is a matrix-vector laboratory 😉. Functions are implemented in native Rust code as much as possible, but the crate also wraps the best tools available, such as OpenBLAS and Intel MKL.

The main structures are NumVector and NumMatrix, which are generic Vector and Matrix structures. The Matrix data is stored in column-major order. Vector and Matrix are the f64 aliases of NumVector and NumMatrix, respectively; ComplexVector and ComplexMatrix are the Complex64 counterparts.

The linear algebra functions currently handle only (f64, i32) pairs, i.e., they access the (double, int) C functions. (Complex64, i32) pairs are also handled.

There are many functions for linear algebra, such as (for Real and Complex types):

  • Vector addition, copy, inner and outer products, norms, and more
  • Matrix addition, multiplication, copy, singular-value decomposition, eigenvalues, pseudo-inverse, inverse, norms, and more
  • Matrix-vector multiplication, and more
  • Solution of dense linear systems with symmetric or non-symmetric coefficient matrices, and more
  • Reading and writing files, linspace, grid generators, Stopwatch, linear fitting, and more
  • Checking results, comparing floating-point numbers, and verifying the correctness of derivatives; see russell_lab::check

See the documentation for further information.

Installation on Debian/Ubuntu/Linux

This crate depends on an efficient BLAS library, such as OpenBLAS or Intel MKL.

The root README file presents the steps to install the required dependencies.

Setting Cargo.toml

Crates.io

👆 Check the crate version and update your Cargo.toml accordingly:

[dependencies]
russell_lab = "*"

Examples


Compute the pseudo-inverse matrix

use russell_lab::{mat_pseudo_inverse, Matrix, StrError};

fn main() -> Result<(), StrError> {
    // set matrix
    let mut a = Matrix::from(&[
      [1.0, 0.0], //
      [0.0, 1.0], //
      [0.0, 1.0], //
    ]);
    let a_copy = a.clone();

    // compute pseudo-inverse matrix
    let mut ai = Matrix::new(2, 3);
    mat_pseudo_inverse(&mut ai, &mut a)?;

    // compare with solution
    let ai_correct = "┌                ┐\n\
                      │ 1.00 0.00 0.00 │\n\
                      │ 0.00 0.50 0.50 │\n\
                      └                ┘";
    assert_eq!(format!("{:.2}", ai), ai_correct);

    // compute a ⋅ ai
    let (m, n) = a.dims();
    let mut a_ai = Matrix::new(m, m);
    for i in 0..m {
        for j in 0..m {
            for k in 0..n {
                a_ai.add(i, j, a_copy.get(i, k) * ai.get(k, j));
            }
        }
    }

    // check: a ⋅ ai ⋅ a = a
    let mut a_ai_a = Matrix::new(m, n);
    for i in 0..m {
        for j in 0..n {
            for k in 0..m {
                a_ai_a.add(i, j, a_ai.get(i, k) * a_copy.get(k, j));
            }
        }
    }
    let a_ai_a_correct = "┌           ┐\n\
                          │ 1.00 0.00 │\n\
                          │ 0.00 1.00 │\n\
                          │ 0.00 1.00 │\n\
                          └           ┘";
    assert_eq!(format!("{:.2}", a_ai_a), a_ai_a_correct);
    Ok(())
}

Compute eigenvalues

use russell_lab::*;

fn main() -> Result<(), StrError> {
    // set matrix
    let data = [[2.0, 0.0, 0.0], [0.0, 3.0, 4.0], [0.0, 4.0, 9.0]];
    let mut a = Matrix::from(&data);

    // allocate output arrays
    let m = a.nrow();
    let mut l_real = Vector::new(m);
    let mut l_imag = Vector::new(m);
    let mut v_real = Matrix::new(m, m);
    let mut v_imag = Matrix::new(m, m);

    // perform the eigen-decomposition
    mat_eigen(&mut l_real, &mut l_imag, &mut v_real, &mut v_imag, &mut a)?;

    // check results
    assert_eq!(
        format!("{:.1}", l_real),
        "┌      ┐\n\
         │ 11.0 │\n\
         │  1.0 │\n\
         │  2.0 │\n\
         └      ┘"
    );
    assert_eq!(
        format!("{}", l_imag),
        "┌   ┐\n\
         │ 0 │\n\
         │ 0 │\n\
         │ 0 │\n\
         └   ┘"
    );

    // check eigen-decomposition (similarity transformation) of a
    // symmetric matrix with real-only eigenvalues and eigenvectors
    let a_copy = Matrix::from(&data);
    let lam = Matrix::diagonal(l_real.as_data());
    let mut a_v = Matrix::new(m, m);
    let mut v_l = Matrix::new(m, m);
    let mut err = Matrix::filled(m, m, f64::MAX);
    mat_mat_mul(&mut a_v, 1.0, &a_copy, &v_real)?;
    mat_mat_mul(&mut v_l, 1.0, &v_real, &lam)?;
    mat_add(&mut err, 1.0, &a_v, -1.0, &v_l)?;
    approx_eq(mat_norm(&err, Norm::Max), 0.0, 1e-15);
    Ok(())
}

Cholesky factorization

use russell_lab::*;

fn main() -> Result<(), StrError> {
    // set matrix
    let sym = 0.0;
    #[rustfmt::skip]
    let mut a = Matrix::from(&[
        [  4.0,   sym,   sym],
        [ 12.0,  37.0,   sym],
        [-16.0, -43.0,  98.0],
    ]);

    // perform factorization
    mat_cholesky(&mut a, false)?;

    // define alias (for convenience)
    let l = &a;

    // compare with solution
    let l_correct = "┌          ┐\n\
                     │  2  0  0 │\n\
                     │  6  1  0 │\n\
                     │ -8  5  3 │\n\
                     └          ┘";
    assert_eq!(format!("{}", l), l_correct);

    // check:  l ⋅ lᵀ = a
    let m = a.nrow();
    let mut l_lt = Matrix::new(m, m);
    for i in 0..m {
        for j in 0..m {
            for k in 0..m {
                l_lt.add(i, j, l.get(i, k) * l.get(j, k));
            }
        }
    }
    let l_lt_correct = "┌             ┐\n\
                        │   4  12 -16 │\n\
                        │  12  37 -43 │\n\
                        │ -16 -43  98 │\n\
                        └             ┘";
    assert_eq!(format!("{}", l_lt), l_lt_correct);
    Ok(())
}
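With the factor in hand, a linear system A x = b can be solved by one forward and one backward substitution. A self-contained sketch using the factor computed above (plain loops, no library calls; the right-hand side is chosen so that the exact solution is [1, 1, 1]):

```rust
fn main() {
    // Cholesky factor L from the example above (A = L·Lᵀ)
    let l = [[2.0, 0.0, 0.0], [6.0, 1.0, 0.0], [-8.0, 5.0, 3.0]];

    // right-hand side b = A · [1, 1, 1]
    let b = [0.0, 6.0, 39.0];

    // forward substitution: solve L y = b
    let mut y = [0.0; 3];
    for i in 0..3 {
        let mut sum = b[i];
        for j in 0..i {
            sum -= l[i][j] * y[j];
        }
        y[i] = sum / l[i][i];
    }

    // backward substitution: solve Lᵀ x = y
    let mut x = [0.0; 3];
    for i in (0..3).rev() {
        let mut sum = y[i];
        for j in (i + 1)..3 {
            sum -= l[j][i] * x[j];
        }
        x[i] = sum / l[i][i];
    }

    assert_eq!(x, [1.0, 1.0, 1.0]);
}
```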

About the column major representation

Only the COL-MAJOR representation is considered here.

    ┌     ┐  row_major = {0, 3,
    │ 0 3 │               1, 4,
A = │ 1 4 │               2, 5};
    │ 2 5 │
    └     ┘  col_major = {0, 1, 2,
    (m × n)               3, 4, 5}

Aᵢⱼ = col_major[i + j·m] = row_major[i·n + j]
        ↑
COL-MAJOR IS ADOPTED HERE

The main reason to use the col-major representation is to interoperate efficiently with BLAS/LAPACK, which are written in Fortran. Although those libraries provide functions to handle row-major data, they usually add overhead due to temporary memory allocation and copies, including transposing matrices. Moreover, the row-major versions of some BLAS/LAPACK functions produce incorrect results (notably DSYEV).
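The indexing identity above can be checked with plain Rust (no crate dependencies):

```rust
fn main() {
    // the 3 × 2 matrix from the sketch above, in both layouts
    let (m, n) = (3, 2);
    let row_major = [0.0, 3.0, 1.0, 4.0, 2.0, 5.0];
    let col_major = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0];

    // Aᵢⱼ = col_major[i + j·m] = row_major[i·n + j]
    for i in 0..m {
        for j in 0..n {
            assert_eq!(col_major[i + j * m], row_major[i * n + j]);
        }
    }
}
```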

Benchmarks

Need to install:

cargo install cargo-criterion

Run the benchmarks with:

bash ./zscripts/benchmark.bash

Jacobi Rotation versus LAPACK DSYEV

Comparison of the performance of mat_eigen_sym_jacobi (Jacobi rotation) versus mat_eigen_sym (which calls LAPACK DSYEV).

Jacobi Rotation versus LAPACK DSYEV (1-5)

Jacobi Rotation versus LAPACK DSYEV (1-32)

For developers

Notes for developers:

  • The c_code directory contains a thin wrapper to the BLAS libraries (OpenBLAS or Intel MKL)
  • The c_code directory also contains a wrapper to the C math functions
  • The build.rs file uses the cc crate to build the C wrappers

Dependencies

~8–21MB
~270K SLoC