
ndsparse


Structures to store and retrieve N-dimensional sparse data. Well, not any N ∈ ℕ but any natural number that fits into the pointer size of the machine that you are using. E.g., an 8-bit microcontroller can manipulate any sparse structure with up to 255 dimensions.

For those wondering why this crate should be used, it generally comes down to space efficiency, ergonomics, and retrieval speed. The following snippet shows some use cases that are candidates for replacement, with _cube_of_vecs being the most inefficient of all.

let _vec_of_options: Vec<Option<i32>> = Default::default();
let _matrix_of_options: [Option<[Option<i32>; 8]>; 16] = Default::default();
let _cube_of_vecs: Vec<Vec<Vec<i32>>> = Default::default();
// The list worsens exponentially for higher dimensions
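To make the space-efficiency claim concrete, here is a minimal sketch (plain Rust, not the ndsparse API) comparing the memory paid for a dense 100×100×100 cube of f64 against a COO-style layout that stores only the non-zero entries as (coordinates, value) pairs. The 0.1% density figure is an illustrative assumption.

```rust
use std::mem::size_of;

fn main() {
    // Dense storage: every cell of the cube is paid for, zero or not.
    let dense_bytes = 100 * 100 * 100 * size_of::<f64>();

    // COO-style sparse storage: one ([usize; 3], f64) pair per non-zero.
    let nnz = 1_000; // illustrative assumption: 0.1% of cells are non-zero
    let sparse_bytes = nnz * (size_of::<[usize; 3]>() + size_of::<f64>());

    // On a 64-bit target: 8_000_000 B dense vs 32_000 B sparse.
    assert!(sparse_bytes < dense_bytes / 100);
    println!("dense: {} B, sparse: {} B", dense_bytes, sparse_bytes);
}
```

The trade-off, of course, is that each sparse entry carries its coordinates, so a mostly-dense structure can end up larger in sparse form.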

See this blog post for more information.

Example

use ndsparse::{coo::CooArray, csl::CslVec};

fn main() -> ndsparse::Result<()> {
  // A CSL and COO cube.
  //
  //      ___ ___
  //    /   /   /\
  //   /___/___/ /\
  //  / 1 /   /\/2/
  // /_1_/___/ /\/
  // \_1_\___\/ /
  //  \___\___\/
  let coo = CooArray::new([2, 2, 2], [([0, 0, 0].into(), 1.0), ([1, 1, 1].into(), 2.0)])?;
  let mut csl = CslVec::default();
  csl
    .constructor()?
    .next_outermost_dim(2)?
    .push_line([(0, 1.0)].iter().copied())?
    .next_outermost_dim(2)?
    .push_empty_line()?
    .next_outermost_dim(2)?
    .push_empty_line()?
    .push_line([(1, 2.0)].iter().copied())?;
  assert_eq!(coo.value([0, 0, 0]), csl.value([0, 0, 0]));
  assert_eq!(coo.value([1, 1, 1]), csl.value([1, 1, 1]));
  Ok(())
}

Supported structures

  • Compressed Sparse Line (CSL)
  • Coordinate format (COO)
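To illustrate what the coordinate format stores, here is a minimal COO-like container written in plain Rust. This is a sketch of the idea, not the ndsparse API: entries are (coordinates, value) pairs kept sorted by coordinates so lookups can use binary search.

```rust
// Illustrative 3-dimensional COO container (not the ndsparse API).
struct Coo3 {
    // Each non-zero element as a (coordinates, value) pair, sorted by coordinates.
    entries: Vec<([usize; 3], f64)>,
}

impl Coo3 {
    fn value(&self, coords: [usize; 3]) -> Option<f64> {
        // Binary search over the sorted coordinate list; absent means zero.
        self.entries
            .binary_search_by(|(c, _)| c.cmp(&coords))
            .ok()
            .map(|idx| self.entries[idx].1)
    }
}

fn main() {
    let coo = Coo3 {
        entries: vec![([0, 0, 0], 1.0), ([1, 1, 1], 2.0)],
    };
    assert_eq!(coo.value([0, 0, 0]), Some(1.0));
    assert_eq!(coo.value([0, 1, 0]), None);
    println!("ok");
}
```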

Features

  • no_std support without opt-out flags
  • Different storages (Array, Vec, Slice and more!)
  • Fully documented
  • Fuzz testing
  • No unsafe

Optional features

  • alloc and std
  • Bindings (PyO3, wasm-bindgen)
  • Deserialization/Serialization (serde)
  • Parallel iterators (rayon)
  • Random instances (rand)

Future

Although CSL and COO are general sparse structures, they aren't well suited to every situation; hence the existence of DIA, JDS, ELL, LIL, DOK and many others.

If there is enough interest, the mentioned sparse storages might be added at some point in the future.
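As one example of why other formats exist, DOK (dictionary of keys) can be sketched in plain Rust as a hash map from coordinates to values. This is not ndsparse code; it just shows the property that sets DOK apart from COO/CSL: cheap random insertion and in-place update.

```rust
use std::collections::HashMap;

fn main() {
    // DOK sketch: coordinates -> value. Random insertion and update are
    // O(1) on average, which COO/CSL layouts cannot offer.
    let mut dok: HashMap<[usize; 3], f64> = HashMap::new();
    dok.insert([0, 0, 0], 1.0);
    dok.insert([1, 1, 1], 2.0);
    *dok.entry([0, 0, 0]).or_insert(0.0) += 0.5; // in-place update
    assert_eq!(dok.get(&[0, 0, 0]), Some(&1.5));
    println!("{} non-zero entries", dok.len());
}
```

The flip side is that a hash map has per-entry overhead and no ordered traversal, which is why DOK is typically used for construction and then converted to a compressed format.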

Algebra library

This project isn't, and never will be, a sparse algebra library; its responsibility is the self-contained storage and retrieval of sparse data. Furthermore, a good implementation of such a library would require a titanic amount of work and research covering different algorithms, operations, decompositions, solvers and hardware.

Alternatives

One of these libraries might suit you better:

  • sprs: Sparse linear algebra.
  • ndarray: Dense N-dimensional operations.
  • nalgebra: Dense linear algebra.
