#tensor #blas #machine-learning #error

constensor-core (nightly)

Experimental ML framework featuring a graph-based JIT compiler

2 releases

0.1.1 Apr 26, 2025
0.1.0 Apr 26, 2025

#635 in Machine learning

231 downloads per month

MIT/Apache

160KB
4K SLoC

constensor

Experimental ML framework featuring a graph-based JIT compiler.

Docs

Constensor is an ML framework that provides the following key features:

  • Compile-time shape, dtype, and device checking: develop quickly and catch common errors before the program runs
  • Opt-in half-precision support: run on any GPU
  • Advanced AI compiler features:
    • Elementwise JIT kernel fusion
    • Automatic inplacing
    • Constant folding
    • Dead code removal
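As a toy illustration of one of these passes, the constant folder below collapses subexpressions whose operands are all known constants into a single constant node. This is a standalone sketch of the general technique, not constensor's actual IR or implementation:

```rust
// Toy expression IR and constant folder (illustrative only;
// not constensor's internal representation).
#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Const(f32),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Recursively fold: if both operands reduce to constants,
// replace the operation with its computed result.
fn fold(e: &Expr) -> Expr {
    match e {
        Expr::Const(_) => e.clone(),
        Expr::Add(a, b) => match (fold(a), fold(b)) {
            (Expr::Const(x), Expr::Const(y)) => Expr::Const(x + y),
            (fa, fb) => Expr::Add(Box::new(fa), Box::new(fb)),
        },
        Expr::Mul(a, b) => match (fold(a), fold(b)) {
            (Expr::Const(x), Expr::Const(y)) => Expr::Const(x * y),
            (fa, fb) => Expr::Mul(Box::new(fa), Box::new(fb)),
        },
    }
}

fn main() {
    // 2.0 + 2.0 * 1.0 folds to a single constant node.
    let e = Expr::Add(
        Box::new(Expr::Const(2.0)),
        Box::new(Expr::Mul(
            Box::new(Expr::Const(2.0)),
            Box::new(Expr::Const(1.0)),
        )),
    );
    assert_eq!(fold(&e), Expr::Const(4.0));
}
```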

You can find out more on the DeepWiki page.

use constensor_core::{Cpu, Graph, GraphTensor, Tensor, R2};

fn main() {
    // Build a graph of f32 operations; nothing executes yet.
    let mut graph: Graph<f32> = Graph::empty();
    // Two 3x4 tensors; shape, dtype, and device are fixed in the type.
    let x: GraphTensor<R2<3, 4>, f32, Cpu> =
        GraphTensor::<R2<3, 4>, f32, Cpu>::fill(graph.clone(), 1.0);
    let y: GraphTensor<R2<3, 4>, f32, Cpu> =
        GraphTensor::<R2<3, 4>, f32, Cpu>::fill(graph.clone(), 2.0);
    // An elementwise expression recorded in the graph, eligible for fusion.
    let z: GraphTensor<R2<3, 4>, f32, Cpu> = y.clone() + y * x;

    // Run compiler passes such as constant folding and dead code removal.
    graph.optimize();

    // Optionally render the optimized graph for inspection.
    graph.visualize("graph.png").unwrap();

    // Compile and execute the graph, materializing z as a concrete tensor.
    let tensor: Tensor<R2<3, 4>, f32, Cpu> = z.to_tensor().unwrap();

    // y + y * x = 2.0 + 2.0 * 1.0 = 4.0 for every element.
    assert_eq!(tensor.data().unwrap().to_vec(), vec![vec![4.0; 4]; 3]);
}
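The compile-time shape checking used above comes from encoding dimensions in the type, as with `R2<3, 4>`. A minimal standalone sketch of that idea using Rust const generics (the `Tensor2` type here is hypothetical, not constensor's API):

```rust
// Toy shape-in-the-type tensor: the rank-2 dimensions R and C are
// const generic parameters, so mismatched shapes fail to compile.
struct Tensor2<const R: usize, const C: usize> {
    data: Vec<f32>,
}

impl<const R: usize, const C: usize> Tensor2<R, C> {
    fn fill(v: f32) -> Self {
        Tensor2 { data: vec![v; R * C] }
    }

    // Addition is only defined between tensors of identical shape,
    // so adding a Tensor2<3, 4> to a Tensor2<4, 3> is a type error.
    fn add(&self, other: &Tensor2<R, C>) -> Tensor2<R, C> {
        Tensor2 {
            data: self
                .data
                .iter()
                .zip(&other.data)
                .map(|(a, b)| a + b)
                .collect(),
        }
    }
}

fn main() {
    let a = Tensor2::<3, 4>::fill(1.0);
    let b = Tensor2::<3, 4>::fill(2.0);
    let c = a.add(&b);
    assert_eq!(c.data, vec![3.0; 12]);
    // let bad = Tensor2::<4, 3>::fill(0.0);
    // a.add(&bad); // would not compile: shapes differ
}
```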

Opt-in half precision support

Via the following feature flags:

  • half for f16
  • bfloat for bf16
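For example, a Cargo.toml dependency entry enabling f16 support might look like the following (the version number is taken from the releases listed above):

```toml
[dependencies]
# Enable the `half` feature for f16 tensors; use `bfloat` for bf16.
constensor-core = { version = "0.1.1", features = ["half"] }
```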

Dependencies

~8–21MB
~248K SLoC