#tensor-flow #neural-networks

tract-linalg

Tiny, no-nonsense, self-contained TensorFlow and ONNX inference

130 releases

0.21.1 Feb 8, 2024
0.20.22 Nov 28, 2023
0.20.7 Jun 14, 2023
0.19.8 Mar 27, 2023
0.2.9 Mar 28, 2019

#717 in Machine learning


26,752 downloads per month
Used in 39 crates (via tract-core)

MIT/Apache

525KB
11K SLoC

tract-linalg

linalg stands for "linear algebra". This is a misnomer: this crate contains low-level, architecture-dependent optimisations used by tract-core.

Functions

  • MatMatMul: an extended matrix*matrix product:
    • inspired by the GotoBLAS and BLIS micro-kernel approach
    • extended with convolution-friendly addressing (fused im2col)
    • fused output pipeline (min, max, and a few more simple, fast ops)
    • f32*f32 -> f32 (à la sgemm)
    • i8*i8 -> i32 accumulator -> i32 storage
    • i8*i8 -> i32 accumulator -> i8 (with channel zero-point and scale, and a re-quantization pipeline)
  • f32 sigmoid and f32 tanh: at f32 precision, computed by a rational function (no exponentiation)
  • byte-to-byte lookup table
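The rational-function idea behind the sigmoid and tanh kernels can be illustrated with a low-order example. This is a sketch, not tract's actual polynomial: it uses the well-known [3/2] Padé approximant of tanh (exact through the x^5 term, accurate only near zero), whereas tract-linalg uses a higher-order rational function tuned for full f32 precision over the whole range. The point is that a ratio of two polynomials needs only multiplies, adds, and one divide, with no call to exp().

```rust
// Hypothetical illustration: [3/2] Padé approximant of tanh.
// tanh(x) ≈ x(15 + x²) / (15 + 6x²), matching the Taylor series
// x - x³/3 + 2x⁵/15 through fifth order. Good near 0, diverges for large |x|.
fn tanh_pade(x: f32) -> f32 {
    let x2 = x * x;
    x * (15.0 + x2) / (15.0 + 6.0 * x2)
}

// Sigmoid follows from tanh via the identity sigmoid(x) = (1 + tanh(x/2)) / 2,
// so the same rational kernel serves both activations.
fn sigmoid_approx(x: f32) -> f32 {
    0.5 * (1.0 + tanh_pade(0.5 * x))
}

fn main() {
    // Near zero, even this low-order approximant is within ~1e-5 of libm:
    assert!((tanh_pade(0.5) - 0.5f32.tanh()).abs() < 1e-4);
    assert_eq!(sigmoid_approx(0.0), 0.5);
}
```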

Implementations

|                   | generic fallback | armv6, vfp | armv7 neon | armv8 simd | x64 FMA |
|-------------------|------------------|------------|------------|------------|---------|
| MatMatMul f32     | 4x4              |            | 8x4        | 8x8        | 16x6    |
| MatMatMul i8->i8  |                  |            | 8x4        | 8x8        |         |
| MatMatMul i8->i32 |                  |            |            | 8x8        |         |
| sigmoid f32       |                  |            | 4n         | 4n         |         |
| tanh f32          |                  |            | 4n         | 4n         |         |
| byte lookup       |                  |            |            |            |         |
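The tile sizes above (4x4, 8x8, 16x6) are the dimensions of the micro-kernels in the GotoBLAS/BLIS style. A minimal sketch of the idea, assuming a simplified packed layout and not tract's actual API: the kernel walks the shared k dimension once, keeps a small output tile in registers, and applies the fused min/max output pipeline before storing, so the result never needs a second pass.

```rust
// Hypothetical 4x4 f32 micro-kernel sketch (not tract-linalg's real signature).
// `a` is a packed panel of 4 rows (k steps of 4 values each),
// `b` is a packed panel of 4 columns (k steps of 4 values each),
// `c` receives the 4x4 output tile in row-major order.
fn kernel_4x4(k: usize, a: &[f32], b: &[f32], c: &mut [f32; 16], min: f32, max: f32) {
    let mut acc = [0.0f32; 16]; // the register tile, in the real kernel
    for p in 0..k {
        let ap = &a[4 * p..4 * p + 4];
        let bp = &b[4 * p..4 * p + 4];
        for i in 0..4 {
            for j in 0..4 {
                acc[4 * i + j] += ap[i] * bp[j];
            }
        }
    }
    // Fused output pipeline: clamp to [min, max] while storing, standing in
    // for the min/max (and other simple, fast) fused ops described above.
    for (dst, &v) in c.iter_mut().zip(acc.iter()) {
        *dst = v.clamp(min, max);
    }
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0]; // one k-step: column of 4 row values
    let b = [1.0, 1.0, 1.0, 1.0]; // one k-step: row of 4 column values
    let mut c = [0.0f32; 16];
    kernel_4x4(1, &a, &b, &mut c, 0.0, 3.5);
    assert_eq!(c[0], 1.0); // row 0, col 0 = a[0] * b[0]
    assert_eq!(c[15], 3.5); // 4.0 clamped to the fused max of 3.5
}
```

The real kernels are hand-written in assembly per architecture (NEON, FMA, ...) and keep the accumulator tile entirely in SIMD registers; the tile shape is chosen to saturate the register file of each target.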

Dependencies

~10MB
~193K SLoC