torsh-tensor

PyTorch-compatible tensor implementation for ToRSh, built on top of scirs2.

Overview

This crate provides the core Tensor type with a familiar, PyTorch-like API, wrapping the autograd functionality provided by scirs2.

Features

  • PyTorch-compatible tensor operations
  • Automatic differentiation support
  • Broadcasting and shape manipulation
  • Comprehensive indexing and slicing
  • Integration with scirs2 for optimized computation
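Broadcasting follows the usual NumPy/PyTorch rules: shapes are aligned from the trailing dimension, and each pair of dimensions must either match or have one side equal to 1. As an illustration of those rules (not the crate's actual implementation), the shape-resolution step can be sketched in plain Rust:

```rust
// Illustrative broadcast-shape resolution, assuming NumPy/PyTorch semantics:
// align from the trailing dimension; each pair must be equal or contain a 1.
fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let n = a.len().max(b.len());
    let mut out = Vec::with_capacity(n);
    for i in 0..n {
        // Missing leading dimensions are treated as size 1.
        let da = if i < n - a.len() { 1 } else { a[i - (n - a.len())] };
        let db = if i < n - b.len() { 1 } else { b[i - (n - b.len())] };
        out.push(match (da, db) {
            (x, y) if x == y => x,
            (1, y) => y,
            (x, 1) => x,
            _ => return None, // incompatible shapes
        });
    }
    Some(out)
}

fn main() {
    // [3, 1] broadcast with [4] yields [3, 4].
    assert_eq!(broadcast_shape(&[3, 1], &[4]), Some(vec![3, 4]));
    // [2, 3] and [4] are incompatible.
    assert_eq!(broadcast_shape(&[2, 3], &[4]), None);
}
```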

Usage

Basic Tensor Creation

use torsh_tensor::prelude::*;

// Create tensors using the tensor! macro
let a = tensor![1.0, 2.0, 3.0];
let b = tensor![[1.0, 2.0], [3.0, 4.0]];

// Create tensors with specific shapes
let zeros = zeros::<f32>(&[3, 4]);
let ones = ones::<f32>(&[2, 3]);
let eye = eye::<f32>(5);

// Random tensors
let uniform = rand::<f32>(&[3, 3]);
let normal = randn::<f32>(&[2, 4]);

Tensor Operations

// Element-wise operations (shapes must match or broadcast)
let a = tensor![1.0, 2.0, 3.0];
let b = tensor![4.0, 5.0, 6.0];
let c = a.add(&b)?;
let d = a.mul(&b)?;

// Matrix multiplication (inner dimensions must agree)
let m1 = tensor![[1.0, 2.0], [3.0, 4.0]];
let m2 = tensor![[5.0, 6.0], [7.0, 8.0]];
let e = m1.matmul(&m2)?;

// Reductions
let sum = a.sum();
let mean = a.mean();
let max = a.max();

// Activation functions
let relu = a.relu();
let sigmoid = a.sigmoid();

Shape Manipulation

// Reshape (the element count must be preserved: 2 * 3 == 3 * 2)
let t = tensor![[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]];
let reshaped = t.view(&[3, 2])?;

// Transpose
let transposed = t.t()?;

// Squeeze and unsqueeze (remove / insert size-1 dimensions)
let squeezed = t.squeeze();
let unsqueezed = t.unsqueeze(0)?;
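A view reinterprets the same underlying buffer rather than copying it: on a contiguous, row-major tensor, each element keeps its flat position and only the index-to-offset mapping changes. A small sketch of that mapping (illustrative, not the crate's internals):

```rust
// Row-major flat offset: the last dimension varies fastest.
fn flat_index(idx: &[usize], shape: &[usize]) -> usize {
    idx.iter().zip(shape).fold(0, |acc, (&i, &d)| acc * d + i)
}

fn main() {
    // Element (1, 0) of a [2, 3] tensor sits at flat offset 3,
    // which a [3, 2] view of the same buffer addresses as (1, 1).
    assert_eq!(flat_index(&[1, 0], &[2, 3]), 3);
    assert_eq!(flat_index(&[1, 1], &[3, 2]), 3);
}
```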

Automatic Differentiation

// Enable gradient computation
let x = tensor![2.0].requires_grad_(true);

// Forward pass
let y = x.pow(2.0)?.add(&x.mul(&tensor![3.0])?)?;

// Backward pass
y.backward()?;

// Access the gradient: dy/dx = 2x + 3 = 7 at x = 2
let grad = x.grad().unwrap();
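The analytic gradient above can be sanity-checked with a central finite difference in plain Rust, independent of any tensor library: for y = x² + 3x, dy/dx = 2x + 3, so the gradient at x = 2 is 7.

```rust
// y = x^2 + 3x, as in the autograd example above.
fn f(x: f64) -> f64 {
    x * x + 3.0 * x
}

// Central-difference approximation of dy/dx.
fn numeric_grad(x: f64) -> f64 {
    let h = 1e-6;
    (f(x + h) - f(x - h)) / (2.0 * h)
}

fn main() {
    let analytic = 2.0 * 2.0 + 3.0; // 7.0
    assert!((numeric_grad(2.0) - analytic).abs() < 1e-4);
}
```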

Indexing and Slicing

// Basic indexing
let m = tensor![[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]];
let element = m.get(0)?;
let element_2d = m.get_2d(1, 2)?;

// Slicing with the s! macro (one spec per dimension; `0..10; 2` steps by 2)
let t = randn::<f32>(&[6, 3, 10]);
let slice = t.index(&[s![1..5], s![..], s![0..10; 2]])?;

// Boolean masking
let mask = t.gt(&zeros::<f32>(&[6, 3, 10]))?;
let selected = t.masked_select(&mask)?;
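Assuming masked_select behaves like its PyTorch namesake (an assumption, not confirmed by this crate's docs), it keeps the values where the mask is true and returns them as a flat 1-D result. The semantics in plain Rust:

```rust
// Illustrative masked_select semantics: filter values by a same-length
// boolean mask, producing a flat vector of the selected entries.
fn masked_select(values: &[f32], mask: &[bool]) -> Vec<f32> {
    values
        .iter()
        .zip(mask)
        .filter(|(_, &keep)| keep)
        .map(|(&v, _)| v)
        .collect()
}

fn main() {
    let vals = [1.0, -2.0, 3.0, -4.0];
    // A "greater than zero" mask, mirroring the gt(&zeros) call above.
    let mask: Vec<bool> = vals.iter().map(|&v| v > 0.0).collect();
    assert_eq!(masked_select(&vals, &mask), vec![1.0, 3.0]);
}
```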

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.
