#deep-learning #neural-network #machine-learning #autograd

torsh (no-std compatible)

A blazingly fast, production-ready deep learning framework written in pure Rust

1 unstable release

0.1.0-alpha.1 Sep 30, 2025
0.0.0 Jun 30, 2025

#7 in #autograd

Download history: 63/week @ 2025-09-24, 57/week @ 2025-10-01, 8/week @ 2025-10-08, 8/week @ 2025-10-15

136 downloads per month

MIT/Apache

13MB
278K SLoC

torsh

The main crate for ToRSh - A blazingly fast, production-ready deep learning framework written in pure Rust.

Overview

This is the primary entry point for the ToRSh framework, providing convenient access to all functionality through a unified API.

Installation

Add to your Cargo.toml:

[dependencies]
torsh = "0.1.0-alpha.1"

Quick Start

use torsh::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create tensors
    let x = tensor![[1.0, 2.0], [3.0, 4.0]];
    let y = tensor![[5.0, 6.0], [7.0, 8.0]];
    
    // Matrix multiplication
    let z = x.matmul(&y)?;
    println!("Result: {:?}", z);
    
    // Automatic differentiation
    let a = tensor![2.0].requires_grad_(true);
    let b = a.pow(2.0)? + a * 3.0;
    b.backward()?;
    println!("Gradient: {:?}", a.grad()?);
    
    Ok(())
}
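As a sanity check on the autograd portion of the example, the expected gradient can be computed by hand:

```latex
b = a^2 + 3a \;\Rightarrow\; \frac{db}{da} = 2a + 3,
\quad\text{so at } a = 2:\; \frac{db}{da} = 2 \cdot 2 + 3 = 7
```

If the backward pass is working, `a.grad()` should therefore report 7.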

Available Features

  • default: Includes std, nn, optim, and data
  • std: Standard library support (enabled by default)
  • nn: Neural network modules
  • optim: Optimization algorithms
  • data: Data loading utilities
  • cuda: CUDA backend support
  • wgpu: WebGPU backend support
  • metal: Metal backend support (Apple Silicon)
  • serialize: Serialization support
  • full: All features
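The GPU backends and serialization are opt-in, so a build can drop the defaults and pick features explicitly. A sketch of such a Cargo.toml, using only the feature names listed above (whether `std` can really be omitted for a no_std build is not verified here):

```toml
[dependencies]
# Opt out of the default feature set and enable only what this target needs.
torsh = { version = "0.1.0-alpha.1", default-features = false, features = [
    "nn",    # neural network modules
    "optim", # optimization algorithms
    "wgpu",  # WebGPU backend
] }
```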

Module Structure

The crate re-exports functionality from specialized sub-crates:

  • Core (torsh::core): Basic types and traits
  • Tensor (torsh::tensor): Tensor operations
  • Autograd (torsh::autograd): Automatic differentiation
  • NN (torsh::nn): Neural network layers
  • Optim (torsh::optim): Optimizers
  • Data (torsh::data): Data loading

F Namespace

Similar to PyTorch's torch.nn.functional, ToRSh provides functional operations in the F namespace:

use torsh::prelude::*;
use torsh::F;

let input = tensor![[-1.0, 0.0, 2.0]];
let activated = F::relu(&input);

let logits = tensor![[1.0, 2.0, 3.0]];
let probs = F::softmax(&logits, -1)?;

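Putting the two namespaces together, here is a minimal sketch of a single functional "layer". It composes only the calls shown earlier in this README (`tensor!`, `matmul`, `F::relu`, `F::softmax`); the shapes and exact signatures are assumptions, not verified against the crate:

```rust
use torsh::prelude::*;
use torsh::F;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let x = tensor![[1.0, -2.0]];             // 1x2 input row
    let w = tensor![[0.5, 1.0], [-1.0, 2.0]]; // 2x2 weight matrix
    let h = F::relu(&x.matmul(&w)?);          // linear map + nonlinearity
    let y = F::softmax(&h, -1)?;              // normalize along the last axis
    println!("{:?}", y);
    Ok(())
}
```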
License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Dependencies

~110–150MB
~2.5M SLoC