#tensor #operator #hpt #defines #operations #traits #n-dimensional

hpt-traits

An internal library that defines tensor operator traits for hpt

10 releases

new 0.1.0 Mar 19, 2025
0.0.21 Mar 10, 2025
0.0.15 Feb 19, 2025

#597 in Math


776 downloads per month
Used in 4 crates

MIT/Apache

1.5MB
44K SLoC

HPT

Crates.io

Hpt is a high-performance N-dimensional array library. It is highly optimized and designed to be easy to use. Most of the operators are implemented based on the ONNX operator list, so you can use it to build most deep learning models.

Features

Memory Layout

  • Optimized memory layout with support for both contiguous and non-contiguous tensors.

SIMD Support

  • Leverages CPU SIMD instructions (SSE/AVX/NEON) for vectorized operations.

Iterator API

  • Flexible iterator API for efficient element-wise/broadcast operations and custom implementations.

Multi-Threading

  • Automatic, efficient parallel processing for CPU-intensive operations.

Broadcasting

  • Automatic shape broadcasting for element-wise operations, similar to NumPy.
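As a sketch of that rule (illustrative plain Rust, not hpt's internal implementation): shapes are aligned from the trailing axis, and two dimensions are compatible when they are equal or when one of them is 1.

```rust
// Compute the NumPy-style broadcast shape of two shapes, or None if they
// are incompatible. Missing leading axes are treated as size 1.
fn dim(shape: &[usize], i: usize) -> usize {
    // `i` counts from the trailing axis.
    if i < shape.len() { shape[shape.len() - 1 - i] } else { 1 }
}

fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let n = a.len().max(b.len());
    let mut out = vec![0usize; n];
    for i in 0..n {
        let (da, db) = (dim(a, i), dim(b, i));
        out[n - 1 - i] = match (da, db) {
            _ if da == db => da,      // equal dimensions match
            (1, d) | (d, 1) => d,     // a 1-sized axis stretches to the other
            _ => return None,         // otherwise the shapes are incompatible
        };
    }
    Some(out)
}

fn main() {
    // [3, 1, 5] broadcast with [4, 5] gives [3, 4, 5]
    assert_eq!(broadcast_shape(&[3, 1, 5], &[4, 5]), Some(vec![3, 4, 5]));
    // [2, 3] and [4] are incompatible
    assert_eq!(broadcast_shape(&[2, 3], &[4]), None);
}
```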

Type Safe

  • Strong type system ensures correctness at compile time, preventing runtime errors.

Zero-Copy

  • Minimizes memory overhead with zero-copy operations and efficient data sharing.

Auto Type Promote

  • Automatically promotes types when computing with mixed types.
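As an illustration of how such promotion can be encoded in the type system (hypothetical trait and method names, not hpt's actual API), a trait can map a pair of element types to a promoted output type at compile time:

```rust
// Hypothetical sketch: a trait that decides the output type of a mixed-type
// operation at compile time, so i64 + f64 yields f64.
trait Promote<Rhs> {
    type Output;
    fn promote_add(self, rhs: Rhs) -> Self::Output;
}

impl Promote<f64> for i64 {
    type Output = f64;
    fn promote_add(self, rhs: f64) -> f64 {
        self as f64 + rhs // the integer is widened to the float type
    }
}

impl Promote<i64> for f64 {
    type Output = f64;
    fn promote_add(self, rhs: i64) -> f64 {
        self + rhs as f64
    }
}

fn main() {
    let z: f64 = 4i64.promote_add(1.5f64); // i64 + f64 -> f64
    assert_eq!(z, 5.5);
}
```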

Custom Type

  • Allows users to define their own data types for calculation (CPU only).

Note

Hpt is at an early stage; bugs and incorrect calculation results are to be expected.

Cargo Features

  • cuda: enables CUDA support.
  • bound_check: enables bounds checking; this is experimental and will reduce performance.
  • normal_promote: enables automatic type promotion. More type-promotion features may be added in the future.
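For example, a Cargo.toml dependency entry enabling automatic type promotion might look like this (the version number is the latest listed above; adjust as needed):

```toml
[dependencies]
hpt = { version = "0.1.0", features = ["normal_promote"] }
```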

Getting Started

use hpt::Tensor;
use hpt::ops::FloatUnaryOps;
fn main() -> anyhow::Result<()> {
    let x = Tensor::new(&[1f64, 2., 3.]);
    let y = Tensor::new(&[4i64, 5, 6]);

    let result: Tensor<f64> = x + &y; // with `normal_promote` feature enabled, i64 + f64 will output f64
    println!("{}", result); // [5. 7. 9.]

    // All the available methods are listed in https://jianqoq.github.io/Hpt/user_guide/user_guide.html
    let result: Tensor<f64> = y.sin()?;
    println!("{}", result); // [-0.7568 -0.9589 -0.2794]
    Ok(())
}

To use CUDA, enable the cuda feature (note that CUDA support is in development and not yet tested):

use hpt::{Tensor, backend::Cuda};
use hpt::ops::FloatUnaryOps;

fn main() -> anyhow::Result<()> {
    let x = Tensor::<f64>::new(&[1f64, 2., 3.]).to_cuda::<0/*Cuda device id*/>()?;
    let y = Tensor::<i64>::new(&[4i64, 5, 6]).to_cuda::<0/*Cuda device id*/>()?;

    let result = x + &y; // with `normal_promote` feature enabled, i64 + f64 will output f64
    println!("{}", result); // [5. 7. 9.]

    // All the available methods are listed in https://jianqoq.github.io/Hpt/user_guide/user_guide.html
    let result: Tensor<f64, Cuda, 0> = y.sin()?;
    println!("{}", result); // [-0.7568 -0.9589 -0.2794]
    Ok(())
}

For more examples, see the examples and the documentation.

How to Get the Highest Performance

  • Compile your program with the following configuration under [profile.release] in Cargo.toml; note that lto is very important.
[profile.release]
opt-level = 3
lto = "fat"
codegen-units = 1
  • Ensure your RUSTFLAGS environment variable enables the best features your CPU has, e.g. -C target-feature=+avx2 -C target-feature=+fma.
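For example, an invocation like the following (an assumed command line; -C target-cpu=native asks rustc to enable every feature of the build machine's CPU) builds with host-specific SIMD enabled:

```shell
RUSTFLAGS="-C target-cpu=native" cargo build --release
```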

Benchmarks

benchmarks

Backend Support

  • CPU: supported (AVX2, AVX512, SSE, Neon)
  • Cuda: 🚧 in development

Contributions adding support for machines not on the list above are welcome. Before contributing, please read the dev guide.

Documentations

For more details, visit https://jianqoq.github.io/Hpt/

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.


lib.rs:

This crate defines a set of traits for tensors and their operations.

Dependencies

~4.5–6.5MB
~117K SLoC