11 releases (7 stable)

Uses old Rust 2015

1.3.1 Mar 2, 2016
1.3.0 Mar 1, 2016
1.2.1 Feb 22, 2016
1.0.1 Dec 21, 2015
0.1.3 Dec 14, 2015

#625 in Machine learning


901 downloads per month
Used in 256 crates (2 directly)

MIT/Apache

120KB
2K SLoC

Provides a safe and convenient wrapper for the CUDA cuDNN API.

This crate (1.0.0) was developed against cuDNN v3.

Architecture

This crate provides three levels of entry.

FFI
The ffi module exposes the foreign function interface and cuDNN-specific types. You will usually not need to touch it if you only want to use cuDNN in your application. The FFI is provided by the rust-cudnn-sys crate and is reexported here.

Low-Level
The api module already exposes a complete and safe wrapper for the cuDNN API, including proper Rust errors. There is usually no need to use the API directly, though, as the Cudnn module, described in the next block, provides all of the API's functionality through a more convenient interface.
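To illustrate how a low-level wrapper can turn cuDNN's C status codes into proper Rust errors, here is a minimal sketch. The enum variants and the function name are illustrative assumptions, not this crate's actual API; only the numeric status values (0 = success, 1 = not initialized, 2 = allocation failed, 3 = bad parameter) follow cuDNN's C header.

```rust
// Hypothetical sketch: mapping cuDNN integer status codes into a Rust
// `Result`. Names here are assumptions for illustration only.
#[derive(Debug, PartialEq)]
enum Error {
    NotInitialized,
    AllocFailed,
    BadParam,
    Unknown(i32),
}

// cuDNN C functions return an integer status; 0 means success.
fn status_to_result(status: i32) -> Result<(), Error> {
    match status {
        0 => Ok(()),
        1 => Err(Error::NotInitialized),
        2 => Err(Error::AllocFailed),
        3 => Err(Error::BadParam),
        n => Err(Error::Unknown(n)),
    }
}

fn main() {
    assert_eq!(status_to_result(0), Ok(()));
    assert_eq!(status_to_result(3), Err(Error::BadParam));
    println!("status mapping ok");
}
```

A wrapper built this way lets callers use `?` instead of checking return codes by hand.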

High-Level
The cudnn module exposes the Cudnn struct, which provides a very convenient, easy-to-understand interface for the cuDNN API. There should be little need to obtain and read the cuDNN manual: initialize the Cudnn struct and you can call the available methods, which represent all the available cuDNN operations.

Examples

extern crate cudnn;
extern crate libc;

use cudnn::{Cudnn, TensorDescriptor};
use cudnn::utils::{ScalParams, DataType};

fn main() {
    // Initialize a new cuDNN context and allocate resources.
    let cudnn = Cudnn::new().unwrap();
    // Create cuDNN tensor descriptors for the `src` and `dest` memory.
    let src_desc = TensorDescriptor::new(&[2, 2, 2], &[4, 2, 1], DataType::Float).unwrap();
    let dest_desc = TensorDescriptor::new(&[2, 2, 2], &[4, 2, 1], DataType::Float).unwrap();
    // Obtain the `src` and `dest` memory pointers on the GPU.
    // NOTE: In a real program these must point to memory actually allocated
    // on the GPU, e.g. via CUDA or Collenchyma; null pointers are used here
    // only to keep the example short.
    let src_data: *const ::libc::c_void = ::std::ptr::null();
    let dest_data: *mut ::libc::c_void = ::std::ptr::null_mut();
    // Now you can compute the forward sigmoid activation on your GPU.
    cudnn.sigmoid_forward::<f32>(&src_desc, src_data, &dest_desc, dest_data, ScalParams::default());
}
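In the example above, `TensorDescriptor::new` takes a dimension slice and a stride slice. For a densely packed, row-major tensor, each stride is the product of all trailing dimensions, which is why dims `[2, 2, 2]` pair with strides `[4, 2, 1]`. A small helper can compute such strides; the function name is hypothetical and not part of this crate:

```rust
// Compute packed row-major strides for a dimension slice, so that
// strides[i] equals the product of all dimensions after index i.
// e.g. dims [2, 2, 2] -> strides [4, 2, 1], as in the example above.
fn packed_strides(dims: &[i32]) -> Vec<i32> {
    let mut strides = vec![1; dims.len()];
    for i in (0..dims.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * dims[i + 1];
    }
    strides
}

fn main() {
    assert_eq!(packed_strides(&[2, 2, 2]), vec![4, 2, 1]);
    // A typical NCHW image batch: 1 image, 3 channels, 224x224 pixels.
    assert_eq!(packed_strides(&[1, 3, 224, 224]), vec![150528, 50176, 224, 1]);
    println!("strides ok");
}
```

Non-packed strides are also valid and let a descriptor view a sub-tensor of a larger allocation.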

Notes

rust-cudnn was developed at Autumn for the Rust Machine Intelligence Framework Leaf.

rust-cudnn is also part of the high-performance computation framework Collenchyma, backing its Neural Network plugin. For an easy, unified interface to NN operations such as those provided by cuDNN, you might check out Collenchyma.

Dependencies

~160KB