
blasphemy

The safety of Rust. The raw speed of BLAS. An ergonomic interface for neural network architecture inspired by Keras. That's blasphemy.

Blasphemy is an early work in progress; a feature roadmap is below. It is heavily based on RustML, with an emphasis on ergonomics and on numeric stability in the face of vanishing gradients.

Feature Roadmap

  • Random initialization for symmetry breaking
  • Linear Layers
  • Softmax Layers (see the sketch after this list)
  • Sigmoid Layers
  • ReLU Layers
  • Residual Layers
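
Of these, the softmax layer is where the numeric-stability emphasis shows up most directly: subtracting the row maximum before exponentiating keeps exp() from overflowing on large activations without changing the result. A minimal sketch of that standard trick (an illustration, not blasphemy's internals):

// Hypothetical sketch: softmax with the standard max-subtraction trick.
// Subtracting the maximum is a mathematical no-op for the final ratios,
// but keeps exp() from overflowing on large activations.
fn softmax(z: &[f64]) -> Vec<f64> {
    let max = z.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = z.iter().map(|v| (v - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}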

Updates

  • 0.0.2 Changed random initialization to be centered around 0.0, which is much more stable (see the sketch after this list)
  • 0.0.2 Fixed a bug in backpropagation for linear layers
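
For context on the initialization change: drawing weights from a small zero-centered range breaks the symmetry between neurons while keeping early activations small. A minimal sketch of such an initializer using the rand crate, 0.8-style API (an assumption about the approach, not the crate's actual code):

use rand::Rng;

// Hypothetical sketch: zero-centered uniform initialization for a
// rows x cols weight matrix, so no two neurons start out identical.
fn random_weights(rows: usize, cols: usize, scale: f64) -> Vec<f64> {
    let mut rng = rand::thread_rng();
    (0..rows * cols)
        .map(|_| rng.gen_range(-scale..scale)) // uniform over [-scale, scale), centered on 0.0
        .collect()
}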

Quickstart

You will need BLAS to use blasphemy. On Ubuntu/Debian, the required libraries can be installed with:

$ sudo apt-get install libblas-dev libopencv-highgui-dev libopenblas-dev
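
With BLAS installed, add blasphemy to your project's Cargo.toml (0.0.2 is the latest release noted under Updates above):

[dependencies]
blasphemy = "0.0.2"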

Hello World example

This example creates a neural network with three linear layers and softmax normalization.

use blasphemy::{NeuralNet, Matrix};

let mut nn = NeuralNet::new(4); // new neural net with input shape (1,4)
nn.linear(5); // add a linear layer with 5 neurons
nn.linear(5); // add a linear layer with 5 neurons
nn.linear(3); // add a linear layer with 3 neurons
nn.softmax(); // apply softmax normalization to the output

let mut err = 0f64;
for iter in 0..200 {
    // train on two examples for 200 iterations
    let input = Matrix::from_vec(vec![1f64, 0f64, 0f64, 1f64], 1, 4);
    let output = Matrix::from_vec(vec![1f64, 0f64, 0f64], 1, 3);
    err = nn.backprop(&input, &output); // accumulate the error for one example
    let input = Matrix::from_vec(vec![0f64, 2f64, 2f64, 0f64], 1, 4);
    let output = Matrix::from_vec(vec![0f64, 1f64, 0f64], 1, 3);
    err = nn.backprop(&input, &output); // accumulate the error for one example
    if iter % 3 == 0 {
        // every third iteration, run gradient descent with the accumulated error
        nn.grad_descent();
        println!("iter={} error={}", iter, err);
    }
}
// make a prediction
let input = Matrix::from_vec(vec![1f64, 0f64, 0f64, 1f64], 1, 4);
let prediction = nn.forward_prop(&input);

Note that numeric stability can become troublesome with vanishing gradients past roughly 3-4 layers; this is an item for future improvement, especially via the planned residual layers.
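
For context, a residual layer adds a layer's input to its output, giving gradients an identity path through the network; this is the standard remedy for vanishing gradients in deep stacks. A minimal sketch of such a forward pass (hypothetical code, since residual layers are still on the roadmap):

// Hypothetical sketch of a residual forward pass. The layer must
// preserve dimensionality so input and output can be added elementwise.
fn residual_forward(input: &[f64], layer: impl Fn(&[f64]) -> Vec<f64>) -> Vec<f64> {
    let transformed = layer(input);
    input
        .iter()
        .zip(transformed.iter())
        .map(|(x, fx)| x + fx) // y = x + f(x): the identity term keeps gradients alive
        .collect()
}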
