tangram_zip

Tangram is an all-in-one automated machine learning framework

3 releases (breaking)

0.6.0 Jul 19, 2021
0.5.0 Jul 2, 2021
0.4.0 Jun 25, 2021

#133 in Machine learning

24 downloads per month
Used in 8 crates (6 directly)

MIT license

2KB
50 lines

Tangram

Tangram is an automated machine learning framework designed for programmers.

  • Run tangram train to train a model from a CSV file on the command line.
  • Make predictions with libraries for Elixir, Go, Node.js, Python, Ruby, and Rust.
  • Run tangram app to learn more about your models and monitor them in production.

Install

Install the tangram CLI.

Train

Train a machine learning model by running tangram train with the path to a CSV file and the name of the column you want to predict.

$ tangram train --file heart_disease.csv --target diagnosis --output heart_disease.tangram
 Loading data.
 Computing features.
🚂 Training model 1 of 8.
[==========================================>                         ]

The CLI automatically transforms your data into features, trains a number of models to predict the target column, and writes the best model to a .tangram file. If you want more control, you can provide a config file.

Predict

Make predictions with libraries for Elixir, Go, Node.js, Python, Ruby, and Rust.

const tangram = require("@tangramxyz/tangram-node");

const model = new tangram.Model("./heart_disease.tangram");

const input = {
	age: 63,
	gender: "male",
	// ...
};

const output = model.predictSync(input);
console.log(output);
// { className: 'Negative', probability: 0.9381780624389648 }
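The prediction output above is a plain object, so you can act on it directly in your application code. Here is a small hypothetical helper (not part of the Tangram API) that applies a confidence threshold to a prediction:

```javascript
// Hypothetical helper (not part of the Tangram API): accept a
// prediction only when its probability clears a threshold.
function isConfident(output, threshold = 0.9) {
	return output.probability >= threshold;
}

// Shape matches the example output printed above.
const prediction = { className: "Negative", probability: 0.9381780624389648 };
console.log(isConfident(prediction)); // true
console.log(isConfident(prediction, 0.95)); // false
```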

Inspect

Run tangram app, open your browser to http://localhost:8080, and upload the model you trained.

  • View stats and metrics.
  • Tune your model to get the best performance.
  • Make example predictions and get detailed explanations.

Monitor

Once your model is deployed, make sure that it performs as well in production as it did in training. Opt in to logging by calling logPrediction.

// Log the prediction. `input` and `output` come from the predictSync
// call above; `options` holds any prediction options you passed.
model.logPrediction({
	identifier: "6c955d4f-be61-4ca7-bba9-8fe32d03f801",
	input,
	options,
	output,
});

Later on, if you find out the true value for a prediction, call logTrueValue.

// Later on, if we get an official diagnosis for the patient, log the true value.
model.logTrueValue({
	identifier: "6c955d4f-be61-4ca7-bba9-8fe32d03f801",
	trueValue: "Positive",
});

Now you can:

  • Look up any prediction by its identifier and get a detailed explanation.
  • Get alerts if your data drifts or metrics dip.
  • Track production accuracy, precision, recall, etc.
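Conceptually, tracking production accuracy amounts to joining logged predictions with later-logged true values by their shared identifier. The sketch below is illustrative bookkeeping only, not Tangram's implementation:

```javascript
// Illustrative sketch (not Tangram's implementation): compute production
// accuracy by matching logged predictions to logged true values via
// their shared identifier.
function productionAccuracy(loggedPredictions, loggedTrueValues) {
	const truthById = new Map(
		loggedTrueValues.map((t) => [t.identifier, t.trueValue])
	);
	let correct = 0;
	let total = 0;
	for (const p of loggedPredictions) {
		const truth = truthById.get(p.identifier);
		if (truth === undefined) continue; // no true value logged yet
		total += 1;
		if (p.output.className === truth) correct += 1;
	}
	return total === 0 ? null : correct / total;
}

const preds = [
	{ identifier: "a", output: { className: "Positive" } },
	{ identifier: "b", output: { className: "Negative" } },
];
const truths = [
	{ identifier: "a", trueValue: "Positive" },
	{ identifier: "b", trueValue: "Positive" },
];
console.log(productionAccuracy(preds, truths)); // 0.5
```

Predictions with no matching true value are simply excluded from the denominator, which is why logging true values as they arrive keeps the metric meaningful.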

Contributing

This repository is a Cargo workspace; nothing beyond the latest stable Rust toolchain is required to get started.

  1. Install Rust on Linux, macOS, or Windows.
  2. Clone this repo and cd into it.
  3. Run cargo run to run a debug build of the CLI.

If you are working on the app, run scripts/app/dev. This rebuilds and reruns the CLI with the app subcommand as you make changes.

Before submitting a pull request, please run scripts/fmt and scripts/check at the root of the repository to confirm that your changes are formatted correctly and do not have any errors.

To install all dependencies necessary to work on the language libraries, install Nix with flake support, then run nix develop or set up direnv.

License

This repository is MIT-licensed, except for the app directory, which is source-available and free to use for testing but requires a paid license for use in production. Send us an email at hello@tangram.xyz if you are interested in a license.

No runtime deps