#model #inference #gpt-2 #wrapper #onnx #encoder #vocabulary

gpt-model

Pure-Rust inference wrapper for GPT-2 large language models

1 unstable release

0.1.0 Feb 24, 2024


AGPL-3.0-only

42KB
500 lines

A 100% pure-Rust inference wrapper for the GPT-2 model family (and, potentially, its successors).
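At its core, an inference wrapper like this one drives the model autoregressively: feed the token prefix in, pick a next token from the returned logits, append it, and repeat. The sketch below shows that loop with greedy (argmax) decoding; the `logits` closure is a hypothetical stand-in for the real ONNX model invocation, which this crate handles internally.

```rust
/// Greedy autoregressive decoding sketch. `logits` is any stand-in
/// scoring function mapping a token prefix to per-token scores over
/// the vocabulary (the real crate wraps an ONNX GPT-2 model here).
fn generate<F>(mut tokens: Vec<usize>, steps: usize, logits: F) -> Vec<usize>
where
    F: Fn(&[usize]) -> Vec<f32>,
{
    for _ in 0..steps {
        let scores = logits(&tokens);
        // Pick the highest-scoring next token (greedy decoding).
        let next = scores
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap();
        tokens.push(next);
    }
    tokens
}

fn main() {
    // Toy "model": always scores (last token + 1) mod 4 highest.
    let toy = |t: &[usize]| {
        let mut v = vec![0.0f32; 4];
        v[(t.last().unwrap() + 1) % 4] = 1.0;
        v
    };
    println!("{:?}", generate(vec![0], 3, toy)); // [0, 1, 2, 3]
}
```

Real decoders usually sample from the softmaxed logits (with temperature or top-k) instead of taking the argmax, but the surrounding loop is the same.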

Getting a GPT Model

The GPT-2 model packaged within the crate's repository uses the original model trained by OpenAI, with minor modifications to support TensorFlow 2.0 and conversion to the ONNX model format.

When getting started with this crate, we recommend using our prebuilt version of the 124M (smallest) GPT-2 model; the model, encoder, and byte-pair encoding vocabulary for this model may all be downloaded from here.

Repository Structure

  • src/: Main crate contents, including a pure Rust implementation of the GPT-2 byte-pair encoder (tokenizer) and a Rust wrapper for loading and invoking an ONNX GPT-2 model.
  • gpt-2-model/: Python scripts and Docker files to download and export TensorFlow and ONNX versions of the GPT-2 model.
  • gpt-2-model/saved_models: Exported GPT-2 models. The latest prebuilt version of the 124M (smallest) GPT-2 model is shipped with this repo via Git LFS.
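The byte-pair encoder mentioned above works by repeatedly merging the adjacent symbol pair with the highest learned priority until no learned merge applies. This is a minimal, self-contained sketch of that merge loop, not the crate's actual implementation; GPT-2's real tokenizer starts from UTF-8 bytes remapped to printable characters and uses the released `vocab.bpe` merge table, whereas this sketch starts from `char`s and a toy table.

```rust
use std::collections::HashMap;

/// Apply byte-pair merges to a word, in merge-priority order.
/// `merges` maps a symbol pair to its rank (lower rank = merged earlier).
fn bpe(word: &str, merges: &HashMap<(String, String), usize>) -> Vec<String> {
    // Start from individual characters (a simplification of GPT-2's
    // byte-level alphabet).
    let mut symbols: Vec<String> = word.chars().map(|c| c.to_string()).collect();
    loop {
        // Find the adjacent pair with the lowest (best) merge rank.
        let best = symbols
            .windows(2)
            .enumerate()
            .filter_map(|(i, w)| {
                merges
                    .get(&(w[0].clone(), w[1].clone()))
                    .map(|&rank| (rank, i))
            })
            .min();
        match best {
            Some((_, i)) => {
                // Replace the two symbols with their merged form.
                let merged = format!("{}{}", symbols[i], symbols[i + 1]);
                symbols.splice(i..i + 2, [merged]);
            }
            None => break, // no applicable merges remain
        }
    }
    symbols
}

fn main() {
    // Toy merge table: "l"+"o" -> "lo" (rank 0), then "lo"+"w" -> "low".
    let mut merges = HashMap::new();
    merges.insert(("l".to_string(), "o".to_string()), 0usize);
    merges.insert(("lo".to_string(), "w".to_string()), 1usize);
    println!("{:?}", bpe("lower", &merges)); // ["low", "e", "r"]
}
```

A final lookup from each merged symbol to an integer ID (via the downloaded vocabulary file) turns these symbols into the token IDs the model consumes.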

License and Contributions

Except where otherwise noted, this project is Copyright (C) 2022-24 Brandon Sanders [me@caer.cc] and licensed under the AGPL-3.0-only license.

The files within the gpt-2-model directory are Copyright (C) 2019 OpenAI and (C) 2022-24 Brandon Sanders, and licensed under an MIT-style license.

Contributions are always welcome!
