xla-rs
Experimentation using the XLA compiler from Rust.
Pre-compiled binaries for the XLA library can be downloaded from the elixir-nx/xla repo. These should be extracted at the root of this repository, resulting in an xla_extension subdirectory being created. The currently supported version is 0.5.1.
On a Linux platform, this can be done via:
wget https://github.com/elixir-nx/xla/releases/download/v0.5.1/xla_extension-x86_64-linux-gnu-cpu.tar.gz
tar -xzvf xla_extension-x86_64-linux-gnu-cpu.tar.gz
If the xla_extension directory is not in the main project directory, the path can be specified via the XLA_EXTENSION_DIR environment variable.
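For example, the variable can be exported before building; the path below is only a placeholder for wherever the archive was extracted:
# Placeholder path; point this at the extracted xla_extension directory.
export XLA_EXTENSION_DIR=/path/to/xla_extension
cargo build --release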
Generating some Text Samples with LLaMA
The LLaMA large language model can be used to generate text. The model weights are only available after completing the official request form; once downloaded, they can be converted to a format this crate can use. Running the example requires a GPU with 16GB of memory, or 32GB of memory when running on the CPU (using the -cpu flag).
# Download the tokenizer config.
wget https://huggingface.co/hf-internal-testing/llama-tokenizer/raw/main/tokenizer.json -O llama-tokenizer.json
# Extract the pre-trained weights; this requires the transformers Python library to be installed.
# This creates an npz file storing all the weights.
python examples/llama/convert_checkpoint.py ..../LLaMA/7B/consolidated.00.pth
# Run the example.
cargo run --example llama --release
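To run on the CPU instead of a GPU, the -cpu flag mentioned above can be forwarded to the example after cargo's -- separator (assuming the example accepts it in that form):
# Forward the example's CPU flag through cargo.
cargo run --example llama --release -- -cpu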
Generating some Text Samples with GPT2
One of the featured examples is GPT2. To run it, first download the tokenization configuration file as well as the weights, using the following commands:
# Download the vocab file.
wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe
# Extract the pre-trained weights; this requires the transformers Python library to be installed.
# This creates an npz file storing all the weights.
python examples/nanogpt/get_weights.py
# Run the example.
cargo run --example nanogpt --release