Web-RWKV

crates.io docs.rs

This is an inference engine for the RWKV language model, implemented in pure WebGPU.

Features

  • No dependencies on CUDA/Python.
  • Supports Nvidia/AMD/Intel GPUs, including integrated GPUs.
  • Vulkan/Dx12/OpenGL backends.
  • WASM support (can run in browser).
  • Batched inference.
  • Int8 and NF4 quantization.
  • Very fast.
  • LoRA merging at loading time.
  • Supports RWKV V4, V5, and V6.
  • Hooks to intervene in the inference process at any point.
  • Model (de)serialization.

Note that web-rwkv is only an inference engine. It only provides the following functionalities:

  • A tokenizer.
  • Model loading.
  • State creation and updating.
  • The model implements a run function that takes prompt tokens and returns logits, and a softmax function that turns logits into predicted next-token probabilities. Both are executed on the GPU.
  • Model quantization and (de)serialization.
  • WASM bindings.

It does not provide the following:

  • OpenAI API or APIs of any kind.
    • If you would like to deploy an API server, check AI00 RWKV Server, which is a fully-functional OpenAI-compatible API server built upon web-rwkv.
    • You could also check the web-rwkv-axum project if you want some fancy inference pipelines, including Classifier-Free Guidance (CFG), Backus–Naur Form (BNF) guidance, and more.
  • Samplers. A basic nucleus sampler is implemented in the examples, but it is not included in the library itself (a standalone sketch follows this list).
  • State caching or management system.
  • Python bindings.
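
For reference, below is a minimal, dependency-free sketch of a nucleus (top-p) sampler over a logits vector. It is not the sampler shipped in the examples and not part of the crate's API; the softmax, the top-p cutoff, and the deterministic midpoint pick (used instead of a random draw to keep the sketch self-contained) are illustrative choices only.

/// Toy nucleus (top-p) sampler over raw logits; not web-rwkv API.
fn sample_nucleus(logits: &[f32], top_p: f32) -> usize {
    // softmax: turn logits into probabilities (max-subtracted for stability)
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exp: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exp.iter().sum();
    let probs: Vec<f32> = exp.iter().map(|&x| x / sum).collect();

    // sort token ids by probability, descending
    let mut ids: Vec<usize> = (0..probs.len()).collect();
    ids.sort_unstable_by(|&a, &b| probs[b].total_cmp(&probs[a]));

    // keep the smallest prefix whose cumulative probability reaches top_p
    let mut nucleus = Vec::new();
    let mut mass = 0.0;
    for &id in &ids {
        mass += probs[id];
        nucleus.push(id);
        if mass >= top_p {
            break;
        }
    }

    // pick the token at the midpoint of the nucleus mass; a real sampler
    // would draw a uniform random number here (e.g. via the rand crate)
    let threshold = 0.5 * mass;
    let mut acc = 0.0;
    for &id in &nucleus {
        acc += probs[id];
        if acc >= threshold {
            return id;
        }
    }
    *nucleus.last().expect("logits must not be empty")
}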

Compile

  1. Install Rust.
  2. Download the model from HuggingFace, and convert it using convert_safetensors.py. Put the .st model under assets/models.
  3. Compile
    $ cargo build --release --examples
    

Examples

Performance Test

The test generates 500 tokens and measures the time cost.

$ cargo run --release --example rt-gen

Chat Demo

To chat with the model, run

$ cargo run --release --example rt-chat

In this demo, type + to retry the last round's generation; type - to exit.

  • To specify the location of your safetensors model, use

    $ cargo run --release --example rt-chat -- --model /path/to/model
    
  • To load custom prompts for chat, use

    $ cargo run --release --example rt-chat -- --prompt /path/to/prompt
    

    See assets/prompt.json for details.

  • To specify layer quantization, use --quant <LAYERS> or --quant-nf4 <LAYERS> to quantize the first <LAYERS> layers. For example, use

    $ cargo run --release --example rt-chat -- --quant 32
    

    to quantize all 32 layers.

  • Use the --turbo flag to switch to an alternative GEMM kernel when inferring long prompts. The rt examples enforce turbo.

Batched Inference

This demo showcases the generation of 4 batches of text of various lengths simultaneously.

$ cargo run --release --example rt-batch

Inspector

The inspector demo is a guide to an advanced feature called hooks. Hooks allow the user to inject arbitrary tensor ops into the model's inference process, fetching and modifying the contents of the runtime buffer, the state, and even the model parameters. This enables certain third-party implementations like dynamic LoRA, control net, and so on.

(De)serialization

All model versions implement serde::ser::Serialize and serde::de::DeserializeSeed<'de>, which means that one can save a quantized or LoRA-merged model to a file and load it back later.
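
The DeserializeSeed<'de> bound (rather than plain Deserialize) is there because rebuilding a model needs external context, such as the GPU context, that cannot come out of the byte stream itself. Below is a toy sketch of that pattern with made-up types; web-rwkv's actual seed and model types are different.

use serde::de::{Deserialize, DeserializeSeed, Deserializer};

/// Stand-in for the external context; in web-rwkv this role is played by the GPU context.
struct Device;

/// The device-resident model (toy version).
struct Model {
    _device: Device,
    _weights: Vec<f32>,
}

/// The "seed" carries the device so deserialization can place weights on the GPU.
struct ModelSeed {
    device: Device,
}

impl<'de> DeserializeSeed<'de> for ModelSeed {
    type Value = Model;

    fn deserialize<D>(self, deserializer: D) -> Result<Model, D::Error>
    where
        D: Deserializer<'de>,
    {
        // read the plain data first, then combine it with the external context
        let weights = Vec::<f32>::deserialize(deserializer)?;
        Ok(Model {
            _device: self.device,
            _weights: weights,
        })
    }
}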

Use in Your Project

To use web-rwkv in your own Rust project, simply add web-rwkv = "0.8" as a dependency in your Cargo.toml. Check the examples to see how to create the environment and the tokenizer, and how to run the model.

Explanations

Inference Runtime

Since v0.7, the crate has a runtime feature. When enabled, applications can use the infrastructure of the asynchronous runtime API.

In general, a runtime is an asynchronous task driven by tokio. It allows the CPU and the GPU to work in parallel, maximizing the utilization of GPU compute resources.

Check examples starting with rt for more information, and compare the generation speed with their non-rt counterparts.
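
Conceptually, this overlap is the ordinary producer/consumer pattern sketched below in plain tokio (toy types only, not the crate's runtime API): while the spawned task is busy with one submission, the caller can already prepare and queue the next one.

use std::time::Duration;
use tokio::sync::mpsc;

// Toy illustration of the overlap a runtime provides: the "GPU" task crunches
// one submission while the "CPU" side prepares and sends the next one.
#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<Vec<u32>>(1);

    let gpu = tokio::spawn(async move {
        while let Some(tokens) = rx.recv().await {
            // stand-in for a GPU dispatch and readback
            tokio::time::sleep(Duration::from_millis(10)).await;
            println!("inferred {} tokens", tokens.len());
        }
    });

    for step in 0..3usize {
        // CPU-side work (sampling, tokenization) proceeds while the GPU task runs
        let next = vec![0u32; 16 + step];
        tx.send(next).await.unwrap();
    }
    drop(tx);
    gpu.await.unwrap();
}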

Batched Inference

Since version v0.2.4, the engine supports batched inference, i.e., inference on a batch of prompts (of different lengths) in parallel. This is achieved by a modified WKV kernel.

When building the model, the user specifies token_chunk_size (default: 32, but for powerful GPUs this could be much higher), which is the maximum number of tokens the engine can process in one run call.

After creating the model, the user creates a ModelState with num_batch specified. This means that there are num_batch slots that can consume inputs in parallel.

Before calling run(), the user fills each slot with some tokens as prompt. If a slot is empty, no inference will be run for it.

After calling run(), some (but not necessarily all) input tokens are consumed, and logits appear in the corresponding returned slots if the inference for that slot finished during this run. Since only token_chunk_size tokens are processed per run() call, it is possible that no logits appear in the results at all.
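
To make this bookkeeping concrete, here is a toy, CPU-only simulation of the consumption loop. It is not web-rwkv code; in particular, the policy of filling the chunk from the slots in order is an assumption made purely for illustration.

/// Toy simulation of chunked prompt consumption across batch slots;
/// not web-rwkv's actual scheduler.
fn simulate(prompts: &[usize], token_chunk_size: usize) {
    // remaining[i] = prompt tokens of slot i that still need to be consumed
    let mut remaining: Vec<usize> = prompts.to_vec();
    let mut step = 0;

    while remaining.iter().any(|&r| r > 0) {
        let mut budget = token_chunk_size;
        let mut finished = Vec::new();

        // assumed policy: walk the slots in order, consuming up to `budget` tokens
        for (slot, rem) in remaining.iter_mut().enumerate() {
            if *rem == 0 || budget == 0 {
                continue;
            }
            let take = (*rem).min(budget);
            *rem -= take;
            budget -= take;
            // logits appear only for slots whose whole prompt got consumed this step
            if *rem == 0 {
                finished.push(slot);
            }
        }

        step += 1;
        println!("run {step}: logits ready for slots {finished:?}");
    }
}

fn main() {
    // 4 slots with prompts of different lengths, one empty slot, chunk size 32
    simulate(&[60, 10, 0, 5], 32);
}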

Hooks

Hooks are a very powerful tool for customizing the model's inference process. The library provides the Model::run_with_hooks function, which takes a HookMap as a parameter.

A HookMap is essentially a hashmap from Model::Hook to functions. A Model::Hook defines a place where a hook function can be injected; a model generally has dozens of hooking points. A hook function has the type Fn(&Model<'_>, &ModelState, &Runtime) -> Result<TensorOp, TensorError>, inside which you can create tensor ops that read and write all the tensors you get there.

An example that reads out every layer's output:

let info = model.info();
// create a buffer to store each layer's output
let buffer = Buffer::new(&context, &info);
let mut hooks = HookMap::default();
for layer in 0..info.num_layer {
    let buffer = buffer.clone();
    hooks.insert(
        v5::Hook::PostFfn(layer),
        Box::new(
            move |_model, _state, runtime: &v5::Runtime| -> Result<TensorOp, TensorError> {
                // figure out how many tokens this run has
                let shape = runtime.ffn_x.shape();
                let num_token = shape[1];
                // "steal" the layer's output (activation) for the last token
                // and copy it into our buffer
                TensorOp::blit(
                    runtime.ffn_x.view(.., num_token - 1, .., ..)?,
                    buffer.ffn_x.view(.., layer, .., ..)?,
                )
            },
        ),
    );
}
let output = model.run_with_hooks(&mut tokens, &state, &hooks).await?;

Convert Models

You must download the model and put it in assets/models before running if you are building from source. Alternatively, you can download already-converted models here.

You may download the official RWKV World series models from HuggingFace, and convert them via the provided convert_safetensors.py.

$ python convert_safetensors.py --input /path/to/model.pth --output /path/to/model.st

If you don't have Python installed or don't want to use it, there is a pure Rust converter. You can clone that repo and run

$ cd /path/to/web-rwkv-converter
$ cargo run --release --example converter -- --input /path/to/model.pth --output /path/to/model.st

Troubleshoot

  • "thread 'main' panicked at 'called Result::unwrap() on an Err value: HeaderTooLarge'"

    Your model is broken, mainly because you cloned the repo but did not set up git-lfs. Please download the model manually and overwrite the one in assets/models.

  • "thread 'main' panicked at 'Error in Queue::submit: parent device is lost'"

    Your GPU is not responding. Maybe you are running a model that is just too big for your device. If the model doesn't fit into your VRAM, the driver needs to constantly swap and transfer the model parameters, making inference about 10x slower. Try quantizing your model first.

Credits

  • Tokenizer is implemented by @koute.
