RLlama

RLlama is a Rust implementation of the quantized Llama 7B language model.

Llama 7B is a comparatively small but performant language model that can easily be run on your local machine.

This library uses Candle to run Llama.

Usage

use kalosm_llama::prelude::*;

#[tokio::main]
async fn main() {
    // Load the default quantized Llama model (the weights are downloaded on first run)
    let mut model = Llama::new().await.unwrap();
    let prompt = "The capital of France is ";
    let mut result = model.stream_text(prompt).await.unwrap();

    // Echo the prompt, then print each generated token as it streams in
    print!("{prompt}");
    while let Some(token) = result.next().await {
        print!("{token}");
    }
}
