# rust_llama.cpp

LLama.cpp Rust bindings.

The Rust bindings are largely based on https://github.com/go-skynet/go-llama.cpp/.
## Building Locally

Note: this repository uses git submodules to track LLama.cpp.

Clone the repository locally:

```bash
git clone --recurse-submodules https://github.com/mdrokz/rust-llama.cpp
```

Then build with Cargo:

```bash
cargo build
```
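If you have already cloned without `--recurse-submodules`, the LLama.cpp submodule can be pulled into the existing checkout with git's standard submodule command:

```bash
# Fetch the LLama.cpp submodule into an existing clone
git submodule update --init --recursive
```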
## Usage

Add the crate to your `Cargo.toml` (0.3.0 is the latest release at the time of writing):

```toml
[dependencies]
llama_cpp_rs = "0.3.0"
```
```rust
use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};

fn main() {
    // Load the model with default options.
    let model_options = ModelOptions::default();

    let llama = LLama::new(
        "../wizard-vicuna-13B.ggmlv3.q4_0.bin".into(),
        &model_options,
    )
    .unwrap();

    // Stream tokens as they are generated; the callback's return value
    // controls whether generation continues.
    let predict_options = PredictOptions {
        token_callback: Some(Box::new(|token| {
            println!("token1: {}", token);
            true
        })),
        ..Default::default()
    };

    llama
        .predict(
            "what are the national animals of india".into(),
            predict_options,
        )
        .unwrap();
}
```
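The `token_callback` appears to follow the convention of the go-llama.cpp bindings this crate ports: returning `true` continues generation, `false` stops it early. Both option structs expose the usual llama.cpp knobs, and the sketch below tunes a few of them. The field names used here (`context_size`, `n_gpu_layers`, `tokens`, `threads`, `top_k`, `top_p`, `temperature`) are assumed to mirror go-llama.cpp's options rather than confirmed against this crate, so check `options.rs` for the exact set.

```rust
use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};

fn main() {
    // Assumed fields mirroring go-llama.cpp: a larger context window and,
    // in builds with GPU support, a number of layers offloaded to the GPU.
    let model_options = ModelOptions {
        context_size: 2048,
        n_gpu_layers: 32,
        ..Default::default()
    };

    let llama = LLama::new(
        "../wizard-vicuna-13B.ggmlv3.q4_0.bin".into(),
        &model_options,
    )
    .unwrap();

    // Assumed sampling fields: cap the output length, use 8 threads,
    // and soften the sampling distribution slightly.
    let predict_options = PredictOptions {
        tokens: 256,
        threads: 8,
        top_k: 40,
        top_p: 0.95,
        temperature: 0.8,
        ..Default::default()
    };

    llama
        .predict(
            "Summarize what a git submodule is in one sentence.".into(),
            predict_options,
        )
        .unwrap();
}
```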
## Examples

The examples ship with Dockerfiles so they can be run in containers; see the examples directory.
## TODO

- Implement support for cuBLAS, OpenBLAS & OpenCL #7
- Implement support for GPU (Metal)
- Add some test cases
- Support fetching models over HTTP & S3
- Sync with latest master & support GGUF
- Add some proper examples https://github.com/mdrokz/rust-llama.cpp/pull/7
## LICENSE
MIT