#llama-cpp #llm #mimic

llama-cpp-2

llama.cpp bindings for Rust

123 releases

0.1.144 Apr 19, 2026
0.1.141 Mar 29, 2026
0.1.130 Dec 13, 2025
0.1.128 Nov 28, 2025
0.1.45 Mar 27, 2024

#13 in Machine learning

Download history chart omitted (weekly downloads, Dec 31 2025 – Apr 15 2026; roughly 5,000–21,500 per week, trending upward).

74,162 downloads per month
Used in 67 crates (51 directly)

MIT/Apache

13MB
248K SLoC

C++ 160K SLoC // 0.1% comments
C 30K SLoC // 0.0% comments
CUDA 17K SLoC // 0.0% comments
GLSL 15K SLoC // 0.0% comments
Python 10K SLoC // 0.1% comments
Metal Shading Language 8K SLoC // 0.1% comments
Rust 6K SLoC // 0.0% comments
Objective-C 2K SLoC // 0.1% comments

llama-cpp-rs-2

A wrapper around the llama.cpp library for Rust.

Info

This crate is part of the project powering all the LLMs at utilityai. It is tightly coupled to llama.cpp and mimics its API as closely as possible, while remaining safe, in order to stay up to date.

Tool Calling

llama-cpp-2 exposes llama.cpp's OpenAI-compatible tool-calling flow, so Rust callers can pass tool definitions into chat templates and get the generated grammar back.

use llama_cpp_2::openai::OpenAIChatTemplateParams;
use serde_json::json;

let template = model.chat_template(None)?;

let tools_json = json!([
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Fetch current weather by city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": { "type": "string" }
                },
                "required": ["location"]
            }
        }
    }
])
.to_string();

let messages_json = json!([
    {
        "role": "system",
        "content": "You are a tool caller."
    },
    {
        "role": "user",
        "content": "Fetch the weather in Paris."
    }
])
.to_string();

let params = OpenAIChatTemplateParams {
    messages_json: &messages_json,
    tools_json: Some(&tools_json),
    tool_choice: Some("auto"),
    json_schema: None,
    grammar: None,
    reasoning_format: None,
    chat_template_kwargs: Some("{}"),
    add_generation_prompt: true,
    use_jinja: true,
    parallel_tool_calls: false,
    enable_thinking: false,
    add_bos: false,
    add_eos: false,
    parse_tool_calls: true,
};

let result = model.apply_chat_template_oaicompat(&template, &params)?;

For standalone grammar generation from a JSON schema string, use llama_cpp_2::json_schema_to_grammar.

Dependencies

This uses bindgen to build the bindings to llama.cpp. This means that you need to have clang installed on your system.

If this is a problem for you, open an issue, and we can look into including the bindings.

See bindgen for more information.

Disclaimer

This crate is not entirely safe. There are absolutely ways to misuse the llama.cpp API it exposes and cause undefined behavior (UB); please open an issue if you spot one. Do not use this crate for tasks where UB is unacceptable.

This is not a simple library to use. In an ideal world, a nicer abstraction would be written on top of this crate to provide an ergonomic API; the main benefit of this crate over raw bindings is safety, and even that is limited.

We compensate for this shortcoming (we hope) by providing lots of examples and good documentation. Testing is a work in progress.

Contributing

Contributions are welcome. Please open an issue before starting work on a non-trivial PR.

Dependencies

~8–15MB
~277K SLoC