#ollama #model #async #bindings #api-bindings #embedding #running

ollama-rest

Asynchronous Rust bindings for the Ollama REST API

6 releases

0.3.3 Nov 26, 2024
0.3.2 Sep 26, 2024
0.3.0 Jul 26, 2024
0.1.1 Jul 23, 2024

#14 in #ollama


164 downloads per month

MIT license

28KB
600 lines

ollama-rest.rs

Asynchronous Rust bindings for the Ollama REST API, built on reqwest, tokio, serde, and chrono.

Install

cargo add ollama-rest@0.3
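
Or declare the dependency in Cargo.toml by hand, which is equivalent:

[dependencies]
ollama-rest = "0.3"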

Features

Name             Status
Completion       Supported ✅
Embedding        Supported ✅ (sketch below)
Model creation   Supported ✅
Model deletion   Supported ✅
Model pulling    Supported ✅
Model copying    Supported ✅
Local models     Supported ✅
Running models   Supported ✅
Model pushing    Experimental 🧪
Tools            Experimental 🧪
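
The embedding support listed above can be exercised in the same style as completion. The sketch below is a hedged illustration only: it assumes an EmbeddingRequest type under models::embeddings and an embed() method mirroring Ollama's POST /api/embed endpoint. These names are assumptions, so check the crate documentation for the actual API surface.

use ollama_rest::{models::embeddings::EmbeddingRequest, Ollama};
use serde_json::json;

#[tokio::main]
async fn main() {
    let ollama = Ollama::default();

    // NOTE: `EmbeddingRequest`, `embed`, and the `embeddings` field are
    // assumed names mirroring Ollama's POST /api/embed endpoint; consult
    // the crate docs for the real ones.
    let request = serde_json::from_value::<EmbeddingRequest>(json!({
        "model": "nomic-embed-text",
        "input": "Why is the sky blue?",
    })).unwrap();

    let response = ollama.embed(&request).await.unwrap();
    println!("got {} embedding vector(s)", response.embeddings.len());
}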

At a glance

See the source of this example.

use std::io::Write;

use futures::StreamExt; // provides `.next()` on the response stream (add the `futures` crate, or use `tokio_stream::StreamExt`)
use ollama_rest::{models::generate::GenerationRequest, Ollama};
use serde_json::json;

#[tokio::main]
async fn main() {
    // By default, connects to the Ollama server at 127.0.0.1:11434
    let ollama = Ollama::default();

    let request = serde_json::from_value::<GenerationRequest>(json!({
        "model": "llama3.2:1b",
        "prompt": "Why is the sky blue?",
    })).unwrap();

    let mut stream = ollama.generate_streamed(&request).await.unwrap();

    while let Some(Ok(res)) = stream.next().await {
        // Skip the final chunk (`done == true`), which carries timing
        // statistics rather than response text
        if !res.done {
            print!("{}", res.response);
            // Flush stdout after each chunk so output appears in real time
            std::io::stdout().flush().unwrap();
        }
    }

    println!();
}

Or, make your own chatbot interface! See this example (CLI) and this example (REST API).
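
As a starting point, here is a minimal prompt loop built only from the generate_streamed call shown above. It is a rough sketch, not the linked examples: each turn is a fresh request, so no conversation history is carried between prompts.

use std::io::Write;

use futures::StreamExt;
use ollama_rest::{models::generate::GenerationRequest, Ollama};
use serde_json::json;

#[tokio::main]
async fn main() {
    let ollama = Ollama::default();

    loop {
        print!(">> ");
        std::io::stdout().flush().unwrap();

        let mut line = String::new();
        if std::io::stdin().read_line(&mut line).unwrap() == 0 {
            break; // EOF (e.g. Ctrl-D)
        }

        let request = serde_json::from_value::<GenerationRequest>(json!({
            "model": "llama3.2:1b",
            "prompt": line.trim(),
        })).unwrap();

        // Stream the answer chunk by chunk, as in the example above
        let mut stream = ollama.generate_streamed(&request).await.unwrap();
        while let Some(Ok(res)) = stream.next().await {
            if !res.done {
                print!("{}", res.response);
                std::io::stdout().flush().unwrap();
            }
        }
        println!();
    }
}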

Dependencies

~7–19MB
~247K SLoC