# ai

Simple-to-use AI library for Rust, primarily targeting OpenAI-compatible providers, with more to come.

This library is a work in progress, and the API is subject to change.
## Using the library

Add `ai` as a dependency along with `tokio`. For streaming, add the `futures` crate; for `CancellationToken` support, add `tokio-util`.

This library uses `reqwest` directly as the HTTP client when making requests to the servers.

```sh
cargo add ai
```
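Putting the dependencies above together, a minimal `Cargo.toml` dependency section might look like the following sketch (the version numbers and `tokio` feature selection are illustrative assumptions, not requirements of the crate):

```toml
[dependencies]
ai = "0.2"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
futures = "0.3"      # only needed for streaming
tokio-util = "0.7"   # only needed for CancellationToken support
```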
## Cargo Features

| Feature | Description | Default |
|---|---|---|
| `openai_client` | Enable OpenAI client | ✅ |
| `azure_openai_client` | Enable Azure OpenAI client | ✅ |
| `ollama_client` | Enable Ollama client | |
| `native_tls` | Enable native TLS for the `reqwest` HTTP client | |
| `rustls_tls` | Enable rustls TLS for the `reqwest` HTTP client | ✅ |
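If you want to deviate from the defaults, for example to use `native_tls` instead of `rustls_tls`, one way is to disable default features and list what you need explicitly (a sketch; adjust the feature list to your setup):

```toml
[dependencies]
ai = { version = "0.2", default-features = false, features = ["openai_client", "native_tls"] }
```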
## Examples

| Example Name | Description |
|---|---|
| `azure_openai_chat_completions` | Basic chat completions using Azure OpenAI API |
| `chat_completions_streaming` | Chat completions streaming example |
| `chat_completions_streaming_with_cancellation_token` | Chat completions streaming with cancellation token |
| `chat_completions_tool_calling` | Tool/Function calling example |
| `chat_console` | Console chat example |
| `clients_dynamic_runtime` | Dynamic runtime client selection |
| `openai_chat_completions` | Basic chat completions using OpenAI API |
| `openai_embeddings` | Text embeddings with OpenAI API |
## Chat Completion API

```rust
use ai::{
    chat_completions::{ChatCompletion, ChatCompletionMessage, ChatCompletionRequestBuilder},
    Result,
};

#[tokio::main]
async fn main() -> Result<()> {
    let openai = ai::clients::openai::Client::from_url("ollama", "http://localhost:11434/v1")?;
    // let openai = ai::clients::openai::Client::from_env()?;
    // let openai = ai::clients::openai::Client::new("api_key")?;

    let request = ChatCompletionRequestBuilder::default()
        .model("gemma3")
        .messages(vec![
            ChatCompletionMessage::System("You are a helpful assistant".into()),
            ChatCompletionMessage::User("Tell me a joke.".into()),
        ])
        .build()?;

    let response = openai.chat_completions(&request).await?;
    println!("{}", &response.choices[0].message.content.as_ref().unwrap());

    Ok(())
}
```
## Embeddings API

```rust
use ai::{
    embeddings::{Embeddings, EmbeddingsRequestBuilder},
    Result,
};

#[tokio::main]
async fn main() -> Result<()> {
    let openai = ai::clients::openai::Client::from_env()?;

    let request = EmbeddingsRequestBuilder::default()
        .model("text-embedding-3-small")
        .input(vec!["Hello, world!".to_string()])
        .build()?;

    // Get standard float embeddings
    let response = openai.create_embeddings(&request).await?;
    println!("Embedding dimensions: {}", response.data[0].embedding.len());

    // Get base64 encoded embeddings
    let base64_response = openai.create_base64_embeddings(&request).await?;
    println!("Base64 embedding: {}", base64_response.data[0].embedding);

    Ok(())
}
```
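A common next step once you have embedding vectors is comparing them. The crate itself does not need to be involved for that; a minimal, self-contained cosine-similarity sketch in plain Rust (the function name and the sample vectors are illustrative):

```rust
/// Cosine similarity between two equal-length embedding vectors.
/// Returns a value in [-1.0, 1.0]; higher means more similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Stand-ins for two embeddings returned by create_embeddings.
    let a = [0.1_f32, 0.2, 0.3];
    let b = [0.2_f32, 0.1, 0.3];
    println!("similarity = {}", cosine_similarity(&a, &b));
}
```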
You can also use tuples for messages. An unrecognized role will cause a panic.

```rust
let request = &ChatCompletionRequestBuilder::default()
    .model("gpt-4o-mini".to_string())
    .messages(vec![
        ("system", "You are a helpful assistant.").into(),
        ("user", "Tell me a joke").into(),
    ])
    .build()?;
```
## Clients

### OpenAI

```sh
cargo add ai --features=openai_client
```

```rust
let openai = ai::clients::openai::Client::new("open_api_key")?;
let openai = ai::clients::openai::Client::from_url("open_api_key", "https://api.openai.com/v1")?;
let openai = ai::clients::openai::Client::from_env()?;
```
### Gemini API via OpenAI

Set `http1_title_case_headers` for the Gemini API.

```rust
let gemini = ai::clients::openai::ClientBuilder::default()
    .http_client(
        reqwest::Client::builder()
            .http1_title_case_headers()
            .build()?,
    )
    .api_key("gemini_api_key".into())
    .base_url("https://generativelanguage.googleapis.com/v1beta/openai".into())
    .build()?;
```
### Azure OpenAI

```sh
cargo add ai --features=azure_openai_client
```

```rust
let azure_openai = ai::clients::azure_openai::ClientBuilder::default()
    .auth(ai::clients::azure_openai::Auth::BearerToken("token".into()))
    // .auth(ai::clients::azure_openai::Auth::ApiKey(
    //     std::env::var(ai::clients::azure_openai::AZURE_OPENAI_API_KEY_ENV_VAR)
    //         .map_err(|e| Error::EnvVarError(ai::clients::azure_openai::AZURE_OPENAI_API_KEY_ENV_VAR.to_string(), e))?
    //         .into(),
    // ))
    .api_version("2024-02-15-preview")
    .base_url("https://resourcename.openai.azure.com")
    .build()?;
```

Pass the `deployment_id` as the `model` of the `ChatCompletionRequest`.

Use the following command to get a bearer token:

```sh
az account get-access-token --resource https://cognitiveservices.azure.com
```
### Ollama

We suggest using the OpenAI client instead of the Ollama client for maximum compatibility.

```sh
cargo add ai --features=ollama_client
```

```rust
let ollama = ai::clients::ollama::Client::new()?;
let ollama = ai::clients::ollama::Client::from_url("http://localhost:11434")?;
```
## LICENSE

MIT