# openai-ergonomic
Ergonomic Rust wrapper for the OpenAI API, providing type-safe builder patterns and async/await support.
## Features

- **Type-safe** - full type safety with builder patterns using `bon`
- **Async/await** - built on `tokio` and `reqwest` for modern async Rust
- **Streaming** - first-class support for streaming responses
- **Comprehensive** - covers all OpenAI API endpoints
- **Azure OpenAI** - seamless support for Azure OpenAI deployments
- **Well-tested** - extensive test coverage with mock support
- **Well-documented** - rich documentation with examples
## Status

Under construction: the crate is still in active development and not yet ready for production use.
## Quick Start

Add `openai-ergonomic` to your `Cargo.toml`:

```toml
[dependencies]
openai-ergonomic = "0.5"
tokio = { version = "1.0", features = ["full"] }
```
### Basic Usage

```rust
use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the client from environment variables (OPENAI_API_KEY)
    let client = Client::from_env()?.build();

    let response = client
        .chat_completions()
        .model("gpt-4")
        .message("user", "Hello, world!")
        .send()
        .await?;

    println!("{}", response.choices[0].message.content);
    Ok(())
}
```
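For one-shot prompts, later examples in this README use the `chat_simple` convenience helper instead of the full builder. A minimal sketch of the same round trip, assuming `chat_simple` accepts a `&str` prompt and returns the reply text directly:

```rust
use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?.build();

    // chat_simple wraps the builder round trip for a single user prompt
    let reply = client.chat_simple("Hello, world!").await?;
    println!("{}", reply);
    Ok(())
}
```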
### Streaming Example

```rust
use openai_ergonomic::Client;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build client from environment variables
    let client = Client::from_env()?.build();

    let builder = client
        .chat()
        .user("Tell me a story");

    let mut stream = client.send_chat_stream(builder).await?;
    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if let Some(content) = chunk.content() {
            print!("{}", content);
        }
    }
    Ok(())
}
```
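If you also need the complete reply once the stream finishes, you can accumulate the chunks as they arrive. A sketch using the same streaming API as above, assuming `chunk.content()` yields displayable text (both streaming snippets additionally require the `futures` crate in `Cargo.toml`):

```rust
use openai_ergonomic::Client;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?.build();
    let builder = client.chat().user("Tell me a story");

    let mut stream = client.send_chat_stream(builder).await?;
    let mut full_text = String::new();

    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if let Some(content) = chunk.content() {
            print!("{}", content);                    // live output
            full_text.push_str(&content.to_string()); // keep the whole reply
        }
    }

    println!("\n({} characters total)", full_text.len());
    Ok(())
}
```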
### Custom HTTP Client with Retry Logic

You can provide your own `reqwest::Client` with custom retry, timeout, and middleware configuration.

Note: when using a custom HTTP client, you must configure the timeout on the `reqwest::Client` itself:
```rust
use openai_ergonomic::{Client, Config};
use reqwest_middleware::ClientBuilder;
use reqwest_retry::{policies::ExponentialBackoff, RetryTransientMiddleware};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a retry policy with exponential backoff
    let retry_policy = ExponentialBackoff::builder()
        .build_with_max_retries(3);

    // Build a reqwest client with a custom timeout
    let reqwest_client = reqwest::Client::builder()
        .timeout(Duration::from_secs(60)) // Configure timeout here
        .build()?;

    // Add retry middleware
    let http_client = ClientBuilder::new(reqwest_client)
        .with(RetryTransientMiddleware::new_with_policy(retry_policy))
        .build();

    // Create the OpenAI client with the custom HTTP client
    let config = Config::builder()
        .api_key("your-api-key")
        .http_client(http_client)
        .build();

    let client = Client::new(config)?.build();

    // Use the client normally - retries and the timeout are handled automatically
    let response = client.chat_simple("Hello!").await?;
    println!("{}", response);
    Ok(())
}
```
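Compiling the snippet above also needs the middleware crates in `Cargo.toml`. The versions below are illustrative only; check crates.io for the current releases:

```toml
[dependencies]
reqwest = "0.12"
reqwest-middleware = "0.3"
reqwest-retry = "0.6"
```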
## Azure OpenAI Support
The crate seamlessly supports Azure OpenAI deployments. Azure-specific configuration can be provided through environment variables or programmatically.
### Using Environment Variables

```bash
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4"
export AZURE_OPENAI_API_VERSION="2024-02-01" # Optional, defaults to 2024-02-01
```
```rust
use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build client from Azure environment variables
    let client = Client::from_env()?.build();

    // Use exactly the same API as standard OpenAI
    let response = client.chat_simple("Hello from Azure!").await?;
    println!("{}", response);
    Ok(())
}
```
### Manual Configuration

```rust
use openai_ergonomic::{Client, Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::builder()
        .api_key("your-azure-api-key")
        .api_base("https://my-resource.openai.azure.com")
        .azure_deployment("gpt-4")
        .azure_api_version("2024-02-01")
        .build();

    let client = Client::new(config)?.build();

    let response = client.chat_simple("Hello!").await?;
    println!("{}", response);
    Ok(())
}
```
Note: The library automatically handles the differences between Azure OpenAI and standard OpenAI (authentication, URL paths, API versioning). You use the same API regardless of the provider.
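Because the client exposes the same surface for both providers, application code can stay provider-agnostic. A minimal sketch (the `summarize` helper is hypothetical, and it assumes `chat_simple` takes a `&str` and returns displayable reply text):

```rust
use openai_ergonomic::Client;

// Hypothetical helper: works unchanged whether `client` was configured
// for standard OpenAI or for an Azure OpenAI deployment.
async fn summarize(client: &Client, text: &str) -> Result<String, Box<dyn std::error::Error>> {
    let prompt = format!("Summarize in one sentence: {text}");
    Ok(client.chat_simple(&prompt).await?.to_string())
}
```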
## Documentation
- Getting Started Guide
- Architecture Overview
- API Coverage Status - See what APIs are available
- Examples Index - Browse all examples by category
- Responses-First Workflows
- Tool Orchestration
- Vector Store Operations
- Langfuse Integration
- API Documentation
## Examples

The `examples/` directory contains comprehensive examples for all major OpenAI API features:

### Core Examples
- `quickstart.rs` - Quick introduction to the library with basic usage patterns
- `chat_streaming.rs` - Real-time chat streaming with Server-Sent Events (SSE)
- `tool_calling_multiturn.rs` - Multi-turn tool calling with proper conversation history management
- `responses_comprehensive.rs` - Complete Responses API demonstration, including function calling and web search
- `responses_streaming.rs` - Real-time streaming responses with progress indicators
- `chat_comprehensive.rs` - Full Chat Completions API with conversation history
- `structured_outputs.rs` - JSON mode and schema-based structured outputs
- `vision_chat.rs` - Image understanding with GPT-4 Vision
### Media & AI Capabilities

- `audio_speech.rs` - Text-to-speech generation with multiple voices
- `audio_transcription.rs` - Speech-to-text transcription and translation
- `images_comprehensive.rs` - Image generation, editing, and variations
- `embeddings.rs` - Vector embeddings with similarity-search patterns
### Advanced APIs

- `assistants_basic.rs` - Introduction to the Assistants API with threads and tools
Run any example with:

```bash
# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key-here"

# Run an example
cargo run --example quickstart
cargo run --example responses_streaming
cargo run --example vision_chat
```
Each example includes:
- Comprehensive documentation and inline comments
- Error handling best practices
- Real-world use cases and patterns
- Progressive complexity from basic to advanced usage
## Contributing
We welcome contributions! Please see our Contributing Guide for details.
## License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.