My ChatGPT API Rust Client
A Rust library for interacting with OpenAI's ChatGPT API. It provides a simple and efficient way to communicate with the API, with support for streaming responses, token usage tracking, and text embeddings.
Features
- Stream mode for API responses
- Conversation memory to maintain chat history
- Comprehensive error handling
- Detailed token usage tracking with reasoning tokens
- Flexible output handling via callback functions
- Type-safe API interactions
- Async/await support
- Support for multiple GPT-4 models with appropriate tool configurations
- NEW: Text embeddings support with two embedding models
Version
Current version: 0.3.0
Requirements
- Rust edition 2021
- Dependencies:
  - reqwest 0.11 (with json, stream features)
  - serde 1.0 (with derive feature)
  - serde_json 1.0
  - tokio 1.0 (with full features)
  - futures-util 0.3
Installation
Add this to your `Cargo.toml`:

```toml
[dependencies]
my-chatgpt = { git = "https://github.com/bongkow/chatgpt-api", version = "0.3.0" }
```
Usage
Chat Completion
```rust
use my_chatgpt::response::{send_chat, ResponseError, UsageInfo, Message, SendChatResult};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";
    let model = "gpt-4o"; // or any other supported model
    let instructions = "You are a helpful assistant.";

    // Define a handler function for the API responses
    fn handler(usage: Option<&UsageInfo>, error: Option<&ResponseError>, chunk: Option<&serde_json::Value>) {
        if let Some(err) = error {
            println!("Error: {:?}", err);
        }
        if let Some(chunk_data) = chunk {
            println!("Received chunk: {:?}", chunk_data);
        }
    }

    // First message
    let input1 = "Tell me about Rust programming language.";
    let response1 = match send_chat(instructions, input1, api_key, model, &handler, None).await {
        SendChatResult::Ok(response) => {
            println!("First response: {}", response.message);
            println!("Model used: {}", response.model);
            println!("Response ID: {}", response.id);
            response
        },
        SendChatResult::Err(e) => panic!("Error: {:?}", e),
    };

    // Follow-up question using the previous response ID
    let input2 = "What are its main advantages over C++?";
    match send_chat(instructions, input2, api_key, model, &handler, Some(&response1.id)).await {
        SendChatResult::Ok(response) => {
            println!("Second response: {}", response.message);
            println!("Model used: {}", response.model);
            println!("Response ID: {}", response.id);
        },
        SendChatResult::Err(e) => panic!("Error: {:?}", e),
    };

    Ok(())
}
```
Text Embeddings
```rust
use my_chatgpt::embedding::{get_embedding, EmbeddingModel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";

    // Choose an embedding model
    let model = EmbeddingModel::TextEmbedding3Small; // or EmbeddingModel::TextEmbedding3Large

    // Get embeddings for a text
    let text = "This is a sample text for embedding.";
    let embedding_result = get_embedding(text, model, api_key).await?;

    println!("Model used: {}", embedding_result.model);
    println!("Embedding dimensions: {}", embedding_result.embedding.len());
    println!("First 5 values: {:?}", &embedding_result.embedding[0..5]);

    Ok(())
}
```
Error Handling
The library provides a `ResponseError` enum for different error cases:

```rust
pub enum ResponseError {
    RequestError(String), // Errors related to API requests
    ParseError(String),   // Errors in parsing responses
    NetworkError(String), // Network-related errors
    Unknown(String),      // Other unexpected errors
}
```
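The handler callback receives these errors as they occur, so it can branch on the variant. A minimal sketch, reusing the handler signature from the chat example above:

```rust
use my_chatgpt::response::{ResponseError, UsageInfo};

// Sketch: react differently to each error variant inside the handler.
fn handler(_usage: Option<&UsageInfo>, error: Option<&ResponseError>, _chunk: Option<&serde_json::Value>) {
    if let Some(err) = error {
        match err {
            ResponseError::RequestError(msg) => eprintln!("Request failed: {}", msg),
            ResponseError::ParseError(msg) => eprintln!("Could not parse response: {}", msg),
            ResponseError::NetworkError(msg) => eprintln!("Network error: {}", msg),
            ResponseError::Unknown(msg) => eprintln!("Unexpected error: {}", msg),
        }
    }
}
```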
Token Usage
Enhanced token usage information is provided via the `UsageInfo` struct:

```rust
pub struct UsageInfo {
    pub input_tokens: Option<u32>,
    pub input_tokens_details: Option<InputTokensDetails>,
    pub output_tokens: Option<u32>,
    pub output_tokens_details: Option<OutputTokensDetails>,
    pub total_tokens: Option<u32>,
}

pub struct InputTokensDetails {
    pub cached_tokens: Option<u32>,
}

pub struct OutputTokensDetails {
    pub reasoning_tokens: Option<u32>,
}
```
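Usage data arrives through the same handler callback as errors and chunks. A minimal sketch that logs total and reasoning token counts when the API reports them:

```rust
use my_chatgpt::response::{ResponseError, UsageInfo};

// Sketch: log token counts whenever usage information is present.
fn handler(usage: Option<&UsageInfo>, _error: Option<&ResponseError>, _chunk: Option<&serde_json::Value>) {
    if let Some(u) = usage {
        if let Some(total) = u.total_tokens {
            println!("Total tokens: {}", total);
        }
        // Reasoning tokens are nested inside the output token details.
        if let Some(r) = u.output_tokens_details.as_ref().and_then(|d| d.reasoning_tokens) {
            println!("Reasoning tokens: {}", r);
        }
    }
}
```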
Response Structure
The enhanced `SendChatOK` structure provides more detailed response information:

```rust
pub struct SendChatOK {
    pub message: String,  // The response message
    pub id: String,       // Unique response identifier
    pub model: String,    // Model used for the response
    pub usage: UsageInfo, // Detailed token usage information
    pub tools: Vec<Tool>, // List of tools used in the response
}
```
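Since `send_chat` returns this structure on success, token totals and the tool list can be read directly off the result. A small sketch, assuming `SendChatOK` is exported from the same `response` module as the other types:

```rust
use my_chatgpt::response::SendChatOK;

// Sketch: summarize a successful chat response.
fn summarize(response: &SendChatOK) {
    println!("id: {}, model: {}", response.id, response.model);
    if let Some(total) = response.usage.total_tokens {
        println!("Total tokens: {}", total);
    }
    println!("Tools used: {}", response.tools.len());
}
```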
Embedding Structure
The `Embedding` structure provides results from embedding requests:

```rust
pub struct Embedding {
    pub embedding: Vec<f32>, // Vector of embedding values
    pub model: String,       // Name of the model used
    pub input: String,       // Original input text
}
```
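Because `embedding` is a plain `Vec<f32>`, results can be compared directly, for example with cosine similarity. A minimal sketch; the `cosine_similarity` helper is illustrative and not part of the library:

```rust
use my_chatgpt::embedding::{get_embedding, EmbeddingModel};

// Illustrative helper (not part of the library): cosine similarity of two vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "your-api-key";

    let a = get_embedding("Rust is a systems programming language.", EmbeddingModel::TextEmbedding3Small, api_key).await?;
    let b = get_embedding("C++ is also a systems programming language.", EmbeddingModel::TextEmbedding3Small, api_key).await?;

    println!("Similarity: {}", cosine_similarity(&a.embedding, &b.embedding));
    Ok(())
}
```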
Supported Models
Chat Models
The library supports various GPT-4 models with appropriate tool configurations:
- gpt-4.1-nano
- gpt-4.1-mini
- gpt-4.1
- gpt-4o-mini
- gpt-4o-mini-search-preview
- gpt-4o
- gpt-4o-search-preview
Each model is automatically configured with appropriate tools (e.g., web search preview) based on its capabilities.
Embedding Models
The library supports OpenAI's text embedding models:
- text-embedding-3-small (1536 dimensions)
- text-embedding-3-large (3072 dimensions)
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT