The `llm_api_access` crate provides a unified way to interact with different large language models (LLMs) like OpenAI, Gemini, and Anthropic.
LLM Enum

This enum represents the supported LLM providers:

- `OpenAI`: Represents the OpenAI language model.
- `Gemini`: Represents the Gemini language model.
- `Anthropic`: Represents the Anthropic language model.
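Conceptually, the enum is just one variant per provider. A minimal sketch, consistent with the variants used in the examples below:

```rust
// One variant per supported provider; matched internally to route
// requests to the corresponding API.
pub enum LLM {
    OpenAI,
    Gemini,
    Anthropic,
}
```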
Access Trait

The `Access` trait defines asynchronous methods for interacting with LLMs:

- `send_single_message`: Sends a single message and returns the generated response.
- `send_convo_message`: Sends a list of messages as a conversation and returns the generated response.
- `get_model_info`: Gets information about a specific LLM model.
- `list_models`: Lists all available LLM models.
- `count_tokens`: Counts the number of tokens in a given text.
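Put together, the trait's shape is roughly the following sketch. The method names come from the list above; the parameter types, return types, and error type are assumptions for illustration, not the crate's actual signatures:

```rust
use std::error::Error;

// Message as used in the conversation example further below.
pub struct Message {
    pub role: String,
    pub content: String,
}

// Sketch of the Access trait; concrete types are assumed, not confirmed.
pub trait Access {
    async fn send_single_message(&self, message: &str) -> Result<String, Box<dyn Error>>;
    async fn send_convo_message(&self, messages: Vec<Message>) -> Result<String, Box<dyn Error>>;
    async fn get_model_info(&self, model: &str) -> Result<String, Box<dyn Error>>;
    async fn list_models(&self) -> Result<String, Box<dyn Error>>;
    async fn count_tokens(&self, text: &str) -> Result<String, Box<dyn Error>>;
}
```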
The `LLM` enum implements `Access`, providing a specific implementation of each method for the chosen LLM provider.
Note: Currently, `get_model_info`, `list_models`, and `count_tokens` only work for the Gemini LLM. The other providers return an error indicating that this functionality is not yet supported.
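For example, a hedged sketch of calling `count_tokens`; the exact return type is an assumption, and a valid `GEMINI_API_KEY` must be configured as described in the next section:

```rust
use llm_api_access::{LLM, Access};

#[tokio::main]
async fn main() {
    // Token counting currently works only for Gemini.
    let gemini = LLM::Gemini;
    match gemini.count_tokens("How many tokens is this sentence?").await {
        Ok(count) => println!("Token count: {}", count),
        Err(err) => eprintln!("Error: {}", err),
    }

    // Other providers return a "not yet supported" error for this method.
    let openai = LLM::OpenAI;
    assert!(openai.count_tokens("same text").await.is_err());
}
```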
Loading API Credentials with dotenv
The `llm_api_access` crate uses the `dotenv` library to load API credentials from a `.env` file in your project's root directory. This file should contain a key-value pair for each LLM provider you want to use.
Example Structure:

```
OPEN_AI_ORG=your_openai_org
OPENAI_API_KEY=your_openai_api_key
GEMINI_API_KEY=your_gemini_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
```
Steps:

- Create `.env` file: Create a file named `.env` at the root of your Rust project directory.
- Add API keys: Fill in the `.env` file using the format shown above, replacing the placeholders with your actual API keys.
Important Note:

- Never commit your `.env` file to version control systems like Git. It contains sensitive information such as API keys.
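The crate loads these variables for you, but if you want to confirm the file is picked up, a quick standalone check using the standard `dotenv` API looks like this (the variable name is one of the keys above):

```rust
use dotenv::dotenv;
use std::env;

fn main() {
    // Load key-value pairs from .env into the process environment.
    dotenv().ok();

    // Confirm one of the expected keys is present without printing it.
    match env::var("GEMINI_API_KEY") {
        Ok(key) => println!("GEMINI_API_KEY loaded ({} chars)", key.len()),
        Err(_) => eprintln!("GEMINI_API_KEY not found; check your .env file"),
    }
}
```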
Example Usage
`send_single_message` Example
```rust
use llm_api_access::{LLM, Access};

#[tokio::main]
async fn main() {
    // Create an instance of the OpenAI LLM
    let llm = LLM::OpenAI;

    // Send a single message to the LLM
    let response = llm.send_single_message("Tell me a joke about programmers").await;

    match response {
        Ok(joke) => println!("Joke: {}", joke),
        Err(err) => eprintln!("Error: {}", err),
    }
}
```
This example creates an instance of the `LLM::OpenAI` provider and sends a single message using the `send_single_message` method. It then matches on the result, printing the generated joke or an error message if the call failed.
`send_convo_message` Example
```rust
use llm_api_access::{LLM, Access, Message};

#[tokio::main]
async fn main() {
    // Create an instance of the Gemini LLM
    let llm = LLM::Gemini;

    // Define the conversation messages
    let messages = vec![
        Message {
            role: "user".to_string(),
            content: "You are a helpful coding assistant.".to_string(),
        },
        Message {
            role: "model".to_string(),
            content: "You got it! I am ready to assist!".to_string(),
        },
        Message {
            role: "user".to_string(),
            content: "Generate a rust function that reverses a string.".to_string(),
        },
    ];

    // Send the conversation messages to the LLM
    let response = llm.send_convo_message(messages).await;

    match response {
        Ok(code) => println!("Code: {}", code),
        Err(err) => eprintln!("Error: {}", err),
    }
}
```
Note: This example requires API keys and configuration for the Gemini LLM provider.
Testing
The `llm_api_access` crate includes unit tests for the various methods of the `Access` trait. These tests demonstrate usage and expected behavior with the different LLM providers.
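The sketch below shows a hypothetical test in that style; the test name and assertions are illustrative, not taken from the crate, and running it requires a valid `GEMINI_API_KEY` in your `.env`:

```rust
#[cfg(test)]
mod tests {
    use llm_api_access::{LLM, Access};

    #[tokio::test]
    async fn send_single_message_returns_nonempty_text() {
        // Calls the live API, so it needs network access and credentials.
        let llm = LLM::Gemini;
        let response = llm.send_single_message("Say hello in one word").await;
        let text = response.expect("expected a successful response");
        assert!(!text.is_empty());
    }
}
```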