#language-model #llm #api-bindings #stream #api #api-key #api-client

llm_stream

A small Rust library that simplifies streaming API interaction with LLMs, free of complex async machinery and redundant dependencies

4 releases (2 breaking)

0.3.1 Sep 15, 2024
0.3.0 Sep 5, 2024
0.2.0 Sep 4, 2024
0.1.0 Sep 4, 2024

#249 in Machine learning

166 downloads per month

MIT license

44KB
801 lines

llm-stream

This library provides a streamlined approach to interacting with Large Language Model (LLM) streaming APIs from different providers.

Supported Providers

  • OpenAI: Access the powerful GPT models through OpenAI's API.
  • Anthropic: Utilize Anthropic's Claude models for various language tasks.
  • Google: Integrate Google's Gemini family of models.
  • Mistral: Leverage Mistral's language models for advanced capabilities.
  • GitHub Copilot: Access code-generation capabilities powered by GitHub Copilot.

Key Features

  • Unified Interface: Interact with different LLM providers using a consistent API.
  • Streaming Responses: Receive model output as a stream of text, enabling real-time interactions and dynamic UI updates.
  • Simplified Authentication: Easily authenticate with API keys for supported providers.
  • Customization Options: Configure model parameters, such as temperature and top_p, to fine-tune output generation.
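The streaming pattern behind these features can be sketched library-agnostically: text deltas arrive incrementally and are appended (or rendered) as they come. The sketch below uses a plain std channel in place of the crate's async stream, so none of the names in it belong to llm-stream itself.

```rust
use std::sync::mpsc;
use std::thread;

// Assemble a response from incremental text deltas, the way a streaming
// consumer would. A std channel stands in for the async stream here.
fn stream_answer() -> String {
    let (tx, rx) = mpsc::channel::<String>();

    // Simulated provider: emits the reply in small chunks.
    thread::spawn(move || {
        for chunk in ["The capital", " is", " Washington, D.C."] {
            tx.send(chunk.to_string()).unwrap();
        }
        // Dropping the sender ends the stream.
    });

    let mut answer = String::new();
    for delta in rx {
        // In a UI, each delta could be rendered immediately
        // for a real-time "typing" effect.
        answer.push_str(&delta);
    }
    answer
}

fn main() {
    println!("{}", stream_answer());
}
```

The real library exposes the same shape through an async stream (see the Usage example below), so each chunk can be handled before the full response is complete.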

Installation

Add the following dependency to your Cargo.toml file:

[dependencies]
llm-stream = "0.3.1"

Usage

Here's a basic example demonstrating how to use the library to generate text with OpenAI's GPT-4o model:

use anyhow::Result;
use futures::stream::TryStreamExt;
use llm_stream::openai::{Auth, Client, Message, MessageBody, Role};
use std::io::Write;

#[tokio::main]
async fn main() -> Result<()> {
    env_logger::init();

    let key = std::env::var("OPENAI_API_KEY")?;

    let auth = Auth::new(key);
    let client = Client::new(auth, "https://api.openai.com/v1");

    let messages = vec![Message {
        role: Role::User,
        content: "What is the capital of the United States?".to_string(),
    }];

    let body = MessageBody::new("gpt-4o", messages);

    // `delta` returns a stream of incremental text chunks.
    let mut stream = client.delta(&body)?;

    // Propagate stream errors with `?` instead of silently swallowing them.
    while let Some(text) = stream.try_next().await? {
        print!("{text}");
        std::io::stdout().flush()?;
    }

    Ok(())
}

For more in-depth examples and usage instructions, refer to the examples directory: ./lib/llm_stream/examples.

🔐 Authentication

Each provider requires an API key, typically set as an environment variable:

  • OpenAI: OPENAI_API_KEY
  • Google: GOOGLE_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Mistral: MISTRAL_API_KEY
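A small helper can map a provider name to its conventional environment variable before constructing a client. This is only a sketch using the variable names listed above; `api_key_var` is illustrative and not part of the library's API.

```rust
use std::env;

// Illustrative helper: the conventional API-key variable for a provider.
fn api_key_var(provider: &str) -> Option<&'static str> {
    match provider {
        "openai" => Some("OPENAI_API_KEY"),
        "google" => Some("GOOGLE_API_KEY"),
        "anthropic" => Some("ANTHROPIC_API_KEY"),
        "mistral" => Some("MISTRAL_API_KEY"),
        _ => None,
    }
}

fn main() {
    let var = api_key_var("openai").expect("unknown provider");
    match env::var(var) {
        Ok(key) => println!("found {var} ({} chars)", key.len()),
        Err(_) => eprintln!("set {var} before running the examples"),
    }
}
```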

Dependencies

~9–19MB
~242K SLoC