#flows #llm #integration #service #api-integration #model #chat

bin+lib llmservice-flows

LLM Service integration for flows.network

12 unstable releases (3 breaking)

0.4.2 Nov 19, 2024
0.4.1 Nov 18, 2024
0.3.3 Aug 21, 2024
0.3.0 Jul 15, 2024
0.1.3 Aug 10, 2023

#10 in #flows

MIT/Apache

26KB
443 lines


lib.rs:

LLM Service integration for Flows.network

Quick Start

To get started, let's write a tiny flow function.

use llmservice_flows::{
    chat::ChatOptions,
    LLMServiceFlows,
};
use lambda_flows::{request_received, send_response};
use serde_json::Value;
use std::collections::HashMap;

#[no_mangle]
#[tokio::main(flavor = "current_thread")]
pub async fn run() {
    // Register `handler` to run every time a Lambda request arrives.
    request_received(handler).await;
}

async fn handler(_qry: HashMap<String, Value>, body: Vec<u8>) {
    // Pick a model and cap the context size; everything else stays default.
    let co = ChatOptions {
        model: Some("gpt-4"),
        token_limit: 8192,
        ..Default::default()
    };
    // Point at an OpenAI-compatible endpoint and authenticate.
    let mut lf = LLMServiceFlows::new("https://api.openai.com/v1");
    lf.set_api_key("your api key");

    // Use the raw request body as the prompt. On success, keep the model's
    // reply; on failure, fall back to the error message so the caller
    // still gets a response.
    let r = match lf.chat_completion(
        "any_conversation_id",
        String::from_utf8_lossy(&body).into_owned().as_str(),
        &co,
    )
    .await
    {
        Ok(c) => c.choice,
        Err(e) => e,
    };

    // Return the reply as a plain-text HTTP response.
    send_response(
        200,
        vec![(
            String::from("content-type"),
            String::from("text/plain; charset=UTF-8"),
        )],
        r.as_bytes().to_vec(),
    );
}

When a Lambda request is received, the handler feeds the request body to LLMServiceFlows::chat_completion as the prompt, then sends the model's reply back as the HTTP response.
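
The first argument to chat_completion is a conversation id. A reasonable reading, suggested by the parameter name rather than documented above, is that the service keeps context per id, so reusing the same id lets a later prompt refer back to an earlier one. Below is a minimal sketch under that assumption; the id "demo_conversation" and the two prompts are illustrative, not part of the crate's API.

use llmservice_flows::{chat::ChatOptions, LLMServiceFlows};

async fn two_turns() {
    let co = ChatOptions {
        model: Some("gpt-4"),
        token_limit: 8192,
        ..Default::default()
    };
    let mut lf = LLMServiceFlows::new("https://api.openai.com/v1");
    lf.set_api_key("your api key");

    // First turn: establishes context under the conversation id.
    if let Ok(first) = lf
        .chat_completion("demo_conversation", "Name one planet in our solar system.", &co)
        .await
    {
        println!("first answer: {}", first.choice);
    }

    // Second turn: same id, so "it" can resolve to the planet named above,
    // assuming the service replays earlier turns for this id.
    if let Ok(second) = lf
        .chat_completion("demo_conversation", "How far is it from Earth?", &co)
        .await
    {
        println!("second answer: {}", second.choice);
    }
}

Using a fresh id per request, as the Quick Start example effectively does, keeps each call stateless instead.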

Dependencies

~7–19MB
~258K SLoC