#flows #llm #integration #service #api-integration #model #chat

bin+lib llmservice-flows

LLM Service integration for flows.network

13 unstable releases (4 breaking)

0.5.0 Nov 26, 2024
0.4.2 Nov 19, 2024
0.3.3 Aug 21, 2024
0.3.0 Jul 15, 2024
0.1.3 Aug 10, 2023

#767 in Network programming

Download history: weekly downloads from 2024-08-13 through 2024-11-26, ranging from 1/week to a peak of 321/week @ 2024-11-19.

667 downloads per month

MIT/Apache

27KB
488 lines

LLM Service Integration for flows.network.


lib.rs:

LLM Service integration for Flows.network

Quick Start

To get started, let's write a tiny flow function.

use llmservice_flows::{
    chat::ChatOptions,
    LLMServiceFlows,
};
use lambda_flows::{request_received, send_response};
use serde_json::Value;
use std::collections::HashMap;

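// Entry point of the flow function: register `handler` to be called
// whenever a Lambda request arrives.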
#[no_mangle]
#[tokio::main(flavor = "current_thread")]
pub async fn run() {
    request_received(handler).await;
}

async fn handler(_qry: HashMap<String, Value>, body: Vec<u8>) {
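    // Configure the chat: pick the model and set the token limit.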
    let co = ChatOptions {
        model: Some("gpt-4"),
        token_limit: 8192,
        ..Default::default()
    };
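    // Create a client for the service endpoint and set the API key.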
    let mut lf = LLMServiceFlows::new("https://api.openai.com/v1");
    lf.set_api_key("your api key");

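    // Send the raw request body as the user prompt, under a caller-chosen
    // conversation id; on failure, fall back to the error message.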
    let r = match lf.chat_completion(
        "any_conversation_id",
        String::from_utf8_lossy(&body).into_owned().as_str(),
        &co,
    )
    .await
    {
        Ok(c) => c.choice,
        Err(e) => e,
    };

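    // Reply with the completion (or the error message) as plain text.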
    send_response(
        200,
        vec![(
            String::from("content-type"),
            String::from("text/plain; charset=UTF-8"),
        )],
        r.as_bytes().to_vec(),
    );
}

When a Lambda request is received, the handler passes the request body to LLMServiceFlows::chat_completion and sends the completion (or the error message) back as a plain-text response.
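
The first argument to chat_completion is a conversation id, which suggests the service keys chat context by that string. The sketch below assumes that reusing an id makes later calls behave as follow-ups in the same conversation; the function name follow_up_example, the id "demo_conversation", and the prompts are all illustrative, and only the calls shown in the quick start (new, set_api_key, chat_completion) are used.

use llmservice_flows::{chat::ChatOptions, LLMServiceFlows};

async fn follow_up_example() {
    let co = ChatOptions {
        model: Some("gpt-4"),
        token_limit: 8192,
        ..Default::default()
    };
    let mut lf = LLMServiceFlows::new("https://api.openai.com/v1");
    lf.set_api_key("your api key");

    // First turn: "demo_conversation" is an arbitrary, caller-chosen id.
    if let Ok(c) = lf
        .chat_completion("demo_conversation", "Name one Rust async runtime.", &co)
        .await
    {
        println!("first answer: {}", c.choice);
    }

    // Second turn: reusing the same id assumes the service carries the
    // earlier exchange as context, so "it" can refer to the first answer.
    if let Ok(c) = lf
        .chat_completion("demo_conversation", "When was it first released?", &co)
        .await
    {
        println!("follow-up: {}", c.choice);
    }
}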

Dependencies

~7–19MB
~261K SLoC