
rustylms

A library used to communicate with LM Studio servers

1 unstable release

0.1.0 Jun 22, 2024


Apache-2.0

12KB
213 lines

rustylms - An LM Studio API wrapper written in Rust

ℹ️ If you are looking for an Ollama API wrapper, consider ollama-rs

⚠️ This project is not finished yet; bugs may occur

This library provides support for LM Studio servers. All features are implemented according to the official documentation.

Feature List

  • Generating completions using chats
  • Retrieving all models from the server

To-Do List

  • Generating completions
    • Supporting streaming responses
  • Creating embeddings

Examples
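
The examples below assume the crate and the Tokio runtime are listed in Cargo.toml. A minimal sketch (the exact tokio feature flags are an assumption; they only need to provide the multi-threaded runtime and the #[tokio::main] macro used below):

[dependencies]
rustylms = "0.1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }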

Retrieving models

use rustylms::lmsserver::LMSServer;

#[tokio::main]
async fn main() {
    // Point the client at the local LM Studio server (default port: 1234)
    let server = LMSServer::new("http://localhost:1234");

    // Fetch the models the server currently exposes
    let models = server.get_models().await.expect("Unable to retrieve models");

    println!("{:#?}", models);
}
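
If you prefer not to panic when the server is unreachable, the same call can be handled with a match. This is a minimal sketch that assumes only what the example above already shows (get_models returning a Result whose error type implements Debug):

use rustylms::lmsserver::LMSServer;

#[tokio::main]
async fn main() {
    let server = LMSServer::new("http://localhost:1234");

    // Handle a failed request explicitly instead of panicking via expect()
    match server.get_models().await {
        Ok(models) => println!("{:#?}", models),
        Err(err) => eprintln!("Unable to retrieve models: {:?}", err),
    }
}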

Generating a chat completion

use rustylms::{chat::Chat, lmsserver::LMSServer};

#[tokio::main]
async fn main() {
    let server = LMSServer::new("http://localhost:1234");

    // Build the chat with the builder-style prompt methods
    let chat = Chat::new("model-name")
        .system_prompt(
            "You are a helpful assistant that gives information about any programming-related topic.",
        )
        .user_prompt("What is Rust?");

    // Send the chat to the server and wait for the completion
    let completion = chat
        .get_completions(&server)
        .await
        .expect("could not get completions");

    // Extract the assistant's reply from the completion
    let message = completion.get_message().unwrap();

    println!("The assistant answered: {}", message.content);
}
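
Both features can be combined against a single server handle, for example to check which models are available before issuing a chat completion. This is a sketch built only from the calls shown above; "model-name" is a placeholder for a model returned by the server:

use rustylms::{chat::Chat, lmsserver::LMSServer};

#[tokio::main]
async fn main() {
    let server = LMSServer::new("http://localhost:1234");

    // List the models first so the chat can target one that actually exists
    let models = server.get_models().await.expect("Unable to retrieve models");
    println!("Available models: {:#?}", models);

    // Reuse the same server handle for the completion
    let chat = Chat::new("model-name") // placeholder: substitute a model listed above
        .system_prompt("You are a concise assistant for Rust questions.")
        .user_prompt("What is the difference between String and &str?");

    let completion = chat
        .get_completions(&server)
        .await
        .expect("could not get completions");

    let message = completion.get_message().unwrap();
    println!("The assistant answered: {}", message.content);
}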

Dependencies

~4–16MB
~195K SLoC