#llm-chain #llm #langchain #chain #chatgpt

langchain-rust

LangChain for Rust, the easiest way to write LLM-based programs in Rust

17 stable releases

4.6.0 Oct 6, 2024
4.4.3 Sep 30, 2024
4.3.0 Jun 23, 2024
3.3.2 Apr 15, 2024
0.1.9 Feb 27, 2024

#23 in Machine learning

1,821 downloads per month
Used in 2 crates

MIT license

415KB
10K SLoC

🦜️🔗LangChain Rust

⚡ Building applications with LLMs through composability, with Rust! ⚡

Discord Docs: Tutorial

🤔 What is this?

This is the Rust language implementation of LangChain.

Current Features

  • LLMs

  • Embeddings

  • VectorStores

  • Chain

  • Agents

  • Tools

  • Semantic Routing

  • Document Loaders

    • PDF

      use futures_util::StreamExt;
      use langchain_rust::document_loaders::{Loader, PdfExtractLoader};
      
      #[tokio::main]
      async fn main() {
          let path = "./src/document_loaders/test_data/sample.pdf";
      
          let loader = PdfExtractLoader::from_path(path).expect("Failed to create PdfExtractLoader");
          // let loader = LoPdfLoader::from_path(path).expect("Failed to create LoPdfLoader");
      
          // `load` returns a stream of documents; collect them into a Vec.
          let docs = loader
              .load()
              .await
              .unwrap()
              .map(|d| d.unwrap())
              .collect::<Vec<_>>()
              .await;
      }
      
    • Pandoc

      use futures_util::StreamExt;
      use langchain_rust::document_loaders::{InputFormat, Loader, PandocLoader};
      
      #[tokio::main]
      async fn main() {
          let path = "./src/document_loaders/test_data/sample.docx";
      
          let loader = PandocLoader::from_path(InputFormat::Docx.to_string(), path)
              .await
              .expect("Failed to create PandocLoader");
      
          // `load` returns a stream of documents; collect them into a Vec.
          let docs = loader
              .load()
              .await
              .unwrap()
              .map(|d| d.unwrap())
              .collect::<Vec<_>>()
              .await;
      }
      
    • HTML

      use futures_util::StreamExt;
      use langchain_rust::document_loaders::{HtmlLoader, Loader};
      use url::Url;
      
      #[tokio::main]
      async fn main() {
          let path = "./src/document_loaders/test_data/example.html";
          // The URL is used as the source/base for the loaded HTML document.
          let html_loader = HtmlLoader::from_path(path, Url::parse("https://example.com/").unwrap())
              .expect("Failed to create html loader");
      
          let documents = html_loader
              .load()
              .await
              .unwrap()
              .map(|x| x.unwrap())
              .collect::<Vec<_>>()
              .await;
      }
      
    • CSV

      use futures_util::StreamExt;
      use langchain_rust::document_loaders::{CsvLoader, Loader};
      
      #[tokio::main]
      async fn main() {
          let path = "./src/document_loaders/test_data/test.csv";
          // Columns to extract from each CSV row.
          let columns = vec![
              "name".to_string(),
              "age".to_string(),
              "city".to_string(),
              "country".to_string(),
          ];
          let csv_loader = CsvLoader::from_path(path, columns).expect("Failed to create csv loader");
      
          let documents = csv_loader
              .load()
              .await
              .unwrap()
              .map(|x| x.unwrap())
              .collect::<Vec<_>>()
              .await;
      }
      
    • Git commits

      use futures_util::StreamExt;
      use langchain_rust::document_loaders::{GitCommitLoader, Loader};
      
      #[tokio::main]
      async fn main() {
          let path = "/path/to/git/repo";
          let git_commit_loader = GitCommitLoader::from_path(path).expect("Failed to create git commit loader");
      
          let documents = git_commit_loader
              .load()
              .await
              .unwrap()
              .map(|x| x.unwrap())
              .collect::<Vec<_>>()
              .await;
      }
      
    • Source code

      
      use futures_util::StreamExt;
      use langchain_rust::document_loaders::{DirLoaderOptions, Loader, SourceCodeLoader};
      
      #[tokio::main]
      async fn main() {
          // Load all `.rs` files from the given directory.
          let loader_with_dir =
              SourceCodeLoader::from_path("./src/document_loaders/test_data".to_string())
                  .with_dir_loader_options(DirLoaderOptions {
                      glob: None,
                      suffixes: Some(vec!["rs".to_string()]),
                      exclude: None,
                  });
      
          let stream = loader_with_dir.load().await.unwrap();
          let documents = stream.map(|x| x.unwrap()).collect::<Vec<_>>().await;
      }
      

Installation

This library heavily relies on serde_json for its operation.

Step 1: Add serde_json

First, ensure serde_json is added to your Rust project.

cargo add serde_json

Step 2: Add langchain-rust

Then, you can add langchain-rust to your Rust project.

Simple install

cargo add langchain-rust

With Sqlite

cargo add langchain-rust --features sqlite

Download additional sqlite_vss libraries from https://github.com/asg017/sqlite-vss

With Postgres

cargo add langchain-rust --features postgres

With SurrealDB

cargo add langchain-rust --features surrealdb

With Qdrant

cargo add langchain-rust --features qdrant

Please remember to choose the feature flags sqlite, postgres, surrealdb, or qdrant based on your specific use case.

This will add both serde_json and langchain-rust as dependencies in your Cargo.toml file. Now, when you build your project, both dependencies will be fetched and compiled, and will be available for use in your project.

Remember, serde_json is a required dependency, while sqlite, postgres, surrealdb, and qdrant are optional features that may be added according to your project's needs.
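
For reference, after these commands your Cargo.toml will contain entries roughly like the sketch below. The version numbers here are only illustrative; use whatever cargo add resolves for you, and enable the feature that matches your vector store:

    [dependencies]
    # Required JSON support used throughout langchain-rust
    serde_json = "1.0"

    # Base install; enable a vector-store backend via features if needed,
    # e.g. langchain-rust = { version = "4.6", features = ["postgres"] }
    langchain-rust = "4.6"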

Quick Start Conversational Chain

use langchain_rust::{
    chain::{Chain, LLMChainBuilder},
    fmt_message, fmt_placeholder, fmt_template,
    language_models::llm::LLM,
    llm::openai::{OpenAI, OpenAIModel},
    message_formatter,
    prompt::HumanMessagePromptTemplate,
    prompt_args,
    schemas::messages::Message,
    template_fstring,
};

#[tokio::main]
async fn main() {
    // First, we initialize the model:
    // If you'd prefer not to set an environment variable, you can pass the key in directly via `OpenAIConfig::with_api_key` when constructing the OpenAI LLM:
    // let open_ai = OpenAI::default()
    //     .with_config(
    //         OpenAIConfig::default()
    //             .with_api_key("<your_key>"),
    //     ).with_model(OpenAIModel::Gpt4oMini.to_string());
    let open_ai = OpenAI::default().with_model(OpenAIModel::Gpt4oMini.to_string());


    //Once you've installed and initialized the LLM of your choice, we can try using it! Let's ask it a simple question first.
    let resp = open_ai.invoke("What is rust").await.unwrap();
    println!("{}", resp);

    // We can also guide its response with a prompt template. Prompt templates are used to convert raw user input into a better input to the LLM.
    let prompt = message_formatter![
        fmt_message!(Message::new_system_message(
            "You are world class technical documentation writer."
        )),
        fmt_template!(HumanMessagePromptTemplate::new(template_fstring!(
            "{input}", "input"
        )))
    ];

    //We can now combine these into a simple LLM chain:

    let chain = LLMChainBuilder::new()
        .prompt(prompt)
        .llm(open_ai.clone())
        .build()
        .unwrap();

    //We can now invoke the chain and ask it a question. It should respond in the tone of a technical documentation writer!

    match chain
        .invoke(prompt_args! {
            "input" => "Who is the writer of 20,000 Leagues Under the Sea?",
        })
        .await
    {
        Ok(result) => {
            println!("Result: {:?}", result);
        }
        Err(e) => panic!("Error invoking LLMChain: {:?}", e),
    }

    //If you want the prompt to include a list of messages, you can use the `fmt_placeholder` macro

    let prompt = message_formatter![
        fmt_message!(Message::new_system_message(
            "You are world class technical documentation writer."
        )),
        fmt_placeholder!("history"),
        fmt_template!(HumanMessagePromptTemplate::new(template_fstring!(
            "{input}", "input"
        ))),
    ];

    let chain = LLMChainBuilder::new()
        .prompt(prompt)
        .llm(open_ai)
        .build()
        .unwrap();

    match chain
        .invoke(prompt_args! {
            "input" => "Who is the writer of 20,000 Leagues Under the Sea, and what is my name?",
            "history" => vec![
                Message::new_human_message("My name is: luis"),
                Message::new_ai_message("Hi luis"),
            ],
        })
        .await
    {
        Ok(result) => {
            println!("Result: {:?}", result);
        }
        Err(e) => panic!("Error invoking LLMChain: {:?}", e),
    }
}

Dependencies

~28–75MB
~1.5M SLoC