
FastEmbed-rs 🦀

Rust implementation of @Qdrant/fastembed

🍕 Features

The default model is Flag Embedding, which is at the top of the MTEB leaderboard.

🔍 Not looking for Rust?

🤖 Models

Alternatively, raw .onnx files can be loaded through the UserDefinedEmbeddingModel struct (for "bring your own" text embedding models).
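
For illustration, here is a minimal, hedged sketch of the "bring your own" path. It assumes the fastembed 3.x API, where UserDefinedEmbeddingModel bundles the raw .onnx bytes with a TokenizerFiles struct of tokenizer JSON files; the file paths below are placeholders, and exact field names may differ between versions:

use std::fs;
use fastembed::{
    InitOptionsUserDefined, TextEmbedding, TokenizerFiles, UserDefinedEmbeddingModel,
};

// Read the raw bytes of the ONNX model and the tokenizer JSON files.
// The paths are placeholders for wherever your exported model lives.
let onnx_file = fs::read("model/model.onnx")?;
let tokenizer_files = TokenizerFiles {
    tokenizer_file: fs::read("model/tokenizer.json")?,
    config_file: fs::read("model/config.json")?,
    special_tokens_map_file: fs::read("model/special_tokens_map.json")?,
    tokenizer_config_file: fs::read("model/tokenizer_config.json")?,
};

let user_defined_model = UserDefinedEmbeddingModel { onnx_file, tokenizer_files };

// Options for user-defined models (the analogue of InitOptions for named models)
let init_options = InitOptionsUserDefined::default();

let custom_model = TextEmbedding::try_new_from_user_defined(user_defined_model, init_options)?;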

🚀 Installation

Run the following command in your project directory:

cargo add fastembed

Or add the following line to your Cargo.toml:

[dependencies]
fastembed = "3"

📖 Usage

use fastembed::{TextEmbedding, InitOptions, EmbeddingModel};

// With default InitOptions
let model = TextEmbedding::try_new(Default::default())?;

// Alternatively, a "bring your own" model can be loaded here.
// Users will need to manually pass in the bytes of the relevant model files:
// the .onnx file itself plus the JSON files that make up the TokenizerFiles struct
// (see the Models section above for how `user_defined_model` and `init_options` are built).
let custom_model = TextEmbedding::try_new_from_user_defined(user_defined_model, init_options)?;

// With custom InitOptions
let model = TextEmbedding::try_new(InitOptions {
    model_name: EmbeddingModel::AllMiniLML6V2,
    show_download_progress: true,
    ..Default::default()
})?;

let documents = vec![
    "passage: Hello, World!",
    "query: Hello, World!",
    "passage: This is an example passage.",
    // You can leave out the prefix, but it's recommended
    "fastembed-rs is licensed under Apache 2.0",
];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(documents, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 4
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 384
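
The second argument to embed is an optional batch size: None uses the default of 256, while Some(n) should override it. For example (re-declaring the input, since the earlier call took ownership of documents):

// Embed with a smaller batch size, e.g. for memory-constrained environments
let documents = vec!["query: What is the capital of France?"];
let embeddings = model.embed(documents, Some(32))?;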

🚒 Under the hood

Why fast?

It's important we justify the "fast" in FastEmbed. FastEmbed is fast because:

  1. Quantized model weights
  2. ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes (see the sketch below)
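
As an illustrative sketch only: assuming fastembed 3.x exposes ONNX Runtime execution providers through InitOptions (via the ort crate, with the matching feature such as CUDA enabled at build time), selecting a GPU runtime might look like the following; the exact types and re-exports may differ between versions:

use fastembed::{TextEmbedding, InitOptions, EmbeddingModel};
use ort::CUDAExecutionProvider;

// Ask ONNX Runtime to prefer the CUDA execution provider;
// it falls back to CPU if the provider cannot be registered
let model = TextEmbedding::try_new(InitOptions {
    model_name: EmbeddingModel::AllMiniLML6V2,
    execution_providers: vec![CUDAExecutionProvider::default().build()],
    ..Default::default()
})?;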

Why light?

  1. No hidden dependencies via Huggingface Transformers

Why accurate?

  1. Better than OpenAI Ada-002
  2. Top of the Embedding leaderboards e.g. MTEB

📄 LICENSE

Apache 2.0 © 2024
