
FastEmbed-rs 🦀

Rust implementation of @qdrant/fastembed


🍕 Features

The default model is Flag Embedding, which is top of the MTEB leaderboard.

🔍 Not looking for Rust?

🤖 Models

Text Embedding


Sparse Text Embedding

Image Embedding

Reranking

🚀 Installation

Run the following command in your project directory:

```sh
cargo add fastembed
```

Or add the following line to your Cargo.toml:

```toml
[dependencies]
fastembed = "4"
```

📖 Usage

Text Embeddings

```rust
use fastembed::{TextEmbedding, InitOptions, EmbeddingModel};

// With default InitOptions
let model = TextEmbedding::try_new(Default::default())?;

// With custom InitOptions
let model = TextEmbedding::try_new(
    InitOptions::new(EmbeddingModel::AllMiniLML6V2).with_show_download_progress(true),
)?;

let documents = vec![
    "passage: Hello, World!",
    "query: Hello, World!",
    "passage: This is an example passage.",
    // You can leave out the prefix, but it's recommended
    "fastembed-rs is licensed under Apache 2.0",
];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(documents, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 4
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 384
```
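Once generated, embeddings are typically compared with cosine similarity, e.g. for semantic search. A minimal, self-contained sketch of that step in plain std Rust, with toy 3-dimensional vectors standing in for real 384-dimensional model output:

```rust
// Cosine similarity between two embedding vectors, a common way to compare
// the Vec<f32> vectors returned by `embed`. The vectors below are
// illustrative toy data, not real model output.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let query = vec![0.1_f32, 0.9, 0.0];
    let passage = vec![0.2_f32, 0.8, 0.1];
    let unrelated = vec![0.9_f32, 0.0, -0.4];
    // A higher value means the vectors point in more similar directions.
    println!("query vs passage:   {:.3}", cosine_similarity(&query, &passage));
    println!("query vs unrelated: {:.3}", cosine_similarity(&query, &unrelated));
}
```

Scores are in [-1, 1]; identical directions give 1.0.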

Image Embeddings

```rust
use fastembed::{ImageEmbedding, ImageInitOptions, ImageEmbeddingModel};

// With default InitOptions
let model = ImageEmbedding::try_new(Default::default())?;

// With custom InitOptions
let model = ImageEmbedding::try_new(
    ImageInitOptions::new(ImageEmbeddingModel::ClipVitB32).with_show_download_progress(true),
)?;

let images = vec!["assets/image_0.png", "assets/image_1.png"];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(images, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 2
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 512
```

Candidates Reranking

```rust
use fastembed::{TextRerank, RerankInitOptions, RerankerModel};

let model = TextRerank::try_new(
    RerankInitOptions::new(RerankerModel::BGERerankerBase).with_show_download_progress(true),
)?;

let documents = vec![
    "hi",
    "The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear, is a bear species endemic to China.",
    "panda is animal",
    "i dont know",
    "kind of mammal",
];

// Rerank with the default batch size
let results = model.rerank("what is panda?", documents, true, None)?;
println!("Rerank result: {:?}", results);
```
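The reranker returns a relevance score per document; downstream code usually sorts by score and keeps the top candidates. A self-contained sketch of that step using plain (index, score) tuples, with made-up scores rather than real model output:

```rust
// Pick the top-k documents from (document index, relevance score) pairs,
// as one would after reranking. The scores are illustrative only.
fn top_k(mut scored: Vec<(usize, f32)>, k: usize) -> Vec<(usize, f32)> {
    // Sort by score, highest first; `total_cmp` avoids panics on NaN.
    scored.sort_by(|a, b| b.1.total_cmp(&a.1));
    scored.truncate(k);
    scored
}

fn main() {
    let scored = vec![(0, -4.2_f32), (1, 5.9), (2, 3.1), (3, -5.0), (4, 1.4)];
    let best = top_k(scored, 2);
    println!("top documents: {:?}", best); // highest-scoring indices first
}
```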

Alternatively, local model files can be used for inference via the `try_new_from_user_defined(...)` methods of the respective structs.

✊ Support

To support the library, please consider donating to our primary upstream dependency, ort - The Rust wrapper for the ONNX runtime.

⚙️ Under the hood

It's important that we justify the "fast" in FastEmbed. FastEmbed is fast because of:

  1. Quantized model weights.
  2. ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes.
  3. No hidden dependencies on Hugging Face Transformers.
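As a toy illustration of point 1, symmetric int8 quantization stores each f32 weight as an i8 plus one shared f32 scale, roughly a 4x size reduction. FastEmbed's models ship already quantized during export, so this is only a sketch of the general idea, not the library's actual code path:

```rust
// Toy symmetric int8 quantization: weights become i8 values plus one f32
// scale factor; dequantization recovers approximate (not exact) values.
fn quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0.0_f32, |m, w| m.max(w.abs()));
    let scale = max_abs / 127.0;
    let q = weights.iter().map(|w| (w / scale).round() as i8).collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let weights = [0.42_f32, -1.27, 0.003, 0.9];
    let (q, scale) = quantize(&weights);
    println!("quantized: {:?}, scale: {}", q, scale);
    println!("restored:  {:?}", dequantize(&q, scale)); // close to the originals
}
```

Smaller weights mean less memory traffic per inference, which is a large part of the speedup on CPU.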

📄 LICENSE

Apache 2.0 © 2024
