
🧲 gte-rs: general text embedding and re-ranking in Rust

💬 Introduction

This crate provides simple pipelines that can be used out of the box to perform text embedding and re-ranking with ONNX models.

They are built with 🧩 orp (which relies on the 🦀 ort runtime), and use 🤗 tokenizers for token encoding.

🎓 Examples

[dependencies]
"gte-rs" = "0.9.0"

Embedding:

// Tokenizer/pipeline parameters and the embedding pipeline
let params = Parameters::default();
let pipeline = TextEmbeddingPipeline::new("models/gte-modernbert-base/tokenizer.json", &params)?;
// Load the ONNX model with default runtime settings
let model = Model::new("models/gte-modernbert-base/model.onnx", RuntimeParameters::default())?;

// Texts to embed
let inputs = TextInput::from_str(&[
    "text content",
    "some more content",
    //...
]);

// Run inference to get one embedding per input text
let embeddings = model.inference(inputs, &pipeline, &params)?;
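The exact return type of model.inference depends on the crate; assuming each embedding can be viewed as a &[f32] slice (an assumption, not confirmed by this README), a minimal cosine-similarity helper is enough to compare two of them:

// Hypothetical helper: cosine similarity between two embedding vectors,
// assuming each embedding is available as a `&[f32]` slice.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}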

Re-ranking:

// Tokenizer/pipeline parameters and the re-ranking pipeline
let params = Parameters::default();
let pipeline = RerankingPipeline::new("models/gte-reranker-modernbert-base/tokenizer.json", &params)?;
// Load the ONNX re-ranker model with default runtime settings
let model = Model::new("models/gte-reranker-modernbert-base/model.onnx", RuntimeParameters::default())?;

// (candidate, query) pairs to score
let inputs = TextInput::from_str(&[
    ("one candidate", "query"),
    ("another candidate", "query"),
    //...
]);

// Run inference to get one similarity score per pair
let similarities = model.inference(inputs, &pipeline, &params)?;
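Assuming the output can be read as one f32 score per (candidate, query) pair (again an assumption; check the crate's actual output type), ranking the candidates is just a sort by score, as in this sketch with placeholder values:

// Hypothetical post-processing: pair each candidate with its score and
// sort in descending order. `candidates` and `scores` are assumed to be
// parallel collections of the same length.
let candidates = ["one candidate", "another candidate"];
let scores = [0.42_f32, 0.87_f32]; // placeholder values for illustration
let mut ranked: Vec<(&str, f32)> = candidates.iter().copied().zip(scores).collect();
ranked.sort_by(|a, b| b.1.total_cmp(&a.1));
// `ranked[0]` is now the best-matching candidate for the query.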

Please refer to the source code in src/examples for complete examples.

🧬 Models

Alibaba's gte-modernbert

For English, the gte-modernbert-base model outperforms larger models on retrieval with only 149M parameters, and runs efficiently on both GPU and CPU. The gte-reranker-modernbert-base variant performs re-ranking with similar characteristics. This post provides interesting insights about both models.

Other

This crate should be usable out of the box with other models, or easily adapted to them. Please report your own tests or requirements!
