#tokenizer #normalize #segmenter #tokenize #language

charabia

A simple library to detect the language, tokenize the text, and normalize the tokens

16 releases

0.8.8 Mar 19, 2024
0.8.6 Jan 24, 2024
0.8.5 Oct 26, 2023
0.8.2 Jul 19, 2023
0.5.1 Jul 5, 2022

#49 in Text processing

17,627 downloads per month
Used in 6 crates (3 directly)

MIT license

2MB
6.5K SLoC

  • Rust: 4K SLoC (0.1% comments)
  • F*: 2.5K SLoC (0.4% comments)
  • Shell: 7 SLoC

Charabia

Library used by Meilisearch to tokenize queries and documents

Role

The tokenizer’s role is to take a sentence or phrase and split it into smaller units of language, called tokens. It finds and retrieves all the words in a string based on the language’s particularities.

Details

Charabia provides a simple API to segment, normalize, or tokenize (segment + normalize) text in a specific language by detecting its Script/Language and choosing the specialized pipeline for it.
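
For instance, each produced token carries the script that was detected for it. A minimal sketch, assuming the `Script` enum and the `script` token field re-exported at the crate root:

use charabia::{Script, Tokenize};

// The detected script is attached to each token and determines
// which specialized pipeline processed it.
let token = "hello".tokenize().next().unwrap();
assert_eq!(token.script, Script::Latin);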

Supported languages

Charabia is multilingual, featuring optimized support for:

| Script / Language | Specialized segmentation | Specialized normalization | Segmentation performance level | Tokenization performance level |
|---|---|---|---|---|
| Latin | CamelCase segmentation | compatibility decomposition + lowercase + nonspacing-marks removal + Ð vs Đ spoofing normalization | 🟩 ~23 MiB/sec | 🟨 ~9 MiB/sec |
| Greek | | compatibility decomposition + lowercase + final sigma normalization | 🟩 ~27 MiB/sec | 🟨 ~8 MiB/sec |
| Cyrillic - Georgian | | compatibility decomposition + lowercase | 🟩 ~27 MiB/sec | 🟨 ~9 MiB/sec |
| Chinese CMN 🇨🇳 | jieba | compatibility decomposition + pinyin conversion | 🟨 ~10 MiB/sec | 🟧 ~5 MiB/sec |
| Hebrew 🇮🇱 | | compatibility decomposition + nonspacing-marks removal | 🟩 ~33 MiB/sec | 🟨 ~11 MiB/sec |
| Arabic | ال segmentation | compatibility decomposition + nonspacing-marks removal + Tatweel, Alef, Yeh, and Taa Marbuta normalization | 🟩 ~36 MiB/sec | 🟨 ~11 MiB/sec |
| Japanese 🇯🇵 | lindera IPA-dict | compatibility decomposition | 🟧 ~3 MiB/sec | 🟧 ~3 MiB/sec |
| Korean 🇰🇷 | lindera KO-dict | compatibility decomposition | 🟥 ~2 MiB/sec | 🟥 ~2 MiB/sec |
| Thai 🇹🇭 | dictionary based | compatibility decomposition + nonspacing-marks removal | 🟩 ~22 MiB/sec | 🟨 ~11 MiB/sec |
| Khmer 🇰🇭 | dictionary based | compatibility decomposition | 🟧 ~7 MiB/sec | 🟧 ~5 MiB/sec |
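
The same `tokenize()` call routes text through these pipelines transparently. A minimal sketch, assuming the Chinese language feature is enabled at build time (feature names vary across versions):

use charabia::Tokenize;

// Chinese text is segmented by jieba and normalized by the
// Chinese pipeline; no language needs to be specified by hand.
for token in "人人生而自由".tokenize().take(3) {
    println!("{:?}", token.lemma());
}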

We aim to provide global language support, and your feedback helps us move closer to that goal. If you notice inconsistencies in your search results or the way your documents are processed, please open an issue on our GitHub repository.

If you have a particular need that Charabia does not support, please share it in the product repository by creating a dedicated discussion.

About Performance level

Performance levels are based on the throughput (MiB/sec) of the tokenizer (computed on a Scaleway Elastic Metal server EM-A410X-SSD - CPU: Intel Xeon E5 1650 - RAM: 64 GB) using jemalloc:

  • 0️⃣⬛️: 0 -> 1 MiB/sec
  • 1️⃣🟥: 1 -> 3 MiB/sec
  • 2️⃣🟧: 3 -> 8 MiB/sec
  • 3️⃣🟨: 8 -> 20 MiB/sec
  • 4️⃣🟩: 20 -> 50 MiB/sec
  • 5️⃣🟪: 50 MiB/sec or more
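
As a rough illustration of how a throughput figure like these can be reproduced, a minimal sketch (the sample text, repeat count, and timing approach are arbitrary choices, not the actual benchmark setup):

use std::time::Instant;
use charabia::Tokenize;

// Tokenize a few MiB of Latin text and report MiB/sec.
let text = "The quick brown fox jumps over the lazy dog. ".repeat(100_000);
let start = Instant::now();
let token_count = text.as_str().tokenize().count();
let elapsed = start.elapsed().as_secs_f64();
let mib = text.len() as f64 / (1024.0 * 1024.0);
println!("{token_count} tokens, {:.1} MiB/sec", mib / elapsed);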

Examples

Tokenization

use charabia::Tokenize;

let orig = "Thé quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// tokenize the text.
let mut tokens = orig.tokenize();

let token = tokens.next().unwrap();
// the lemma of the token is normalized: `Thé` became `the`.
assert_eq!(token.lemma(), "the");
// the token is classified as a word
assert!(token.is_word());

let token = tokens.next().unwrap();
assert_eq!(token.lemma(), " ");
// the token is classified as a separator
assert!(token.is_separator());

Segmentation

use charabia::Segment;

let orig = "The quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// segment the text.
let mut segments = orig.segment_str();

assert_eq!(segments.next(), Some("The"));
assert_eq!(segments.next(), Some(" "));
assert_eq!(segments.next(), Some("quick"));

Dependencies

~13–19MB
~364K SLoC