#tokenize #split #segmenter #word

segtok

Sentence segmentation and word tokenization tools

6 releases

0.1.5 Feb 18, 2025
0.1.4 Feb 8, 2025
0.1.3 Jan 21, 2025

#1353 in Text processing

127 downloads per month
Used in yake-rust

MIT license

63KB
1.5K SLoC

segtok

A rule-based sentence segmenter (splitter) and word tokenizer based on orthographic features. Ported from the Python segtok package (no longer maintained), with its contractions bug fixed.

use segtok::{segmenter::*, tokenizer::*};

fn main() {
    // Example document from the crate's test suite.
    let input = include_str!("../tests/test_google.txt");

    // Split the text into sentences, then tokenize each sentence,
    // separating contractions into their parts ("don't" -> "do" + "n't").
    let sentences: Vec<Vec<_>> = split_multi(input, SegmentConfig::default())
        .into_iter()
        .map(|span| split_contractions(web_tokenizer(&span)).collect())
        .collect();
}
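
The result is a Vec of sentences, each a Vec of tokens. A minimal sketch of inspecting the output, assuming the token type implements Debug:

for (i, sentence) in sentences.iter().enumerate() {
    // Print each sentence's tokens, e.g. `sentence 0: ["Hello", ",", "world"]`.
    println!("sentence {}: {:?}", i, sentence);
}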

Dependencies

~2.9–4MB
~74K SLoC