#japanese #tokenizer #analyzer #morphological

no-std vaporetto

Vaporetto: a pointwise prediction based tokenizer

16 releases

0.6.3 Apr 1, 2023
0.6.2 Mar 7, 2023
0.6.1 Oct 27, 2022
0.5.1 Jun 20, 2022
0.2.0 Nov 1, 2021

#174 in Text processing


4,886 downloads per month
Used in 2 crates

MIT/Apache

270KB
6.5K SLoC

Vaporetto

Vaporetto is a fast and lightweight pointwise prediction based tokenizer.

Examples

use std::fs::File;

use vaporetto::{Model, Predictor, Sentence};

// Load a trained model and create a predictor.
// The second argument of Predictor::new enables tag prediction.
let f = File::open("../resources/model.bin")?;
let model = Model::read(f)?;
let predictor = Predictor::new(model, true)?;

let mut buf = String::new();

let mut s = Sentence::default();

// Tokenize the sentence, fill word-level tags, and write the result to the buffer.
s.update_raw("まぁ社長は火星猫だ")?;
predictor.predict(&mut s);
s.fill_tags();
s.write_tokenized_text(&mut buf);
assert_eq!(
    "まぁ/名詞/マー 社長/名詞/シャチョー は/助詞/ワ 火星/名詞/カセー 猫/名詞/ネコ だ/助動詞/ダ",
    buf,
);

// The buffer is overwritten, so it can be reused for the next sentence.
s.update_raw("まぁ良いだろう")?;
predictor.predict(&mut s);
s.fill_tags();
s.write_tokenized_text(&mut buf);
assert_eq!(
    "まぁ/副詞/マー 良い/形容詞/ヨイ だろう/助動詞/ダロー",
    buf,
);
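The example above uses the ? operator at the top level, doc-test style. One way to run it as a standalone program is to wrap the same calls in a main function that returns a Result; a minimal sketch, assuming the crate's error type implements std::error::Error:

use std::error::Error;
use std::fs::File;

use vaporetto::{Model, Predictor, Sentence};

fn main() -> Result<(), Box<dyn Error>> {
    // Same calls as in the example above; adjust the model path for your project.
    let model = Model::read(File::open("../resources/model.bin")?)?;
    let predictor = Predictor::new(model, true)?;

    let mut s = Sentence::default();
    s.update_raw("まぁ社長は火星猫だ")?;
    predictor.predict(&mut s);
    s.fill_tags();

    let mut buf = String::new();
    s.write_tokenized_text(&mut buf);
    println!("{}", buf);
    Ok(())
}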

Feature flags

The following features are disabled by default:

  • kytea - Enables the reader for models generated by KyTea.
  • train - Enables the trainer.
  • portable-simd - Uses the portable SIMD API instead of our SIMD-conscious data layout. (Nightly Rust is required.)

The following features are enabled by default (see the Cargo.toml sketch after this list):

  • std - Uses the standard library. If disabled, it uses the core library instead.
  • cache-type-score - Enables caching type scores for faster processing. If disabled, type scores are calculated in a straightforward manner.
  • fix-weight-length - Uses fixed-size arrays for storing scores to facilitate optimization. If disabled, vectors are used instead.
  • tag-prediction - Enables tag prediction.
  • charwise-pma - Uses the Charwise Daachorse instead of the standard version for faster prediction, although it can make loading a model file slower.
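As a reference for the flags above, a downstream Cargo.toml entry might look like the following sketch (the version number is illustrative; pick only the features your target actually needs):

[dependencies]
# Keep the default features and additionally enable the KyTea model reader.
vaporetto = { version = "0.6", features = ["kytea"] }

# For a no_std build, opt out of the defaults and re-enable only what is needed, e.g.:
# vaporetto = { version = "0.6", default-features = false, features = ["cache-type-score"] }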

Notes for distributed models

The distributed models are compressed in the zstd format. If you want to load these compressed models, you must decompress them outside of the API.

// Requires the zstd crate or the ruzstd crate
let reader = zstd::Decoder::new(File::open("path/to/model.bin.zst")?)?;
let model = Model::read(reader)?;

You can also decompress the file on the command line with the unzstd command, which is provided by the zstd package on most Linux distributions.
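For example (the path is a placeholder):

unzstd path/to/model.bin.zst    # writes path/to/model.bin and keeps the .zst file

The decompressed file can then be opened with File::open and passed to Model::read as in the first example.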

License

Licensed under either of

  • Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
  • MIT license (http://opensource.org/licenses/MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Dependencies

~2.5MB
~43K SLoC