#japanese #tokenizer #morphological #analyzer

vaporetto (no-std)

Vaporetto: a pointwise-prediction-based tokenizer

17 releases

0.6.4 Nov 5, 2024
0.6.3 Apr 1, 2023
0.6.2 Mar 7, 2023
0.6.1 Oct 27, 2022
0.2.0 Nov 1, 2021

#162 in Text processing

Download history: between roughly 184 and 765 downloads per week from 2024-08-21 through 2024-12-04.

2,114 downloads per month
Used in 2 crates

MIT/Apache

275KB
6.5K SLoC

Vaporetto

Vaporetto is a fast and lightweight pointwise-prediction-based tokenizer.

Examples

use std::fs::File;

use vaporetto::{Model, Predictor, Sentence};

// Load a trained model and build a predictor.
// The second argument to Predictor::new enables tag prediction.
let f = File::open("../resources/model.bin")?;
let model = Model::read(f)?;
let predictor = Predictor::new(model, true)?;

let mut buf = String::new();

let mut s = Sentence::default();

// Set the raw text, predict word boundaries, fill tags, and write the tokenized result.
s.update_raw("まぁ社長は火星猫だ")?;
predictor.predict(&mut s);
s.fill_tags();
s.write_tokenized_text(&mut buf);
assert_eq!(
    "まぁ/名詞/マー 社長/名詞/シャチョー は/助詞/ワ 火星/名詞/カセー 猫/名詞/ネコ だ/助動詞/ダ",
    buf,
);

s.update_raw("まぁ良いだろう")?;
predictor.predict(&mut s);
s.fill_tags();
s.write_tokenized_text(&mut buf);
assert_eq!(
    "まぁ/副詞/マー 良い/形容詞/ヨイ だろう/助動詞/ダロー",
    buf,
);
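Besides writing a tokenized string, the segmentation result can also be inspected token by token. The following is a minimal sketch assuming Sentence::iter_tokens and Token::surface accessors; consult the API documentation for the exact names available in your version:

// Sketch: iterate over the predicted tokens instead of writing a single string
// (Sentence::iter_tokens() and Token::surface() are assumptions; adapt if your version differs).
s.update_raw("まぁ社長は火星猫だ")?;
predictor.predict(&mut s);
s.fill_tags();
for token in s.iter_tokens() {
    // Each token exposes its surface form; tags are filled when tag prediction is enabled.
    println!("{}", token.surface());
}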

Feature flags

The following features are disabled by default:

  • kytea - Enables the reader for models generated by KyTea.
  • train - Enables the trainer.
  • portable-simd - Uses the portable SIMD API instead of our SIMD-conscious data layout. (Nightly Rust is required.)

The following features are enabled by default:

  • std - Uses the standard library. If disabled, it uses the core library instead.
  • cache-type-score - Enables caching type scores for faster processing. If disabled, type scores are calculated in a straightforward manner.
  • fix-weight-length - Uses fixed-size arrays for storing scores to facilitate optimization. If disabled, vectors are used instead.
  • tag-prediction - Enables tag prediction.
  • charwise-pma - Uses the charwise Daachorse instead of the standard version for faster prediction, although it can make loading a model file slower.
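These flags are selected in the dependent crate's Cargo.toml. A minimal sketch, with an illustrative version number; the feature names are those listed above:

# Cargo.toml of a crate using Vaporetto (the version below is illustrative)
[dependencies]
# Default features: std, cache-type-score, fix-weight-length, tag-prediction, charwise-pma
vaporetto = "0.6"

# Alternatively, opt out of the defaults and pick features explicitly,
# e.g. to add the KyTea model reader:
# vaporetto = { version = "0.6", default-features = false, features = ["std", "kytea"] }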

Notes for distributed models

The distributed models are compressed in the zstd format. If you want to load these compressed models, you must decompress them outside of the API.

// Requires zstd crate or ruzstd crate
let reader = zstd::Decoder::new(File::open("path/to/model.bin.zst")?)?;
let model = Model::read(reader)?;
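If streaming decompression is not needed, the whole model can also be decompressed into memory first. A sketch assuming the zstd crate's decode_all helper and that Model::read accepts any std::io::Read:

// Sketch: decompress the compressed model into a byte vector, then read it from the slice.
// (zstd::decode_all and reading from &[u8] are assumptions; adapt to your setup.)
let compressed = File::open("path/to/model.bin.zst")?;
let bytes = zstd::decode_all(compressed)?;
let model = Model::read(&bytes[..])?;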

You can also decompress the file using the unzstd command, which is bundled with modern Linux distributions.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Dependencies

~1.5–2.1MB
~34K SLoC