

Library for tokenizing text with mecab dictionaries. Not a mecab wrapper.


notmecab-rs is a very basic mecab clone, designed only to do parsing, not training.

This is meant to be used as a library by other tools, such as frequency analyzers, not directly by end users. It also only works with UTF-8 dictionaries. (Stop using encodings other than UTF-8 for infrastructural software.)

Licensed under the Apache License, Version 2.0.


Get unidic's sys.dic, matrix.bin, unk.dic, and char.bin and put them in data/. Then invoke tests from the repository root. Unidic 2.3.0 (the spoken-language or written-language variant, not kobun, etc.) is assumed; otherwise some tests will fail.


notmecab performs marginally worse than mecab, but there are many cases where mecab fails to find the lowest-cost string of tokens, so I'm pretty sure that mecab is just cutting corners somewhere performance-sensitive when searching for an ideal parse.
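The "lowest-cost string of tokens" is found with dynamic programming over a lattice of candidate tokens. As a rough, self-contained illustration of that idea (a hypothetical simplified sketch, not notmecab's actual code, and ignoring connection costs between adjacent tokens):

```rust
// A candidate token covering text[start..end] with an intrinsic cost.
struct Candidate {
    start: usize,
    end: usize,
    cost: i64,
}

// Returns the minimum total cost of segmenting an input of `len` units
// using the given candidates, or None if no full segmentation exists.
fn min_segmentation_cost(len: usize, candidates: &[Candidate]) -> Option<i64> {
    // best[i] = cheapest known cost to tokenize the first i units
    let mut best = vec![None; len + 1];
    best[0] = Some(0i64);
    for i in 0..len {
        let Some(base) = best[i] else { continue };
        for c in candidates.iter().filter(|c| c.start == i) {
            let total = base + c.cost;
            if best[c.end].map_or(true, |b| total < b) {
                best[c.end] = Some(total);
            }
        }
    }
    best[len]
}

fn main() {
    // Lattice over a 3-unit input "abc".
    let candidates = [
        Candidate { start: 0, end: 1, cost: 10 }, // "a"
        Candidate { start: 1, end: 3, cost: 5 },  // "bc"
        Candidate { start: 0, end: 3, cost: 20 }, // "abc"
    ];
    // "a" + "bc" (10 + 5 = 15) beats "abc" (20).
    println!("{:?}", min_segmentation_cost(3, &candidates)); // Some(15)
}
```

A real tokenizer additionally adds a connection cost for each adjacent pair of tokens (looked up in matrix.bin), which is where the caching features below come in.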

There are a couple of difficult-to-use caching features designed to improve performance. You can cache a matrix of connection costs between the most common connection edge types with prepare_fast_matrix_cache, which is meant for extremely large dictionaries like modern versions of unidic, or you can load the entire connection matrix into memory with prepare_full_matrix_cache, which is meant for small dictionaries like ipadic. Note that after long periods of pumping text through notmecab, prepare_full_matrix_cache is actually slower than prepare_fast_matrix_cache for modern versions of unidic, though prepare_full_matrix_cache is obviously the best option for small dictionaries.

There are no stability guarantees about the presence or behavior of prepare_fast_matrix_cache, because it's very hacky and if I find a better way to do what it's doing then I'm going to remove it.
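To illustrate what a full matrix cache buys you, here is a hypothetical sketch (these are not notmecab's actual types) of decoding an entire connection-cost matrix into a flat in-memory table, assuming mecab's matrix.bin layout of two little-endian u16 dimensions followed by row-major little-endian i16 costs:

```rust
// Hypothetical sketch: once the whole matrix is decoded into a flat
// Vec, every connection-cost lookup is a single index operation
// instead of a decode from the raw on-disk blob.
struct ConnectionMatrix {
    right_contexts: usize,
    costs: Vec<i16>, // row-major: costs[left * right_contexts + right]
}

impl ConnectionMatrix {
    // `blob` is assumed to hold the raw matrix file contents.
    fn load(blob: &[u8]) -> ConnectionMatrix {
        let lefts = u16::from_le_bytes([blob[0], blob[1]]) as usize;
        let rights = u16::from_le_bytes([blob[2], blob[3]]) as usize;
        let mut costs = Vec::with_capacity(lefts * rights);
        for chunk in blob[4..4 + lefts * rights * 2].chunks_exact(2) {
            costs.push(i16::from_le_bytes([chunk[0], chunk[1]]));
        }
        ConnectionMatrix { right_contexts: rights, costs }
    }

    // O(1) lookup once the matrix is resident in memory.
    fn cost(&self, left: usize, right: usize) -> i16 {
        self.costs[left * self.right_contexts + right]
    }
}

fn main() {
    // A tiny 2x2 matrix: dimensions (2, 2), then costs 1, 2, 3, 4.
    let blob = [2u8, 0, 2, 0, 1, 0, 2, 0, 3, 0, 4, 0];
    let matrix = ConnectionMatrix::load(&blob);
    println!("{}", matrix.cost(1, 0)); // prints 3
}
```

The trade-off is memory: for a dictionary with tens of thousands of contexts, the full table can be large, which is why a partial cache of only the most common edge types can win for huge dictionaries.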

Example (from tests)

```rust
// you need to acquire a mecab dictionary and place these files here manually
let sysdic = Blob::open("data/sys.dic").unwrap();
let unkdic = Blob::open("data/unk.dic").unwrap();
let matrix = Blob::open("data/matrix.bin").unwrap();
let unkdef = Blob::open("data/char.bin").unwrap();

let dict = Dict::load(sysdic, unkdic, matrix, unkdef).unwrap();

let result = parse(&dict, "これを持っていけ").unwrap();

for token in &result.0
{
    println!("{}", token.feature);
}

let split_up_string = tokenstream_to_string(&result.0, "|");
println!("{}", split_up_string);

// this assertion might fail if you're not testing with unidic
// (i.e. the correct parse might be different)
assert_eq!(split_up_string, "これ|を|持っ|て|いけ");
```

You can also call parse_to_lexertoken, which does less string allocation, but the token's feature data is not returned as a string.
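The reason an API like that allocates less can be sketched with a self-contained example (hypothetical types, not notmecab's actual ones): a token can carry just a byte range into the input, so reading its surface form is a borrow rather than a fresh String allocation.

```rust
// Hypothetical sketch: a lightweight token that stores byte offsets
// into the original input instead of an owned String.
struct LexerToken {
    start: usize, // byte offset into the input
    end: usize,
}

// Borrow the token's surface form without allocating.
fn surface<'a>(text: &'a str, token: &LexerToken) -> &'a str {
    &text[token.start..token.end]
}

fn main() {
    let text = "これを持っていけ";
    // "これ" is the first 6 bytes of the UTF-8 input (2 chars x 3 bytes).
    let token = LexerToken { start: 0, end: 6 };
    println!("{}", surface(text, &token)); // prints "これ"
}
```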


  • This software is unusably slow if optimizations are disabled.
  • Cost rewriting is not performed when user dictionaries are loaded.
  • There are some cases where multiple parses tie for the lowest cost. It's not defined which parse gets chosen in these cases.
  • There are some cases where mecab fails to find an ideal parse but notmecab-rs does. Notmecab-rs should never produce a parse with a higher total cost than the parse mecab gives; if it does, that indicates an underlying bug, so please report it.

