jieba-rs
🚀 Help me to become a full-time open-source developer by sponsoring me on GitHub
The Jieba Chinese Word Segmentation Implemented in Rust
Installation
Add it to your Cargo.toml:
[dependencies]
jieba-rs = "0.6"
then you are good to go. If you are using Rust 2015, you also have to add extern crate jieba_rs; to your crate root.
Example
use jieba_rs::Jieba;

fn main() {
    let jieba = Jieba::new();
    let words = jieba.cut("我们中出了一个叛徒", false);
    assert_eq!(words, vec!["我们", "中", "出", "了", "一个", "叛徒"]);
}
Enabling Additional Features
- default-dict feature enables the embedded dictionary; this feature is enabled by default
- tfidf feature enables the TF-IDF keyword extractor
- textrank feature enables the TextRank keyword extractor
[dependencies]
jieba-rs = { version = "0.6", features = ["tfidf", "textrank"] }
Run benchmark
cargo bench --all-features
Benchmark: Compare with cppjieba
jieba-rs bindings

- @node-rs/jieba: NodeJS binding
- jieba-php: PHP binding
- rjieba-py: Python binding
- cang-jie: Chinese tokenizer for tantivy
- tantivy-jieba: an adapter that bridges between tantivy and jieba-rs
- jieba-wasm: the WebAssembly binding
License
This work is released under the MIT license. A copy of the license is provided in the LICENSE file.