#nlp #extract #algorithm #language #document #tf-idf #extracting

keyword_extraction

Collection of algorithms for keyword extraction from text

13 stable releases

1.5.0 Oct 12, 2024
1.4.3 Apr 28, 2024
1.3.1 Mar 24, 2024
1.3.0 Dec 12, 2023
1.1.2 May 27, 2023

#111 in Algorithms


79 downloads per month

LGPL-3.0-or-later

125KB
3K SLoC

Rust Keyword Extraction

Introduction

This is a simple NLP library providing a collection of unsupervised keyword extraction algorithms:

  • Tokenizer for tokenizing text;
  • TF-IDF for calculating the importance of a word in one or more documents;
  • Co-occurrence for calculating relationships between words within a specific window size;
  • RAKE for extracting key phrases from a document;
  • TextRank for extracting keywords and key phrases from a document;
  • YAKE for extracting keywords with an n-gram size (defaults to 3) from a document.

Algorithms

The full list of algorithms in this library:

  • Helper algorithms:
    • Tokenizer
    • Co-occurrence
  • Keyword extraction algorithms:
    • TF-IDF
    • RAKE
    • TextRank
    • YAKE

Usage

Add the library to your Cargo.toml:

[dependencies]
keyword_extraction = "1.5.0"

Or use cargo add:

cargo add keyword_extraction

Features

Individual algorithms and helpers can be enabled or disabled through Cargo features:

  • "tf_idf": TF-IDF algorithm;
  • "rake": RAKE algorithm;
  • "text_rank": TextRank algorithm;
  • "yake": YAKE algorithm;
  • "all": algorimths and helpers;
  • "parallel": parallelization of the algorithms with Rayon;
  • "co_occurrence": Co-occurrence algorithm;

The default features are ["tf_idf", "rake", "text_rank"], i.e. all algorithms apart from "co_occurrence" and "yake" are enabled out of the box; other combinations can be selected explicitly, as shown below.

NOTE: "parallel" feature is only recommended for large documents, it exchanges memory for computation resourses.

Examples

For the stop words, you can use the stop-words crate:

[dependencies]
stop-words = "0.8.0"

For example, for English:

use stop_words::{get, LANGUAGE};

fn main() {
    let stop_words = get(LANGUAGE::English);
    let punctuation: Vec<String> = [
        ".", ",", ":", ";", "!", "?", "(", ")", "[", "]", "{", "}", "\"", "'",
    ].iter().map(|s| s.to_string()).collect();
    // ...
}
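
Since the examples below only pass a reference to a Vec<String> of stop words, a hand-written list works just as well; the sketch below is purely illustrative and the chosen words are arbitrary:

fn main() {
    // A hand-rolled stop-word list; a reference to it can be passed to the
    // algorithms below in place of the stop-words crate's output.
    let stop_words: Vec<String> = ["a", "an", "the", "is", "this", "another", "third"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    // ...
}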

TF-IDF

Create a TfIdfParams enum, which can be one of the following variants:

  1. Unprocessed Documents: TfIdfParams::UnprocessedDocuments;
  2. Processed Documents: TfIdfParams::ProcessedDocuments;
  3. Single Unprocessed Document/Text block: TfIdfParams::TextBlock;

use keyword_extraction::tf_idf::{TfIdf, TfIdfParams};

fn main() {
    // ... stop_words & punctuation
    let documents: Vec<String> = vec![
        "This is a test document.".to_string(),
        "This is another test document.".to_string(),
        "This is a third test document.".to_string(),
    ];

    let params = TfIdfParams::UnprocessedDocuments(&documents, &stop_words, Some(&punctuation));

    let tf_idf = TfIdf::new(params);
    let ranked_keywords: Vec<String> = tf_idf.get_ranked_words(10);
    let ranked_keywords_scores: Vec<(String, f32)> = tf_idf.get_ranked_word_scores(10);

    // ...
}
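
The getters return plain vectors ranked by score, so the results can be consumed directly; for instance, continuing inside main from the example above, the scored keywords could be printed like this (a small illustrative follow-up):

    // continuing inside main() from the TF-IDF example above
    for (word, score) in &ranked_keywords_scores {
        println!("{word}: {score:.3}");
    }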

RAKE

Create a RakeParams enum, which can be one of the following variants:

  1. With defaults: RakeParams::WithDefaults;
  2. With defaults and phrase length (phrase window size limit): RakeParams::WithDefaultsAndPhraseLength;
  3. All: RakeParams::All;

use keyword_extraction::rake::{Rake, RakeParams};

fn main() {
    // ... stop_words
    let text = r#"
        This is a test document.
        This is another test document.
        This is a third test document.
    "#;

    let rake = Rake::new(RakeParams::WithDefaults(text, &stop_words));
    let ranked_keywords: Vec<String> = rake.get_ranked_words(10);
    let ranked_keywords_scores: Vec<(String, f32)> = rake.get_ranked_word_scores(10);

    // ...
}

TextRank

Create a TextRankParams enum, which can be one of the following variants:

  1. With defaults: TextRankParams::WithDefaults;
  2. With defaults and phrase length (phrase window size limit): TextRankParams::WithDefaultsAndPhraseLength;
  3. All: TextRankParams::All;

use keyword_extraction::text_rank::{TextRank, TextRankParams};

fn main() {
    // ... stop_words
    let text = r#"
        This is a test document.
        This is another test document.
        This is a third test document.
    "#;

    let text_rank = TextRank::new(TextRankParams::WithDefaults(text, &stop_words));
    let ranked_keywords: Vec<String> = text_rank.get_ranked_words(10);
    let ranked_keywords_scores: Vec<(String, f32)> = text_rank.get_ranked_word_scores(10);
}

YAKE

Create a YakeParams enum, which can be one of the following variants:

  1. With defaults: YakeParams::WithDefaults;
  2. All: YakeParams::All;

use keyword_extraction::yake::{Yake, YakeParams};

fn main() {
    // ... stop_words
    let text = r#"
        This is a test document.
        This is another test document.
        This is a third test document.
    "#;

    let yake = Yake::new(YakeParams::WithDefaults(text, &stop_words));
    let ranked_keywords: Vec<String> = yake.get_ranked_keywords(10);
    let ranked_keywords_scores: Vec<(String, f32)> = yake.get_ranked_keyword_scores(10);
    // ...
}

Contributing

I would love your input! I want to make contributing to this project as easy and transparent as possible; please read the CONTRIBUTING.md file for details.

License

This project is licensed under the GNU Lesser General Public License v3.0. See the COPYING and COPYING.LESSER files for details.

Dependencies

~2.4–3.5MB
~63K SLoC