#character-encoding #character-set #text-encoding #charset #detector #conversion

bin+lib charset-normalizer-rs

Truly universal encoding detector in pure Rust - port of Python version

6 stable releases

1.0.6 Sep 28, 2023
1.0.5 Sep 22, 2023

#305 in Internationalization (i18n)

192 downloads per month

Custom license and maybe LGPL-3.0

180KB
4K SLoC

Charset Normalizer

charset-normalizer-rs on docs.rs · charset-normalizer-rs on crates.io

A library that helps you read text from an unknown character encoding.
Motivated by the original Python version of charset-normalizer, I'm trying to resolve the issue by taking a new approach. All IANA character set names for which the Rust encoding library provides codecs are supported.

This project is a port of the original Python version of Charset Normalizer. The biggest difference between the Python and Rust versions is the number of supported encodings, since each language has its own encoding/decoding library. The Rust version supports only encodings from the WHATWG standard, while the Python version supports more, although many of them are old and almost unused.

⚡ Performance

This package offers better performance than the Python version (about 3 times faster than the mypyc-compiled version of charset-normalizer and 6 times faster than the plain Python version). However, compared with the chardet and chardetng packages it is slower but more accurate (I guess because it processes the whole file chunk by chunk). Here are some numbers.

Package                              | Accuracy | Mean per file | Files per sec (est.)
chardet                              | 79 %     | 2.2 ms        | 450 files/sec
chardetng                            | 78 %     | 1.6 ms        | 625 files/sec
charset-normalizer-rs                | 96.8 %   | 2.7 ms        | 370 files/sec
charset-normalizer (Python + mypyc)  | 98 %     | 8 ms          | 125 files/sec

Package                              | 99th percentile | 95th percentile | 50th percentile
chardet                              | 8 ms            | 2 ms            | 0.2 ms
chardetng                            | 14 ms           | 5 ms            | 0.5 ms
charset-normalizer-rs                | 19 ms           | 7 ms            | 1.2 ms
charset-normalizer (Python + mypyc)  | 94 ms           | 37 ms           | 3 ms

Stats were generated using 400+ files with default parameters. These results might change at any time, and the dataset can be updated to include more files. The actual delays depend heavily on your CPU capabilities, but the relative factors should remain the same. The Rust version's dataset has been reduced because it supports fewer encodings than the Python version.

There is still room to speed the library up, so I'll appreciate any contributions.

✨ Installation

Library installation:

cargo add charset-normalizer-rs

Binary CLI tool installation:

cargo install charset-normalizer-rs

🚀 Basic Usage

CLI

This package comes with a CLI that is meant to be compatible with the CLI tool of the Python version.

normalizer -h
Usage: normalizer [OPTIONS] <FILES>...

Arguments:
  <FILES>...  File(s) to be analysed

Options:
  -v, --verbose                Display complementary information about file if any. Stdout will contain logs about the detection process
  -a, --with-alternative       Output complementary possibilities if any. Top-level JSON WILL be a list
  -n, --normalize              Permit to normalize input file. If not set, program does not write anything
  -m, --minimal                Only output the charset detected to STDOUT. Disabling JSON output
  -r, --replace                Replace file when trying to normalize it instead of creating a new one
  -f, --force                  Replace file without asking if you are sure, use this flag with caution
  -t, --threshold <THRESHOLD>  Define a custom maximum amount of chaos allowed in decoded content. 0. <= chaos <= 1 [default: 0.2]
  -h, --help                   Print help
  -V, --version                Print version
normalizer ./data/sample.1.fr.srt

🎉 The CLI produces an easily usable result on stdout in JSON format (it should be the same as the Python version's).

{
    "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
    "encoding": "cp1252",
    "encoding_aliases": [
        "1252",
        "windows_1252"
    ],
    "alternative_encodings": [
        "cp1254",
        "cp1256",
        "cp1258",
        "iso8859_14",
        "iso8859_15",
        "iso8859_16",
        "iso8859_3",
        "iso8859_9",
        "latin_1",
        "mbcs"
    ],
    "language": "French",
    "alphabets": [
        "Basic Latin",
        "Latin-1 Supplement"
    ],
    "has_sig_or_bom": false,
    "chaos": 0.149,
    "coherence": 97.152,
    "unicode_path": null,
    "is_preferred": true
}
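
If you drive the CLI from another Rust program, the JSON on stdout can be deserialized directly. Below is a minimal sketch, assuming serde (with the derive feature) and serde_json as extra dependencies; the CliReport struct is my own illustrative shape mirroring the fields in the example above, not a type exported by this crate:

use serde::Deserialize;
use std::process::Command;

// Illustrative struct mirroring the CLI's JSON report shown above.
#[derive(Debug, Deserialize)]
struct CliReport {
    path: String,
    encoding: Option<String>,
    encoding_aliases: Vec<String>,
    alternative_encodings: Vec<String>,
    language: String,
    alphabets: Vec<String>,
    has_sig_or_bom: bool,
    chaos: f64,
    coherence: f64,
    unicode_path: Option<String>,
    is_preferred: bool,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run the CLI on a file and parse its stdout.
    let output = Command::new("normalizer")
        .arg("./data/sample.1.fr.srt")
        .output()?;
    let report: CliReport = serde_json::from_slice(&output.stdout)?;
    println!("{} -> {:?}", report.path, report.encoding);
    Ok(())
}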

Rust

The library offers two main functions. The first is from_bytes, which takes raw bytes as input:

use charset_normalizer_rs::from_bytes;

fn test_from_bytes() {
    // These bytes are a valid GB18030 sequence, so "gb18030" should be the best guess.
    let result = from_bytes(&vec![0x84, 0x31, 0x95, 0x33], None);
    let best_guess = result.get_best();
    assert_eq!(best_guess.unwrap().encoding(), "gb18030");
}

fn main() {
    test_from_bytes();
}

The second, from_path, takes a file path as input:

use std::path::PathBuf;
use charset_normalizer_rs::from_path;

fn test_from_path() {
    // The sample file is Big5-encoded Chinese text, so "big5" should be the best guess.
    let result = from_path(&PathBuf::from("src/tests/data/samples/sample-chinese.txt"), None).unwrap();
    let best_guess = result.get_best();
    assert_eq!(best_guess.unwrap().encoding(), "big5");
}

fn main() {
    test_from_path();
}
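
Once you have a best guess, you can decode the original bytes yourself with the encoding_rs crate (the WHATWG encoding library). A minimal sketch, assuming the detected name is a valid WHATWG label such as "big5" or "gb18030"; Encoding::for_label simply returns None otherwise:

use std::fs;
use std::path::PathBuf;
use charset_normalizer_rs::from_path;
use encoding_rs::Encoding;

fn main() -> std::io::Result<()> {
    let path = PathBuf::from("src/tests/data/samples/sample-chinese.txt");

    // Detect the most likely encoding of the file.
    let result = from_path(&path, None).unwrap();
    let best_guess = result.get_best().expect("no encoding detected");

    // Resolve the detected label in encoding_rs and decode the raw bytes.
    let raw = fs::read(&path)?;
    if let Some(encoding) = Encoding::for_label(best_guess.encoding().as_bytes()) {
        let (text, _, had_errors) = encoding.decode(&raw);
        println!("decoded {} chars (lossy: {})", text.chars().count(), had_errors);
    }
    Ok(())
}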

😇 Why

When I started using chardet (the Python library), I noticed that it did not meet my expectations, and I wanted to propose a reliable alternative using a completely different method. Also! I never back down on a good challenge!

I don't care about the originating charset encoding, because two different tables can produce two identical rendered strings. What I want is to get readable text, the best I can.

In a way, I'm brute forcing text decoding. How cool is that? 😎

🍰 How

  • Discard all charset encoding tables that could not fit the binary content.
  • Measure the noise, or mess, once the content is opened (chunk by chunk) with a corresponding charset encoding.
  • Extract the matches with the lowest mess detected.
  • Additionally, we measure coherence / probe for a language (a simplified sketch of the whole loop follows this list).
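
As a rough illustration of that loop (and not the crate's actual scoring), you can try a handful of candidate encodings with encoding_rs and keep the one that produces the least obvious damage, here measured naively as the number of U+FFFD replacement characters after a lossy decode:

use encoding_rs::{Encoding, GB18030, SHIFT_JIS, UTF_8, WINDOWS_1252};

// Naive mess score: count replacement characters produced by a lossy decode.
// The real library inspects the decoded text chunk by chunk with many more rules.
fn mess(encoding: &'static Encoding, bytes: &[u8]) -> usize {
    let (text, _, _) = encoding.decode(bytes);
    text.chars().filter(|&c| c == '\u{FFFD}').count()
}

fn main() {
    let payload = b"Bonjour, \xe7a va ?"; // bytes that happen to be windows-1252
    let candidates: [&'static Encoding; 4] = [UTF_8, WINDOWS_1252, SHIFT_JIS, GB18030];

    // Keep the candidate with the lowest mess; real ties are broken by coherence.
    let best = candidates
        .iter()
        .copied()
        .min_by_key(|&e| mess(e, payload))
        .unwrap();
    println!("best fit: {}", best.name());
}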

Wait a minute, what is noise/mess and coherence according to YOU?

Noise : I opened hundreds of text files, written by humans, with the wrong encoding table. I observed them, then established some ground rules about what is obviously a mess. I know that my interpretation of what counts as noise is probably incomplete; feel free to contribute in order to improve or rewrite it.

Coherence : For each language on earth, we have computed ranked letter-appearance frequencies (as best we can). I figured that intel was worth something here, so I use those records against the decoded text to check whether I can detect intelligent design.
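
A toy version of that coherence check (not the crate's implementation): rank the letters of the decoded text by frequency and measure the overlap with a language's expected most frequent letters. The hard-coded English ranking below is purely illustrative:

use std::collections::HashMap;

// Rough top-of-the-frequency-table letters for English, for illustration only.
const ENGLISH_TOP: [char; 10] = ['e', 't', 'a', 'o', 'i', 'n', 's', 'h', 'r', 'd'];

// Share of the text's most frequent letters that also appear in the expected
// top letters for the language. Closer to 1.0 means more coherent.
fn coherence(text: &str) -> f64 {
    let mut counts: HashMap<char, usize> = HashMap::new();
    for c in text.chars().filter(|c| c.is_alphabetic()) {
        *counts.entry(c.to_ascii_lowercase()).or_insert(0) += 1;
    }
    // Rank letters by frequency, most frequent first.
    let mut ranked: Vec<(char, usize)> = counts.into_iter().collect();
    ranked.sort_by(|a, b| b.1.cmp(&a.1));
    let hits = ranked
        .iter()
        .take(ENGLISH_TOP.len())
        .filter(|(c, _)| ENGLISH_TOP.contains(c))
        .count();
    hits as f64 / ENGLISH_TOP.len() as f64
}

fn main() {
    println!("coherence: {:.2}", coherence("The quick brown fox jumps over the lazy dog"));
}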

⚡ Known limitations

  • Language detection is unreliable when the text contains two or more languages that share the same letters (e.g. HTML with English tags + Turkish content, both using Latin characters).
  • Every charset detector heavily depends on having sufficient content. Generally, do not bother running detection on very tiny content.

👤 Contributing

Contributions, issues and feature requests are very much welcome.
Feel free to check issues page if you want to contribute.

📝 License

Copyright © Nikolay Yarovoy @nickspring - Rust port.
Copyright © Ahmed TAHRI @Ousret - original Python version and some parts of this document.
This project is MIT licensed.

Character frequencies used in this project © 2012 Denny Vrandečić

Dependencies

~11–21MB
~271K SLoC