#unicode #ascii #unicode-characters #transliteration #emoji #convert-string #unidecode

no-std deunicode

Convert Unicode strings to pure ASCII by intelligently transliterating them. Supports Emoji and Chinese.

12 stable releases

1.6.0 May 13, 2024
1.4.3 Feb 16, 2024
1.4.2 Dec 10, 2023
1.4.1 Oct 15, 2023
0.4.0 May 5, 2018

#13 in Text processing


853,329 downloads per month
Used in 563 crates (51 directly)

BSD-3-Clause

170KB
167 lines

deunicode

Documentation

The deunicode library transliterates Unicode strings such as "Æneid" into pure ASCII ones such as "AEneid". It includes support for emoji. It's compatible with no-std Rust environments.

Deunicode is quite fast and supports on-the-fly conversion without allocations. It has a compact representation of Unicode data that minimizes memory overhead and executable size (about 75K codepoints mapped to 245K ASCII characters, using 450KB of memory, 160KB gzipped).
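
For allocation-free use, the crate's per-character function deunicode_char (documented as returning Option<&'static str>) can be driven directly. Below is a minimal sketch that appends into a caller-provided buffer; transliterate_into is this sketch's own helper, not part of the crate.

use deunicode::deunicode_char;

// Append the ASCII transliteration of `s` to an existing buffer,
// reusing its allocation across calls instead of building a new
// String each time.
fn transliterate_into(s: &str, out: &mut String) {
    for ch in s.chars() {
        // None means the character has no mapping; fall back to the
        // same "[?]" placeholder that deunicode() uses by default.
        out.push_str(deunicode_char(ch).unwrap_or("[?]"));
    }
}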

Examples

use deunicode::deunicode;

assert_eq!(deunicode("Æneid"), "AEneid");
assert_eq!(deunicode("étude"), "etude");
assert_eq!(deunicode("北亰"), "Bei Jing");
assert_eq!(deunicode("ᔕᓇᓇ"), "shanana");
assert_eq!(deunicode("げんまい茶"), "genmaiCha");
assert_eq!(deunicode("🦄☣"), "unicorn biohazard");

When to use it?

It's a better alternative than just stripping all non-ASCII characters or letting them get mangled by some encoding-ignorant system. It's okay for one-way conversions such as search indexing and tokenization, as a stronger version of Unicode NFKD. It may be used for generating nice identifiers for file names and URLs, which aren't too user-facing.

However, like most "universal" libraries of this kind, it has a one-size-fits-all 1:1 mapping of Unicode code points, which can't handle language-specific exceptions or context-dependent romanization rules. These limitations are only a minor drawback for European languages and Korean Hangul, but they make a mess of Japanese Kanji.
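
As a concrete illustration of the identifier use case above, here is a hypothetical slug helper built on top of deunicode; the slugify name and its hyphenation rules are this sketch's own, not part of the crate.

use deunicode::deunicode;

// Transliterate to ASCII, lowercase, and collapse runs of
// non-alphanumeric characters into single hyphens.
fn slugify(s: &str) -> String {
    let ascii = deunicode(s).to_lowercase();
    let mut slug = String::with_capacity(ascii.len());
    for c in ascii.chars() {
        if c.is_ascii_alphanumeric() {
            slug.push(c);
        } else if !slug.is_empty() && !slug.ends_with('-') {
            slug.push('-');
        }
    }
    slug.trim_end_matches('-').to_string()
}

assert_eq!(slugify("Æneid: étude"), "aeneid-etude");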

Guarantees and Warnings

Here are some guarantees you have when calling deunicode() (see the sanity check after this list):

  • The String returned will be valid ASCII; the decimal representation of every char in the string will be between 0 and 127, inclusive.
  • Every ASCII character (0x00 - 0x7F) is mapped to itself.
  • All Unicode characters will translate to printable ASCII characters (\n or characters in the range 0x20 - 0x7E).
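
These guarantees can be spot-checked against the documented example output; a minimal sanity check:

use deunicode::deunicode;

let out = deunicode("北亰");
assert!(out.is_ascii()); // every char is in the range 0..=127
// every char is '\n' or printable ASCII (0x20 - 0x7E)
assert!(out.chars().all(|c| c == '\n' || ('\u{20}'..='\u{7e}').contains(&c)));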

There are, however, some things you should keep in mind:

  • Some transliterations do produce \n characters.
  • Some Unicode characters transliterate to an empty string, either on purpose or because deunicode does not know about the character.
  • Some Unicode characters are unknown and transliterate to "[?]" (or a custom placeholder, or None if you use a chars iterator); see the deunicode_with_tofu sketch after this list.
  • Many Unicode characters transliterate to multi-character strings. For example, "北" is transliterated as "Bei".
  • The transliteration is context-free and not sophisticated enough to produce proper Chinese or Japanese. Han characters used in multiple languages are mapped to a single Mandarin pronunciation, and will be mostly illegible to Japanese readers. Transliteration can't handle cases where a single character has multiple possible pronunciations.
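
A brief sketch of the custom-placeholder variant mentioned in the list above, using deunicode_with_tofu; since which codepoints lack mappings varies between crate versions, it asserts only on fully-mapped input.

use deunicode::deunicode_with_tofu;

// Same conversion as deunicode(), but unknown characters become the
// given placeholder instead of the default "[?]". Every character in
// this input is mapped, so the placeholder goes unused here.
assert_eq!(deunicode_with_tofu("Æneid", "?"), "AEneid");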

Unicode data

For a detailed explanation on the rationale behind the original dataset, refer to this article written by Burke in 2001.

This is a maintained alternative to the unidecode crate, which started as a Rust port of the Text::Unidecode Perl module.

No runtime deps

Features