#chacha20 #poly1305 #aead #crypto


A pure Rust implementation of the ChaCha20-Poly1305 AEAD from RFC 7539

3 releases

Uses old Rust 2015

0.1.2 Feb 1, 2016
0.1.1 Jan 31, 2016
0.1.0 Jan 30, 2016

#10 in #chacha20


894 downloads per month
Used in 17 crates (7 directly)


1.5K SLoC



There are two main designs for an encryption/decryption API: a state/context struct with a method that is called repeatedly to process the next fragment of data, or a single standalone function that is called once and does all the work in one call.

For authenticated encryption, it is important that decryption produces no output until the authentication tag has been verified. That requires two passes over the data: the first pass verifies the tag, and the second pass produces the output. Implementing this with a state/context struct would be needlessly complex, so this crate uses a single function call to do the whole decryption. For symmetry, the same design is used for the encryption function.
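The verify-then-output flow can be sketched with a toy cipher. Everything here is a placeholder: the "keystream" is a constant XOR byte and the "tag" is a byte-sum checksum, standing in for ChaCha20 and Poly1305. None of these names come from this crate's actual API; this only illustrates the two-pass structure.

```rust
// Toy stand-in for the keystream generator (ChaCha20 in the real design).
fn toy_keystream_byte() -> u8 {
    0x5a
}

// Toy stand-in for the authenticator (Poly1305 in the real design).
fn toy_mac(data: &[u8]) -> u8 {
    data.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
}

fn toy_encrypt(plaintext: &[u8]) -> (Vec<u8>, u8) {
    let ciphertext: Vec<u8> = plaintext.iter().map(|b| *b ^ toy_keystream_byte()).collect();
    let tag = toy_mac(&ciphertext);
    (ciphertext, tag)
}

// Pass 1 verifies the tag; pass 2 produces the output. If verification
// fails, not a single plaintext byte is ever produced.
fn toy_decrypt(ciphertext: &[u8], tag: u8) -> Option<Vec<u8>> {
    if toy_mac(ciphertext) != tag {
        return None;
    }
    Some(ciphertext.iter().map(|b| *b ^ toy_keystream_byte()).collect())
}
```

A one-shot function makes this property easy to enforce: the caller never sees a half-decrypted buffer, which is exactly what an incremental state/context API would have to hand out before the tag could be checked.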

The base primitives (ChaCha20 and Poly1305) are not exposed separately, since they are harder to use securely. This also allows their implementation to be tuned to the combined use case; for instance, the base primitives need no buffering.


The amount of data that can be encrypted in a single call is 2^32 - 1 blocks of 64 bytes, slightly less than 256 GiB. This limit could be increased to 2^64 bytes, if necessary, by allowing the use of a shorter nonce.
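The arithmetic behind that limit can be checked directly (the function name here is illustrative, not part of this crate's API):

```rust
// RFC 7539 uses a 32-bit block counter, so one (key, nonce) pair yields at
// most 2^32 - 1 full 64-byte keystream blocks for the message.
fn max_message_bytes() -> u64 {
    ((1u64 << 32) - 1) * 64
}
```

This works out to 2^38 - 64 bytes: exactly one 64-byte block short of 256 GiB.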

This crate does not attempt to clear potentially sensitive data from its work memory (which includes the stack and processor registers). Doing so correctly without a heavy performance penalty would require help from the compiler; it is better not to attempt it than to present a false assurance.

SIMD optimization

This crate has experimental support for explicit SIMD optimizations. It requires nightly Rust due to the use of unstable features.

The following cargo features enable the explicit SIMD optimization:

  • simd enables the explicit use of SIMD vectors instead of a plain struct
  • simd_opt additionally enables the use of SIMD shuffles to implement some of the rotates

While one might expect each of these to be faster than the previous one, and both to be faster than not enabling explicit SIMD vectors at all, that's not always the case: it varies with the target architecture and compiler options. If you need the extra speed from these optimizations, benchmark each configuration (the bench feature enables cargo bench in this crate, so you can run, for instance, cargo bench --features="bench simd_opt"). They are currently tuned for SSE2 (x86 and x86-64) and NEON (arm).
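A dependent crate can opt into these features with a Cargo.toml fragment along these lines (the package name chacha20-poly1305-aead is assumed here, and a nightly toolchain is still required):

```toml
[dependencies.chacha20-poly1305-aead]
version = "0.1"
# "simd_opt" additionally uses SIMD shuffles for some of the rotates;
# use features = ["simd"] for the plain SIMD-vector variant instead.
features = ["simd_opt"]
```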


Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.


Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.