
Rust helpers for serving HTTP GET and HEAD responses with hyper 0.14.x and tokio.

This crate supplies two ways to respond to HTTP GET and HEAD requests:

  • the serve function can be used to serve an Entity, a trait representing reusable, byte-rangeable HTTP entities. Entity must be able to produce exactly the same data on every call, know its size in advance, and be able to produce portions of the data on demand.
  • the streaming_body function can be used to add a body to an otherwise-complete response. If a body is needed (that is, on GET rather than HEAD requests), it returns a BodyWriter (which implements std::io::Write). The caller should either produce the complete body or call BodyWriter::abort, causing the HTTP stream to terminate abruptly.
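Because BodyWriter implements std::io::Write, body-producing code can be written generically against that trait and exercised with any writer. A minimal sketch of the pattern (write_body is a hypothetical application function, not part of http-serve; a Vec<u8> stands in for the BodyWriter here):

```rust
use std::io::Write;

// Body-producing logic written against `io::Write` works unchanged with
// http-serve's `BodyWriter` or, as here, a plain `Vec<u8>`.
// `write_body` is a hypothetical application function.
fn write_body<W: Write>(w: &mut W) -> std::io::Result<()> {
    writeln!(w, "<html><body>hello</body></html>")
}

fn main() {
    let mut buf = Vec::new();
    write_body(&mut buf).expect("writing to a Vec cannot fail");
    assert_eq!(buf, b"<html><body>hello</body></html>\n");
}
```

Writing the body-generation logic this way also makes it easy to unit-test without an HTTP server in the loop.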

It supplies a static file Entity implementation and a (currently Unix-only) helper for serving a full directory tree from the local filesystem, including automatically looking for .gz-suffixed files when the client advertises Accept-Encoding: gzip.

Why two ways?

They have pros and cons. This table shows some of them:

                                          serve   streaming_body
  automatic byte range serving            yes     no [1]
  backpressure                            yes     no [2]
  conditional GET                         yes     no [3]
  sends first byte before length known    no      yes
  automatic gzip content encoding         no [4]  yes

[1]: streaming_body always sends the full body. Byte range serving wouldn't make much sense with its interface. The application will generate all the bytes every time anyway, and http-serve's buffering logic would have to be complex to handle multiple ranges well.

[2]: streaming_body is often appended to while holding a lock or open database transaction, where backpressure is undesired. It'd be possible to add support for "wait points" where the caller explicitly wants backpressure. This would make it more suitable for large streams, even infinite streams like Server-sent events.

[3]: streaming_body doesn't yet support generating etags or honoring conditional GET requests. PRs welcome!

[4]: serve doesn't automatically apply Content-Encoding: gzip because the content encoding is a property of the entity you supply. The entity's etag, length, and byte range boundaries must match the encoding. You can use the http_serve::should_gzip helper to decide between supplying a plain or gzipped entity. serve could automatically apply the related Transfer-Encoding: gzip where the browser requests it via TE: gzip, but common browsers have chosen to avoid requesting or handling Transfer-Encoding.
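The decision footnote [4] describes comes down to inspecting the request's Accept-Encoding header. A deliberately simplified stand-in for the kind of check http_serve::should_gzip performs (the real helper also honors qvalues such as gzip;q=0 and takes the request's header map; this sketch takes a plain string and does not):

```rust
/// Simplified, illustrative stand-in for an `Accept-Encoding` check:
/// does the header list gzip as a coding at all? (Not the crate's
/// implementation; `http_serve::should_gzip` also weighs qvalues.)
fn accepts_gzip(accept_encoding: &str) -> bool {
    accept_encoding
        .split(',')
        .any(|enc| enc.trim().split(';').next().map(str::trim) == Some("gzip"))
}

fn main() {
    assert!(accepts_gzip("gzip, deflate"));
    assert!(accepts_gzip("deflate, gzip;q=0.5"));
    assert!(!accepts_gzip("identity"));
}
```

Based on that decision, the application supplies serve with either the plain entity or a pre-gzipped one whose etag, length, and range boundaries all describe the compressed bytes.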

See the documentation for more.

There's a built-in Entity implementation, ChunkedReadFile. It serves static files from the local filesystem, reading chunks in a separate thread pool to avoid blocking the tokio reactor thread.
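The idea of pushing blocking reads onto another thread can be sketched with plain std primitives (ChunkedReadFile itself uses a tokio-aware thread pool and real files; the function name, in-memory Cursor, and chunk size below are all illustrative):

```rust
use std::io::{Cursor, Read};
use std::sync::mpsc;
use std::thread;

/// Reads one chunk of `data` on a worker thread and hands it back over a
/// channel, so the calling thread (the reactor, in http-serve's case)
/// never blocks on I/O. Illustrative only; not the crate's API.
fn read_chunk_off_thread(data: Vec<u8>, chunk_size: usize) -> Vec<u8> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut src = Cursor::new(data);
        let mut chunk = vec![0u8; chunk_size];
        let n = src.read(&mut chunk).expect("Cursor reads cannot fail");
        chunk.truncate(n); // keep only the bytes actually read
        let _ = tx.send(chunk);
    });
    rx.recv().expect("worker sends exactly one chunk")
}

fn main() {
    let chunk = read_chunk_off_thread(vec![0u8; 100_000], 65_536);
    assert_eq!(chunk.len(), 65_536);
}
```

The channel plays the role of the async stream: the reactor thread only ever waits on ready data, never on the disk itself.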

You're not limited to the built-in entity type(s), though. You could supply your own that do anything you desire:

  • bytes built into the binary via include_bytes!.
  • bytes retrieved from another HTTP server or network filesystem.
  • memcached-based caching of another entity.
  • anything else for which it's cheaper to compute the etag, size, and a byte range than the entirety of the data. (See moonfire-nvr's logic for generating .mp4 files to represent arbitrary time ranges.)

http_serve::serve is similar to golang's http.ServeContent. It was extracted from moonfire-nvr's .mp4 file serving.


  • Serve a single file:
    $ cargo run --example serve_file /usr/share/dict/words
  • Serve a directory tree:
    $ cargo run --features dir --example serve_dir .


See the AUTHORS file for details.


Your choice of MIT or Apache; see LICENSE-MIT.txt or LICENSE-APACHE, respectively.
