Parallel Downloader

DISCLAIMER: This project, including all code and documentation, was entirely generated by AI (Claude 3.7 Sonnet). It has not been manually reviewed, tested, or audited by a human. Use at your own risk.

A high-performance, concurrent file downloader written in Rust that splits downloads into multiple chunks and processes them in parallel to maximize bandwidth utilization.

Features

  • HEAD Request Analysis: Automatically fetches file metadata without downloading any content (see the sketch after this list)
  • Parallel Downloading: Downloads file chunks concurrently to maximize throughput
  • Progress Visualization: Real-time progress bars for overall download and individual chunks
  • Resumable Downloads: Each chunk is tracked separately, laying the groundwork for resuming interrupted downloads
  • Performance Metrics: Shows download speed and time upon completion
  • Configurable Chunks: Adjust the number of parallel connections to optimize for your network
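
As an illustration of the HEAD-based metadata step, a probe could look roughly like the sketch below. It is a minimal sketch assuming the reqwest and anyhow crates listed under Acknowledgements; probe_metadata is a hypothetical helper, not part of this crate's public API.

    use anyhow::{Context, Result};

    // Hypothetical helper: learn the total size and range support up front,
    // without downloading any content.
    async fn probe_metadata(url: &str) -> Result<(u64, bool)> {
        let resp = reqwest::Client::new().head(url).send().await?;

        // Content-Length gives the total file size; "Accept-Ranges: bytes"
        // signals that the server honors byte-range requests.
        let total: u64 = resp
            .headers()
            .get(reqwest::header::CONTENT_LENGTH)
            .and_then(|v| v.to_str().ok())
            .and_then(|s| s.parse().ok())
            .context("server did not report Content-Length")?;
        let supports_ranges = resp
            .headers()
            .get(reqwest::header::ACCEPT_RANGES)
            .map(|v| v.as_bytes() == b"bytes")
            .unwrap_or(false);

        Ok((total, supports_ranges))
    }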

Installation

Prerequisites

  • Rust and Cargo (1.60.0 or later recommended)
  • An internet connection

Building from Source

  1. Clone the repository:

    git clone https://github.com/yourusername/parallel-downloader.git
    cd parallel-downloader
    
  2. Build the project:

    cargo build --release
    
  3. The executable will be available at ./target/release/parallel-downloader

Usage

Basic Usage

    parallel-downloader https://example.com/large-file.zip

With Options

    parallel-downloader https://example.com/large-file.zip -o my-download.zip -c 20

Command-line Options

Option                Description                                Default
-o, --output PATH     Specify output file path                   Derived from URL
-c, --chunks NUMBER   Number of chunks to download in parallel   10
--help                Show help information                      -
--version             Show version information                   -
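
These options map naturally onto clap's derive API (clap is listed under Acknowledgements). The following is a minimal sketch of what such a definition could look like; the struct and field names are illustrative, not the crate's actual source, and it assumes clap 4's derive feature.

    use clap::Parser;

    // Illustrative mirror of the CLI surface above; not the crate's own code.
    #[derive(Parser)]
    #[command(version, about = "Downloads files in parallel chunks")]
    struct Args {
        /// URL of the file to download
        url: String,

        /// Output file path (defaults to a name derived from the URL)
        #[arg(short, long)]
        output: Option<String>,

        /// Number of chunks to download in parallel
        #[arg(short, long, default_value_t = 10)]
        chunks: usize,
    }

    fn main() {
        let args = Args::parse();
        println!("{} -> {:?} in {} chunks", args.url, args.output, args.chunks);
    }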

How It Works

  1. Size Detection: Makes a HEAD request to determine the total file size
  2. Chunking: Divides the file into equal-sized chunks based on the -c parameter
  3. Parallel Processing: Creates async tasks for each chunk using Tokio
  4. Range Requests: Uses HTTP Range headers to request specific byte ranges
  5. Concurrent Writing: Each chunk writes to its designated position in the output file
  6. Progress Tracking: Updates progress bars as chunks download (the steps above are sketched in code below)
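
As a rough illustration, the pipeline above might condense to something like the following. This is a hedged sketch, not the crate's actual implementation: progress bars and error recovery are omitted, download_chunks is a hypothetical helper, and it assumes the reqwest, tokio, and anyhow crates listed under Acknowledgements.

    use anyhow::Result;
    use std::io::SeekFrom;
    use tokio::io::{AsyncSeekExt, AsyncWriteExt};

    // Hypothetical condensed pipeline: size detection is assumed already done
    // (see probe_metadata above); `total` is the file size in bytes.
    async fn download_chunks(url: &str, path: &str, chunks: u64, total: u64) -> Result<()> {
        // Pre-size the output file so every task can write at its own offset.
        let file = tokio::fs::File::create(path).await?;
        file.set_len(total).await?;

        let chunk_size = (total + chunks - 1) / chunks; // ceiling division
        let client = reqwest::Client::new();

        let tasks: Vec<_> = (0..chunks)
            .map(|i| {
                let (client, url, path) = (client.clone(), url.to_owned(), path.to_owned());
                tokio::spawn(async move {
                    let start = i * chunk_size;
                    let end = (start + chunk_size - 1).min(total - 1);
                    // The Range header asks the server for just this byte span.
                    let body = client
                        .get(url.as_str())
                        .header("Range", format!("bytes={start}-{end}"))
                        .send()
                        .await?
                        .bytes()
                        .await?;
                    // Each task writes at its designated offset in the file.
                    let mut out = tokio::fs::OpenOptions::new().write(true).open(&path).await?;
                    out.seek(SeekFrom::Start(start)).await?;
                    out.write_all(&body).await?;
                    anyhow::Ok(())
                })
            })
            .collect();

        for task in tasks {
            task.await??; // surface both task panics and download errors
        }
        Ok(())
    }

Buffering each chunk fully in memory keeps the sketch short; a real implementation would stream the response body to disk as bytes arrive.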

Performance Considerations

  • More chunks aren't always better: returns diminish past a point that depends on your connection and the server's capabilities
  • Some servers may limit or reject parallel requests
  • Best performance is usually achieved when chunk count is 2-4× the number of CPU cores

Limitations

  • Some servers don't support range requests and will return the full file regardless
  • Certain CDNs or servers may throttle or block parallel connections
  • No support for authentication methods beyond what's in the URL
  • No automatic retry mechanism for failed chunks

License

This project is available under the MIT License; see the LICENSE file for details.

Acknowledgements

  • This code was generated entirely by Claude, an AI assistant by Anthropic
  • Uses the following Rust crates:
    • reqwest: HTTP client
    • tokio: Async runtime
    • clap: Command-line argument parsing
    • indicatif: Progress bars
    • anyhow: Error handling
