
ssstar

Library crate that creates and restores tar archives to and from S3 or S3-compatible storage. ssstar is specifically designed to stream both input and output data, so memory usage is minimal, while aggressive concurrency is used to maximize network throughput. If you're looking for the command-line tool, see ssstar-cli.

15 unstable releases (6 breaking)

0.7.3 Mar 12, 2024
0.7.1 Nov 20, 2023
0.6.0 Jul 19, 2023
0.4.3 Mar 3, 2023
0.2.0 Sep 2, 2022

#204 in Filesystem


4,842 downloads per month
Used in ssstar-cli

Apache-2.0 OR MIT

Crate size: 250KB (3.5K SLoC)

ssstar

Highly concurrent archiving of S3 objects to and from tar archives.


This is the Rust library crate which powers the ssstar CLI. If you're looking for the ssstar command line utility, see the ssstar-cli crate.

To create a tar archive containing S3 objects, construct a CreateArchiveJob via its CreateArchiveJobBuilder:

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Write the archive to a local file
let target = TargetArchive::File("test.tar".into());

let mut builder = CreateArchiveJobBuilder::new(Config::default(), target);

// Archive all of the objects in this bucket
builder.add_input(&"s3://my-bucket".parse()?).await?;

let job = builder.build().await?;

// The argument to `run_without_progress` is an abort signal: the job is cancelled if
// that future ever resolves, so `futures::future::pending()` lets it run to completion.
job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }

Target archives can be written to a local file, an S3 bucket, or an arbitrary Tokio AsyncWrite implementation. See TargetArchive for more details.
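
For example, the archive can be streamed straight back into object storage instead of a local file. The following is a minimal sketch, not taken verbatim from the crate docs: it assumes TargetArchive exposes an ObjectStorage variant taking a url::Url that names the object to create (check the TargetArchive docs for the exact variants), and the bucket, key, and prefix used here are made up:

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Assumption: `TargetArchive::ObjectStorage` writes the archive to this S3 object
let target = TargetArchive::ObjectStorage("s3://backup-bucket/archives/my-data.tar".parse()?);

let mut builder = CreateArchiveJobBuilder::new(Config::default(), target);

// Archive only the objects under this prefix
builder.add_input(&"s3://my-bucket/some-prefix/".parse()?).await?;

let job = builder.build().await?;

job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }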

Restoring a tar archive to object storage is similarly straightforward:

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Read the archive from a local file
let source = SourceArchive::File("test.tar".into());

// Extract the archive to an S3 bucket, prepending a `foo/` prefix to every file path in
// the archive
let target = "s3://my-bucket/foo/".parse::<url::Url>()?;

let mut builder = ExtractArchiveJobBuilder::new(Config::default(), source, target).await?;

// Extract only text files, in any directory, from the archive
builder.add_filter("**/*.txt")?;

let job = builder.build().await?;

job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }
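
The source archive likewise doesn't have to be a local file. Here is a rough sketch along the same lines, assuming SourceArchive has an ObjectStorage variant mirroring TargetArchive that takes the URL of the archive object (see the SourceArchive docs for its actual variants); the bucket names and glob patterns below are illustrative:

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Assumption: `SourceArchive::ObjectStorage` reads the tar archive from this S3 object
let source = SourceArchive::ObjectStorage("s3://backup-bucket/archives/my-data.tar".parse()?);

// Restore into another bucket, under a `restored/` prefix
let target = "s3://restore-bucket/restored/".parse::<url::Url>()?;

let mut builder = ExtractArchiveJobBuilder::new(Config::default(), source, target).await?;

// Multiple filters can be added to narrow down which entries are extracted
builder.add_filter("reports/**/*.csv")?;
builder.add_filter("**/*.txt")?;

let job = builder.build().await?;

job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }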

Dependencies

~29–41MB
~576K SLoC