#tar #s3 #archive

ssstar

Library crate that creates tar archives from objects in S3 or S3-compatible storage, and restores them back again. ssstar is specifically designed to stream both input and output data, so memory usage stays minimal while aggressive concurrency maximizes network throughput. If you're looking for the command-line interface (CLI), see ssstar-cli.

9 releases (4 breaking)

0.5.0 May 22, 2023
0.4.3 Mar 3, 2023
0.4.2 Feb 25, 2023
0.4.0 Jan 27, 2023
0.1.3 Aug 31, 2022

#144 in Filesystem

Download history: roughly 2,000–4,000 downloads per week from February through May 2023

12,999 downloads per month
Used in ssstar-cli

Apache-2.0 OR MIT

245KB
3.5K SLoC

ssstar

Highly concurrent archiving of S3 objects to and from tar archives.


This is the Rust library crate that powers the ssstar CLI. If you're looking for the ssstar command-line utility, see the ssstar-cli crate.

To create a tar archive containing S3 objects, instantiate a CreateArchiveJob:

use ssstar::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
    // Write the archive to a local file
    let target = TargetArchive::File("test.tar".into());

    let mut builder = CreateArchiveJobBuilder::new(Config::default(), target);

    // Archive all of the objects in this bucket
    builder.add_input(&"s3://my-bucket".parse()?).await?;

    let job = builder.build().await?;

    job.run_without_progress(futures::future::pending()).await?;

    Ok(())
}

Target archives can be written to a local file, an S3 bucket, or an arbitrary Tokio AsyncWrite implementation. See TargetArchive for more details.
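For example, the same job can write the archive straight to an S3 bucket rather than to the local filesystem. The sketch below assumes a TargetArchive variant (here called ObjectStorage) that accepts an s3:// URL; the exact variant name and the example bucket/key are assumptions, so check the TargetArchive documentation:

use ssstar::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
    // Write the archive to an object storage URL instead of a local file.
    // NOTE: the `ObjectStorage` variant name is an assumption; see the
    // `TargetArchive` docs for the actual variants.
    let target =
        TargetArchive::ObjectStorage("s3://my-archive-bucket/backup.tar".parse::<url::Url>()?);

    let mut builder = CreateArchiveJobBuilder::new(Config::default(), target);

    // Archive all of the objects in this bucket
    builder.add_input(&"s3://my-bucket".parse()?).await?;

    let job = builder.build().await?;

    job.run_without_progress(futures::future::pending()).await?;

    Ok(())
}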

Restoring a tar archive to object storage is similarly straightforward:

use ssstar::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
    // Read the archive from a local file
    let source = SourceArchive::File("test.tar".into());

    // Extract the archive to an S3 bucket, prepending a `foo/` prefix to every file path in
    // the archive
    let target = "s3://my-bucket/foo/".parse::<url::Url>()?;

    let mut builder = ExtractArchiveJobBuilder::new(Config::default(), source, target).await?;

    // Extract only text files, in any directory, from the archive
    builder.add_filter("**/*.txt")?;

    let job = builder.build().await?;

    job.run_without_progress(futures::future::pending()).await?;

    Ok(())
}

Dependencies

~22–29MB
~532K SLoC