shock

Regularly shock your ZFS pools to maintain good hygiene and prune stale snapshots.

Usage

nix run github:ipetkov/shock -- --verbose --recursive --config ./path/to/config.toml tank/persist

or:

nix shell github:ipetkov/shock

or:

cargo install shock

then:

shock --verbose --recursive --config ./path/to/config.toml tank/persist

Note that shock performs a dry run by default: no data will be deleted unless --destroy is passed.
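
A typical workflow is to review the dry-run output first, then re-run the same command with --destroy once the plan looks right (all flags here are documented in the Reference section below):

shock --verbose --recursive --config ./path/to/config.toml tank/persist
shock --verbose --recursive --config ./path/to/config.toml --destroy tank/persist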

NixOS

{
  inputs.shock.url = "github:ipetkov/shock";

  outputs = { self, nixpkgs, shock }: {
    nixosConfigurations.host = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        shock.nixosModules.default
        ({ config, pkgs, ... }: {
          nixpkgs.overlays = [ shock.overlays.default ];
          services.shock = {
            startAt = "daily";
            persistentTimer = true;
            jobs = {
              backups = {
                # Set to false to operate only on the dataset itself
                # and not on any of its children
                recursive = true;
                verbose = true; # Disable for quieter logs
                datasets = [
                  "tank/backups"
                  "tank/another"
                ];
                #destroy = true; # Uncomment to actually destroy data!
                prefix = {
                  zfs-auto-snap_monthly = 12;
                  zfs-auto-snap_weekly = 4;
                  zfs-auto-snap_daily = 7;
                  zfs-auto-snap_hourly = 24;
                  zfs-auto-snap_frequent = 4;
                };
              };
            };
          };
        })
      ];
    };
  };
}

Reference

Shock your ZFS pools to maintain good hygiene

Usage: shock [OPTIONS] --config <CONFIG> [DATASETS]...

Arguments:
  [DATASETS]...  The pools or datasets to shock

Options:
  -r, --recursive        Recursively operate on the specified datasets
  -v, --verbose          Enable verbose output
      --destroy          Perform destructive actions. Omit for dry run
  -c, --config <CONFIG>  Path to the TOML configuration
  -h, --help             Print help
  -V, --version          Print version

Configuration

# Keep up to N snapshots whose name starts with the specified prefix.
# Only snapshots within the same dataset will be counted, and any snapshots
# whose name does not match any prefix will be ignored.
[prefix]
zfs-auto-snap_monthly = 12
zfs-auto-snap_weekly = 4
zfs-auto-snap_daily = 7
zfs-auto-snap_hourly = 24
zfs-auto-snap_frequent = 4
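
To make the counting rule concrete: with zfs-auto-snap_daily = 7, only snapshots whose names begin with zfs-auto-snap_daily count against that budget, and once more than seven of them exist in a dataset the oldest surplus becomes eligible for destruction. Below is a minimal Rust sketch of this kind of prefix-based retention; it is illustrative only, not shock's actual internals, and the function name is hypothetical.

use std::collections::HashMap;

/// Illustrative sketch: given one dataset's snapshot names sorted
/// oldest-first and a prefix -> keep-count table, return the names a
/// prefix-based retention policy would prune.
fn prune_candidates<'a>(
    snapshots_oldest_first: &[&'a str],
    keep: &HashMap<&str, usize>,
) -> Vec<&'a str> {
    let mut doomed = Vec::new();
    for (prefix, &n) in keep {
        // Snapshots matching no configured prefix are never counted or touched.
        let matching: Vec<&'a str> = snapshots_oldest_first
            .iter()
            .copied()
            .filter(|name| name.starts_with(prefix))
            .collect();
        if matching.len() > n {
            // Keep the newest n snapshots; the older surplus is pruned.
            doomed.extend_from_slice(&matching[..matching.len() - n]);
        }
    }
    doomed
}

Note that the counting is per dataset and per prefix, so a dataset carrying both daily and weekly snapshots retains up to 7 daily plus 4 weekly snapshots under the configuration above.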

Why this exists

Snapshot creation and pruning are so intertwined that they are usually both handled by the same tool. It can become difficult, however, to manage snapshots that were created on another host (and replicated to the current one) without actually running the snapshot creator on that pool.

As a more concrete example: zfs-auto-snapshot only prunes snapshots while creating them, so if one is replicating a dataset whose snapshots are created by zfs-auto-snapshot there isn't a good way to prune them (especially since on NixOS there is a single global configuration for which snapshots are kept). Sanoid allows for comprehensive management policies, but, sadly, it only knows how to manage snapshots it has created (and the names used by zfs-auto-snapshot differ). For situations where it is impractical or infeasible to use sanoid, shock exists as an easy way to bridge the gap.

License

This project is licensed under the MIT license.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted by you for inclusion shall be licensed as MIT, without any additional terms or conditions.
