Cargo Advent of Code Helper

cargo-aoc is a simple CLI tool that aims to be a helper for the Advent of Code.

Implement your solution. Let us handle the rest.

Features

  • Input downloading
  • Running your solution
  • Automatic benchmarking of your solution using Criterion

Getting started

Install cargo aoc

cargo-aoc is hosted as a binary on crates.io. Boot a terminal and install the program using cargo install cargo-aoc

Setting up the CLI

You will need to find your session token for the AoC in order for cargo-aoc to work. Thankfully, finding your token is easy, since it is stored in your browser's cookies. Open up your browser's devtools, and then:

  • Firefox: "Storage" tab, Cookies, and copy the "Value" field of the session cookie.
  • Google Chrome / Chromium: "Application" tab, Cookies, and copy the "Value" field of the session cookie.

Once you have it, simply run: cargo aoc credentials {token}

You're now ready to start coding!

NOTE: If for some reason your token has changed, don't forget to update it the same way.

cargo aoc credentials will show the currently stored user token.

Setting up the project

In order for cargo-aoc to work properly, you have to set the project up correctly.

If you get lost during the process, you can take this example repository of AoC 2015 as a template.

First, you must add a dependency on aoc-runner and aoc-runner-derive in your Cargo.toml. At the end of src/lib.rs, you will have to invoke the macro aoc_lib!{ year = XXXX }, where XXXX is the year of the AoC puzzles being solved.
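
For example, assuming the 2015 puzzles and a day2 module (both assumptions made here for illustration), the end of src/lib.rs could look like this:

// src/lib.rs
use aoc_runner_derive::aoc_lib;

pub mod day2; // module holding the generator and solvers shown below

aoc_lib!{ year = 2015 }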

When implementing a solution for a day, you have to provide functions and tag them accordingly. A function is either a solver or a generator.

Those two types of functions are executed and benchmarked separately. Let's have a closer look:

Generator functions

Generators allow you to provide a custom type to the solver functions. Sometimes in AoC, you have to parse an input and extract a logical structure out of it before you can actually solve the problem.

Generator functions are tagged #[aoc_generator(dayX)].

Because examples are worth a thousand words, let's take a look at Year 2015, Day 2:

From the puzzle's description, we know that [we] have a list of the dimensions (length l, width w, and height h) of each present, each present on one line, represented like so: {L}x{W}x{H}.

We might want to first parse the input and extract logical Gift structs out of it, like:

pub struct Gift {
    l: u32,
    w: u32,
    h: u32
}

In @Gobanos' reference implementation, we can see that he instead chose to settle for a simple type alias: type Gift = (u32, u32, u32);.

Thus, writing a generator for Gifts is fairly simple:

#[aoc_generator(day2)]
pub fn input_generator(input: &str) -> Vec<Gift> {
    input
        .lines()
        .map(|l| {
            let mut gift = l.trim().split('x').map(|d| d.parse().unwrap());
            (
                gift.next().unwrap(),
                gift.next().unwrap(),
                gift.next().unwrap(),
            )
        }).collect()
}

As you can see, generators take a &str (or a &[u8]) as input, and output any type you want, so that you can then use it in solver functions.
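
As a hedged illustration of the &[u8] form (this one is ours, not from the reference implementation): Year 2015, Day 1 gives you a string of ( and ) characters, which a byte-based generator could turn into steps of +1 and -1:

// Hypothetical byte-based generator: maps '(' to +1 and ')' to -1,
// leaving a solver to simply sum the steps (illustration only;
// assumes the input has been trimmed of trailing whitespace).
#[aoc_generator(day1)]
pub fn input_generator(input: &[u8]) -> Vec<i32> {
    input
        .iter()
        .map(|&b| if b == b'(' { 1 } else { -1 })
        .collect()
}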

link to doc

Solver functions

Solver functions are typically your algorithms: they take any input type provided by a generator, and return any type that you want to use, provided that it implements the Display trait.

Solver functions are tagged #[aoc(day2, part1)]. Optionally, you can have multiple implementations for the same part of a day. You must then use a name to tag them correctly, for example: #[aoc(day2, part1, for_loop)].
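
As a sketch of such a named variant (the function name and body are ours, reusing the smallest_side helper sketched after the next snippet):

// A for-loop variant of part one; the `for_loop` name lets cargo aoc
// benchmark it alongside the unnamed iterator version shown below.
#[aoc(day2, part1, for_loop)]
pub fn solve_part1_for_loop(input: &[Gift]) -> u32 {
    let mut total = 0;
    for &(l, w, h) in input {
        let (s1, s2) = smallest_side((l, w, h));
        total += 2 * l * w + 2 * w * h + 2 * h * l + s1 * s2;
    }
    total
}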

Continuing with the previous example, implementing a solver for part one could be done like this:

#[aoc(day2, part1)]
pub fn solve_part1(input: &[Gift]) -> u32 {
    input
        .iter()
        .map(|&(l, w, h)| {
            let (s1, s2) = smallest_side((l, w, h));
            2 * l * w + 2 * w * h + 2 * h * l + s1 * s2
        })
        .sum()
}
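
The smallest_side helper is not part of the excerpt; a minimal sketch of it (an assumed implementation, only the name appears above) returns the two smallest dimensions, i.e. the sides of the present's smallest face:

fn smallest_side((l, w, h): (u32, u32, u32)) -> (u32, u32) {
    // Sort the three dimensions and keep the two smallest ones.
    let mut dims = [l, w, h];
    dims.sort_unstable();
    (dims[0], dims[1])
}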

Notice how we're taking the Gifts generated previously, and using Rust's iterators to solve the problem efficiently, all the while keeping the code maintainable.

The output of this particular solver is a u32, which of course implements Display. When running your solution using cargo aoc, said result will get printed in the console, along with other information about execution time.

link to doc

Downloading your input manually

cargo aoc input will download an input and store it in input/{year}/day_{day}.txt.

Please note that, by default, today's date is used as the argument. Of course, you can change this using: cargo aoc input -d {day} -y {year}
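
For example, cargo aoc input -d 2 -y 2015 downloads the input for Day 2 of the 2015 edition.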

Running your solution

cargo aoc will run the latest implemented day, downloading your input beforehand. It will show you the result, along with a short summary of how well it performed.

Example output on my Chromebook, running @Gobanos' AOC2015:

[olivier@olivier-pc advent-of-code-2015]$ cargo aoc
    Finished dev [unoptimized + debuginfo] target(s) in 0.12s
   Compiling aoc-autobuild v0.1.0 (/home/olivier/Workspace/Rust/advent-of-code-2015/target/aoc/aoc-autobuild)
    Finished release [optimized] target(s) in 0.87s
     Running `target/release/aoc-autobuild`
AOC 2015
Day 5 - Part 1 : 238
        generator: 18.122µs,
        runner: 420.958µs

Day 5 - Part 2 : 69
        generator: 5.499µs,
        runner: 1.142373ms

If you want to run an older puzzle, or only a specific part, specify those using cargo aoc -d {day} -p {part}.
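
For example, cargo aoc -d 5 -p 2 runs only Part 2 of Day 5.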

Benchmarking your solution

Benchmarking is powered by Criterion. Use cargo aoc bench to launch the benchmarks, just like you would use cargo aoc.

Benchmarks for each day are then generated in target/aoc/aoc-autobench/target/criterion.

You can have the benchmark report opened in your browser automatically afterwards, using cargo aoc bench -o

Soon(tm), you will also be able to use our (free) online platform to compare your results with those of the community.


Happy Advent of Code!
