Winterfell
This crate contains the Winterfell STARK prover and verifier. It simply re-exports components defined in the prover and verifier crates.
License
This project is MIT licensed.
lib.rs:
This crate contains the Winterfell STARK prover and verifier.
A STARK is a novel proof-of-computation scheme for creating efficiently verifiable proofs of the correct execution of a computation. The scheme was developed by Eli Ben-Sasson, Michael Riabzev et al. at Technion - Israel Institute of Technology. STARKs do not require an initial trusted setup and rely on very few cryptographic assumptions. See the References section below for more info.
Proof generation
To generate a proof that a computation was executed correctly, you'll need to do the following:
- Define an algebraic intermediate representation (AIR) for your computation. This can be done by implementing the [Air] trait.
- Define an execution trace for your computation. This can be done by implementing the [Trace] trait. Alternatively, you can use the [TraceTable] struct, which already implements the [Trace] trait, in cases where this generic implementation works for your use case.
- Execute your computation and record its execution trace.
- Define your prover by implementing the [Prover] trait. Then execute the [Prover::prove()] function, passing the trace generated in the previous step to it as a parameter. The function will return an instance of [Proof].
This [Proof] can be serialized and sent to a STARK verifier for verification. The size of a proof depends on the specifics of a given computation, but for most computations it should be in the range between 15 KB (for very small computations) and 300 KB (for very large computations).
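For example, a minimal sketch of shipping a proof to a verifier might look as follows. It assumes the to_bytes()/from_bytes() helpers on [Proof] provided by this crate; the transport itself is omitted:
use winterfell::Proof;
fn roundtrip_proof(proof: Proof) -> Proof {
    // Serialize the proof into raw bytes, e.g. to store it or send it to a verifier.
    let bytes: Vec<u8> = proof.to_bytes();
    // On the verifier side, reconstruct the proof from the bytes before calling verify().
    Proof::from_bytes(&bytes).expect("malformed proof bytes")
}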
Proof generation time is also highly dependent on the specifics of a given computation, as well as on the capabilities of the machine used to generate the proofs (i.e., on the number of CPU cores and memory bandwidth).
When the crate is compiled with the concurrent feature enabled, proof generation will be performed in multiple threads (usually, as many threads as there are logical cores on the machine). The number of threads can be configured via the RAYON_NUM_THREADS environment variable.
Proof verification
To verify a [Proof] generated as described in the previous sections, you'll need to do the following:
- Define an algebraic intermediate representation (AIR) for your computation. This AIR must be the same as the one used during the proof generation process.
- Execute the [verify()] function, supplying the AIR of your computation together with the [Proof] and related public inputs as parameters, as shown in the sketch below.
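As a rough sketch, assuming an AIR type WorkAir and a PublicInputs struct like the ones defined in the Examples section below, as well as the Blake3_256 hasher re-exported by this crate, the verification call might look like this:
use winterfell::{
    crypto::{hashers::Blake3_256, DefaultRandomCoin, MerkleTree},
    math::fields::f128::BaseElement,
    AcceptableOptions, Proof,
};
fn is_valid(proof: Proof, pub_inputs: PublicInputs) -> bool {
    // Only accept proofs whose parameters provide at least 95 bits of conjectured security.
    let min_opts = AcceptableOptions::MinConjecturedSecurity(95);
    winterfell::verify::<
        WorkAir,
        Blake3_256<BaseElement>,
        DefaultRandomCoin<Blake3_256<BaseElement>>,
        MerkleTree<Blake3_256<BaseElement>>,
    >(proof, pub_inputs, &min_opts)
    .is_ok()
}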
Proof verification is extremely fast and is nearly independent of the complexity of the computation being verified. In the vast majority of cases, proofs can be verified in 3 - 5 ms on a modern mid-range laptop CPU (using a single core).
There is one exception, however: if a computation requires a lot of sequence assertions (see [Assertion] for more info), the verification time will grow linearly with the number of asserted values. But for the impact to be noticeable, the number of asserted values would need to be in the tens of thousands. And even for hundreds of thousands of asserted values, the verification time should not exceed 50 ms.
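For reference, here is a hedged sketch of the two kinds of assertions discussed above, using the constructors exposed by the [Assertion] type (the column indices, steps, and values are arbitrary):
use winterfell::{math::fields::f128::BaseElement, Assertion};
// A single-value assertion: column 0 must contain the value 3 at step 0.
// Verification cost for these is essentially constant.
let single = Assertion::single(0, 0, BaseElement::new(3));
// A sequence assertion: column 1 must contain the listed values at steps 0, 16, 32, 48
// (first step 0, stride 16). Verification time grows linearly with the number of values
// asserted this way.
let values = vec![
    BaseElement::new(1),
    BaseElement::new(2),
    BaseElement::new(3),
    BaseElement::new(4),
];
let sequence = Assertion::sequence(1, 0, 16, values);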
Examples
The best way to understand the STARK proof generation and verification process is to go through a trivial example from start to finish. First, we'll need to pick a computation for which we'll be generating and verifying STARK proofs. To keep things simple, we'll use the following:
use winterfell::math::{fields::f128::BaseElement, FieldElement};
fn do_work(start: BaseElement, n: usize) -> BaseElement {
let mut result = start;
for _ in 1..n {
result = result.exp(3) + BaseElement::new(42);
}
result
}
This computation starts with an element in a finite field and then, for the specified number of steps, cubes the element and adds the value 42 to it.
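For instance, starting with the value 3 and running the loop for just a few steps produces the intermediate values 3, 69, 328551, 35465687262668193, which are the same values that will appear in the execution trace below. A quick sanity check using do_work() as defined above:
use winterfell::math::fields::f128::BaseElement;
// 3 -> 3^3 + 42 = 69 -> 69^3 + 42 = 328551 -> 328551^3 + 42 = 35465687262668193
let result = do_work(BaseElement::new(3), 4);
assert_eq!(result, BaseElement::new(35465687262668193));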
Suppose we run this computation for a million steps and get some result. Using STARKs, we can prove that we did the work correctly without requiring any verifying party to re-execute the computation. Here is how to do it.
First, we need to define an execution trace for our computation. This trace should capture the state of the computation at every step of its execution. In our case, the trace is just a single column of intermediate values after each execution of the loop. For example, if we start with value 3 and run the computation for 1,048,576 (same as 2^20) steps, the execution trace will look like this:
Step | State |
---|---|
0 | 3 |
1 | 69 |
2 | 328551 |
3 | 35465687262668193 |
4 | 237280320818395402166933071684267763523 |
... | |
1,048,575 | 247770943907079986105389697876176586605 |
To record the trace, we'll use the [TraceTable] struct. The function below is just a modified version of the do_work() function which records every intermediate state of the computation in the [TraceTable] struct:
use winterfell::{
math::{fields::f128::BaseElement, FieldElement},
TraceTable,
};
pub fn build_do_work_trace(start: BaseElement, n: usize) -> TraceTable<BaseElement> {
// Instantiate the trace with a given width and length; this will allocate all
// required memory for the trace
let trace_width = 1;
let mut trace = TraceTable::new(trace_width, n);
// Fill the trace with data; the first closure initializes the first state of the
// computation; the second closure computes the next state of the computation based
// on its current state.
trace.fill(
|state| {
state[0] = start;
},
|_, state| {
state[0] = state[0].exp(3u32.into()) + BaseElement::new(42);
},
);
trace
}
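As a quick usage check (a hedged sketch; the expected values are the ones from the trace table above):
use winterfell::{math::fields::f128::BaseElement, Trace};
let trace = build_do_work_trace(BaseElement::new(3), 1024);
// The first few states match the table above: 3, 69, 328551, ...
assert_eq!(trace.get(0, 0), BaseElement::new(3));
assert_eq!(trace.get(0, 1), BaseElement::new(69));
assert_eq!(trace.get(0, 2), BaseElement::new(328551));
assert_eq!(trace.length(), 1024);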
Next, we need to define an algebraic intermediate representation (AIR) for our computation. This process is usually called arithmetization. We do this by implementing the [Air] trait. At a high level, the code below does three things:
- Defines what the public inputs for our computation should look like. These inputs are called "public" because they must be known to both the prover and the verifier.
- Defines a transition function with a single transition constraint. This transition constraint must evaluate to zero for all valid state transitions, and to a non-zero value for any invalid state transition. The degree of this constraint is 3 (see more about constraint degrees in the "Constraint degrees" section of the [Air] trait documentation).
- Defines two assertions against an execution trace of our computation. These assertions tie a specific set of public inputs to a specific execution trace (see more about assertions in the "Trace assertions" section of the [Air] trait documentation).
Here is the actual code:
use winterfell::{
math::{fields::f128::BaseElement, FieldElement, ToElements},
Air, AirContext, Assertion, GkrVerifier, EvaluationFrame,
ProofOptions, TraceInfo, TransitionConstraintDegree,
crypto::{hashers::Blake3_256, DefaultRandomCoin, MerkleTree},
};
// Public inputs for our computation will consist of the starting value and the end result.
pub struct PublicInputs {
start: BaseElement,
result: BaseElement,
}
// We need to describe how public inputs can be converted to field elements.
impl ToElements<BaseElement> for PublicInputs {
fn to_elements(&self) -> Vec<BaseElement> {
vec![self.start, self.result]
}
}
// For a specific instance of our computation, we'll keep track of the public inputs and
// the computation's context which we'll build in the constructor. The context is used
// internally by the Winterfell prover/verifier when interpreting this AIR.
pub struct WorkAir {
context: AirContext<BaseElement>,
start: BaseElement,
result: BaseElement,
}
impl Air for WorkAir {
// We'll specify which finite field to use for our computation, and also what
// the public inputs must look like.
type BaseField = BaseElement;
type PublicInputs = PublicInputs;
type GkrProof = ();
type GkrVerifier = ();
// Here, we'll construct a new instance of our computation which is defined by 3
// parameters: starting value, number of steps, and the end result. Another way to
// think about it is that an instance of our computation is a specific invocation of
// the do_work() function.
fn new(trace_info: TraceInfo, pub_inputs: PublicInputs, options: ProofOptions) -> Self {
// our execution trace should have only one column.
assert_eq!(1, trace_info.width());
// Our computation requires a single transition constraint. The constraint itself
// is defined in the evaluate_transition() method below, but here we need to specify
// the expected degree of the constraint. If the expected and actual degrees of the
// constraints don't match, an error will be thrown in the debug mode, but in release
// mode, an invalid proof will be generated which will not be accepted by any verifier.
let degrees = vec![TransitionConstraintDegree::new(3)];
// We also need to specify the exact number of assertions we will place against the
// execution trace. This number must be the same as the number of items in a vector
// returned from the get_assertions() method below.
let num_assertions = 2;
WorkAir {
context: AirContext::new(trace_info, degrees, num_assertions, options),
start: pub_inputs.start,
result: pub_inputs.result,
}
}
// In this method we'll define our transition constraints; a computation is considered to
// be valid, if for all valid state transitions, transition constraints evaluate to all
// zeros, and for any invalid transition, at least one constraint evaluates to a non-zero
// value. The `frame` parameter will contain current and next states of the computation.
fn evaluate_transition<E: FieldElement + From<Self::BaseField>>(
&self,
frame: &EvaluationFrame<E>,
_periodic_values: &[E],
result: &mut [E],
) {
// First, we'll read the current state, and use it to compute the expected next state
let current_state = frame.current()[0];
let next_state = current_state.exp(3u32.into()) + E::from(42u32);
// Then, we'll subtract the expected next state from the actual next state; this will
// evaluate to zero if and only if the expected and actual states are the same.
result[0] = frame.next()[0] - next_state;
}
// Here, we'll define a set of assertions about the execution trace which must be
// satisfied for the computation to be valid. Essentially, this ties computation's
// execution trace to the public inputs.
fn get_assertions(&self) -> Vec<Assertion<Self::BaseField>> {
// for our computation to be valid, value in column 0 at step 0 must be equal to the
// starting value, and at the last step it must be equal to the result.
let last_step = self.trace_length() - 1;
vec![
Assertion::single(0, 0, self.start),
Assertion::single(0, last_step, self.result),
]
}
// This is just boilerplate which is used by the Winterfell prover/verifier to retrieve
// the context of the computation.
fn context(&self) -> &AirContext<Self::BaseField> {
&self.context
}
}
Next, we need to define our prover. This can be done by implementing the [Prover] trait. The trait is pretty simple and has just a few required methods. Here is what our implementation could look like:
use winterfell::{
    crypto::{hashers::Blake3_256, DefaultRandomCoin, MerkleTree},
    math::{fields::f128::BaseElement, FieldElement, ToElements},
    matrix::ColMatrix,
    AuxRandElements, CompositionPoly, CompositionPolyTrace, DefaultConstraintCommitment,
    DefaultConstraintEvaluator, DefaultTraceLde, PartitionOptions, ProofOptions, Prover,
    StarkDomain, Trace, TraceInfo, TracePolyTable, TraceTable,
};
// Our prover needs to hold STARK protocol parameters which are specified via ProofOptions
// struct.
struct WorkProver {
options: ProofOptions
}
impl WorkProver {
pub fn new(options: ProofOptions) -> Self {
Self { options }
}
}
// When implementing the Prover trait we set the `Air` associated type to the AIR of the
// computation we defined previously, and set the `Trace` associated type to `TraceTable`
// struct as we don't need to define a custom trace for our computation. For other
// associated types, we'll use default implementation provided by Winterfell.
impl Prover for WorkProver {
type BaseField = BaseElement;
type Air = WorkAir;
type Trace = TraceTable<Self::BaseField>;
type HashFn = Blake3_256<Self::BaseField>;
type VC = MerkleTree<Self::HashFn>;
type RandomCoin = DefaultRandomCoin<Self::HashFn>;
type TraceLde<E: FieldElement<BaseField = Self::BaseField>> = DefaultTraceLde<E, Self::HashFn, Self::VC>;
type ConstraintCommitment<E: FieldElement<BaseField = Self::BaseField>> =
DefaultConstraintCommitment<E, Self::HashFn, Self::VC>;
type ConstraintEvaluator<'a, E: FieldElement<BaseField = Self::BaseField>> =
DefaultConstraintEvaluator<'a, Self::Air, E>;
// Our public inputs consist of the first and last value in the execution trace.
fn get_pub_inputs(&self, trace: &Self::Trace) -> PublicInputs {
let last_step = trace.length() - 1;
PublicInputs {
start: trace.get(0, 0),
result: trace.get(0, last_step),
}
}
fn options(&self) -> &ProofOptions {
&self.options
}
fn new_trace_lde<E: FieldElement<BaseField = Self::BaseField>>(
&self,
trace_info: &TraceInfo,
main_trace: &ColMatrix<Self::BaseField>,
domain: &StarkDomain<Self::BaseField>,
partition_option: PartitionOptions,
) -> (Self::TraceLde<E>, TracePolyTable<E>) {
DefaultTraceLde::new(trace_info, main_trace, domain, partition_option)
}
fn build_constraint_commitment<E: FieldElement<BaseField = Self::BaseField>>(
&self,
composition_poly_trace: CompositionPolyTrace<E>,
num_constraint_composition_columns: usize,
domain: &StarkDomain<Self::BaseField>,
partition_options: PartitionOptions,
) -> (Self::ConstraintCommitment<E>, CompositionPoly<E>) {
DefaultConstraintCommitment::new(
composition_poly_trace,
num_constraint_composition_columns,
domain,
partition_options,
)
}
fn new_evaluator<'a, E: FieldElement<BaseField = Self::BaseField>>(
&self,
air: &'a Self::Air,
aux_rand_elements: Option<AuxRandElements<E>>,
composition_coefficients: winterfell::ConstraintCompositionCoefficients<E>,
) -> Self::ConstraintEvaluator<'a, E> {
DefaultConstraintEvaluator::new(air, aux_rand_elements, composition_coefficients)
}
}
Now, we are finally ready to generate and verify STARK proofs.
In the code below, we will execute our computation and get the result together with the proof that the computation was executed correctly. Then, we will use this proof (together with the public inputs) to verify that we did execute the computation and got the claimed result.
use winterfell::FieldExtension;
// We'll just hard-code the parameters here for this example. We'll also run the
// computation for only 1024 steps to save time during testing.
let start = BaseElement::new(3);
let n = 1024;
// Build the execution trace and get the result from the last step.
let trace = build_do_work_trace(start, n);
let result = trace.get(0, n - 1);
// Define proof options; these will be enough for ~96-bit security level.
let options = ProofOptions::new(
32, // number of queries
8, // blowup factor
0, // grinding factor
FieldExtension::None,
8, // FRI folding factor
31, // FRI max remainder polynomial degree
);
// Instantiate the prover and generate the proof.
let prover = WorkProver::new(options);
let proof = prover.prove(trace).unwrap();
// The verifier will accept proofs with parameters which guarantee 95 bits or more of
// conjectured security
let min_opts = winterfell::AcceptableOptions::MinConjecturedSecurity(95);
// Verify the proof. The number of steps and options are encoded in the proof itself,
// so we don't need to pass them explicitly to the verifier.
let pub_inputs = PublicInputs { start, result };
assert!(winterfell::verify::<WorkAir,
Blake3_256<BaseElement>,
DefaultRandomCoin<Blake3_256<BaseElement>>,
MerkleTree<Blake3_256<BaseElement>>
>(proof, pub_inputs, &min_opts).is_ok());
That's all there is to it!
References
If you are interested in learning how STARKs work under the hood, here are a few links to get you started. From the standpoint of this library, arithmetization is by far the most important concept to understand.
- STARKs whitepaper: Scalable, transparent, and post-quantum secure computational integrity
- STARKs vs. SNARKs: A Cambrian Explosion of Crypto Proofs
Vitalik Buterin's blog series on zk-STARKs:
- STARKs, part 1: Proofs with Polynomials
- STARKs, part 2: Thank Goodness it's FRI-day
- STARKs, part 3: Into the Weeds
StarkWare's STARK Math blog series: