Zero-knowledge proofs are becoming increasingly popular, but it can be a hard space to navigate and find the resources to get started as a developer. After spending some time studying the topic, I'm compiling here what I learned in hope it helps others. Note from the title that this is a beginner's guide, as in its author (me!) being a beginner on ZKP programming, so please reach out if you find anything off in this article!
This article will cover what a zero-knowledge proof is, how it is used from a developer's perspective, and go through a few languages for writing them. It will not go through the underlying math: we'll treat ZKPs as a cryptographic black box.
What is a zero-knowledge proof?
A zero-knowledge proof (ZKP for short) is a cryptographic proof about a fact that does not reveal any info about said fact.
A classic example to introduce ZKPs is that I (the prover) can prove to you (the verifier) that I know a valid coloring for a graph without revealing it. To do it, we play a game where I color the graph using a random set of colors, then cover it, and you choose an edge at random. If my coloring was valid, then both nodes connected by that edge must have different colors. Then we repeat, with a different set of colors each time, until you are satisfied. Since I re-paint the graph before every round, you don't learn anything about the coloring, but you can verify that it was valid every time.
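To make the game concrete, here's a tiny Python simulation of it. The graph, the coloring, and the number of rounds are made-up illustrative values, and a real protocol would also need cryptographic commitments for the "covering" step, which we skip here:

```python
import random

# Graph as a list of edges, and the prover's secret valid 3-coloring.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
secret_coloring = {0: 0, 1: 1, 2: 2, 3: 0}

def play_round(coloring, edges):
    # Prover re-paints: apply a random permutation of the three colors.
    perm = random.sample(range(3), 3)
    painted = {node: perm[c] for node, c in coloring.items()}
    # Verifier uncovers a single random edge and checks its endpoints differ.
    u, v = random.choice(edges)
    return painted[u] != painted[v]

# A valid coloring passes every round; a cheating prover with an invalid
# coloring gets caught with some probability each round, so repeated rounds
# drive their chance of success toward zero.
assert all(play_round(secret_coloring, edges) for _ in range(100))
```

Because the colors are shuffled before every round, the pair of colors the verifier sees carries no information about the underlying coloring.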
Another well-known example of what could be considered a ZKP is a digital signature: I can produce a proof that I hold a secret string (my private key) that corresponds to a public one (my public key) by signing a message, and that signature does not reveal any bits from my private key.
What is a zkSNARK?
SNARK stands for Succinct Non-interactive ARgument of Knowledge. The important bits here are the first two words: succinct and non-interactive.
A proof is succinct if it's, well, small. This means that, regardless of the complexity of the fact that we are proving, we can keep proofs small and quick to verify. This is what makes ZKPs attractive for building rollups: you can generate a succinct proof that all operations that occurred on the L2 chain were valid, succinct enough that you can verify it in a contract on L1. Protocols like Mina take this to the extreme with recursive proofs, which allow them to prove the validity of the entire history of the chain with a constant-size proof.
As for non-interactivity, just think how cumbersome it would be to deal with signatures if you needed the signer to be online every single time someone needed to verify it. In the examples above, the graph coloring proof was interactive, requiring an online game between the prover and the verifier. Fortunately, zkSNARKs do not have this requirement.
ZKP flavors
Turns out that zkSNARKs are not the only kind of interesting ZKPs. There are also STARKs (pioneered by Starkware and used to power Starknet) and Bulletproofs (used in Monero). These differ in the running time for generating and verifying a proof, as well as the resulting proof size, or in the need for trusted setups (more on this in the next section). MatterLabs has a great comparison between them here.
Even within SNARKs, you'll find that there are multiple flavors. Groth16 is one of the most popular. The PLONK family, which includes TurboPLONK and UltraPLONK, can remove the need for a circuit-specific trusted setup at the expense of proof generation and verification time, or add support for new types of operations.
Furthermore, there are different elliptic curves to pick from, or different polynomial commitment schemes that can be used. Consensys Gnark provides a great explainer on SNARK proving schemes and curves here.
Usually, picking a ZKP language to work with will determine which ZK backend you end up using, though there are some like Circom or Gnark that support more than a single proving system.
Trusted setup
Some flavors of proving systems require what's called a trusted setup. A trusted setup is a one-time ceremony, run by multiple participants, which produces a set of random values that power the ZK proving system.
These ceremonies are critical because, if all of their participants collude, they can break the proving system. Here, breaking the proving system means being able to generate a proof for an invalid claim that gets accepted by the verifier. Fortunately, having just one honest participant does the trick: that's why you'll find calls for as many community members as possible to participate in such a ceremony.
Certain flavors, like Groth16, require a trusted setup for each circuit: this means that, for every program you code, you need to run a new ceremony. Others, like PLONK, need just one universal trusted setup that can be reused on any program built with that flavor. And others, like STARKs, don't need any trusted setup at all.
Constraints
The main thing you need to know about ZKP internals is that the proof operates on a mathematical constraint system. In other words, prover engines do not deal with code like x := y * z, but with constraints of the form x - y * z = 0. Depending on the language you're working with and the prover backend, these constraints may impact how you structure your code.
For instance, Circom only allows you to write quadratic expressions on your variables, whereas Halo2 lets you write polynomial constraints of any degree. As we'll see later, one of the outputs of compiling a program in a ZKP language will be the set of constraints.
This has the implication that certain operations are easier to represent in a ZKP than others: addition can be represented in a single constraint, whereas bit manipulation may require hundreds, as every bit may need to be represented as a separate variable. This is the reason why specific cryptographic primitives are typically chosen when working on ZKPs. As an example, Pedersen hashes are hash functions that can be represented in an arithmetic circuit much more efficiently than keccak, so they are frequently found in ZKPs.
Finite fields
Another important bit you need to know is that constraints operate in a finite field, of a size determined by the underlying elliptic curve. What this means is that all arithmetic operations are computed modulo a certain value, meaning they will wrap around at that value. Don't worry though, this value is usually large enough. For instance, in Circom it is:
21888242871839275222246405745257275088548364400416034343698204186575808495617
This also means that, if you want to define specific thresholds for your integer variables, you'll need to add extra constraints for it. Some languages like Noir will do it for you, whereas others will require you to do it manually.
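A quick Python sketch of what this wrap-around means in practice, using the Circom prime quoted above:

```python
# The BN254 scalar field prime used by Circom, as quoted above.
p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

# All circuit arithmetic happens modulo p, so values "wrap around":
assert (p - 1 + 2) % p == 1      # adding past p wraps back to a small value
assert (-1) % p == p - 1         # "-1" is really p - 1 in the field

# This is why thresholds like "x < 100" need explicit range constraints:
# without them, a huge field element behaves like a small negative number.
x = p - 5                        # looks like "-5" under wrap-around
assert (x + 5) % p == 0
```

This is the kind of check that languages like Noir add for you automatically on integer types.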
Why all the buzz?
ZKPs have drawn a lot of attention lately. The main reason is that they can be used as a means to cheaply verify any computation, effectively becoming a general-purpose cryptographic building block.
Gubsheep describes ZKPs as programmable cryptography, in the sense that you can use them to program cryptographic proofs for general problems. For instance, you could prove set membership using ring signatures, which are specifically built to tackle that problem, or by just coding a zkSNARK circuit for it.
In the Devcon6 panel on Zero Knowledge, there's a great point made about their power in enabling interoperability across different systems: once you have a system that can verify a ZKP, then you can verify that any other system with any computational environment behaved as expected, as long as you can express its computation as a ZKP. This is what powers zk-rollups: if you can write the rules of your chain as a ZKP, then you can push that proof to Ethereum and verify it on the EVM - even if your rollup doesn't use the EVM as an execution environment!
What does a circuit look like?
Let's now look into how all this works. The first step is to write a program in your ZK language or framework of choice, such as Circom, Halo2, or Noir. These programs are often referred to as circuits, which express constraints among your set of variables, or signals.
For example, we can build a circuit in Circom that lets us prove that we know two numbers, a and b, such that c = (a*b)^2, without revealing them:
template MultiplierSq() {
signal input a;
signal input b;
signal ab;
signal output c;
ab <== a * b;
c <== ab * ab;
}
Here we are defining two private input signals, an intermediate variable, and an output value. Note that, since Circom does not allow us to write anything more complex than quadratic constraints, we cannot directly write c <== (a*b)^2; that's why we need the intermediate ab signal.
Creating and verifying a proof
The workflow for using a ZKP circuit is not the same as that of a regular program, since we have two different agents: the prover and the verifier. And even within the prover, there are multiple steps involved in generating the proof. Let's go through them:
- We first write and compile the circuit. This outputs several artifacts, such as the set of constraints defined by the circuit, and a script or binary to be used in the step below.
- Depending on the SNARK flavor being used, it may be necessary to run a trusted setup ceremony to generate the proving and verifying keys, which are cryptographic material required for the proving and verifying process, as their names imply.
- When interacting with our zk-app, the user enters the private and public inputs for the circuit. The first step is to "execute" the circuit as if it were a program, using the script or binary generated during compilation. This calculates the values for all intermediate and output variables, given the input. In the example above, given a = 2 and b = 3, it will calculate that ab = 6 and c = 36. This assignment of every signal to its concrete value is called a witness or trace.
- Given the trace from step 3 and the proving key from step 2, the prover can now generate a zero-knowledge proof that all the constraints defined in the circuit hold, which only reveals the output value c. This proof can now be sent to the verifier.
- The verifier, using the submitted proof and the verifying key, can now verify that the proof is correct for its public output. In our example, the verifier can cryptographically check that the prover knows three values (a, b, ab) such that a * b = ab and ab * ab = c, where c = 36.
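The witness-generation step for the MultiplierSq circuit above can be mimicked in a few lines of Python field arithmetic. This is only a sketch of what the compiled witness generator computes, not what snarkjs actually does:

```python
# BN254 scalar field prime used by Circom.
p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def multiplier_sq_witness(a, b):
    # Execution phase: compute every intermediate and output signal.
    ab = (a * b) % p
    c = (ab * ab) % p
    witness = {"a": a, "b": b, "ab": ab, "c": c}
    # The constraints the proof will later cover, checked over the field:
    assert (witness["a"] * witness["b"] - witness["ab"]) % p == 0
    assert (witness["ab"] * witness["ab"] - witness["c"]) % p == 0
    return witness

# The example from the text: a = 2, b = 3 yields ab = 6 and c = 36.
assert multiplier_sq_witness(2, 3) == {"a": 2, "b": 3, "ab": 6, "c": 36}
```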
Keep in mind that the verification step is cheap, so the verifier can be run as an EVM smart contract. This makes it possible to trustlessly verify any off-chain computation that can be expressed as a ZKP.
If you want to try out the workflow above yourself, I strongly recommend Circom's getting started guide, which will take you through the necessary steps. In Circom, compiling a circuit outputs a WASM or C++ program that computes the witness, as well as an R1CS representation that, together with the trusted setup ceremony, yields the proving and verifying keys. Then, in order to generate the proof, you supply the calculated witness and the proving key. To verify it, just use the generated proof along with the verifying key. You can also autogenerate a Solidity contract for running the verification on-chain.
Execution vs constraints
In the workflow above, you may have noted that the circuit is used twice by the prover: first to generate the witness, and then to derive the constraints that are covered by the proof. These two runs are completely different beasts, and understanding their difference is one of the keys to understanding how ZKPs work.
Let's go back to our Circom example from before, where we had two instructions:
ab <== a * b;
c <== ab * ab;
In Circom, a fat arrow is just syntax sugar for two different instructions, assignment and constrain, so the above can be expanded to:
ab <-- a*b
ab === a*b
c <-- ab * ab
c === ab * ab
The ab <-- a*b instruction means "during the execution phase, assign a*b to ab", and it gets ignored when compiling the constraints. On the other hand, ab === a*b means "add a constraint that forces ab to be equal to a*b", which gets ignored during execution. In other words, when writing a circuit you're writing two different programs, belonging to two different programming paradigms, in a single source file.
While you will usually write assignments and constraints that are equivalent, sometimes you need to split them up. A good example of this is the IsZero circuit.
The IsZero circuit
Writing a circuit that outputs 1 if an input signal is zero and 0 otherwise is surprisingly difficult when you are constrained (pun intended) to just quadratic expressions. If you are thinking of using an expression like x === 0, note that it will not return a boolean indicating whether x is zero; instead, it will constrain the signal x to be zero in this circuit.
Fortunately, Circom has a library of useful building blocks that includes an IsZero circuit that returns 0 or 1 depending on whether the input is zero, and looks like this:
template IsZero() {
signal input in;
signal output out;
signal inv;
inv <-- in!=0 ? 1/in : 0;
out <-- -in*inv +1;
out === -in*inv +1;
in*out === 0;
}
This is a good example of decoupling execution and constraints in a circuit. Let's begin by analyzing the constraints. The constraint in * out === 0 tells us that in or out is zero. If in is zero, then out = -in * inv + 1 = 1, so we're good. And if out is zero, then it follows that in * inv = 1, which means that in cannot be zero. Great!
But note that the actual value of inv never popped up in our reasoning, since we did not need it. This is fine for writing constraints, but when it comes to populating the trace, it falls short. We need to tell Circom how to calculate the value of inv in the witness generation phase: that's where the inv <-- in!=0 ? 1/in : 0 expression comes in. Cairo appropriately calls this sort of expression a hint.
Note that we're using operations much more complex than a quadratic expression here: we have a conditional, a comparison, and a division. This is because the execution phase is unrelated to the constraints, and as such, is not bound by the restrictions of generating a ZKP. So we are back to pretty much usual programming when working in the execution phase, having access to additional operators, control flow statements, or even auxiliary functions.
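To see both phases side by side, here's a Python sketch that mimics IsZero's witness generation (the hint) and then checks the template's constraints over the field. This is illustrative only, not how Circom actually runs:

```python
# BN254 scalar field prime used by Circom.
p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def is_zero_witness(inp):
    # Hint: free-form execution code, not bound by quadratic constraints.
    # pow(inp, -1, p) is the modular inverse 1/inp in the field.
    inv = pow(inp, -1, p) if inp % p != 0 else 0
    out = (-inp * inv + 1) % p
    # The two constraints from the template, checked over the field:
    assert (out - (-inp * inv + 1)) % p == 0   # out === -in*inv + 1
    assert (inp * out) % p == 0                # in*out === 0
    return out

assert is_zero_witness(0) == 1
assert is_zero_witness(3) == 0
```

Note how the hint uses a conditional and a division, while the constraint checks are purely polynomial.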
Underconstrained computation bugs
However, this difference between the execution and constraint generation phases can open the door to a very nasty kind of bug. Going back to the IsZero example, what would have happened if we forgot to write one of the constraints, say in*out === 0?
Let's follow the workflow we know, assuming in=3 as input:
- The circuit would have compiled fine, since there are no errors.
- Witness generation would have followed the execution instructions, and correctly calculated the values inv=3^-1, out=0.
- The prover would have generated a proof based on the witness and the public output, since the constraint out = -in*inv + 1 holds.
- The verifier would have accepted it as valid, trusting that the prover has a private non-zero value.
So far so good. But what happens if a malicious prover wants to convince us that they have a zero value, when they actually have in=3? Remember we don't have the in*out === 0 constraint in this thought experiment.
- The malicious user can manually craft a witness where in=3, inv=0, out=1.
- Since this witness satisfies the constraint out = -in*inv + 1, the prover can generate a proof for it.
- The verifier happily accepts it, and believes that the user has a zero value when they actually don't.
This kind of issue, dubbed under-constrained computation, is particularly hard to catch. Any unit test that treats the circuit as a black box will never trigger it, since such tests use the circuit's execution logic to build the witness, which will always satisfy the constraints. The only way to reproduce these bugs is to manually assemble a witness with custom logic, which requires knowing exactly the attack you are looking for.
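The thought experiment above fits in a few lines of Python. Here the constraint set is missing in*out === 0, so the forged witness slips through; this is just a toy check, not a real prover:

```python
# BN254 scalar field prime used by Circom.
p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def incomplete_constraints(inp, inv, out):
    # Only the first constraint: the in*out === 0 check was "forgotten".
    return (out - (-inp * inv + 1)) % p == 0

# Honest witness for in = 3: inv = 3^-1, out = 0. Accepted, as expected.
assert incomplete_constraints(3, pow(3, -1, p), 0)

# Malicious witness: same in = 3, but setting inv = 0 forces out = 1,
# "proving" the input is zero when it is not. Also accepted!
assert incomplete_constraints(3, 0, 1)

# The forgotten constraint is exactly what rejects the forged witness:
assert (3 * 1) % p != 0   # in*out === 0 fails for in=3, out=1
```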
Alternatively, there are efforts for verifying the soundness of a circuit you've written with regard to the function you want to encode, such as Ecne, which I still have to try out.
The bottom line here is that you should be particularly careful when you write execution code separate from your constraint generation code, since it opens the door to underconstrained ZK programs.
A real-world example
Tornado Cash is a great learning resource on how to build a real-world zk-app with Circom circuits. I strongly recommend going through the 3-page-long white paper, as well as Gubsheep's breaking down Tornado Cash session from the 0xparc course, plus Porter's overview of the white paper.
The gist of Tornado, in case you are not familiar with it, is that you can deposit ETH in discrete amounts (there are notes of 0.1, 1, 10, and 100 ETH) into a contract, and later anonymously withdraw each note to another address. This has the effect of mixing your notes with those from other people, granting anonymity as long as there is enough usage.
How does it work?
For the sake of the explanation, we'll go through a slightly simplified version of the protocol, and follow Gubsheep's version instead of the one actually implemented.
Whenever a user wants to deposit a note, they generate a secret value s, and submit their 1 ETH note along with a hash of the secret H(s) to the contract. The contract collects all submitted secret hashes into a merkle tree, and stores its root R.
Now, when a user wants to withdraw their note, they need to prove that they have the secret that corresponds to H(s). To do this, the user submits a ZKP that shows that they know a private value s such that H(s) is in the merkle tree with root R. It is not revealed which of the leaves of the tree is the user's H(s), as the ZKP's only public output is R, so this preserves anonymity. The merkle proof is checked by a circuit in the ZKP.
We also need a way to ensure that a user cannot withdraw the same note twice. Here is where the concept of nullifier comes in. A nullifier is a value, deterministically generated from the original secret, that is tracked to prevent double-spends.
So, in addition to proving membership in the merkle tree with root R, the ZKP also outputs a value G(s) to be used as a nullifier, where G is another hash function different from H. Then, whenever a note is withdrawn, its nullifier is added to a nullifier set in the smart contract, and the withdrawal is rejected if it already exists. Note that G needs to be different from H, otherwise the ZKP would reveal H(s), which could be traced back to the original deposit and break anonymity.
In other words, whenever a user wants to withdraw, they submit a ZKP that states that they know a secret value that's in the merkle tree with root R, and exposes the nullifier of that secret value. Then the contract can check that the merkle root matches and that the nullifier hasn't been used yet.
If you went through Tornado's white paper, you'll note that the nullifier concept is where their implementation diverges from what we described here. Tornado actually uses two different secret values, called randomness r and nullifier k, and the public value they store is called the nullifier hash. Tornado also uses a single hash function (a Pedersen hash) instead of two different ones, computing the merkle tree leaf as H(r || k) and the nullifier hash as H(k). The important thing is that the nullifier cannot be linked back to the merkle tree leaf, and that it is deterministically generated from the secret. You can check out Tornado's full circuits in Circom here.
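The commitment/nullifier mechanics can be sketched in a few lines of Python. Note this uses sha256 as a stand-in for the Pedersen hash, and omits the merkle tree and the ZKP itself:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # Stand-in hash; real circuits use SNARK-friendly hashes like Pedersen.
    return hashlib.sha256(b"".join(parts)).digest()

r, k = b"randomness", b"nullifier-secret"   # the depositor's secrets
leaf = H(r, k)            # commitment stored in the merkle tree: H(r || k)
nullifier_hash = H(k)     # revealed on withdrawal: H(k)

# The nullifier is deterministic, so double-spends can be caught...
assert H(k) == nullifier_hash
# ...but it cannot be linked to the leaf without knowing the secrets.
assert nullifier_hash != leaf

# The contract's nullifier set: first withdrawal passes, a repeat is caught.
spent = set()
assert nullifier_hash not in spent
spent.add(nullifier_hash)
assert nullifier_hash in spent
```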
Languages for ZKPs
There are multiple languages and frameworks that you can choose from for writing ZK circuits, either general purpose (like Circom) or specific to a certain platform (like Cairo). I have tried out three different languages, Circom, Halo2, and Noir, and implemented the same circuit in each to compare them. The circuit proves that the user knows the results of a private set of rock-paper-scissors matches, such that the total score matches a public output. The score is calculated as specified in Advent of Code 2022 day 2, which I used as inspiration for the problem.
Your total score is the sum of your scores for each round. The score for a single round is the score for the shape you selected (1 for Rock, 2 for Paper, and 3 for Scissors) plus the score for the outcome of the round (0 if you lost, 3 if the round was a draw, and 6 if you won).
What follows is an overview of each language, my personal impression of it, and what the rock-paper-scissors circuit looks like in it. Keep in mind I'm a beginner in all of them, so there may be errors. Please reach out if you find any so I can fix this article.
Other languages and frameworks I still have to try out are Zokrates, Cairo, SnarkyJS, Gnark, and Leo. I hope to update this post as I go through them!
Circom
I found Circom to be the best language to start learning ZKPs, as suggested by CyberPorter. To me, Circom has the ideal level of abstraction for learning: not too high level that it abstracts away some of the quirks of ZKP, and not too low level that you get bogged down in annoying details. It also has been around for quite some time (modulo how new everything ZK is!), so you can find more documentation and samples, as well as the awesome Circom workshops from 0xparc. Last but not least, there is circomlib, a library of common building blocks for Circom.
Circom ships with a Rust-based compiler CLI, plus an npm package for trusted setups and proof generation and verification. The benefit of having an npm package is that you can generate proofs in the browser, opening the door to building zk-apps. You can also autogenerate Solidity contracts for verifying proofs on-chain. Circom also lets you choose between Groth16 and PLONK as your proving engine.
As mentioned before, in Circom you work by assembling circuits, where each circuit has a set of input, internal, and output signals. Circuits are defined as templates, which can be parameterized with values known at compile time, similar to C++ function templates. Circom also has the concept of functions as reusable building blocks in the execution phase.
The main challenges in Circom are understanding when you are writing constraint-generation code versus witness-generation code, and tracking which values are known at compile time versus only at runtime. Many language constructs are only available for witness generation, like boolean or comparison operators, or can only depend on values known at compile time, such as control flow statements. Keeping the difference between constraint and witness generation in mind can also guard you against underconstrained computation bugs.
Rock-paper-scissors in Circom
The circuits for the rock-paper-scissors problem were fun to write in Circom. Since it's not possible to write conditional expressions in constraints, you need to resort to math-based tricks. As a simple example, the first circuit, which validates the input of a play by checking that it is 0, 1, or 2, looks like this:
template AssertIsRPS() {
signal input x;
signal isRP <== (x-0) * (x-1);
isRP * (x-2) === 0;
}
The code for calculating the score of a single round uses similar constructs, along with the IsEqual circuit from CircomLib:
// Returns the score for a single round, given the plays by x and y
template Round() {
signal input x, y;
signal output out;
// ensure that each input is within 0,1,2
AssertIsRPS()(x);
AssertIsRPS()(y);
// check if match was a draw
signal isDraw <== IsEqual()([x, y]);
signal diffYX <== (y+3)-x;
// y wins if y-x = 1 mod 3
signal yWins1 <== (diffYX-1) * (diffYX-4);
signal yWins <== IsZero()(yWins1);
// x wins if y-x = 2 mod 3
signal xWins1 <== (diffYX-2) * (diffYX-5);
signal xWins <== IsZero()(xWins1);
// check that exactly one of xWins, yWins, isDraw is true
// we probably can do without these constraints
signal xOrYWins <== (xWins - 1) * (yWins - 1);
xOrYWins * (isDraw - 1) === 0;
xWins + yWins + isDraw === 1;
// score is 6 if y wins, 3 if draw, 0 if x wins
// plus 1, 2, 3 depending on the choice of RPS
out <== yWins * 6 + isDraw * 3 + y + 1;
}
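To sanity-check the math tricks above, here's the same scoring logic as plain Python, compared against the plain-English rules. This is an illustrative translation of the constraints, not generated from the circuit:

```python
def round_score(x, y):
    # Direct translation of the Round() constraints; y is "our" player.
    assert x in (0, 1, 2) and y in (0, 1, 2)    # AssertIsRPS on both inputs
    diff = (y + 3) - x                          # in 1..5, avoids negatives
    is_draw = int(x == y)
    y_wins = int((diff - 1) * (diff - 4) == 0)  # y - x == 1 (mod 3)
    return y_wins * 6 + is_draw * 3 + y + 1     # outcome + shape score

# Cross-check against the rules quoted earlier: shape score is y + 1
# (rock=0, paper=1, scissors=2), outcome score is 6/3/0 for win/draw/loss.
assert round_score(0, 1) == 8   # paper beats rock: 6 + 2
assert round_score(2, 2) == 6   # scissors draw: 3 + 3
assert round_score(1, 0) == 1   # rock loses to paper: 0 + 1
```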
Finally, the outermost circuit loops through a parameterizable number of rounds n, and aggregates all scores. Note that a loop can be used here because it depends on n, which is a template parameter known at compile time.
template Game(n) {
signal input xs[n];
signal input ys[n];
signal scores[n];
signal output out;
var score = 0;
for (var i = 0; i < n; i++) {
scores[i] <== Round()(xs[i], ys[i]);
score += scores[i];
}
out <== score;
}
Halo2
Halo2 is not a language but a Rust-based framework, maintained by the ZCash team. Halo2 is specific to PLONKish, giving you very direct control of how circuits are represented in the arithmetization. This makes Halo2 very low-level, but ideal for writing highly optimized circuits.
To me, Halo2 had the steepest learning curve by far. Not just because of having to understand how PLONKish arithmetization works to build the circuits, but mostly because I found Halo2's API quite complex and its documentation hard to find. There are also few resources for learning Halo2: the best I found were the 0xparc course which provides a few really valuable code samples, as well as the examples in the main repo. You may also want to check out awesome-halo2 for updated resources.
When starting with Halo2, keep in mind that there are two flavors of the framework out there: the vanilla one maintained by ZCash, and the fork by Ethereum's Privacy and Scaling Explorations team which uses a different polynomial commitment (KZG instead of IPA) that's more friendly for Ethereum.
PLONKish arithmetization
The key to building circuits in Halo2 is to begin by designing what the underlying PLONKish matrix will look like. In PLONKish, circuits are defined over a matrix that you directly define. This matrix is composed of multiple columns, where a column may represent a public output (called instance), a private witness value (called advice), a constant value (called fixed), a flag on whether a constraint is enabled (called selector, more on this below), or a lookup value (used for lookup tables, an advanced feature).
Once you have the matrix set up, it's time to define the polynomial constraints using gates. A gate in Halo2 defines one or more constraints to be applied over a set of cells in the matrix. Each gate is controlled via a selector column: if the selector is enabled at a row, then the constraints imposed by the gate relative to that row will be enforced.
A gate can define constraints across multiple rows. For example, here's an L-shaped multiplication gate straight out of the Halo2 book:
meta.create_gate("mul", |meta| {
// To implement multiplication, we need three advice cells and a selector
// cell. We arrange them like so:
//
// | a0 | a1 | s_mul |
// |-----|-----|-------|
// | lhs | rhs | s_mul |
// | out | | |
//
// Gates may refer to any relative offsets we want, but each distinct
// offset adds a cost to the proof. The most common offsets are 0 (the
// current row), 1 (the next row), and -1 (the previous row), for which
// `Rotation` has specific constructors.
let lhs = meta.query_advice(advice[0], Rotation::cur());
let rhs = meta.query_advice(advice[1], Rotation::cur());
let out = meta.query_advice(advice[0], Rotation::next());
let s_mul = meta.query_selector(s_mul);
// Finally, we return the polynomial expressions that constrain this gate.
// For our multiplication gate, we only need a single polynomial constraint.
//
// The polynomial expressions returned from `create_gate` will be
// constrained by the proving system to equal zero. Our expression
// has the following properties:
// - When s_mul = 0, any value is allowed in lhs, rhs, and out.
// - When s_mul != 0, this constrains lhs * rhs = out.
vec![s_mul * (lhs * rhs - out)]
});
In the code sample above, a0 and a1 are advice columns, and s_mul is the selector that defines whether the mul multiplication gate will be enforced. If it is, then the value of a0 in the next row is constrained to equal the product of a0 and a1 in the current row.
In addition, Halo2 allows you to define equality constraints, where you require that a specific cell in your matrix must be equal to another. This is useful for "copying" values across your matrix, or for constraining a public instance cell to equal a specific private advice cell in order to expose a value.
As another example, you could define a circuit for proving the N-th Fibonacci number, where row i of the matrix has the values x[i-2], x[i-1], x[i]. This means you'd need three advice columns to begin with; let's call them a, b, c. Then, you'd constrain c to equal a + b in the same row using a gate, which requires adding a selector column to control it. You'd also have to set up equality constraints so that a and b are equal to b and c from the previous row. Last, in order to expose the N-th number as a public value, you'd need an instance column, constrained to equal the value of c at the last row.
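Here's a small Python sketch of that matrix and its constraints, just to make the layout concrete (the row values and constraint checks are illustrative, not Halo2 code):

```python
def fibonacci_matrix(n_rows):
    # Each row holds (a, b, c) = (x[i-2], x[i-1], x[i]), starting from 1, 1.
    rows = [(1, 1, 2)]
    for _ in range(n_rows - 1):
        a, b, c = rows[-1]
        rows.append((b, c, b + c))
    return rows

rows = fibonacci_matrix(5)

# The addition gate, enforced on every row where the selector is on:
for a, b, c in rows:
    assert c == a + b

# The equality (copy) constraints tying consecutive rows together:
for prev, cur in zip(rows, rows[1:]):
    assert cur[0] == prev[1] and cur[1] == prev[2]

# The instance column: expose only the last c as the public output.
assert rows[-1][2] == 13
```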
Chips, gadgets, and regions
The main building blocks for circuits in Halo2 are gadgets and chips. Chips are the lowest level unit. A chip will typically expose a method for configuring its gates, as well as multiple methods for assigning values to its cells during synthesis (more on this phase below). You can also build chips composed out of other chips. Gadgets, on the other hand, operate at a higher level of abstraction, hiding the underlying chips' implementation details, though you can build your circuits directly out of chips and skip gadgets altogether.
To promote reusability, you'll notice that chips always operate over relative offsets. This allows multiple chips to be grouped into regions in a circuit. Once all regions and their shapes have been defined, it is the responsibility of a floor planner to arrange these regions on the matrix, so you don't need to directly define where each chip is actually placed. However, depending on how you structure your circuits, it is perfectly possible to pack everything into a single region and not delegate placement to the planner.
Halo2 Rust API
In Halo2, similar to Circom, the main thing to keep in mind is that your code will be invoked multiple times in different circumstances: whether it is to configure the matrix, generate the constraints, create a proof, or calculate a witness.
Your circuits need to implement a specific Circuit trait that defines methods to be called throughout this lifecycle, either with concrete or unknown Values:
pub trait Circuit<F: Field> {
type Config: Clone;
type FloorPlanner: FloorPlanner;
fn without_witnesses(&self) -> Self;
fn configure(meta: &mut ConstraintSystem<F>) -> Self::Config;
fn synthesize(
&self,
config: Self::Config,
layouter: impl Layouter<F>
) -> Result<(), Error>;
}
The important bits here are configure and synthesize. The TLDR is that configure sets up the matrix shape and its gates, and synthesize calculates the witness and populates the matrix with the corresponding values. However, synthesize may also be invoked with unknown values during other stages, so you always need to work with wrapped Values. For instance, and while it may seem counter-intuitive, polynomial constraints in gates are defined during configuration, but equality constraints are set up at synthesis.
The example in the Halo2 book has a good step-by-step walkthrough of how to implement a simple circuit with a single chip, in case you need a scaffold to follow.
Rock-paper-scissors in Halo2
The RPS implementation in Halo2 is much more verbose than in Circom, so we'll only reproduce some parts here. To begin with, the matrix is set up with the following columns:
- An advice column for each of the two players, `xs` and `ys`, where the value in row `i` represents what move the player chose in round `i`.
- A third advice column `accum` that tracks the accumulated total score, so row `i` contains the sum of the scores of all rounds up to `i`.
- A public instance column, where the last cell is constrained to be equal to the value of `accum`, so only the total score is revealed and not the intermediate ones.
- A selector for enabling the single gate that validates inputs and calculates the score of each round.
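To make the column layout concrete, here is a plain-Rust sketch (not Halo2 code) of the witness table this produces for a hypothetical game of `xs = [0, 1, 2]` vs `ys = [1, 1, 0]`; the inputs and the mod-3 win rule in the helper are illustrative assumptions.

```rust
// Build the accum advice column for a sample game; each round scores
// 6 for a win of player y, 3 for a draw, plus y + 1 unconditionally.
fn accum_column(xs: &[u64], ys: &[u64]) -> Vec<u64> {
    // accum has one extra row: accum[i + 1] holds the total up to round i
    let mut accum = vec![0u64; xs.len() + 1];
    for i in 0..xs.len() {
        let (x, y) = (xs[i], ys[i]);
        // y wins when it plays the counter-move: y == x + 1 (mod 3)
        let win_bonus = if (y + 3 - x) % 3 == 1 { 6 } else if x == y { 3 } else { 0 };
        accum[i + 1] = accum[i] + win_bonus + y + 1;
    }
    accum
}

fn main() {
    let accum = accum_column(&[0, 1, 2], &[1, 1, 0]);
    // Only the last cell is exposed through the instance column
    println!("accum column: {:?}", accum); // [0, 8, 13, 20]
    assert_eq!(accum, vec![0, 8, 13, 20]);
}
```

Note that the intermediate cells (`8`, `13`) stay private advice; only the final `20` is constrained against the instance column.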
The main chip defines a custom gate that packs all constraints for each round: validating that inputs are in range, and calculating the score for the round and adding it to the accumulated total score. This chip uses the values of `xs`, `ys`, and `accum` in a row as "inputs", and "outputs" the new score in the `accum` column at the next row.
```rust
meta.create_gate("round", |meta| {
    let s = meta.query_selector(selector);
    let x = meta.query_advice(col_x, Rotation::cur());
    let y = meta.query_advice(col_y, Rotation::cur());
    let accum = meta.query_advice(col_accum, Rotation::cur());
    // We store the output in the accum column in the next row
    let out = meta.query_advice(col_accum, Rotation::next());

    // Constraints for each round
    vec![
        // out = y_wins * 6 + is_draw * 3 + y + 1 + accum
        s.clone() * (out - (y_wins.expr() * F::from(6) + is_draw.expr() * F::from(3) + y.clone() + const_val(1) + accum)),
        // x in (0,1,2)
        s.clone() * x.clone() * (x.clone() - const_val(1)) * (x.clone() - const_val(2)),
        // y in (0,1,2)
        s.clone() * y.clone() * (y.clone() - const_val(1)) * (y.clone() - const_val(2)),
    ]
});
```
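To see why these three polynomials enforce the game, here is a plain-integer sketch (no field arithmetic, no Halo2) evaluating them for a single row; the `round_constraints` helper and its sample inputs are illustrative, not part of the circuit.

```rust
// Evaluate the gate's three constraint polynomials over plain integers;
// a valid row assignment must make all of them zero.
fn round_constraints(x: i64, y: i64, accum: i64, out: i64) -> [i64; 3] {
    // Reference game rule: y wins when y == x + 1 (mod 3)
    let y_wins = if (y + 3 - x) % 3 == 1 { 1 } else { 0 };
    let is_draw = if x == y { 1 } else { 0 };
    [
        // out = y_wins * 6 + is_draw * 3 + y + 1 + accum
        out - (y_wins * 6 + is_draw * 3 + y + 1 + accum),
        // x in (0,1,2): this polynomial vanishes exactly on those values
        x * (x - 1) * (x - 2),
        // y in (0,1,2)
        y * (y - 1) * (y - 2),
    ]
}

fn main() {
    // x=0 (rock) vs y=1 (paper): y wins, so the round scores 6 + 1 + 1 = 8
    assert_eq!(round_constraints(0, 1, 0, 8), [0, 0, 0]);
    // A draw scores 3 + y + 1
    assert_eq!(round_constraints(1, 1, 0, 5), [0, 0, 0]);
    // A wrong output or an out-of-range move leaves a non-zero constraint
    assert_ne!(round_constraints(0, 1, 0, 9)[0], 0);
    assert_ne!(round_constraints(3, 1, 0, 8)[1], 0);
}
```

The range-check trick is the same one Circom uses: a degree-3 polynomial with roots at exactly the allowed values.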
The `y_wins` and `is_draw` above are `IsZero` chips defined like the following. Note that we can reuse the same selector column for all constraints, since there is no row in which we would want to enable some but disable others.
```rust
// yWins <==> (y+2-x) * (y-1-x) == 0;
let y_wins = IsZeroChip::configure(
    meta,
    |meta| meta.query_selector(selector),
    |meta| {
        let x = meta.query_advice(col_x, Rotation::cur());
        let y = meta.query_advice(col_y, Rotation::cur());
        (y.clone() + const_val(2) - x.clone()) * (y - const_val(1) - x)
    }
);
```
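It's worth convincing yourself that `(y+2-x) * (y-1-x)` really vanishes exactly when `y` wins: over the values `0..3`, the factor `y-1-x` covers `(0,1)` and `(1,2)`, and `y+2-x` covers `(2,0)`. This quick plain-Rust check runs through all nine move pairs:

```rust
fn main() {
    // Exhaustively check the IsZero input expressions against the game rules
    for x in 0i64..3 {
        for y in 0i64..3 {
            // y beats x exactly when y is the next move mod 3 (paper > rock, etc.)
            let y_wins = (y + 3 - x) % 3 == 1;
            // yWins <==> (y+2-x) * (y-1-x) == 0
            assert_eq!((y + 2 - x) * (y - 1 - x) == 0, y_wins);
            // isDraw <==> y-x == 0
            assert_eq!(y - x == 0, x == y);
        }
    }
    println!("IsZero expressions match the game rules for all 9 move pairs");
}
```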
When synthesizing the circuit, we loop over the inputs for each round, calculate the accumulated score, and assign the calculated values to our matrix. Note that, for the "execution" mode, we can directly use conditional expressions to calculate the score.
```rust
// Assign one row per round
for row in 0..N {
    let [x, y] = [xs[row], ys[row]];

    // Enable the selector for the round gate
    self.config.selector.enable(&mut region, row)?;

    // This is requiring us to add a constant column to the chip config just with zeros
    if row == 0 {
        region.assign_advice_from_constant(|| "zero", col_accum, 0, F::ZERO)?;
    }

    // Set x and y advice columns to the input values
    region.assign_advice(|| format!("x[{}]", row), col_x, row, || x)?;
    region.assign_advice(|| format!("y[{}]", row), col_y, row, || y)?;

    // Assign the is_zero chips to the same expressions defined in the gates
    // yWins <==> (y+2-x) * (y-1-x) == 0;
    let y_wins_chip = IsZeroChip::construct(y_wins.clone());
    let y_wins_value = (y + Value::known(F::from(2)) - x) * (y - Value::known(F::ONE) - x);
    let y_wins = y_wins_chip.assign(&mut region, row, y_wins_value)?;

    // isDraw <==> y-x == 0;
    let is_draw_chip = IsZeroChip::construct(is_draw.clone());
    let is_draw_value = y - x;
    let is_draw = is_draw_chip.assign(&mut region, row, is_draw_value)?;

    // Calculate the score of this round
    let round_score = y_wins.zip(is_draw).and_then(|(y_wins, is_draw)| {
        let partial_score = if y_wins { 6 } else if is_draw { 3 } else { 0 };
        Value::known(F::from(partial_score)) + y + Value::known(F::ONE)
    });

    // Assign the col_accum *in the next row* to the new score
    accum_value = accum_value + round_score;
    out_cell = region.assign_advice(
        || format!("out[{}]", row),
        col_accum,
        row + 1,
        || accum_value,
    )?;
}
```
The last bit is exposing the total score as a public output of the circuit, by constraining the instance column to match the `accum` one in the last row of the matrix:
```rust
layouter.constrain_instance(out_cell.cell(), self.config.instance, N - 1)
```
Noir
Noir, built by Aztec, is the newest language of the pack and is under active development. It is distributed as `nargo`, a Rust-based CLI, and as a set of npm packages. The npm package releases seem to lag slightly behind the crate, which has the bleeding-edge features and only got its first stable release a few days ago (early February). Both the Rust crate and the npm package can be used to compile Noir programs, generate and verify proofs, and create a verifier Solidity contract.
Noir is heavily inspired by Rust, to the point that it shares almost the same syntax. It supports integers, booleans, tuples, structs, and arrays, as well as a basic `Field` type which represents an element in the native field of the proving backend (think an unsigned int capped to a big prime of about 254 bits), and is more performant than regular integers.
Unlike Circom, Noir does not allow you to define code used only for witness generation (aka hints). Noir supports control flow statements and operators out of the box, and converts them to equivalent constraints as needed. This helps prevent any unconstrained computation bugs, at the expense of performance. That said, you can define constraints unrelated to witness generation, using the special `constrain` keyword.
Rock-paper-scissors in Noir
Noir feels a bit like cheating compared to the other languages, since it abstracts away most complexities related to ZKPs. The entire program fits in 30 lines and reads like regular Rust code:
```rust
global N: Field = 3;

#[builtin(arraylen)]
fn len<T>(_input : [T]) -> comptime Field {}

fn round(x: Field, y: Field) -> Field {
    constrain (x as u8) <= 2;
    constrain (y as u8) <= 2;
    let mut score = 0;
    let diffYX = (y + 3 - x);
    if x == y {
        score = 3;
    }
    if (diffYX == 4) | (diffYX == 1) {
        score = 6;
    }
    score + y + 1
}

fn main(xs: [Field; N], ys: [Field; N]) -> pub Field {
    let mut result = 0;
    for i in 0..len(xs) {
        result = result + round(xs[i], ys[i]);
    }
    result
}
```
It's worth mentioning that I didn't run any analysis regarding the number of constraints generated by each language, but it is expected that Noir, given its higher abstraction level, produces a higher number of constraints, which impacts proving time. Nevertheless, as the compiler matures, more optimizations should be added that get automatically incorporated into Noir-written programs.
Resources
If you got to this point and want to keep learning, here are some of the resources I used for getting up to speed with ZKPs. I encourage you to go through them if you want to learn directly from the masters!
- CyberPorter produced a set of 6 classes that provide a great overview of ZKPs, including the motivation, a mapping of the space and who's working on it, a deep dive on TornadoCash, an overview of Circom, Cairo, SnarkyJS, and Noir, and a walkthrough of the math behind Groth16.
- 0xPARC has two full courses online, one on Circom and another on Halo2. I strongly recommend starting with the Circom one, led mostly by the great Gubsheep, the guy behind the Dark Forest game. These courses include hands-on workshops that take you step by step through building several circuits. If you want to take a stab at Halo2, I'd say they are a must-see, since there are not many other resources on it.
- Vitalik has multiple blog posts on pretty much every cryptographic concept you need for understanding SNARKs and STARKs, as well as interesting uses for the tech. If you want to go deeper into the workings of ZKPs, search on his blog first.
- The Scroll team, who are building a ZK-EVM, run an excellent educational blog when it comes to ZKPs. In particular, the article on proof generation provides a great step-by-step on the whole workflow involved when creating a ZKP.
- Awesome lists are awesome, and Matter Labs, the team behind zkSync, keeps an awesome-zero-knowledge-proofs list with links to all kinds of resources.