Commit ea2d338

zkEVM review - main state machine

2 parents d34a0fd + 5433eec

2 files changed: +22 −26 lines changed
Lines changed: 13 additions & 15 deletions
@@ -1,24 +1,23 @@
-The Polygon CDK Validium is one of two configuration options of the Polygon CDK, the other being the Polygon zkEVM rollup.
+The Polygon CDK validium is one of two configuration options of the Polygon CDK, the other being the Polygon zkEVM rollup.
 
-As per [definition](https://ethereum.org/developers/docs/scaling/validium), the Polygon CDK Validium uses validity proofs to enforce integrity of state transitions, but it does not store transaction data on the Ethereum network.
+As per [the definition](https://ethereum.org/developers/docs/scaling/validium), the Polygon CDK validium uses validity proofs to enforce the integrity of state transitions, but it does not store transaction data on the Ethereum network.
 
-The Polygon CDK Validium is in fact a _zero-knowledge validium_ (zkValidium) because it utilises the Polygon zkEVM's off-chain Prover to produce zero-knowledge proofs, which are published as _validity proofs_.
+The Polygon CDK validium is in fact a _zero-knowledge validium_ (zkValidium) because it utilizes the Polygon zkEVM's off-chain prover to produce zero-knowledge proofs, which are published as _validity proofs_.
 
-The use of the above-mentioned Prover, to a certain extent, adds trustlessness to the Polygon CDK Validium.
+The use of the above-mentioned prover adds, to a certain extent, trustlessness to the Polygon CDK validium.
 
-The validium option inherits, not just the Prover, but all the Polygon zkEVM's components and their functionalities, except that it does not publish transaction data on L1.
+The validium mode inherits not just the prover but all of the Polygon zkEVM's components and their functionalities, except that it does not publish transaction data on L1.
 
 The validium configuration has one major advantage over the zkEVM rollup option: reduced gas fees, since transaction data is stored off-chain and only a hash of it gets stored on the Ethereum network.
 
+## Data availability committee (DAC)
 
-### Data availability committee (DAC)
-
-In relation to storing transaction data off-chain, comes with it the requirement to manage the data.
+Storing transaction data off-chain comes with the requirement to manage that data.
 
 - First of all, the transaction data is not published to the L1 but only the hash of the data.
-- Secondly, a trusted-sequencer collects transactions from the Pool DB, puts them into batches and computes the hash of the transaction data.
+- Secondly, a trusted sequencer collects transactions from the pool DB, puts them into batches, and computes the hash of the transaction data.
 
-It is due to the above two points that the Polygon CDK Validium has to have a set of _trusted actors_, who can monitor and even authenticate the hash values that the Sequencer proposes to be published on the L1.
+It is due to these two points that the Polygon CDK validium has to have a set of _trusted actors_, who can monitor and even authenticate the hash values that the sequencer proposes to be published on the L1.
 
 The hash values need to be verified as true _footprints_ of the transaction data corresponding to all transactions in the sequenced batches.
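The hash "footprint" described above can be sketched in a few lines. This is an illustrative sketch only: the real sequencer uses keccak256 and the protocol's own batch-encoding rules, while `sha256` and the function name `batch_footprint` are stand-ins introduced here.

```python
import hashlib

def batch_footprint(batch_txs: list[bytes]) -> bytes:
    """Hash the concatenated transaction data of a batch.

    Illustrative only: the actual CDK validium uses keccak256 and its
    own batch encoding; sha256 stands in for simplicity.
    """
    data = b"".join(batch_txs)
    return hashlib.sha256(data).digest()

# Only this 32-byte footprint would be posted to L1; the full
# transaction data stays off-chain with the DAC.
footprint = batch_footprint([b"tx1-raw-bytes", b"tx2-raw-bytes"])
assert len(footprint) == 32
```

Verifying a footprint then amounts to recomputing the hash from the off-chain data and comparing it with the value posted on L1.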

@@ -28,14 +27,13 @@ After verifying the proposed hash values individually, each DAC member signs the
 
 The sequencer uses a _multi-sig_, which is a custom-specified _m-out-of-n multi-party protocol_, to attach the required _m_ signatures to the hash of the transaction data. The _multi-sig_ contract lives on the L1 network.
 
-Architecturally speaking, the Polygon CDK Validium is therefore nothing but a zkEVM with a DAC. That is, **Polygon CDK Validium = Polygon zkEVM + DAC**.
-
+Architecturally speaking, the Polygon CDK validium is therefore simply a zkEVM with a DAC. That is, **Polygon CDK validium = Polygon zkEVM + DAC**.
 
-### CDK Validium's data flow
+## Validium data flow
 
 The DAC works together with the sequencer to control the flow of data and state changes.
 
-The below diagram depicts a simplified outline of the Polygon CDK Validium architecture. It particularly shows how the DAC and the sequencer relate in the overall data flow.
+The diagram below depicts a simplified outline of the Polygon CDK validium architecture. In particular, it shows how the DAC and the sequencer relate in the overall data flow.
 
 ![CDK validium data availability dataflow](../../img/cdk/cdk-val-dac-02.png)
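The _m-out-of-n_ approval rule can be sketched as a simple membership count. The member names and the function `enough_signatures` are hypothetical; the real L1 contract recovers each signer's address from an ECDSA signature over the batch hash rather than comparing member identifiers directly.

```python
def enough_signatures(signers: set[str], committee: set[str], m: int) -> bool:
    """m-out-of-n check: at least m distinct committee members
    must have endorsed the batch hash.

    Sketch only -- the on-chain multi-sig contract works with
    recovered signer addresses, not plain identifiers.
    """
    valid = signers & committee  # ignore signatures from non-members
    return len(valid) >= m

committee = {"dac1", "dac2", "dac3", "dac4", "dac5"}  # hypothetical, n = 5
assert enough_signatures({"dac1", "dac3", "dac4"}, committee, m=3)
assert not enough_signatures({"dac1", "outsider"}, committee, m=3)
```

The choice of _m_ relative to _n_ trades liveness against trust: a larger _m_ requires broader committee agreement before a batch hash can settle.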

@@ -47,4 +45,4 @@ The entire process can be broken down as follows:
 4. **Signature generation**: Each DAC node generates a signature for each batch hash. This serves as an endorsement of the batch's integrity and authenticity.
 5. **Communication with Ethereum**: The sequencer collects the DAC members' signatures and the original batch hash, and submits them to the Ethereum network for verification.
 6. **Verification on Ethereum**: A designated _multi-sig_ smart contract on Ethereum verifies the submitted signatures against each DAC member's known address, and confirms that sufficient approval has been provided for the batch hash.
-7. **Final settlement with zero-knowledge proof**: The aggregator prepares a proof for the batch via the Prover and submits it to the Ethereum network. This proof confirms the validity of the transactions in the batch without revealing transaction details. The chain's state gets updated on Ethereum.
+7. **Final settlement with zero-knowledge proof**: The aggregator prepares a proof for the batch via the prover and submits it to the Ethereum network. This proof confirms the validity of the transactions in the batch without revealing transaction details. The chain's state gets updated on Ethereum.
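The signature-generation and verification steps above (4 through 6) can be sketched end to end. Everything here is a toy stand-in: `sign` mimics an ECDSA signature with a hash tag, the committee members are hypothetical, and `m = 2` is an arbitrary threshold.

```python
import hashlib

def sign(member: str, batch_hash: bytes) -> tuple[str, bytes]:
    # Toy stand-in for an ECDSA signature over the batch hash.
    return member, hashlib.sha256(member.encode() + batch_hash).digest()

def verify(sig: tuple[str, bytes], committee: set[str], batch_hash: bytes) -> bool:
    # Toy stand-in for on-chain verification of a member's endorsement.
    member, tag = sig
    expected = hashlib.sha256(member.encode() + batch_hash).digest()
    return member in committee and tag == expected

batch_hash = hashlib.sha256(b"raw-batch-data").digest()   # hash of the batch data
committee = {"dac1", "dac2", "dac3"}                      # hypothetical DAC members
signatures = [sign(m, batch_hash) for m in sorted(committee)]  # step 4
# Steps 5-6: the sequencer submits (signatures, batch_hash); the contract
# accepts once at least m valid endorsements are present (here m = 2).
approved = sum(verify(s, committee, batch_hash) for s in signatures) >= 2
assert approved
```

Step 7 is independent of this flow: the validity proof attests to the correctness of the state transition itself, while the DAC signatures attest only to data availability.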

docs/cdk/architecture/cdk-zkevm-option.md

Lines changed: 9 additions & 11 deletions
@@ -2,25 +2,23 @@ Polygon zkEVM is a zero-knowledge rollup (or zk-rollup) designed to emulate the
 
 It is a scaling solution for Ethereum, as it rolls up many transactions into one batch.
 
-These batches are submitted to the L1, where their integrity is proved and verified before being included in the L1 State.
+These batches are submitted to the L1, where their integrity is proved and verified before being included in the L1 state.
 
 Proving, verification of batches, and state changes are all controlled by smart contracts.
 
-Most important to understand, is the primary path taken by transactions from when users submit these transactions to the zkEVM network up until they are finalized and incorporated in the L1 State.
-
-Polygon zkEVM achieves this by utilising several actors. The below diagram depicts the various actors and how they interact.
+Most important to understand is the primary path taken by transactions from when users submit these transactions to the zkEVM network up until they are finalized and incorporated in the L1 state.
 
+Polygon zkEVM achieves this by utilizing several actors. The diagram below depicts the various actors and how they interact.
 
 ![zkEVM option architecture](../../img/cdk/cdk-zkevm-arch-overview.png)
 
-
 Here is an outline of the most prominent rollup components:
 
-- The **Users**, who connect to the zkEVM network by means of an **RPC** node (e.g., MetaMask), submit their transactions to a database called Pool DB.
+- The **users**, who connect to the zkEVM network by means of an RPC node (e.g., Infura or Alchemy), submit their transactions to a database called the pool DB.
 - The **pool DB** is the storage for transactions submitted by users. These are kept in the pool waiting to be put in a batch by the sequencer.
-- The **Sequencer** is a node responsible for fetching transactions from Pool DB, checks if the transactions are valid, then puts valid ones into a batch. The Sequencer submits all batches to the L1 and then sequences the batches. By so doing, proposing the sequence of batches to be included in the L1 State.
-- The **State DB** is a database for permanently storing state data (but not the Merkle trees).
-- The **Synchronizer** is the component that updates the State DB by fetching data from Ethereum through the Etherman.
+- The **sequencer** is a node responsible for fetching transactions from the pool DB, checking if the transactions are valid, then putting valid ones into a batch. The sequencer submits all batches to the L1 and then sequences them, thereby proposing the sequence of batches to be included in the L1 state.
+- The **state DB** is a database for permanently storing state data (but not the Merkle trees).
+- The **synchronizer** is the component that updates the state DB by fetching data from Ethereum through the Etherman.
 - The **Etherman** is a low-level component that implements methods for all interactions with the L1 network and smart contracts.
-- The **Aggregator** is another node whose role is to produce proofs attesting to the integrity of the sequencer's proposed state-change. These proofs are zero-knowledge proofs (or ZK-proofs) and the Aggregator employs a cryptographic component called the Prover for this purpose.
-- The **Prover** is a complex cryptographic tool capable of producing ZK-proofs of hundreds of batches, and aggregating these into a single ZK-proof which is published as the validity proof.
+- The **aggregator** is another node whose role is to produce proofs attesting to the integrity of the sequencer's proposed state change. These proofs are zero-knowledge proofs (ZK-proofs), and the aggregator employs a cryptographic component called the prover for this purpose.
+- The **prover** is a complex cryptographic tool capable of producing ZK-proofs of hundreds of batches and aggregating them into a single ZK-proof, which is published as the validity proof.
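The sequencer's fetch-validate-batch step described in the component list can be sketched as follows. This is a toy model under stated assumptions: `Tx`, `PoolDB`, `make_batch`, and the `valid` flag (standing in for signature and balance checks) are all hypothetical names, not the real CDK API.

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    sender: str
    nonce: int
    valid: bool = True  # stand-in for real signature/nonce/balance checks

@dataclass
class PoolDB:
    """Toy pool DB holding transactions submitted by users."""
    txs: list[Tx] = field(default_factory=list)

    def fetch(self) -> list[Tx]:
        # Drain pending transactions, as the sequencer would.
        pending, self.txs = self.txs, []
        return pending

def make_batch(pool: PoolDB, max_size: int = 2) -> list[Tx]:
    """Toy sequencer step: pull pending txs, keep only valid ones,
    and cut a batch of at most max_size transactions."""
    valid = [tx for tx in pool.fetch() if tx.valid]
    return valid[:max_size]

pool = PoolDB([Tx("a", 0), Tx("b", 0, valid=False), Tx("c", 1)])
batch = make_batch(pool)
assert [tx.sender for tx in batch] == ["a", "c"]  # invalid tx dropped
```

In the real pipeline this batch would then be encoded, hashed, and sequenced on L1, with the aggregator and prover attesting to the resulting state change.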

0 commit comments