Fair Sequencing

02/22/24 · 5 min read

by Mantle


Authors: Mantle Research Team (Franck Cassez, Aodhgan Gleeson & Andreas Penzkofer)

In this article we introduce an architecture and protocol for designing a fair sequencer. In contrast to decentralised sequencing, which is a choice of implementation (how the system is built), fair sequencing is a specification: provide censorship resistance and protect users from MEV attacks (as much as possible). Fair sequencing does not preclude any choice of implementation, and in this article we propose a fair sequencer (architecture and protocol) that leverages Verifiable Random Functions (VRFs) to ensure fair ordering in the execution of transactions.

For a detailed description of the architecture, the protocol and its properties we refer to this article.

Background and motivation

Rollups (L2s) offer a solution for enhancing transaction throughput by executing transactions off-chain in batches before posting the results to Layer 1 (L1) or a data availability layer. However, the current L2 architectures heavily rely on centralised sequencers, resulting in significant risks:

  • Single Point of Failure: The centralised sequencer poses a significant risk, as any malfunction or malicious behavior could disrupt the entire rollup operation.
  • Absolute Control: With the sequencer controlling transaction selection and ordering, the rollup becomes vulnerable to censorship, manipulation, and MEV attacks.

Serious efforts to decentralise the sequencer (e.g. Espresso, Astria) have been undertaken, aiming to address some of these issues. While the single point of failure is removed, it is less clear whether MEV attacks and censorship resistance are fully addressed.

Commonly in software development we would like to separate concerns. We question whether, in order to address the above issues, we can identify a simple, light-weight and more modular approach that does not require a full-fledged consensus protocol. A light-weight solution may also improve latency, which is of great interest in rollup solutions.

Our proposed architecture and protocol open up a great opportunity for capturing value within the protocol rather than losing it to external actors:

  • With MEV auctions, value is extracted from the system by external actors (block builders, proposers, relayers); in contrast, in our proposal value is captured by the core protocol operators (pool manager, sequencer, executor) - see Figure 1 below.
  • First-Come-First-Serve (FCFS) policies incentivise latency races, see e.g. Offchain Labs. Adding a probabilistic dimension to a FCFS policy reduces these incentives and thus captures value otherwise lost to latency infrastructure.
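To make the second point concrete, here is a minimal sketch (not the protocol's actual policy) of what adding a probabilistic dimension to FCFS could look like: each transaction's arrival time is blurred by a small random offset, so shaving microseconds of latency no longer guarantees an earlier slot. The function name and jitter parameter are illustrative assumptions.

```python
import random

def probabilistic_fcfs(txs, jitter=0.05, seed=None):
    """Order transactions by arrival time plus a small random offset.

    txs: list of (tx_id, arrival_time_seconds) pairs.
    jitter: maximum random offset added to each arrival time; latency
            differences smaller than the jitter no longer decide ordering.
    seed: optional seed for reproducibility (e.g. a VRF output).
    """
    rng = random.Random(seed)
    return [tx_id for tx_id, t in
            sorted(txs, key=lambda p: p[1] + rng.uniform(0, jitter))]

# "a" and "b" arrived ~2ms apart and may swap; "c" arrived much
# later than the jitter window, so it reliably stays last.
txs = [("a", 0.010), ("b", 0.012), ("c", 0.500)]
print(probabilistic_fcfs(txs, seed=1))
```

Because sub-jitter latency differences no longer pay off, the incentive to invest in latency infrastructure shrinks accordingly.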

Overall, the retained value can be reinvested into the protocol, bringing down user fees, increasing security and efficiency.

What is fair sequencing?

Fair sequencing has already received some attention in the past. Some prominent Web3 actors like ChainLink may be credited with introducing the concept of a fair sequencing service. This approach has been refined into PROF (protected order flow) to integrate with profit-seeking MEV strategies, targeting the L1 proposer-builder separation (PBS) architecture.

Fair sequencing aims to address issues of censorship resistance and MEV attacks by ensuring a transparent and unbiased process for ordering transactions in batches. Our approach draws inspiration from concepts like ChainLink's fair sequencing service and PROF, which have been adapted for Layer 2 (L2) sequencing.

Architecture & protocol for fair sequencing

Figure 1: Overview of architecture & protocol

Our proposed solution for fair sequencing leverages verifiable random functions (VRFs) and proofs (zero-knowledge/ZK and Merkle) to achieve provable fairness. Our modular architecture (Figure 1) offers three essential properties:

  • Censorship Resistance: Users submit transactions to a mempool (not necessarily centralised). The mempool manager builds and schedules batches, while users receive receipts indicating the batch number for their transactions. This ensures that transactions are immune to censorship once they reach the mempool, backed by cryptoeconomic security (the mempool manager has to stake some assets).
  • Fair Ordering: The fair sequencer requests a random permutation of the transactions received from the mempool manager using a VRF oracle, ensuring fairness in the ordering process. The order of transactions in a batch is thus pseudo-random, eliminating predictability and preventing MEV attacks. VRFs can be provided by oracles; currently Chainlink and Supra offer VRF as a service, and our design can use either.
  • Correctness of Execution: The executor processes the permuted batch and publishes the block to the data availability (DA) layer or Layer 1 (L1). Any discrepancy in execution can lead to slashing of the executor, ensuring accountability. The mechanism ensures that transactions are executed in the proposed random order, ensuring integrity and fairness.
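The fair-ordering step can be sketched as follows: once the oracle's VRF output has been verified against its public key (verification is out of scope here), the output seeds a deterministic Fisher-Yates shuffle, so anyone holding the same output can re-derive, and thus audit, the exact ordering the sequencer and executor must follow. The function names and the placeholder VRF bytes are illustrative assumptions, not the protocol's actual interfaces.

```python
import hashlib
import random

def permute_batch(batch, vrf_output):
    """Deterministically shuffle a batch using a VRF output as seed.

    The VRF output is assumed to have been verified separately against
    the oracle's public key; here it only seeds the shuffle, making the
    permutation reproducible by any verifier holding the same output.
    """
    seed = int.from_bytes(hashlib.sha256(vrf_output).digest(), "big")
    rng = random.Random(seed)
    permuted = list(batch)
    rng.shuffle(permuted)  # Fisher-Yates shuffle under the hood
    return permuted

batch = ["tx1", "tx2", "tx3", "tx4"]
vrf_out = bytes.fromhex("aa" * 32)  # placeholder for a real VRF output
# Same batch + same VRF output => same permutation, so the order is auditable.
assert permute_batch(batch, vrf_out) == permute_batch(batch, vrf_out)
```

The key property is that the sequencer cannot choose the ordering: it is fully determined by the batch and the (unpredictable but verifiable) VRF output.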

Modular Design and Accountability

The proposed architecture divides the sequencing process into three distinct steps, each handled by a separate actor (mempool manager, sequencer and executor). This provides modularity and accountability.

By publishing proofs to a DA layer, malicious actors can be identified and penalised, ensuring attributability. Each component by itself can be constructed in a distributed fashion to avoid single points of failure, without the need for more elaborate protocols such as consensus engines. Each component is light-weight, and low-latency processing can be achieved.
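One simple way to picture this attributability is via commitments: if the sequencer publishes a Merkle root of the permuted batch to the DA layer, the executor's published block can be checked against the same commitment, and any deviation in order is detectable and attributable. The sketch below uses a plain binary Merkle tree for illustration; the protocol's actual commitment scheme may differ.

```python
import hashlib

def merkle_root(txs):
    """Compute a simple binary Merkle root over transaction strings."""
    level = [hashlib.sha256(tx.encode()).digest() for tx in txs]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Sequencer commits to the permuted batch on the DA layer; the block the
# executor publishes is checked against the same commitment.
committed = merkle_root(["tx2", "tx4", "tx1", "tx3"])
executed  = merkle_root(["tx2", "tx4", "tx1", "tx3"])
assert committed == executed  # a mismatch would identify the executor as faulty
```

Because the root depends on the order of the leaves, even a correct set of transactions executed in the wrong order produces a different commitment.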

Conclusion

In our proposal, the sequencing and processing of transactions is divided into three steps carried out by separate actors (separation of concerns):

  • the mempool manager is in charge of building and extracting batches in the order they are received,
  • the (fair) sequencer requests a random permutation and its proof, and computes the permuted batch,
  • the executor processes the batch and builds the blocks.

The actors have distinct tasks and are separately accountable for each of them, which makes our design highly modular. Publishing the results of each task to the DA layer enables us to identify malicious actors and slash them if they misbehave, thus ensuring attributability (identifying who to blame in case of misbehaviour).
