July 9, 2025
The CLOB is dead! Long live the CLOB!
Although DeFi technically began with the launch of smart contracts on Ethereum, early traders were stuck using clunky, low-liquidity, expensive, and often insecure CLOBs (Central Limit Order Books) like 0x, EtherDelta, and IDEX. These early efforts made trading possible, but not practical.
That changed in 2018, when DeFi exploded with the rise of AMMs (Automated Market Makers). Protocols like Uniswap and Curve allowed permissionless trading without order books or counterparties, becoming the catalyst that ushered in the first wave of DeFi adoption. The ability for anyone to provide liquidity, without having any specialized knowledge or infrastructure, led to the rise of LP tokens, yield farming, and ultimately the DeFi boom of 2020.
But as is always the case in crypto, we evolve. We push limits. We want more.
Today’s traders demand fast, cheap, accurate order books, with the transparency and permissionlessness of DeFi. And now, empowered by innovation and modular infrastructure, we find ourselves coming full circle. Once again chasing the dream of running a CLOB on the blockchain.
What is a CLOB?
A Central Limit Order Book (CLOB) is the gold standard in trading. CLOBs are used by every major stock exchange and centralized crypto exchange in the world. A CLOB is a system that matches buy orders with sell orders based on price and time priority. A trader places a limit order (e.g., buy 1 ETH at $3,000) or a market order (e.g., buy 1 ETH at the best available price) and the CLOB automatically finds a match.
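To make the mechanics concrete, here is a minimal, illustrative sketch of price-time priority matching — a toy example, not any exchange's production engine:

```python
# Minimal illustrative sketch of price-time priority matching.
# A toy example only, not any exchange's production engine.
import heapq
from dataclasses import dataclass, field
from itertools import count

_seq = count()  # arrival sequence: the time-priority tie-breaker

@dataclass(order=True)
class Ask:
    price: float                                            # lowest price first
    seq: int = field(default_factory=lambda: next(_seq))    # earliest first at the same price
    qty: float = field(compare=False, default=0.0)

def match_buy(asks: list[Ask], qty: float, limit: float | None = None) -> list[tuple[float, float]]:
    """Fill a buy order against the ask heap; limit=None means a market order."""
    fills = []
    while qty > 0 and asks and (limit is None or asks[0].price <= limit):
        best = asks[0]
        traded = min(qty, best.qty)
        fills.append((best.price, traded))
        qty -= traded
        best.qty -= traded
        if best.qty == 0:
            heapq.heappop(asks)
    return fills  # any remaining qty would rest on the book as a bid

# Example: three resting asks, then a market buy for 1.5 ETH
book = []
for price, size in [(3000.0, 1.0), (3001.0, 1.0), (2999.0, 0.5)]:
    heapq.heappush(book, Ask(price=price, qty=size))
print(match_buy(book, qty=1.5))   # fills 0.5 at 2999 first, then 1.0 at 3000
```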

At any moment, the CLOB acts as a real-time snapshot of the market: every bid and every ask. It is transparent, fast, and incredibly efficient. Why, then, did DeFi ever veer away from the CLOB? In centralized systems, CLOBs are a piece of cake. Databases are fast and networks are low-latency. But when we introduce the blockchain to the equation, three basic constraints arise:
Block times and finality delays
Throughput constrictions and gas costs
Latency
These are the reasons why decentralized CLOBs struggled to succeed in the early days of DeFi. Serious traders had serious expectations around performance, and the blockchain infrastructure of the day simply could not rise to the occasion.
However, we are no longer in the early days of DeFi. The technology and infrastructure are maturing, and many believe we are finally approaching the point where a CLOB could succeed in a DeFi environment.
Why are we excited about CLOBs again?
For all of their ingenuity, AMMs do have limits.
They were a pivotal breakthrough in early DeFi that set the tone for where we are today, and they are excellent for bootstrapping liquidity. However, they are, plainly put, not optimized for performance. Slippage, impermanent loss, front-running, and unpredictable pricing are all pitfalls of the AMM model. For institutional and high-frequency traders in particular, these pitfalls are a non-starter.
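To see where the slippage comes from, consider the constant-product formula (x · y = k) behind Uniswap v2-style pools. The quick, illustrative calculation below uses made-up pool and trade sizes:

```python
# Illustrative constant-product (x * y = k) slippage calculation.
# Pool sizes and trade sizes below are made-up example numbers.

def amm_buy_eth(eth_reserve: float, usdc_reserve: float, usdc_in: float, fee: float = 0.003):
    """Return ETH received and effective price for a USDC -> ETH swap."""
    usdc_after_fee = usdc_in * (1 - fee)
    k = eth_reserve * usdc_reserve
    new_usdc = usdc_reserve + usdc_after_fee
    new_eth = k / new_usdc
    eth_out = eth_reserve - new_eth
    return eth_out, usdc_in / eth_out

spot = 3_000.0                              # quoted mid price: 3,000 USDC per ETH
pool_eth, pool_usdc = 1_000.0, 3_000_000.0  # hypothetical pool reserves

for usdc_in in (3_000, 300_000, 1_500_000):
    eth_out, avg_price = amm_buy_eth(pool_eth, pool_usdc, usdc_in)
    slippage = (avg_price - spot) / spot * 100
    print(f"spend {usdc_in:>9,} USDC -> {eth_out:8.4f} ETH, "
          f"avg price {avg_price:,.2f}, slippage {slippage:.2f}%")

# Small trades fill near spot; a trade that is large relative to the pool
# moves the price against the trader -- something a resting limit order
# on a CLOB does not do.
```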
On the other hand, CLOBs offer precision. A trader can place exact orders, manage execution strategy, and see real-time market depth. This is why the big players (Nasdaq, Binance, Coinbase, etc.) all run on a CLOB. Now, thanks to advancements in off-chain computation, rollups, and modular DA layers, crypto is finally ready to revisit the CLOB.

Ok so CLOBs are cool, but why all the fuss and excitement? Institutions are watching, and adoption doesn’t come without them. Crypto is maturing and liquidity is consolidating. DeFi needs infrastructure that can compete with centralized exchanges without becoming one itself. This is why the modern CLOB is so exciting: it’s not simply revisiting an old idea, it’s reinventing it, powered by a new stack.
Show me the CLOBs!
As of today, CLOBs on the blockchain have moved beyond theory. A handful of major players have shipped their own answers to the same core challenge: how to deliver real-time order execution on-chain. Let’s take a look at the protocols leading the charge:
Clober (clober.io)

TLDR:
Chain: EVM / MONAD
Architecture: Fully On-Chain
Differentiator: LOBSTER
Limitations: UX complexity / Liquidity fragmentation
A fully on-chain (EVM-based) CLOB decentralized exchange, Clober was designed specifically to replace the AMM and aims for transparency, composability, and MEV-resistance. The backbone of Clober is the Limit Order Book with Segment Tree for Efficient oRder-matching, more colloquially known as the LOBSTER algorithm.
In a normal CLOB environment, one taker order (a market or limit order that is generally executed instantly) could potentially be filled by multiple makers. For example, I may place a market order to purchase 3 ETH, and that order may be filled by three different makers who had each listed 1 ETH. This creates an inherent issue in DeFi: an order filled by multiple makers can quickly increase the gas cost of a transaction, and in some cases may even exceed the block gas limit. That would cause the transaction to fail even when liquidity is available.
LOBSTER side-steps this limitation by deferring settlement of taker orders. The “Total Claimable Amount” of a taker order is simply recorded, and the taker can then manually “claim” the proceeds, simplifying the transaction and reducing the cost to execute. Keeping track of the claims is the complicated part, and it is where LOBSTER does most of its work. It uses a segment tree data structure to track cumulative fills at price ticks, with updates in O(log n) time rather than linear time (a simplified sketch of this idea follows the list below). The result is accurate fulfillment of orders while keeping the gas costs of both the taker and the maker on par with a Uniswap swap transaction.
Clober further simplifies the protocol through the use of one-sided order books containing bids only, which simplifies the logic and relies on arbitrage for equilibrium. On top of this, a single smart contract called `BookManager` manages all of the markets, whereas other solutions tend to use a separate contract for each book. Maker orders are represented as Order NFTs, making the orders themselves tradable and composable. Finally, all multi-step operations are executed as atomic transactions: either the entire transaction succeeds, or it reverts entirely. In this way Clober achieves trustless matching and settlement at a manageable cost.
Despite these creative solutions, some growing pains remain:
Clober is still bound by Ethereum’s gas constraints.
More robust scaling and latency solutions will be required for true high-frequency trading.
Managing claims and cancellations manually introduces UX complexity.
Liquidity fragmentation appears across separate single-sided markets.
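To make the segment-tree idea behind LOBSTER a bit more concrete, here is a simplified, illustrative sketch in Python (emphatically not Clober’s on-chain Solidity) of tracking cumulative filled volume per price tick with O(log n) updates and prefix queries:

```python
# Simplified, illustrative sketch of the idea behind LOBSTER:
# a segment tree over price ticks tracking cumulative filled volume,
# with O(log n) updates and O(log n) prefix-sum queries.
# This is NOT Clober's actual on-chain implementation.

class FillTree:
    def __init__(self, num_ticks: int):
        self.n = num_ticks
        self.tree = [0] * (2 * num_ticks)      # iterative segment tree of sums

    def add_fill(self, tick: int, amount: int) -> None:
        """Record `amount` of volume filled at price tick `tick` (O(log n))."""
        i = tick + self.n
        while i:
            self.tree[i] += amount
            i //= 2

    def filled_up_to(self, tick: int) -> int:
        """Total volume filled across ticks [0, tick] (O(log n))."""
        total = 0
        lo, hi = self.n, self.n + tick + 1     # query range [0, tick] in leaf coordinates
        while lo < hi:
            if lo & 1:
                total += self.tree[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                total += self.tree[hi]
            lo //= 2
            hi //= 2
        return total

# Fills are recorded as they happen; claimable proceeds can later be derived
# from cumulative filled volume instead of settling every fill individually.
book = FillTree(num_ticks=8)
book.add_fill(tick=3, amount=2)   # e.g. 2 ETH filled at tick 3
book.add_fill(tick=5, amount=1)
print(book.filled_up_to(4))       # -> 2 (only the tick-3 fill so far)
print(book.filled_up_to(7))       # -> 3
```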
Dexalot (dexalot.com)

TLDR:
Chain: Avalanche
Architecture: Fully On-Chain
Differentiator: Avalanche Subnet
Limitations: Tooling and Support / Latency to and from Subnet
A fully on-chain CLOB operating in a dual-chain model within the Avalanche ecosystem, Dexalot utilizes the Avalanche C-Chain for asset custody and relies on a dedicated Avalanche subnet for order matching and execution. This model allows for high speeds, tight spreads, and minimal slippage without sacrificing DeFi’s trustlessness and transparency. The subnet is a custom EVM-based blockchain optimized for trading performance. Its validators are secured through Dexalot’s native staking token ($ALOT), which is also used for gas fees on the subnet, with future plans for it to serve as a governance token enabling community control. At launch, Dexalot boasted 10 validators, 8 of which were protocol-owned and 2 of which were run by verified community members. Validators of the subnet are also required to be validators on Avalanche’s main chain.
Dexalot aims to simplify depositing from the host chain (currently Avalanche and Arbitrum) through the integration of LayerZero. Messaging through LayerZero allows Dexalot to abstract away what is happening behind the scenes, so the end-user experiences a straightforward deposit even though their funds are being bridged from the host chain to the subnet (a simplified sketch of this lock-and-credit pattern follows the list below). Using a subnet also ensures that high-volume traffic on Dexalot does not congest Avalanche’s C-Chain. This creative use of the Avalanche ecosystem allows for an impressive CLOB implementation, yet, as always, it comes with constraints and trade-offs:
Cross-chain liquidity comes with inherent complexity. Using cross-chain messaging via LayerZero rather than traditional bridging reduces some custody risk, but it still requires explicit coordination between host chains and the subnet. Assets need to originate from one of the supported host chains (which can cause problems for aggregators), and the model relies completely on the security of cross-chain messaging. A failure or exploit here could impact funds.
Latency also exists when moving assets between the subnet and an external chain. This delay may be one to a few block confirmations on each side. Although this is not a huge delay, it still creates friction in the user experience.
Scaling and throughput are limited by the capacity of its Avalanche subnet as well as the performance of an EVM-based order book. To date, Dexalot’s subnet has handled millions of transactions; however, significant growth could quickly become constrained by gas limits and node performance. The use of Solidity contracts for order matching means that each order placed or cancelled is an on-chain transaction. This guarantees transparency, but it also means traders are subject to congestion and gas prices. In extreme market conditions, on-chain matching could quickly become a bottleneck.
The user experience of Dexalot has moments of friction. Users are required to switch networks after depositing assets in order to trade, and they must also obtain $ALOT to make use of Dexalot. Although these are not insurmountable issues, they still introduce hurdles for traders coming from platforms that never imposed them.
Tooling and support become a non-trivial issue. Any time a custom chain is used, the ecosystem around that chain will take time to mature.
A trade-off exists between decentralization and convenience. Validators are centralized, especially early on, to ensure reliability, and Dexalot relies on LayerZero cross-chain messaging. Although LayerZero has a strong security design, it is not completely atomic or trustless. Dexalot chose a more seamless product at the cost of introducing some centralized elements and new trust assumptions.
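To illustrate the deposit abstraction described above, here is a deliberately simplified, hypothetical sketch of a lock-and-credit flow between a host chain and a trading subnet. The contract names and message format are invented for illustration; this is not Dexalot’s or LayerZero’s actual interface:

```python
# Hypothetical sketch of a dual-chain deposit flow: lock on the host chain,
# relay a message, credit a trading balance on the subnet.
# Names and message format are invented for illustration only --
# not Dexalot's or LayerZero's actual interface.
from dataclasses import dataclass

@dataclass
class DepositMessage:
    user: str
    token: str
    amount: int
    nonce: int

class HostChainPortal:
    """Custodies assets on the host chain and emits cross-chain messages."""
    def __init__(self):
        self.locked: dict[tuple[str, str], int] = {}
        self.nonce = 0

    def deposit(self, user: str, token: str, amount: int) -> DepositMessage:
        key = (user, token)
        self.locked[key] = self.locked.get(key, 0) + amount
        self.nonce += 1
        return DepositMessage(user, token, amount, self.nonce)

class SubnetLedger:
    """Credits trading balances on the subnet when a relayed message arrives."""
    def __init__(self):
        self.balances: dict[tuple[str, str], int] = {}
        self.seen: set[int] = set()

    def handle_message(self, msg: DepositMessage) -> None:
        if msg.nonce in self.seen:          # replay protection
            return
        self.seen.add(msg.nonce)
        key = (msg.user, msg.token)
        self.balances[key] = self.balances.get(key, 0) + msg.amount

# The user only sees "deposit"; the relay and credit happen behind the scenes,
# which is exactly where the trust in the messaging layer comes in.
portal, ledger = HostChainPortal(), SubnetLedger()
ledger.handle_message(portal.deposit("alice", "USDC", 1_000))
print(ledger.balances[("alice", "USDC")])   # -> 1000
```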
Ekiden (ekiden.fi)

TLDR:
Chain: APTOS
Architecture: Hybrid
Differentiator: “Instant” trades
Limitations: Not composable / Added trust assumptions
A hybrid DEX built on Aptos, Ekiden combines an off-chain central limit order book with on-chain settlement. It’s designed to feel like a CEX in terms of speed and experience, but everything is non-custodial and provable on-chain.
Ekiden splits things up into three layers:
Client Layer – Your basic dApp interface and REST API. Users connect with standard Aptos wallets and stay in full control of their assets.
Off-Chain Layer – This is where the action happens. A high-speed matching engine maintains the order book and processes trades in ~5–15ms.
On-Chain Layer (Aptos) – Settlement happens here, using Move-based contracts. These include:
Vaults for storing user assets and collateral
A position registry
An orderbook verifier
A clearinghouse for final settlement and liquidation
Trades are executed off-chain and then bundled into a Merkle root that gets posted to Aptos with a proof. This means trades are instant, but still provable and auditable. If you ever want to verify a trade happened fairly, you can.
Funds never leave the user’s control. Assets are deposited into a vault where they remain while the user trades with speed off-chain and settles back on-chain. It’s fast, non-custodial, and fully verifiable.
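As a rough illustration of the pattern described above, here is a minimal sketch of committing a batch of trades to a Merkle root and checking a membership proof. It captures the general idea only, not Ekiden’s actual Move contracts, leaf encoding, or hashing scheme:

```python
# Minimal sketch of committing a trade batch to a Merkle root and verifying
# a membership proof. General pattern only -- not Ekiden's exact contracts,
# leaf encoding, or hashing scheme.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes from leaf to root for the leaf at `index`."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])        # the paired node at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[bytes], index: int, root: bytes) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# A batch of (illustrative) trade records settled as one root on-chain:
trades = [b"buy 1 ETH @ 3000", b"sell 2 ETH @ 3001", b"buy 0.5 ETH @ 2999"]
root = merkle_root(trades)
proof = merkle_proof(trades, 1)
print(verify(trades[1], proof, 1, root))      # -> True: trade 1 is in the batch
```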
Some highlights:
Trades feel instant, which is perfect for pro traders and market makers.
Merkle proofs provide an audit trail.
Built natively on Aptos, so it can plug into the wider ecosystem.
But, like those who came before, it is not without trade-offs:
The off-chain matching engine still introduces a degree of trust. You can verify what happened later, but not while it’s happening.
You can’t use currently open trades inside other smart contracts, which limits composability.
And like any off-chain system, you’re relying on someone to keep the order data available, accurate and synced.
Bullet (bullet.xyz)

TLDR:
Chain: Solana
Architecture: Enshrined
Differentiator: BulletX
Limitations: Susceptible to Solana Congestion / Withdrawal latency
A high-performance Solana-native rollup, Bullet utilizes the Sovereign SDK (a modular rollup framework) to create a custom high-throughput chain anchored to Solana. An enshrined on-chain CLOB, combined with using Solana for settlement, consensus, and data availability, gives Bullet isolated, high-speed blockspace while also inheriting Solana’s security and fast finality. Solana’s 1000+ validators effectively serve as the data availability layer for Bullet: they store the batched transaction data and can vote on rollup blocks.
To maximize speed, Bullet’s sequencer is built in native Rust. It implements a streaming transaction model, processing each tx as it arrives instead of at fixed block intervals. After execution, a soft confirmation from the sequencer updates the order book state. Every 3 seconds, batches are submitted to the L1 for inclusion in a Solana block.
Bullet allows anybody to become a sequencer by staking a bond. Malicious sequencers who post bad batches can be slashed and removed from the network. Bullet further mitigates this risk by allowing force inclusion: a user can send their tx to a special L1 inbox contract to guarantee that it is included within a fixed number of Solana slots, leveraging Solana’s base-layer censorship resistance.
Full nodes on Bullet do not vote on consensus; they only execute transactions. After a batch of Bullet transactions is posted to Solana proper, the nodes pull the batch data from Solana and re-execute all the transactions. This produces the hard final state of the rollup, but it introduces a few seconds of lag and results in significant hardware requirements for nodes.
The choice to use Solana for data availability was a conscious one. It provides high throughput and a large existing liquidity and asset base, deposits from Solana to Bullet are near instant, and the hope is to benefit from Solana’s scaling improvements in the future.
The heart of Bullet is BulletX, the on-chain CLOB for spot and derivative trading. Bullet’s CLOB and matching logic are enshrined in the rollup’s core state transition function. Every order, cancel, and trade execution is processed by the sequencer within the rollup state and later verified by full nodes just like any other transaction. The result is a matching engine that is transparent and provably correct.
Ultra-low-latency trading is achieved in this fashion, at least from the end-user’s perspective. Between the streaming tx model and an on-chain CLOB, users have the illusion of live market behavior. Confirmation occurs via the sequencer’s soft confirmation in a matter of milliseconds. Although the actual confirmation to the L1 follows later in batches, from the user’s perspective, execution is immediate.
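For intuition, here is a rough, illustrative sketch of the streaming-plus-soft-confirmation pattern described above. The structure and timings are simplified and the names are invented; this is not Bullet’s actual Rust sequencer:

```python
# Rough sketch of a streaming sequencer: execute each transaction on arrival,
# return a soft confirmation immediately, and post batches to the L1 on a
# fixed interval. Simplified structure/timings -- not Bullet's actual sequencer.
import time
from dataclasses import dataclass

@dataclass
class SoftConfirmation:
    tx_id: int
    state_root: str       # sequencer's view of the post-execution state

class StreamingSequencer:
    def __init__(self, batch_interval_s: float = 3.0):
        self.batch_interval_s = batch_interval_s
        self.pending: list[int] = []          # txs executed but not yet posted
        self.state = 0                        # toy stand-in for order-book state
        self.last_batch_time = time.monotonic()

    def submit(self, tx_id: int) -> SoftConfirmation:
        # Execute immediately as the tx arrives (streaming, no fixed blocks)...
        self.state += 1                       # stand-in for a matching-engine update
        self.pending.append(tx_id)
        # ...and hand the trader a soft confirmation right away.
        return SoftConfirmation(tx_id, state_root=f"0x{self.state:064x}")

    def maybe_post_batch(self, post_to_l1) -> None:
        # On each interval, flush pending txs to the DA layer; hard finality
        # only comes once the batch lands on the L1 and is re-executed.
        now = time.monotonic()
        if self.pending and now - self.last_batch_time >= self.batch_interval_s:
            post_to_l1(list(self.pending))
            self.pending.clear()
            self.last_batch_time = now

seq = StreamingSequencer(batch_interval_s=0.0)   # 0 for the demo; ~3 s in the description above
print(seq.submit(1))                             # soft-confirmed immediately
seq.maybe_post_batch(lambda batch: print("posted batch:", batch))
```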
So then, what are the drawbacks and concessions made with Bullet?
Data throughput is ultimately limited by its DA layer (Solana). Bullet can currently handle about 100 KB/s of data posting to Solana, equating to roughly 500 transactions per second. By optimizing batching this can be increased; however, if Bullet’s user load grows to several thousand TPS, or if data-heavy transactions are required, Solana’s bandwidth could become a long-term constraint.
Bullet is trading decentralization for latency. A single sequencer, operated by the team or a selected node, manages ordering; this is what drives the remarkable 2ms trade confirmations. The centralization is temporary in that, in the long run, anyone can spin up a sequencer with a token bond, although more complexity arises when running multiple sequencers concurrently, requiring careful coordination to avoid forks and race conditions.
Full nodes have beefy hardware requirements, which could limit the number of community members able to practically run a node. This could begin to centralize validation to well-resourced entities.
The hybrid finality (soft vs hard) means the user experiences what appears to be instant execution, while under the hood, and over time, true finality takes place. This is a non-issue for most users until they are ready to withdraw their funds: the optimistic rollup imposes a 24-hour withdrawal delay. Bullet can provide proofs sufficient to quickly satisfy the trust requirements of some dApps, but for a user to fully exit Bullet, a lengthy delay is still required. Bullet currently supplements the user experience with “Relay” intent-based bridging, which allows third parties to advance the funds and then claim them 24 hours later for an added fee.
Because Solana is used for DA, should Solana experience an outage or congestion spike, Bullet’s batch submissions could be delayed, which would directly delay hard finality.
So where does this leave us?
CLOBs are no longer theoretical. They are here and they are evolving. They are finally catching up to the needs of practical DeFi.
Each of the protocols we looked at is chasing the same goal: centralized-exchange performance with decentralized custody and execution guarantees. Some lean fully on-chain. Others embrace hybrid architectures. Some settle to Solana, some to Aptos, some to their own subnets. What is clear is the shared belief that the infrastructure has matured and the time for CLOBs is now!
A fast, composable, trustless order book is the next frontier. It’s what serious traders want. It’s what institutions need. It’s what the next stage of DeFi demands.
Building a CLOB that can truly scale? That’s not just about matching engines or clever rollups. It’s about throughput. It’s about data availability. It’s about rethinking the infrastructure layer itself. Even the best-designed CLOB will choke if the underlying chain can’t keep up.
That’s why this moment is different. We finally have all of the pieces. Fast execution layers. Modular DA. New primitives that let us decouple performance from security and cost. With the right infrastructure, we don’t just get CLOBs that work. We get CLOBs that win.
And maybe, just maybe, this time they’ll stick.