Submission Details
Severity: high
Valid

Many priority operations will be executed twice when the ERA chain is migrated to the Gateway

Summary

Many priority operations will be executed twice when the ERA chain is migrated to the Gateway

Vulnerability Details

The ERA chain is intended to be migrated to the Gateway (GW), but this chain has a special case: the priority queue.
For the ERA chain to be integrated into the ZK Chains ecosystem, the GatewayUpgrade will be applied to it.

function upgrade(ProposedUpgrade calldata _proposedUpgrade) public override returns (bytes32) {
    ...
    // Start the priority tree at the current tail of the priority queue.
    s.priorityTree.setup(s.priorityQueue.getTotalPriorityTxs());
    ...
}
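
For context, the setup helper called here essentially just records this value as the tree's start index (a minimal sketch inferred from how the rest of the code reads it; the actual library implementation may differ):

// Sketch: setup only records where the tree's leaf numbering begins.
function setup(Tree storage _tree, uint256 _startIndex) internal {
    _tree.startIndex = _startIndex;
}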

Apart from other settings, this upgrade initializes the priority tree. At the time of writing this report, the ERA diamond chain has the following data:

totalPriorityTxs (tail) = 3264282
firstUnprocessedPriorityTx (head) = 3264203

Hence, the setup on the priority tree will set the _startIndex to 3264282 in the L1 diamond proxy. Note that at that moment the queue still holds 3264282 - 3264203 = 79 pending operations, which is why new operations must keep flowing into the queue as well. Once the chain is migrated to the GW, it will migrate this _startIndex as part of the priority tree commitment:

function initFromCommitment(Tree storage _tree, PriorityTreeCommitment memory _commitment) internal {
    uint256 height = _commitment.sides.length; // Height, including the root node.
    if (height == 0) {
        revert InvalidCommitment();
    }
    _tree.startIndex = _commitment.startIndex;
    _tree.unprocessedIndex = _commitment.unprocessedIndex;
    _tree.tree._nextLeafIndex = _commitment.nextLeafIndex;
    _tree.tree._sides = _commitment.sides;
    bytes32 zero = ZERO_LEAF_HASH;
    _tree.tree._zeros = new bytes32[](height);
    for (uint256 i; i < height; ++i) {
        _tree.tree._zeros[i] = zero;
        zero = Merkle.efficientHash(zero, zero);
    }
    _tree.historicalRoots[_tree.tree.root()] = true;
}
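
To make the migrated state concrete, the commitment carried over for ERA would look roughly like this right after the upgrade (illustrative values only; the field set is inferred from how initFromCommitment reads the struct, and sides stands in for the Merkle side path of the still-empty tree):

PriorityTreeCommitment memory eraCommitment = PriorityTreeCommitment({
    startIndex: 3264282,   // tail of the L1 priority queue, set by GatewayUpgrade
    unprocessedIndex: 0,   // relative to startIndex; nothing in the tree is processed yet
    nextLeafIndex: 0,      // no leaves pushed into the tree yet at upgrade time
    sides: sides
});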

However, the data from the priority queue on the L1 diamond will not be migrated, and each time someone initiates a priority operation on the GW, the following will happen:

function _writePriorityOpHash(bytes32 _canonicalTxHash, uint64 _expirationTimestamp) internal {
    if (s.priorityTree.startIndex > s.priorityQueue.getFirstUnprocessedPriorityTx()) {
        s.priorityQueue.pushBack(
            PriorityOperation({
                canonicalTxHash: _canonicalTxHash,
                expirationTimestamp: _expirationTimestamp,
                layer2Tip: uint192(0) // TODO: Restore after fee modeling will be stable. (SMA-1230)
            })
        );
    }
    s.priorityTree.push(_canonicalTxHash);
}
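
The following self-contained sketch models that dual-write condition with the concrete ERA numbers (plain counters stand in for the real queue and tree; all names are hypothetical):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Minimal model of the dual write in _writePriorityOpHash on the GW.
contract DualWriteModel {
    uint256 public treeStartIndex = 3264282; // migrated via the tree commitment
    uint256 public queueHead = 0;            // queue state is NOT migrated, so it starts at 0
    uint256 public queueSize;                // ops accumulated in the legacy queue
    uint256 public treeSize;                 // ops accumulated in the priority tree

    function writePriorityOp() external {
        // Mirrors: startIndex > getFirstUnprocessedPriorityTx(), i.e. 3264282 > 0.
        if (treeStartIndex > queueHead) {
            queueSize++; // pushBack into the legacy queue
        }
        treeSize++;      // push into the priority tree (unconditional)
    }
}

Every call performs both writes until queueHead, which only advances on execution, reaches 3264282, so each of those operations ends up stored, and later executable, twice.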

Since the startIndex will be 3264282 and the firstUnprocessedPriorityTx 0 (because the queue is not initialized on the GW), every new priority transaction will be added to both data structures, the priority queue and the tree. Specifically, 3264282 operations will be needed before transactions are saved only on the priority tree. When these priority operations are executed, the following will happen:

function executeBatchesSharedBridge(
    uint256, // _chainId
    uint256 _processFrom,
    uint256 _processTo,
    bytes calldata _executeData
) external nonReentrant onlyValidator chainOnCurrentBridgehub {
    ...
    for (uint256 i = 0; i < nBatches; i = i.uncheckedInc()) {
        if (s.priorityTree.startIndex <= s.priorityQueue.getFirstUnprocessedPriorityTx()) {
            _executeOneBatch(batchesData[i], priorityOpsData[i], i);
        } else {
            if (priorityOpsData[i].leftPath.length != 0) {
                revert PriorityOpsDataLeftPathLengthIsNotZero();
            }
            if (priorityOpsData[i].rightPath.length != 0) {
                revert PriorityOpsDataRightPathLengthIsNotZero();
            }
            if (priorityOpsData[i].itemHashes.length != 0) {
                revert PriorityOpsDataItemHashesLengthIsNotZero();
            }
            _executeOneBatch(batchesData[i], i);
        }
        emit BlockExecution(batchesData[i].batchNumber, batchesData[i].batchHash, batchesData[i].commitment);
    }
    ...
}

As previously explained, the startIndex will be 3264282 and the firstUnprocessedPriorityTx 0, so batches will be executed against the priority queue for the upcoming 3264282 priority transactions.
Even though the protocol intends the priority tree to become the main data structure for priority operations, the priority queue would work fine until 3264282 operations have been executed. Just after that point is reached, the executor will stop executing operations from the priority queue and will start executing from the priority tree. The problem is that, since operations were being added to both data structures at the same time, the executor will be forced to re-execute the same 3264282 operations.
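
A second self-contained sketch models the executor side and the flip (again with hypothetical names):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Model of the executor's flip from the legacy queue to the priority tree.
contract ExecutionFlipModel {
    uint256 public constant TREE_START_INDEX = 3264282; // migrated from L1
    uint256 public queueHead;       // first unprocessed queue op; starts at 0 on the GW
    uint256 public treeUnprocessed; // leaves consumed from the tree after the flip

    function executeOne() external {
        // Mirrors the branch in executeBatchesSharedBridge.
        if (TREE_START_INDEX <= queueHead) {
            // Tree path: leaves 0..3264281 hold the very operations already
            // executed from the queue, so from here the chain replays them.
            treeUnprocessed++;
        } else {
            // Queue path: taken for the first 3264282 executions.
            queueHead++;
        }
    }
}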

Impact

High. The executor will force the chain to re-execute the same priority operations, which can involve double spending on the ERA chain.

Tools Used

Manual review

Recommendations

This has no trivial solution, but from my point of view, priority operations should only be inserted into a single data structure, either the priority queue or the tree; otherwise, the priority tree will force the re-execution of the same operations.
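
One possible direction (a sketch only, using the queue's head and tail fields as an assumption about its storage layout): when a chain initializes on the GW, align the empty legacy queue with the migrated tree start, so the dual-write condition in _writePriorityOpHash is false from the very first GW operation.

// Hypothetical one-time initialization hook, run when the chain lands on the GW.
function _initLegacyQueueOnGateway(uint256 _treeStartIndex) internal {
    // Mark the legacy queue as fully drained up to the migrated start index, so
    // startIndex > getFirstUnprocessedPriorityTx() never holds on the GW and new
    // operations are recorded in the priority tree only.
    s.priorityQueue.head = _treeStartIndex;
    s.priorityQueue.tail = _treeStartIndex;
}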

Updates

Lead Judging Commences

inallhonesty Lead Judge 5 months ago
Submission Judgement Published
Validated
Assigned finding tags:

Priority operations will be executed twice when the ERA chain is migrated to the gateway
