Submission Details
Severity: high
Valid

Priority operation re-execution for the ERA chain on GW and L1 when migrating from GW to L1

Summary

If the ERA chain decides to migrate its settlement layer from the GW back to L1, the same priority operations will be executed on both the GW and L1, which is not the intended behavior.

Vulnerability Details

The ERA chain is planned for migration to the Gateway (GW), but it presents a unique scenario due to the priority queue. To integrate the ERA chain into the ZK Chains ecosystem, the GatewayUpgrade will be applied. The following code snippet outlines this upgrade:

function upgrade(ProposedUpgrade calldata _proposedUpgrade) public override returns (bytes32) {
    ...
    s.priorityTree.setup(s.priorityQueue.getTotalPriorityTxs());
    ...
}

This upgrade performs several configurations, including initializing the priority tree. As of this report, the ERA diamond chain has the following data:

totalPriorityTxs (tail) = 3264282
firstUnprocessedPriorityTx (head) = 3264203

Let us assume, however, that before the migration all priority operations have been executed, so tail and head are equal.
Thus, the priority tree setup will set _startIndex to 3264282 in the diamond proxy on Layer 1 (L1). Upon migration to the GW, this _startIndex is carried over through the priority tree commitment as follows:

function initFromCommitment(Tree storage _tree, PriorityTreeCommitment memory _commitment) internal {
    uint256 height = _commitment.sides.length; // Height, including the root node.
    if (height == 0) {
        revert InvalidCommitment();
    }
    _tree.startIndex = _commitment.startIndex;
    _tree.unprocessedIndex = _commitment.unprocessedIndex;
    _tree.tree._nextLeafIndex = _commitment.nextLeafIndex;
    _tree.tree._sides = _commitment.sides;
    bytes32 zero = ZERO_LEAF_HASH;
    _tree.tree._zeros = new bytes32[](height);
    for (uint256 i; i < height; ++i) {
        _tree.tree._zeros[i] = zero;
        zero = Merkle.efficientHash(zero, zero);
    }
    _tree.historicalRoots[_tree.tree.root()] = true;
}

However, the data from the priority queue in the L1 diamond will not be transferred, and every time a priority operation is initiated on the GW, the following process will occur:

function _writePriorityOpHash(bytes32 _canonicalTxHash, uint64 _expirationTimestamp) internal {
    if (s.priorityTree.startIndex > s.priorityQueue.getFirstUnprocessedPriorityTx()) {
        s.priorityQueue.pushBack(
            PriorityOperation({
                canonicalTxHash: _canonicalTxHash,
                expirationTimestamp: _expirationTimestamp,
                layer2Tip: uint192(0) // TODO: Restore after fee modeling will be stable. (SMA-1230)
            })
        );
    }
    s.priorityTree.push(_canonicalTxHash);
}

Since startIndex is 3264282 and firstUnprocessedPriorityTx is 0 (it is never initialized on the GW), a large number of priority transactions will be added to both the priority queue and the priority tree: 3264282 operations must be processed before firstUnprocessedPriorityTx catches up with startIndex. When these priority operations are executed, the following takes place (a toy sketch of both code paths follows the snippet below):

function executeBatchesSharedBridge(
    uint256, // _chainId
    uint256 _processFrom,
    uint256 _processTo,
    bytes calldata _executeData
) external nonReentrant onlyValidator chainOnCurrentBridgehub {
    ...
    for (uint256 i = 0; i < nBatches; i = i.uncheckedInc()) {
        if (s.priorityTree.startIndex <= s.priorityQueue.getFirstUnprocessedPriorityTx()) {
            _executeOneBatch(batchesData[i], priorityOpsData[i], i);
        } else {
            if (priorityOpsData[i].leftPath.length != 0) {
                revert PriorityOpsDataLeftPathLengthIsNotZero();
            }
            if (priorityOpsData[i].rightPath.length != 0) {
                revert PriorityOpsDataRightPathLengthIsNotZero();
            }
            if (priorityOpsData[i].itemHashes.length != 0) {
                revert PriorityOpsDataItemHashesLengthIsNotZero();
            }
            _executeOneBatch(batchesData[i], i);
        }
        emit BlockExecution(batchesData[i].batchNumber, batchesData[i].batchHash, batchesData[i].commitment);
    }
    ...
}

As mentioned earlier, startIndex will be 3264282 while firstUnprocessedPriorityTx will be 0. Consequently, batch execution on the GW will keep consuming the priority queue for the next 3264282 priority transactions. Because none of these executions increments the priority tree's unprocessedIndex, if the ERA chain is later migrated back to L1 this index will still be 0, even though some priority operations have already been executed. L1 will therefore require the priority operations that were already executed on the GW to be executed again, because they are registered in the priority tree while unprocessedIndex is 0. Note that this happens because on the GW s.priorityQueue.getFirstUnprocessedPriorityTx() was 0, whereas on L1 it was 3264282.
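
To make the two code paths above concrete, below is a minimal, self-contained sketch using plain counters and purely illustrative names (it is not the actual ZKsync code): while startIndex is ahead of the queue head, _writePriorityOpHash pushes every new operation into both structures, and executeBatchesSharedBridge drains the queue without ever advancing the tree's unprocessedIndex, which is exactly the value handed back to L1 on migration:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Toy model of the GW behaviour described above; all names are illustrative.
contract PrioritySyncSketch {
    uint256 public treeStartIndex = 3264282; // startIndex migrated from L1 via the tree commitment
    uint256 public queueHead = 0;            // firstUnprocessedPriorityTx on the GW (never migrated)
    uint256 public queueTail = 0;            // operations waiting in the legacy queue
    uint256 public treeLeaves = 0;           // operations pushed into the priority tree
    uint256 public treeUnprocessedIndex = 0; // tree cursor; this is what goes back to L1

    // Mirrors _writePriorityOpHash: the operation always goes to the tree, and also to
    // the queue while startIndex > firstUnprocessedPriorityTx.
    function writePriorityOp() external {
        if (treeStartIndex > queueHead) {
            queueTail += 1;
        }
        treeLeaves += 1;
    }

    // Mirrors executeBatchesSharedBridge: while the queue is still ahead, execution
    // consumes the queue and leaves the tree cursor untouched.
    function executeBatch(uint256 priorityOpsInBatch) external {
        if (treeStartIndex > queueHead) {
            queueHead += priorityOpsInBatch; // queue head advances; the tree cursor does NOT
        } else {
            treeUnprocessedIndex += priorityOpsInBatch;
        }
    }
}

Until queueHead reaches 3264282, every executed batch leaves treeUnprocessedIndex at 0; a migration back to L1 in that window therefore reports none of the executed operations as processed.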

Impact

High. Priority operations will be forced to be re-executed, leading to serious problems such as double spending.

Tools Used

Manual review

Recommendations

While the ERA chain settles on the GW, priority operations are pushed into both data structures, so I would increment the priority tree's unprocessedIndex regardless of which data structure the operations are executed from. Since the operations are duplicated in both structures, once the queue has been drained the tree's unprocessedIndex will already point to the correct next priority operation to process. With unprocessedIndex kept up to date, it will be correct when migrating back to L1, and L1 will process priority operations properly.
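
In terms of the sketch above, the recommendation amounts to always advancing the tree cursor; in the real codebase this would translate to incrementing the priority tree's unprocessedIndex in the queue branch of executeBatchesSharedBridge (for instance through a dedicated helper on the PriorityTree library; I am not assuming such a helper currently exists). A hedged sketch, replacing executeBatch in the toy contract above:

    // Proposed fix applied to the sketch: the tree cursor advances regardless of
    // which data structure the operations were executed from.
    function executeBatch(uint256 priorityOpsInBatch) external {
        if (treeStartIndex > queueHead) {
            queueHead += priorityOpsInBatch;
        }
        // Always advance the tree's unprocessedIndex so that, once the queue is drained,
        // it already points at the next unexecuted operation and remains correct if the
        // chain migrates back to L1.
        treeUnprocessedIndex += priorityOpsInBatch;
    }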

Updates

Lead Judging Commences

inallhonesty Lead Judge 5 months ago
Submission Judgement Published
Validated
Assigned finding tags:

priority operations will be executed twice when the ERA chain will be migrated to the gateway
