The MultiHopOracle works by chaining N internal oracle feeds, one per hop. For it to work, the feeds must be configured so that each hop's output asset is the next hop's input asset.
Each of these hops has its own heartbeat. Currently, staleness is checked by taking the minimum update timestamp across all hops and comparing it against the staleness threshold. While this sounds logical, it can still let through mutually inconsistent data and produce an incorrect result.
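The current check can be sketched as follows (a minimal model in Python; the function name and list-of-timestamps representation are illustrative, not the contract's actual identifiers):

```python
def is_fresh(update_timestamps, now, staleness_threshold):
    # Only the oldest update across all hops is compared against the
    # threshold; the hops' timestamps are never compared to each other.
    return now - min(update_timestamps) <= staleness_threshold

# Three hops last updated at t = 10, 0, 10; a call at t = 10 with a
# 20-second threshold passes, even though the middle hop is 10 seconds
# older than its neighbours.
print(is_fresh([10, 0, 10], now=10, staleness_threshold=20))  # True
```

Note that the check says nothing about the *relative* freshness of the hops, which is exactly what the scenario below exploits.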
Let's have the following scenario:
We have a MultiHopOracle with a staleness threshold of 20 seconds and three oracles:
USDC -> ETH
ETH -> WBTC
WBTC -> USDT
Each of them has a different heartbeat.
We have the following prices:
WBTC is worth $98,000 at timestamp 10
ETH is worth $4,000 at timestamp 10
USDC = USDT = $1
We make a call to _getData at timestamp = 10 with $100,000 worth of USDC.
The calculations are:
First hop
100000 USDC = 25 ETH
Second hop
25 ETH = 0.975 WBTC (this feed still reports an older rate, which is allowed because it is within the feed's heartbeat)
Third hop
0.975 WBTC = 95500 USDT
The user receives ~95,500 USDT instead of ~100,000, i.e. a loss of roughly 4,500 USDT (around 5%) through only three hops.
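The arithmetic above can be reproduced as follows. The stale ETH -> WBTC rate of 0.039 WBTC per ETH is inferred from the 0.975 WBTC figure in the walkthrough (it is not stated explicitly in the scenario); all other numbers come straight from the setup.

```python
usdc_in = 100_000

eth = usdc_in / 4_000     # hop 1: fresh USDC -> ETH price, $4,000/ETH -> 25 ETH
wbtc = eth * 0.039        # hop 2: stale ETH -> WBTC rate (inferred) -> 0.975 WBTC
usdt_out = wbtc * 98_000  # hop 3: fresh WBTC -> USDT price, $98,000/WBTC -> ~95,550 USDT

loss = usdc_in - usdt_out  # ~4,450 USDT lost, roughly 4.5% of the input
print(eth, wbtc, usdt_out, loss)
```

The mismatch arises because hops 1 and 3 price against the fresh market while hop 2 prices against a snapshot 10 seconds older, so the chained conversion is internally inconsistent.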
This happens because currently every next hop can be more stale than the previous one. Even though the gap in this scenario is only 10 seconds (which our 20-second threshold does not flag as stale), the result is an incorrect calculation and losses for the user.
Losses fall on either the protocol or the user, depending on the concrete case. The loss starts around 5% in this scenario and grows with the volatility of the assets priced by the feeds in the MultiHopOracle.
Manual Review
Implement a check that every next hop's update timestamp is greater than or equal to the previous hop's, i.e. each next hop is as stale or less stale than the one before it.
This is by design; staleness is a strategy-level concern: the strategy requires all data to have been updated within n minutes. No more precision is needed.