The QuantAMMGradientBasedRule contract's gradient calculations suffer significant precision loss from sequential fixed-point operations. Testing reveals a systematic bias of ~3% in the gradient calculations that compounds over time, with early iterations deviating by up to 8.79%. This affects all pools using gradient-based rules and can lead to material price deviations.
Location: pkg/pool-quantamm/contracts/rules/base/QuantammGradientBasedRule.sol
The issue occurs in the gradient calculations, where multiple sequential fixed-point operations compound precision loss. The vulnerability stems from two components:
MulFactor Calculation (Lines 150-155):
Gradient Update (Lines 160-170):
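The contract snippets are not reproduced here. As an illustration of the underlying mechanism only, the Python sketch below models Balancer-style 18-decimal `mulDown`/`divDown` operations, which truncate toward zero; all names and values are hypothetical and not taken from the contract.

```python
ONE = 10**18  # 18-decimal fixed-point scale, as in Balancer-style math

def mul_down(a: int, b: int) -> int:
    """Fixed-point multiply that truncates toward zero."""
    return a * b // ONE

def div_down(a: int, b: int) -> int:
    """Fixed-point divide that truncates toward zero."""
    return a * ONE // b

# Three sequential truncating operations. Each can silently discard up to
# ~1 wei, and always in the same (downward) direction, so chaining them
# produces a one-sided, systematic bias rather than zero-mean rounding noise.
x = 4 * ONE // 3                   # ~1.333..., already truncated once
step1 = mul_down(x, x)             # ~1.777..., truncated again
step2 = div_down(step1, 3 * ONE)   # ~0.5925..., truncated a third time
```

Because every truncation rounds the same way, the chained result always sits at or below the exact value, which is why the observed deviation is a bias rather than noise that averages out.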
The vulnerability manifests through:
- Initial large precision loss (~8.79%) in early iterations
- Stabilization to a consistent ~3.01% deviation
- Linear growth in cumulative error
- Systematic bias that never self-corrects
Proof of Concept:
Test Results over 100 iterations with λ=0.33:
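The original results table is not reproduced above. As a stand-in, the sketch below runs a hypothetical EWMA-style gradient recursion for 100 iterations at λ = 0.33 in truncating 18-decimal fixed point and compares it against exact rational arithmetic. It does not reproduce the reported 8.79%/3.01% figures (those depend on the contract's actual formula and inputs), but it shows the one-sided, systematic nature of the drift.

```python
from fractions import Fraction

ONE = 10**18
LAMBDA = 33 * ONE // 100   # λ = 0.33 in 18-decimal fixed point

def mul_down(a: int, b: int) -> int:
    # Truncating fixed-point multiply (rounds toward zero)
    return a * b // ONE

# Hypothetical EWMA-style recursion g <- λ*g + (1-λ)*x, run both in
# truncating fixed point and in exact rational arithmetic for reference.
x = 4 * ONE // 3            # constant input, chosen so truncation occurs
g_fixed = ONE               # fixed-point state, starts at 1.0
g_exact = Fraction(1)       # exact reference state
lam = Fraction(LAMBDA, ONE)

errors = []                 # deviation (in wei) after each iteration
for _ in range(100):
    g_fixed = mul_down(LAMBDA, g_fixed) + mul_down(ONE - LAMBDA, x)
    g_exact = lam * g_exact + (1 - lam) * Fraction(x, ONE)
    errors.append(g_exact * ONE - g_fixed)

# Every entry in `errors` is non-negative: truncation only ever pushes the
# fixed-point state below the exact value, so the bias never self-corrects.
```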
This demonstrates that precision loss is both significant and systematic, affecting every gradient calculation.
Severity: MEDIUM
Technical Impact:
- Initial ~8.79% gradient deviation
- Stabilizes to a consistent ~3.01% bias
- Cumulative error grows linearly
- Systematic error affects all calculations
- Never self-corrects
Economic Impact:
While mathematical precision loss exists (~3.01% bias), QuantAMM's protection mechanisms moderate its economic effect:

a) Price Bias Impact
- ~3.01% systematic bias exists in calculations
- Impact moderated by gradual weight updates
- Oracle-based price information helps maintain alignment

b) Arbitrage Dynamics
- Weight changes are gradual and oracle-informed
- System includes front-running protection mechanisms
- Natural price discovery through market interaction

c) Weight Error Management
- Mathematical errors do compound
- Economic impact limited by:
  - Gradual weight updates
  - Oracle price feeds
  - Market price discovery
Tools Used:
- Foundry testing framework
- Custom precision loss test suite
- Mathematical analysis
- 100-iteration sequential testing
Recommended Mitigation:
Implement Higher Precision Calculations:
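A sketch of the idea, in Python for illustration (the actual fix would live in the Solidity fixed-point math): keep intermediate products at 36 decimals and round once at the end, instead of truncating after every multiplication. The EWMA-style formula and all names are assumptions, not the contract's real code.

```python
ONE = 10**18

def ewma_step_naive(lam: int, g: int, x: int) -> int:
    # Current pattern: truncate after each multiply (two downward roundings)
    return (lam * g) // ONE + ((ONE - lam) * x) // ONE

def ewma_step_hi(lam: int, g: int, x: int) -> int:
    # Higher-precision pattern: accumulate the full 36-decimal products and
    # round to nearest only once when scaling back to 18 decimals.
    acc = lam * g + (ONE - lam) * x      # 36-decimal accumulator
    return (acc + ONE // 2) // ONE       # single round-half-up at the end

# Example values (hypothetical), chosen so the naive path truncates
lam = 33 * ONE // 100
g = 4 * ONE // 3
x = 4 * ONE // 3
```

Rounding to nearest once bounds the per-step error at half a wei and removes the one-sided bias; alternatively, a higher-precision fixed-point representation could carry the state between iterations.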
Add Precision Safeguards:
- Track maximum allowed precision loss
- Use higher precision for intermediate steps
- Consider using a fixed-point library with more decimals
- Add explicit precision loss checks
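One way to make the explicit precision-loss checks concrete (illustrative Python; the class, budget, and exception names are hypothetical, not from the codebase): count every truncating operation against a configured budget and trip a guard once the budget is exceeded.

```python
ONE = 10**18

class PrecisionLossExceeded(Exception):
    """Raised when accumulated truncation loss exceeds the configured cap."""

class GuardedMath:
    """Fixed-point ops that count lossy truncations against a budget (hypothetical)."""

    def __init__(self, max_truncations: int):
        self.max_truncations = max_truncations
        self.truncations = 0  # each truncation discards < 1 wei

    def mul_down(self, a: int, b: int) -> int:
        product = a * b
        if product % ONE:                   # precision is about to be lost
            self.truncations += 1
            if self.truncations > self.max_truncations:
                raise PrecisionLossExceeded(
                    f"{self.truncations} lossy operations exceed budget"
                )
        return product // ONE

# Demo: allow at most 2 lossy operations, then trip the guard on the 3rd.
guard = GuardedMath(max_truncations=2)
x = 4 * ONE // 3              # value whose square is not wei-exact
guard.mul_down(x, x)          # lossy op 1
guard.mul_down(x, x)          # lossy op 2
tripped = False
try:
    guard.mul_down(x, x)      # lossy op 3: exceeds the budget
except PrecisionLossExceeded:
    tripped = True
```

In Solidity the equivalent would revert (or emit an event for off-chain monitoring) rather than raise; counting lossy operations is a cheap upper bound, since each truncation discards strictly less than 1 wei.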
Architectural Changes:
- Consider alternative gradient calculation methods
- Implement precision loss monitoring
- Add bounds for acceptable gradient deviations
- Consider maximum iteration limits based on precision requirements
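The bounds for acceptable gradient deviations could take the following shape (illustrative Python; the function and the band parameter are hypothetical): clamp each new gradient to within a maximum step from the previous value, so a drifting estimate cannot move the weights arbitrarily fast.

```python
ONE = 10**18

def bounded_gradient_update(prev_grad: int, new_grad: int, max_step: int) -> int:
    """Clamp the gradient change to within ±max_step per update (hypothetical guard)."""
    lo = prev_grad - max_step
    hi = prev_grad + max_step
    return max(lo, min(hi, new_grad))

# Example: with a 1% band, a proposed 5% jump is clamped to 1%,
# while a 0.5% move passes through unchanged.
clamped = bounded_gradient_update(ONE, ONE + ONE // 20, max_step=ONE // 100)
passed = bounded_gradient_update(ONE, ONE + ONE // 200, max_step=ONE // 100)
```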