QuantAMM
Submission Details
Severity: Medium
Status: Invalid

Shared Gradient States Across Update Rules Enable Weight Manipulation

Summary

The QuantAMM protocol's gradient-based update rules share a common gradient-state mapping that persists across rule transitions, allowing pool weights to be manipulated through inconsistent interpretation of stale gradients when a pool switches from one rule to another.

Vulnerability Details

The QuantAMM protocol implements several weight update rules that inherit from QuantammGradientBasedRule to dynamically adjust pool weights. These rules include MomentumUpdateRule, AntiMomentumUpdateRule, ChannelFollowingUpdateRule, and PowerChannelUpdateRule.

The base contract QuantammGradientBasedRule maintains gradient states in a shared mapping:

QuantammGradientBasedRule.sol#L18-L19

mapping(address => int256[]) internal intermediateGradientStates;
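
For context, a minimal sketch of how any inheriting rule could read and write this shared state; the helper function names below are illustrative and not taken from the codebase:

abstract contract GradientStateSketch {
    // Keyed solely by the pool address: every rule contract that inherits this
    // base reads and writes the same entry for a given pool.
    mapping(address => int256[]) internal intermediateGradientStates;

    function _storeGradient(address pool, int256[] memory gradient) internal {
        // Overwrites whatever the previously active rule stored for this pool.
        intermediateGradientStates[pool] = gradient;
    }

    function _loadGradient(address pool) internal view returns (int256[] memory) {
        // May return values accumulated under a different rule's semantics.
        return intermediateGradientStates[pool];
    }
}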

This mapping stores intermediate gradient calculations used by all rules for exponential moving averages and weight adjustments. The critical vulnerability stems from three key issues:

  1. The intermediateGradientStates mapping is shared across all rules for a given pool address

  2. The mapping is declared internal rather than private, so every inheriting rule contract can read and modify it

  3. The setRuleForPool() function in UpdateWeightRunner lacks gradient state validation during rule transitions

Different rules interpret these gradients in fundamentally different ways:

// MomentumUpdateRule gradient calculation
locals.intermediateValue = convertedLambda.mul(locals.intermediateGradientState[i]) +
    (_newData[i] - _poolParameters.movingAverage[i]).div(oneMinusLambda);

// ChannelFollowingUpdateRule gradient interpretation
gradientSquared = locals.newWeights[locals.i].mul(locals.newWeights[locals.i]);
envelope = (-gradientSquared.div(widthSquared.mul(TWO))).exp();

When a pool transitions between rules via setRuleForPool(), the gradient states persist without normalization, allowing the new rule to misinterpret the previous rule's gradient calculations.
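
A simplified illustration of this failure mode follows; it is a sketch, not the actual UpdateWeightRunner implementation, and only the setRuleForPool name comes from the codebase:

contract UpdateWeightRunnerSketch {
    mapping(address => address) public ruleForPool;

    function setRuleForPool(address pool, address newRule) external {
        // Access control omitted for brevity.
        ruleForPool[pool] = newRule;
        // Nothing here clears or re-normalizes the gradient state accumulated
        // for `pool` under the previous rule, so the next update performed by
        // `newRule` starts from the old rule's intermediate values.
    }
}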

Impact

Medium severity: the shared gradient state can be exploited to manipulate pool weights during rule transitions, leading to artificial price movements and arbitrage opportunities. While the weight guards provide some protection, the core issue still allows gradual manipulation across update cycles.

Proof of Concept

  1. Attacker initializes pool with MomentumUpdateRule:

pool = createPool([tokenA, tokenB], [5e17, 5e17]);
setUpdateRule(pool, MomentumUpdateRule);
  2. Manipulates gradient state through price movements:

// Price change triggers gradient calculation
_newData = [1.1e18, 0.9e18]; // 10% price change
lambda = 0.95e18; // High lambda
// In MomentumUpdateRule._getWeights()
intermediateValue = 0.95e18 * previousState + (1.1e18 - 1e18) / 0.05e18;
// Results in large gradient ≈ 2e18
  3. Transitions to ChannelFollowingUpdateRule:

setUpdateRule(pool, ChannelFollowingUpdateRule);
// Inherited gradient state affects envelope calculation
gradientSquared = 2e18 * 2e18; // = 4e18 (400% effective change)
envelope = exp(-4e18 / (2 * width^2)); // ≈ 0.018e18 (98.2% dampening)
  4. Exploits distorted weights through arbitrage before normalization.

Tools Used

Manual Review

Recommended Mitigation Steps

Isolate gradient state per rule by extending the state mapping with a rule key, and normalize (or clear) the gradient state during rule transitions. Additionally, add validation in the UpdateWeightRunner to ensure safe state transitions between different update rules, preventing manipulation of pool weights through inherited gradient state.
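
One possible shape of this mitigation, as a sketch rather than a drop-in patch (names other than intermediateGradientStates and setRuleForPool are illustrative):

abstract contract IsolatedGradientStateSketch {
    // pool => rule => gradient state, so a newly assigned rule never reads the
    // intermediate values accumulated by the previous rule.
    mapping(address => mapping(address => int256[])) internal intermediateGradientStates;

    // Hook for the UpdateWeightRunner to call while switching rules;
    // caller restriction (runner only) omitted for brevity.
    function clearGradientState(address pool, address rule) external virtual {
        delete intermediateGradientStates[pool][rule];
    }
}

// In the UpdateWeightRunner, setRuleForPool() would clear the outgoing rule's
// state before pointing the pool at the new rule, e.g.:
//   IGradientRule(oldRule).clearGradientState(pool, oldRule);
//   ruleForPool[pool] = newRule;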

Updates

Lead Judging Commences

n0kto (Lead Judge), 7 months ago
Submission Judgement Published: Invalidated
Reason: Incorrect statement
