QuantAMM

49,600 OP
Submission Details
Severity: medium
Valid

Weight distortion in QuantAMMMathGuard._guardQuantAMMWeights

Summary

QuantAMMMathGuard._guardQuantAMMWeights doesn't clamp weights between the minimum and maximum as intended, due to incorrect clamping logic in _clampWeights.

Vulnerability Details

Root Cause

Let's take a look at the QuantAMMMathGuard._clampWeights implementation:

int256 absoluteMin = _absoluteWeightGuardRail;
int256 absoluteMax = ONE -
    (PRBMathSD59x18.fromInt(int256(_weights.length - 1)).mul(_absoluteWeightGuardRail));
int256 sumRemainerWeight = ONE;
int256 sumOtherWeights;
for (uint i; i < weightLength; ++i) {
    if (_weights[i] < absoluteMin) {
        _weights[i] = absoluteMin;
        sumRemainerWeight -= absoluteMin;
    } else if (_weights[i] > absoluteMax) {
        _weights[i] = absoluteMax;
        sumOtherWeights += absoluteMax;
    }
}
if (sumOtherWeights != 0) {
    int256 proportionalRemainder = sumRemainerWeight.div(sumOtherWeights);
    for (uint i; i < weightLength; ++i) {
        if (_weights[i] != absoluteMin) {
            _weights[i] = _weights[i].mul(proportionalRemainder);
        }
    }
}

The clamping logic is flawed. It happens to work in the two-token case, but for three or more tokens it can fail: the final rescale multiplies every weight that is not equal to absoluteMin — including weights just clamped to absoluteMax and weights that were never out of range — by proportionalRemainder, which can push them back outside the guard rails.

For example, consider the following scenario:

  • absoluteMin = 0.2

  • weightLength = 3 (so absoluteMax = 1 - 2 * 0.2 = 0.6)

  • weights = [0.189, 0.61, 0.201]

Let's calculate the updated weights step by step:

  • sumRemainerWeight = 0.8, because there is exactly one absoluteMin violation (0.189)

  • sumOtherWeights = 0.6, because there is exactly one absoluteMax violation (0.61)

  • proportionalRemainder = 0.8 / 0.6 = 1.3333

  • And the updated weights will be

    • weights[0] = 0.2 (clamped to absoluteMin)

    • weights[1] = 0.6 * 1.3333 = 0.8 (first clamped to absoluteMax, but then multiplied by proportionalRemainder)

    • weights[2] = 0.201 * 1.3333 = 0.268 (multiplied by proportionalRemainder)

So the situation has been made worse by _clampWeights: weights[1] = 0.8 now exceeds absoluteMax = 0.6, and the weights sum to 1.268 instead of 1.
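The walkthrough above can be reproduced with a small Python sketch of the _clampWeights loop (a simplified floating-point model of the fixed-point Solidity code; variable names follow the contract):

```python
def clamp_weights(weights, absolute_min):
    # absoluteMax = ONE - (n - 1) * absoluteMin, as in the contract
    n = len(weights)
    absolute_max = 1.0 - (n - 1) * absolute_min
    weights = list(weights)
    sum_remainder_weight = 1.0
    sum_other_weights = 0.0
    for i in range(n):
        if weights[i] < absolute_min:
            weights[i] = absolute_min
            sum_remainder_weight -= absolute_min
        elif weights[i] > absolute_max:
            weights[i] = absolute_max
            sum_other_weights += absolute_max
    if sum_other_weights != 0:
        proportional_remainder = sum_remainder_weight / sum_other_weights
        for i in range(n):
            if weights[i] != absolute_min:
                # Bug: this rescales weights that were just clamped to
                # absoluteMax, as well as weights that were never out of range.
                weights[i] *= proportional_remainder
    return weights

print(clamp_weights([0.189, 0.61, 0.201], 0.2))
# weights[1] ends up above absoluteMax (0.6): roughly [0.2, 0.8, 0.268]
```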

Now we have to apply the following from _normalizeWeightUpdates:

// There might a very small (1e-18) rounding error, add this to the first element.
// In some edge cases, this might break a guard rail, but only by 1e-18, which is modelled to be acceptable.
_newWeights[0] = _newWeights[0] + (ONE - newWeightsSum);

And the situation gets even worse, because weights[0] becomes negative:

weights[0] = 1 - 0.8 - 0.268 = -0.068

So the final result will be [-0.068, 0.8, 0.268], which breaks both guard rails — the exact opposite of the guard's intention.
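Continuing the sketch, the _normalizeWeightUpdates adjustment can be modeled as follows (again a float model; dumping the whole rounding correction onto index 0 mirrors the quoted Solidity line):

```python
def normalize(weights):
    # _newWeights[0] = _newWeights[0] + (ONE - newWeightsSum)
    weights = list(weights)
    weights[0] += 1.0 - sum(weights)
    return weights

print(normalize([0.2, 0.8, 0.268]))
# weights[0] absorbs the entire 0.268 excess and goes negative (about -0.068)
```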

Note

The following comment is incorrect: as shown above, the guard rail can be broken by amounts on the order of ONE, not just by 1e-18.

// There might a very small (1e-18) rounding error, add this to the first element.
// In some edge cases, this might break a guard rail, but only by 1e-18, which is modelled to be acceptable.

PoC

Put the following code snippet into MathGuard.t.sol and run forge test --match-test testWeightDistortion -vv

function testWeightDistortion() public view {
    int256[] memory prevWeights = new int256[](3);
    prevWeights[0] = 0.3e18;
    prevWeights[1] = 0.4e18;
    prevWeights[2] = 0.3e18;
    int256[] memory newWeights = new int256[](3);
    newWeights[0] = 0.189e18;
    newWeights[1] = 0.61e18;
    newWeights[2] = 0.201e18;
    int256 epsilonMax = 0.5e18;
    int256 absoluteWeightGuardRail = 0.2e18;
    int256[] memory res =
        mockQuantAMMMathGuard.mockGuardQuantAMMWeights(newWeights, prevWeights, epsilonMax, absoluteWeightGuardRail);
    for (uint256 i; i < res.length; i++) {
        console.logInt(res[i]);
    }
}

Console Output:

[PASS] testWeightDistortion() (gas: 18092)
Logs:
-68000000000000000
800000000000000000
268000000000000000

Impact

  • Incorrect weight calculation: weights can violate both guard rails and even become negative

  • Incorrect multiplier calculation

Tools Used

Foundry

Recommendations

Revisit the clamping logic: redistribute any excess weight only across weights that are strictly inside the guard rails, and never rescale a weight that has already been clamped to absoluteMin or absoluteMax.
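One possible direction, sketched below in Python as a float model (illustrative only, not audited Solidity; it assumes absoluteMin <= 1/n so a valid clamped distribution exists): clamp every weight into [absoluteMin, absoluteMax] first, then spread the resulting surplus or deficit only across weights that still have slack, repeating until the weights sum to 1.

```python
def clamp_weights_fixed(weights, absolute_min, eps=1e-12):
    n = len(weights)
    absolute_max = 1.0 - (n - 1) * absolute_min
    # Step 1: hard-clamp everything into the guard rails.
    w = [min(max(x, absolute_min), absolute_max) for x in weights]
    # Step 2: push the remaining surplus/deficit onto weights with slack.
    for _ in range(n):  # at most n passes, since a weight saturates per pass
        diff = 1.0 - sum(w)
        if abs(diff) < eps:
            break
        free = [i for i in range(n)
                if (diff > 0 and w[i] < absolute_max - eps)
                or (diff < 0 and w[i] > absolute_min + eps)]
        if not free:
            break  # should not happen when absolute_min <= 1/n
        share = diff / len(free)
        for i in free:
            w[i] = min(max(w[i] + share, absolute_min), absolute_max)
    return w

print(clamp_weights_fixed([0.189, 0.61, 0.201], 0.2))
```

On the example from this report, this yields weights that respect both rails and sum to 1, instead of [-0.068, 0.8, 0.268].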

Updates

Lead Judging Commences

n0kto Lead Judge 10 months ago
Submission Judgement Published
Validated
Assigned finding tags:

finding_clampWeights_normalizeWeightUpdates_incorrect_calculation_of_sumOtherWeights_proportionalRemainder

Likelihood: Medium/High — occurs whenever a weight rises above absoluteMax. Impact: Low/Medium — weights deviate much faster, and so does the sum of weights.
