QuantAMM

49,600 OP
Submission Details
Severity: low
Invalid

Significant Precision Loss in QuantAMMMathGuard Weight Normalization

Summary

The QuantAMMMathGuard contract's weight normalization process can result in significant precision loss (up to ~1.96e13 wei, i.e. ~1.96e-5 in 18-decimal fixed-point terms) when handling high-precision weight values, far exceeding the rounding error of 1e-18 that the code comments claim. This could lead to unexpected weight distributions and potential economic exploits.

Vulnerability Details

Location: pkg/pool-quantamm/contracts/rules/base/QuantAMMMathGuard.sol

The issue occurs in _normalizeWeightUpdates() where rounding errors compound during weight normalization. The critical vulnerability stems from two key components:

  1. Error Accumulation (Lines 80-105):

int256 newWeightsSum;
if (maxAbsChange > _epsilonMax) {
    int256 rescaleFactor = _epsilonMax.div(maxAbsChange);
    for (uint i; i < _newWeights.length; ++i) {
        int256 newDelta = (_newWeights[i] - _prevWeights[i]).mul(rescaleFactor);
        _newWeights[i] = _prevWeights[i] + newDelta;
        newWeightsSum += _newWeights[i]; // Accumulates rounding errors
    }
} else {
    for (uint i; i < _newWeights.length; ++i) {
        newWeightsSum += _newWeights[i]; // Accumulates rounding errors
    }
}
  2. Error Concentration (Lines 106-109):

// Comment incorrectly claims: "very small (1e-18) rounding error"
_newWeights[0] = _newWeights[0] + (ONE - newWeightsSum);

The vulnerability manifests through:

  1. Each addition to newWeightsSum introduces small rounding errors

  2. With 8 tokens, these errors compound across multiple operations

  3. All accumulated error is concentrated in the first weight

  4. The actual error can be ~1.96e13 times larger than the claimed 1e-18
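
Why per-element rounding matters can be seen with a toy model. The sketch below is a plain-Python illustration (truncating fixed-point multiplication is an assumption about the PRBMath-style arithmetic, and the 3-wei deltas are made up for the example); rounding each term separately loses more than rounding the exact sum once:

```python
ONE = 10**18

def mul(a, b):
    # PRBMath-style fixed-point multiply; truncation toward zero is an
    # assumption made for illustration
    return (a * b) // ONE

rescale = ONE // 2 + 1        # a factor of ~0.5 with a 1-wei bias
deltas = [3] * 8              # eight hypothetical 3-wei weight changes

# Rounding each term separately (as the loop in the contract does):
per_element = [mul(d, rescale) for d in deltas]

# Rounding once, at the end:
exact_sum = sum(d * rescale for d in deltas) // ONE

print(sum(per_element), exact_sum)  # 8 vs 12: 4 wei lost to per-element truncation
```

Each multiplication can drop up to 1 wei, so with n tokens the per-element path can drift n wei from the once-rounded result before the residual is ever assigned.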

A simple change of 1 in the last digit can trigger this issue:

Token Index: 0
Previous Weight: 123456789123456789
New Weight: 123456789123456790 (intended +1)
Result Weight: 123437189023509980 (actual -19600099946809)
Delta: -19600099946809 // ~1.96e13 loss

This demonstrates that the comment's assumption of "very small (1e-18) rounding error" is off by 13 orders of magnitude in the worst case.
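
The guarded path can be replayed outside Solidity with plain integers. The sketch below is a simplified model of the normalization (rescaling and guard-rail clamping are omitted on the assumption that neither triggers for these inputs, since every per-token change is 1 wei and every weight sits far from the rails); it reproduces the reported result weight and delta exactly:

```python
# Replay of the report's test vectors, mimicking 18-decimal fixed point
# with plain Python integers.
ONE = 10**18
EPSILON_MAX = 5 * 10**15  # 0.005e18

prev = [123456789123456789] * 7 + [135822076235749286]
new = ([123456789123456790, 123456789123456788]
       + [123456789123456789] * 5
       + [135822076235749287])

max_abs_change = max(abs(n - p) for n, p in zip(new, prev))
assert max_abs_change <= EPSILON_MAX  # the rescale branch is not taken

total = sum(new)
new[0] += ONE - total  # the whole residual lands on weight 0

print(new[0])            # 123437189023509980
print(new[0] - prev[0])  # -19600099946809
```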

Proof of Concept:

// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.26;

import "forge-std/Test.sol";
import "@prb/math/contracts/PRBMathSD59x18.sol";
import { MockCalculationRule } from "../../../contracts/mock/MockCalculationRule.sol";
import { MockPool } from "../../../contracts/mock/MockPool.sol";
import { MockQuantAMMMathGuard } from "../../../contracts/mock/MockQuantAMMMathGuard.sol";

contract QuantammMathGuardPrecisionLoss is Test {
    using PRBMathSD59x18 for int256;

    MockQuantAMMMathGuard mockQuantAMMMathGuard;

    function setUp() public {
        mockQuantAMMMathGuard = new MockQuantAMMMathGuard();
    }

    // Test rounding edge cases with 8 tokens
    function test_EightTokenRoundingEdgeCases() public {
        int256[] memory prevWeights = new int256[](8);
        int256[] memory newWeights = new int256[](8);

        // Set initial weights with very precise values
        prevWeights[0] = 0.123456789123456789e18;
        prevWeights[1] = 0.123456789123456789e18;
        prevWeights[2] = 0.123456789123456789e18;
        prevWeights[3] = 0.123456789123456789e18;
        prevWeights[4] = 0.123456789123456789e18;
        prevWeights[5] = 0.123456789123456789e18;
        prevWeights[6] = 0.123456789123456789e18;
        prevWeights[7] = 0.135822076235749286e18; // Remainder to sum to 1

        // Try tiny changes that could cause rounding issues
        newWeights[0] = 0.123456789123456790e18; // +1 in last digit
        newWeights[1] = 0.123456789123456788e18; // -1 in last digit
        newWeights[2] = 0.123456789123456789e18; // no change
        newWeights[3] = 0.123456789123456789e18;
        newWeights[4] = 0.123456789123456789e18;
        newWeights[5] = 0.123456789123456789e18;
        newWeights[6] = 0.123456789123456789e18;
        newWeights[7] = 0.135822076235749287e18; // Adjusted for sum = 1

        int256 epsilonMax = 0.005e18;
        int256 absoluteWeightGuardRail = 0.01e18;

        int256[] memory result = mockQuantAMMMathGuard.mockGuardQuantAMMWeights(
            newWeights,
            prevWeights,
            epsilonMax,
            absoluteWeightGuardRail
        );

        // Log precise values
        for (uint i = 0; i < 8; i++) {
            emit log_named_uint("Token Index", i);
            emit log_named_int("Previous Weight (precise)", prevWeights[i]);
            emit log_named_int("New Weight (precise)", newWeights[i]);
            emit log_named_int("Result Weight (precise)", result[i]);
            emit log_named_int("Delta", result[i] - prevWeights[i]);
        }

        // Check sum is exactly 1
        int256 totalWeight;
        for (uint i = 0; i < 8; i++) {
            totalWeight += result[i];
        }
        assertEq(totalWeight, 1e18, "Weights don't sum to exactly 1");

        // Check no weight lost more than 1 in last digit of precision
        for (uint i = 0; i < 8; i++) {
            int256 precisionLoss = (result[i] - newWeights[i]).abs();
            assertLe(precisionLoss, 1, "Lost more than 1 in last digit");
        }
    }
}

Test Results:

Token Index: 0
Previous Weight: 123456789123456789
New Weight: 123456789123456790 (intended +1)
Result Weight: 123437189023509980 (actual -19600099946809)
Delta: -19600099946809 // ~1.96e13 loss

Impact

Severity: HIGH

  1. Technical Impact:

    • Precision loss ~20 trillion times larger than expected

    • Weight changes deviate significantly from intended values

    • Breaks assumption of minimal rounding errors

    • Affects all 8-token pools

  2. Economic Impact:

    • Unexpected weight distributions

    • Potential arbitrage opportunities

    • MEV exploitation possible

    • Loss of intended pool behavior

Tools Used

  • Foundry testing framework

  • Manual code review

  • Mathematical analysis of precision requirements

  • Custom test suite for rounding edge cases

Recommendations

  1. Implement Safe Rounding:

function _normalizeWeights(int256[] memory weights) internal pure {
    // Track precision loss
    int256 maxPrecisionLoss;

    // Calculate normalization factor with extra precision
    int256 sum = 0;
    for (uint i = 0; i < weights.length; i++) {
        sum += weights[i];
    }
    int256 normalizationFactor = ONE.mul(ONE).div(sum);

    // Apply normalization with precision tracking
    for (uint i = 0; i < weights.length; i++) {
        int256 originalWeight = weights[i];
        weights[i] = weights[i].mul(normalizationFactor).div(ONE);
        int256 precisionLoss = (weights[i] - originalWeight).abs();
        if (precisionLoss > maxPrecisionLoss) {
            maxPrecisionLoss = precisionLoss;
        }
    }
    require(maxPrecisionLoss <= MAX_ALLOWED_PRECISION_LOSS, "Excessive precision loss");
}
  2. Add Precision Safeguards:

    • Track and limit maximum precision loss

    • Use higher precision for intermediate calculations

    • Consider using fixed-point library with more decimal places

    • Add explicit precision loss checks

  3. Architectural Changes:

    • Consider using basis points (0.01%) for weight representation

    • Implement two-phase normalization process

    • Add precision loss monitoring and alerts

    • Consider maximum token limits based on precision requirements
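
As one hypothetical shape for the two-phase normalization suggested above, the residual wei could be spread across entries instead of concentrated on index 0. The sketch below is an illustrative design, not the project's code; `normalize_two_phase` is a made-up helper and assumes positive weights:

```python
ONE = 10**18

def normalize_two_phase(weights):
    # Phase 1: truncating rescale, so the scaled sum is at most ONE.
    total = sum(weights)
    scaled = [w * ONE // total for w in weights]
    # Phase 2: spread the leftover wei (at most len(weights) - 1 of them)
    # one per entry, instead of dumping all of it on weights[0].
    residual = ONE - sum(scaled)
    for i in range(residual):
        scaled[i] += 1
    return scaled

w = normalize_two_phase([1, 1, 1])  # thirds of ONE
assert sum(w) == ONE
assert max(w) - min(w) <= 1         # residual spread evenly
```

With this shape, no single weight absorbs more than 1 wei of residual, regardless of the token count.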

Differentiation from Known Issues

This issue is distinct from:

  1. mathguard-weight-overflow.md: Deals with integer overflow (~-1e34), not precision loss

  2. mathguard-negative-weights.md: Focuses on negative weights, not precision degradation

  3. normalization-epsilon-guard-violation.md: Addresses epsilon violations, not precision loss


Updates

Lead Judging Commences

n0kto Lead Judge 7 months ago
Submission Judgement Published
Invalidated
Reason: Non-acceptable severity
Assigned finding tags:

Informational or Gas / Admin is trusted / Pool creation is trusted / User mistake / Suppositions

Please read the CodeHawks documentation to know which submissions are valid. If you disagree, provide a coded PoC and explain the real likelihood and the detailed impact on mainnet without any supposition ("if", "it could", etc.) to prove your point.
