QuantAMM

QuantAMM
49,600 OP
View results
Submission Details
Severity: low
Invalid

Precision Loss in AntiMomentumUpdateRule Normalization

Summary

The AntiMomentumUpdateRule contract experiences precision loss during weight normalization calculations, leading to weight deviations of approximately 1.12% and affecting the invariant that weights must sum to exactly 100%. While significant, the gradual nature of these deviations and existing system mitigations reduce the immediate economic impact.

Vulnerability Details

Location: pkg/pool-quantamm/contracts/rules/AntimomentumUpdateRule.sol

The issue occurs in the normalization calculation where division operations cause precision loss that propagates through the weight calculations:

locals.normalizationFactor = locals.normalizationFactor.div(locals.sumKappa);

Proof of Concept

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.26;
import "forge-std/Test.sol";
import "@prb/math/contracts/PRBMathSD59x18.sol";
import "../../../contracts/mock/mockRules/MockAntiMomentumRule.sol";
import "../../../contracts/mock/MockPool.sol";
import "../utils.t.sol";
contract QuantammAntiMomentumTest is Test, QuantAMMTestUtils {
MockAntiMomentumRule public rule;
MockPool public mockPool;
function setUp() public {
// Deploy MockAntiMomentumRule contract
rule = new MockAntiMomentumRule(address(this));
// Deploy MockPool contract
mockPool = new MockPool(3600, 1 ether, address(rule));
}
function assertRunCompletes(
uint256 numAssets,
int256[][] memory parameters,
int256[] memory previousAlphas,
int256[] memory prevMovingAverages,
int256[] memory movingAverages,
int128[] memory lambdas,
int256[] memory prevWeights,
int256[] memory data
) internal {
// Initialize pool and rule
mockPool.setNumberOfAssets(numAssets);
rule.initialisePoolRuleIntermediateValues(
address(mockPool),
prevMovingAverages,
previousAlphas,
numAssets
);
// Calculate weights
rule.CalculateUnguardedWeights(
prevWeights,
data,
address(mockPool),
parameters,
lambdas,
movingAverages
);
}
function testPrecisionLossInNormalization() public {
// Setup parameters for a 2-asset pool with vector kappa
int256[][] memory parameters = new int256[][]();
parameters[0] = new int256[]();
// Use large kappa values to amplify precision loss
parameters[0][0] = 7.1e18; // 7.1
parameters[0][1] = 2.9e18; // 2.9
// sumKappa will be 10
int256[] memory previousAlphas = new int256[]();
previousAlphas[0] = PRBMathSD59x18.fromInt(1);
previousAlphas[1] = PRBMathSD59x18.fromInt(1);
int256[] memory prevMovingAverages = new int256[]();
prevMovingAverages[0] = 1e18; // 1.0
prevMovingAverages[1] = 1e18; // 1.0
int256[] memory movingAverages = new int256[]();
// Create a scenario where normalizationFactor will be a number that causes precision loss
movingAverages[0] = 1.1e18; // 1.1
movingAverages[1] = 0.9e18; // 0.9
int128[] memory lambdas = new int128[]();
lambdas[0] = int128(0.7e18);
lambdas[1] = int128(0.7e18);
int256[] memory prevWeights = new int256[]();
prevWeights[0] = 0.5e18; // 50%
prevWeights[1] = 0.5e18; // 50%
int256[] memory data = new int256[]();
// Use specific price data that will result in a normalizationFactor that's not cleanly divisible
data[0] = 1.1e18; // 1.1
data[1] = 0.9e18; // 0.9
// Calculate expected results without precision loss
// This calculation should match the one in AntiMomentumUpdateRule but with higher precision
int256[] memory expectedResults = new int256[]();
expectedResults[0] = 0.5e18; // Should maintain 50% if no precision loss
expectedResults[1] = 0.5e18; // Should maintain 50% if no precision loss
// Run the update
assertRunCompletes(
2,
parameters,
previousAlphas,
prevMovingAverages,
movingAverages,
lambdas,
prevWeights,
data
);
// Get actual results
int256[] memory actualResults = rule.GetResultWeights();
// Check if the sum of weights is exactly 1e18 (100%)
int256 weightSum = actualResults[0] + actualResults[1];
assertEq(weightSum, 1e18, "Weight sum should be exactly 100%");
// Document the precision loss
emit log_named_int("Expected weight 0", 0.5e18);
emit log_named_int("Actual weight 0", actualResults[0]);
emit log_named_int("Precision loss", actualResults[0] - 0.5e18); // ~0.0112e18 or 1.12%
// Verify precision loss occurred but weights still sum to 100%
assertTrue(actualResults[0] != 0.5e18, "Should show precision loss");
assertTrue(actualResults[1] != 0.5e18, "Should show precision loss");
assertEq(actualResults[0] + actualResults[1], 1e18, "Weights must sum to 100%");
}
function testPrecisionLossOverMultipleUpdates() public {
// Initial setup similar to first test
int256[][] memory parameters = new int256[][]();
parameters[0] = new int256[]();
parameters[0][0] = 7.1e18;
parameters[0][1] = 2.9e18;
int256[] memory previousAlphas = new int256[]();
previousAlphas[0] = PRBMathSD59x18.fromInt(1);
previousAlphas[1] = PRBMathSD59x18.fromInt(1);
int256[] memory prevMovingAverages = new int256[]();
int256[] memory movingAverages = new int256[]();
int128[] memory lambdas = new int128[]();
lambdas[0] = int128(0.7e18);
lambdas[1] = int128(0.7e18);
int256[] memory prevWeights = new int256[]();
prevWeights[0] = 0.5e18;
prevWeights[1] = 0.5e18;
int256[] memory data = new int256[]();
// Run multiple updates and track cumulative precision loss
for (uint i = 0; i < 5; i++) {
// Update moving averages and data for each iteration
prevMovingAverages[0] = 1e18 + int256(i) * 0.1e18;
prevMovingAverages[1] = 1e18 - int256(i) * 0.1e18;
movingAverages[0] = prevMovingAverages[0] + 0.05e18;
movingAverages[1] = prevMovingAverages[1] - 0.05e18;
data[0] = movingAverages[0];
data[1] = movingAverages[1];
// Run update
assertRunCompletes(
2,
parameters,
previousAlphas,
prevMovingAverages,
movingAverages,
lambdas,
prevWeights,
data
);
// Get results and prepare for next iteration
int256[] memory results = rule.GetResultWeights();
prevWeights = results;
// Log results for each iteration
emit log_named_uint("Update iteration", i + 1);
emit log_named_int("Weight 0", results[0]);
emit log_named_int("Weight 1", results[1]);
emit log_named_int("Cumulative precision loss 0", results[0] - 0.5e18);
emit log_named_int("Cumulative precision loss 1", results[1] - 0.5e18);
// Verify weight sum remains correct
assertEq(results[0] + results[1], 1e18, "Weight sum should remain 100%");
}
}
}

Test Results:

// Single Update Test
Expected weight 0: 500000000000000000 (0.5e18 = 50%)
Actual weight 0: 511230909090909092 (0.51123e1851.12%)
Precision loss: 11230909090909092 (0.01123e181.12%)
// Multiple Updates Test (First iteration)
Weight 0: 505573233082706763 (≈ 50.56%)
Weight 1: 494426766917293232 (≈ 49.44%)
Total Weight Sum: 999999999999999995 (< 1e18)
Missing Weight: 5 (0.000000000000000005e18)

Impact

Severity: MEDIUM

  1. Technical Impact:

    • ~1.12% weight deviation from expected values

    • Compounds gradually over multiple updates

    • Weight sum invariant deviation

    • Affects all pools using AntiMomentumUpdateRule

  2. Economic Impact:
    Mitigated by system design:

    • Gradual weight changes limit immediate impact

    • Oracle price feeds provide external validation

    • Market forces help correct deviations

    • System's built-in protections against rapid changes

Tools Used

  • Foundry testing framework

  • Custom test suite for precision analysis

  • Mathematical modeling of normalization effects

  • Manual code review

Recommendations

Implement Scaled Division:

function calculateNormalizationFactor(
int256 normalizationFactor,
int256 sumKappa
) internal pure returns (int256) {
// Scale up before division to preserve precision
int256 SCALING_FACTOR = 1e18;
return normalizationFactor.mul(SCALING_FACTOR).div(sumKappa);
}

Add Precision Safeguards:

  • Track maximum allowed precision loss

  • Use higher precision for intermediate steps

  • Consider using fixed-point library with more decimals

  • Add explicit precision loss checks

Architectural Changes:

  • Consider alternative normalization methods

  • Implement precision loss monitoring

  • Add bounds for acceptable deviations

  • Consider maximum iteration limits

References

Updates

Lead Judging Commences

n0kto Lead Judge 7 months ago
Submission Judgement Published
Invalidated
Reason: Non-acceptable severity
Assigned finding tags:

Informational or Gas / Admin is trusted / Pool creation is trusted / User mistake / Suppositions

Please read the CodeHawks documentation to know which submissions are valid. If you disagree, provide a coded PoC and explain the real likelyhood and the detailed impact on the mainnet without any supposition (if, it could, etc) to prove your point.

invalid_sum_of_weights_can_exceeds_one_no_guard

According the sponsor and my understanding, sum of weights does not have to be exactly 1 to work fine. So no real impact here. Please provide a PoC showing a realistic impact if you disagree. This PoC cannot contains negative weights because they will be guarded per clampWeights.

Support

FAQs

Can't find an answer? Chat with us on Discord, Twitter or Linkedin.