QuantAMM
49,600 OP
Submission Details
Severity: low
Status: Invalid

Precision Loss and Weight Imbalance in DifferenceMomentumUpdateRule

Summary

The DifferenceMomentumUpdateRule contract exhibits both precision loss and dangerous weight imbalance issues. The precision loss compounds over multiple updates, while the weight imbalance grows exponentially until the system fails with invalid (negative) weights. This breaks two critical invariants: weights must sum to exactly 100%, and weights must remain positive.

Vulnerability Details

Location: pkg/pool-quantamm/contracts/rules/DifferenceMomentumUpdateRule.sol

The issues manifest in two ways:

  1. Precision loss in normalization calculations

  2. Exponential weight divergence leading to system failure
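The first issue can be illustrated with a minimal fixed-point sketch (Python, with hypothetical raw weight values, not the contract's exact code): normalizing 1e18-scaled weights with truncating integer division leaves the normalized weights summing to slightly less than 1e18, and those missing wei accumulate across updates.

```python
ONE = 10**18  # 18-decimal fixed point, as in PRBMath SD59x18

def normalize(raw):
    # Normalize raw fixed-point weights so they sum to ~ONE.
    # Integer division truncates, so the sum can fall short of ONE.
    total = sum(raw)
    return [w * ONE // total for w in raw]

# Hypothetical raw weights produced by a momentum update (illustrative values)
raw = [396_712_345_678_912_345, 603_287_654_321_087_664]
weights = normalize(raw)
loss = ONE - sum(weights)
print(loss)  # a few wei of truncation per normalization
```

Each call loses at most `len(weights)` wei, which matches the single-digit wei losses per iteration in the test output below.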

Proof of Concept

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "forge-std/Test.sol";
import "@prb/math/contracts/PRBMathSD59x18.sol";
import "../../../contracts/mock/mockRules/MockDifferenceMomentumRule.sol";
import "../../../contracts/mock/MockPool.sol";
import "../utils.t.sol";

contract QuantammDifferenceMomentumUpdateRuleTest is Test, QuantAMMTestUtils {
    MockDifferenceMomentumRule public rule;
    MockPool public mockPool;

    function setUp() public {
        // Deploy MockDifferenceMomentumRule contract
        rule = new MockDifferenceMomentumRule(address(this));
        // Deploy MockPool contract
        mockPool = new MockPool(3600, 1 ether, address(rule));
    }

    function testDifferenceMomentumCompoundingPrecisionLoss() public {
        // Rule parameters: per-asset kappa values and the short-term lambda
        int256[][] memory parameters = new int256[][](2);
        parameters[0] = new int256[](2);
        parameters[0][0] = 7.1e18;
        parameters[0][1] = 2.9e18;
        parameters[1] = new int256[](1);
        parameters[1][0] = 0.5e18;

        int256[] memory prevWeights = new int256[](2);
        prevWeights[0] = 0.5e18;
        prevWeights[1] = 0.5e18;

        int256[] memory data = new int256[](2);
        int256[] memory movingAverages = new int256[](2);
        int256[] memory prevShortMovingAverage = new int256[](2);
        int256[] memory prevMovingAverages = new int256[](2);
        int128[] memory lambdas = new int128[](1);
        lambdas[0] = int128(0.7e18);

        // Initialize pool
        mockPool.setNumberOfAssets(2);

        // Track cumulative precision loss over multiple updates
        int256 cumulativePrecisionLoss = 0;
        for (uint256 i = 0; i < 5; i++) {
            // Update values for each iteration: a steadily trending price signal
            prevShortMovingAverage[0] = 1e18 + int256(i) * 0.1e18;
            prevShortMovingAverage[1] = 1e18 - int256(i) * 0.1e18;
            prevMovingAverages[0] = prevShortMovingAverage[0];
            prevMovingAverages[1] = prevShortMovingAverage[1];
            movingAverages[0] = prevMovingAverages[0] + 0.05e18;
            movingAverages[1] = prevMovingAverages[1] - 0.05e18;
            data[0] = movingAverages[0];
            data[1] = movingAverages[1];

            rule.initialisePoolRuleIntermediateValues(
                address(mockPool),
                prevMovingAverages,
                prevShortMovingAverage,
                2
            );
            rule.CalculateUnguardedWeights(
                prevWeights,
                data,
                address(mockPool),
                parameters,
                lambdas,
                movingAverages
            );

            int256[] memory results = rule.GetResultWeights();
            prevWeights = results;

            int256 weightSum = results[0] + results[1];
            int256 iterationPrecisionLoss = 1e18 - weightSum;
            cumulativePrecisionLoss += iterationPrecisionLoss;

            emit log_named_uint("Update iteration", i + 1);
            emit log_named_int("Weight sum", weightSum);
            emit log_named_int("Weight 0", results[0]);
            emit log_named_int("Weight 1", results[1]);
            emit log_named_int("Weight imbalance", results[0] - results[1]);
            emit log_named_int("Iteration precision loss", iterationPrecisionLoss);
            emit log_named_int("Cumulative precision loss", cumulativePrecisionLoss);
        }

        assertTrue(cumulativePrecisionLoss > 0, "Should show compounding precision loss");
    }
}

Test Results:

// Update 1
Weight 0: 0.3967e18 (39.67%)
Weight 1: 0.6032e18 (60.32%)
Weight imbalance: -0.2064e18 (-20.64%)
Precision loss: 9 wei
// Update 2
Weight 0: 0.2914e18 (29.14%)
Weight 1: 0.7085e18 (70.85%)
Weight imbalance: -0.4170e18 (-41.70%)
Precision loss: 9 wei
// Update 3
Weight 0: 0.1816e18 (18.16%)
Weight 1: 0.8183e18 (81.83%)
Weight imbalance: -0.6366e18 (-63.66%)
Precision loss: 14 wei
// Update 4
Weight 0: 0.0643e18 (6.43%)
Weight 1: 0.9356e18 (93.56%)
Weight imbalance: -0.8713e18 (-87.13%)
Precision loss: 20 wei
// Update 5
REVERTS: "Invalid weight"
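The accelerating imbalance in the log fits a simple toy model of an unguarded trend-following update: each step shifts a growing slice of weight toward the trending asset until one weight crosses zero. The sketch below (Python; the initial shift of 0.103 and growth factor of 1.05 are fitted to the log above, not derived from the contract) reproduces the five-update failure pattern.

```python
# Toy model: each update moves a compounding slice of weight from
# asset 0 to asset 1 (dynamics assumed for illustration, not taken
# from DifferenceMomentumUpdateRule itself).
def run_updates(w0=0.5, w1=0.5, shift=0.103, growth=1.05, max_steps=20):
    history = [(w0, w1)]
    for _ in range(max_steps):
        w0, w1 = w0 - shift, w1 + shift
        history.append((w0, w1))
        if w0 < 0:          # corresponds to the "Invalid weight" revert
            break
        shift *= growth     # trend-following correction keeps compounding
    return history

hist = run_updates()
print(len(hist) - 1, hist[-1][0])  # 5 updates; weight 0 has gone negative
```

Once the correction term tracks an established trend, nothing in the unguarded rule pulls the weights back toward balance, so failure is a matter of a handful of updates, not many.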

Attack Scenario

  1. Attacker identifies assets with trending price movements

  2. Creates pool with specific kappa values that accelerate weight divergence

  3. Takes advantage of predictable weight shifts through arbitrage

  4. Forces pool into extreme weight imbalances

  5. Can potentially cause system failure through weight invalidation

Impact

Severity: HIGH

  1. Technical Impact:

    • Weight imbalance grows exponentially

    • Precision loss compounds with each update

    • System fails when weights approach zero

    • Critical invariant violations:

      • Weights don't sum to exactly 100%

      • Weights can become invalid (negative)

    • Affects all pools using DifferenceMomentumUpdateRule

  2. Economic Impact:

    • Extreme asset allocation skews

    • Predictable weight movements enable arbitrage

    • System instability from growing imbalances

    • Potential complete pool failure

    • LP value misalignment

    • Accumulated losses over time

Tools Used

  • Foundry testing framework

  • Custom test suite for precision and weight analysis

  • Mathematical modeling of weight evolution

  • Manual code review

Recommendations

Implement Weight Bounds:

function validateWeights(int256[] memory weights) internal pure {
    int256 MIN_WEIGHT = 0.1e18; // 10%
    int256 MAX_WEIGHT = 0.9e18; // 90%
    for (uint256 i = 0; i < weights.length; i++) {
        require(weights[i] >= MIN_WEIGHT, "Weight too low");
        require(weights[i] <= MAX_WEIGHT, "Weight too high");
    }
}

Add Kappa Constraints:

function validateKappa(int256[] memory kappa) internal pure {
    int256 MAX_KAPPA = 5e18; // Maximum per-asset kappa value
    int256 sumKappa = 0;
    for (uint256 i = 0; i < kappa.length; i++) {
        require(kappa[i] <= MAX_KAPPA, "Kappa too large");
        sumKappa += kappa[i];
    }
    require(sumKappa <= 10e18, "Sum kappa too large");
}

Implement Weight Rebalancing:

// Note: abs() and normalizeWeights() are assumed helpers, not shown here.
function rebalanceWeights(int256[] memory weights) internal pure returns (int256[] memory) {
    int256 targetWeight = 1e18 / int256(weights.length);
    int256 maxDeviation = 0.3e18; // 30%
    for (uint256 i = 0; i < weights.length; i++) {
        int256 deviation = abs(weights[i] - targetWeight);
        require(deviation <= maxDeviation, "Weight deviation too large");
    }
    return normalizeWeights(weights);
}

Add Circuit Breakers:

  • Monitor weight divergence rate

  • Track cumulative precision loss

  • Implement emergency stops for extreme imbalances

  • Add weight validation checks

  • Consider rebalancing triggers
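The checks above can be combined into a keeper-style monitor. The sketch below (Python; the thresholds and function shape are assumptions for illustration, not QuantAMM code, though the deviation cap mirrors the 0.3e18 maxDeviation suggested earlier) shows a breaker that trips on the update-4 weights from the PoC log.

```python
# Sketch of a circuit-breaker check over 1e18-scaled pool weights.
ONE = 10**18
MAX_DEVIATION = 3 * 10**17    # halt if any weight drifts > 30% from target
MAX_CUM_LOSS = 1_000          # halt once cumulative wei loss exceeds budget

def check_weights(weights, cum_loss):
    # Returns (ok, reason); trading would be halted when a breaker trips.
    target = ONE // len(weights)
    if any(w <= 0 for w in weights):
        return False, "invalid weight"
    if max(abs(w - target) for w in weights) > MAX_DEVIATION:
        return False, "imbalance too large"
    if ONE - sum(weights) + cum_loss > MAX_CUM_LOSS:
        return False, "precision-loss budget exceeded"
    return True, ""

# Update-4 weights from the PoC log (wei values illustrative)
print(check_weights([64_300_000_000_000_000, 935_700_000_000_000_000], 52))
```

Tripping at update 4 would stop the pool one update before the "Invalid weight" revert observed in the PoC.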

Updates

Lead Judging Commences

n0kto Lead Judge 7 months ago
Submission Judgement Published
Invalidated
Reason: Non-acceptable severity
Assigned finding tags:

Informational or Gas / Admin is trusted / Pool creation is trusted / User mistake / Suppositions

Please read the CodeHawks documentation to know which submissions are valid. If you disagree, provide a coded PoC and explain the real likelihood and the detailed impact on the mainnet without any supposition (if, it could, etc.) to prove your point.
