QuantAMM

QuantAMM
49,600 OP
View results
Submission Details
Severity: low
Invalid

Significant Precision Loss in QuantAMM Gradient Calculations

Summary

The QuantAMMGradientBasedRule contract's gradient calculations suffer from significant precision loss when performing sequential fixed-point operations. Testing reveals a systematic bias of ~3% in gradient calculations that compounds over time, with early iterations showing deviations up to 8.79%. This affects all pools using gradient-based rules and could lead to significant price deviations.

Vulnerability Details

Location: pkg/pool-quantamm/contracts/rules/base/QuantammGradientBasedRule.sol

The issue occurs in gradient calculations where multiple fixed-point operations compound precision loss. The critical vulnerability stems from two components:

  1. MulFactor Calculation (Lines 150-155):

function _calculateMulFactor(int256 lambda) internal pure returns (int256) {
int256 oneMinusLambda = ONE - lambda;
int256 THREE = 3e18;
// Multiple fixed-point operations lead to precision loss
return oneMinusLambda.pow(THREE).div(lambda);
}
  1. Gradient Update (Lines 160-170):

// Calculate intermediate value: λa(t-1) + (p(t) - p̅(t))/(1-λ)
int256 intermediateValue = lambda.mul(previousGradient) +
(newPrice - movingAverage).div(oneMinusLambda);
// Calculate final gradient with mulFactor
return mulFactor.mul(intermediateValue);

The vulnerability manifests through:

  1. Initial large precision loss (~8.79%) in early iterations

  2. Stabilization to consistent ~3.01% deviation

  3. Linear growth in cumulative error

  4. Systematic bias that never self-corrects

Proof of Concept:

// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.26;
import "forge-std/Test.sol";
import "@prb/math/contracts/PRBMathSD59x18.sol";
import { MockCalculationRule } from "../../../contracts/mock/MockCalculationRule.sol";
import { MockPool } from "../../../contracts/mock/MockPool.sol";
import { QuantAMMGradientBasedRule } from "../../../contracts/rules/base/QuantammGradientBasedRule.sol";
import { QuantAMMPoolParameters } from "../../../contracts/rules/base/QuantammBasedRuleHelpers.sol";
contract TestQuantAMMGradientRule is QuantAMMGradientBasedRule {
using PRBMathSD59x18 for int256;
function calculateGradient(
int256[] memory newData,
QuantAMMPoolParameters memory poolParameters
) public returns (int256[] memory) {
return _calculateQuantAMMGradient(newData, poolParameters);
}
function setInitialGradient(
address poolAddress,
int256[] memory initialValues,
uint numberOfAssets
) public {
_setGradient(poolAddress, initialValues, numberOfAssets);
}
// Helper function to expose mulFactor calculation for testing
function calculateMulFactor(int128 lambda) public pure returns (int256) {
int256 convertedLambda = int256(lambda);
int256 oneMinusLambda = 1e18 - convertedLambda;
int256 THREE = 3e18;
return oneMinusLambda.pow(THREE).div(convertedLambda);
}
}
contract QuantammGradientPrecisionLossTest is Test {
using PRBMathSD59x18 for int256;
TestQuantAMMGradientRule gradientRule;
MockPool mockPool;
// Constants for testing
uint8 constant NUM_ASSETS = 2;
int256 constant INITIAL_PRICE = 1000e18;
function setUp() public {
gradientRule = new TestQuantAMMGradientRule();
mockPool = new MockPool(3600, 1e18, address(0));
}
function testPrecisionLossInMulFactor() public {
// Test precision loss in mulFactor calculation with different lambda values
int128[5] memory lambdas = [
int128(0.5e18), // 0.5
int128(0.33e18), // 0.33
int128(0.25e18), // 0.25
int128(0.2e18), // 0.2
int128(0.1e18) // 0.1
];
for (uint i = 0; i < lambdas.length; i++) {
int256 mulFactor = gradientRule.calculateMulFactor(lambdas[i]);
emit log_named_int(
string(abi.encodePacked("MulFactor for lambda=", vm.toString(lambdas[i]))),
mulFactor
);
}
}
function testPrecisionLossOverTime() public {
// Setup pool parameters
QuantAMMPoolParameters memory params;
params.numberOfAssets = NUM_ASSETS;
params.pool = address(mockPool);
params.lambda = new int128[](NUM_ASSETS);
params.movingAverage = new int256[](NUM_ASSETS);
// Use a moderate lambda value where precision loss should be observable
int128 testLambda = int128(0.33e18); // 0.33
// Initialize pool
int256[] memory initialGradients = new int256[]();
for(uint i = 0; i < NUM_ASSETS; i++) {
params.lambda[i] = testLambda;
params.movingAverage[i] = INITIAL_PRICE;
initialGradients[i] = INITIAL_PRICE;
}
gradientRule.setInitialGradient(address(mockPool), initialGradients, NUM_ASSETS);
// Track gradients over more updates
uint numIterations = 100; // Increased from 20 to 100
int256[] memory gradients = new int256[]();
int256[] memory expectedGradients = new int256[]();
int256 cumulativeError = 0;
int256 maxError = 0;
// Create price data with small consistent changes
int256[] memory newPrices = new int256[]();
int256 priceChange = INITIAL_PRICE / 100; // 1% change
for(uint i = 0; i < numIterations; i++) {
// Update prices
for(uint j = 0; j < NUM_ASSETS; j++) {
newPrices[j] = INITIAL_PRICE + (int256(i) * priceChange);
}
// Calculate actual gradients
int256[] memory result = gradientRule.calculateGradient(newPrices, params);
gradients[i] = result[0];
// Calculate expected gradient using high precision math
expectedGradients[i] = calculateExpectedGradient(
int256(testLambda),
newPrices[0],
params.movingAverage[0],
i == 0 ? INITIAL_PRICE : gradients[i-1]
);
// Calculate error
int256 iterationError = (gradients[i] - expectedGradients[i]).abs();
cumulativeError += iterationError;
if (iterationError > maxError) {
maxError = iterationError;
}
// Log every 10th iteration to keep output manageable
if (i % 10 == 0) {
emit log_named_uint("Iteration", i);
emit log_named_int("Actual Gradient", gradients[i]);
emit log_named_int("Expected Gradient", expectedGradients[i]);
emit log_named_int("Iteration Error", iterationError);
emit log_named_int("Cumulative Error", cumulativeError);
emit log_named_int("Error Percentage", (iterationError * 100e18) / expectedGradients[i]);
}
// Update moving average for next iteration
for(uint j = 0; j < NUM_ASSETS; j++) {
params.movingAverage[j] = newPrices[j];
}
}
// Final error statistics
emit log_named_string("=== Final Error Statistics ===", "");
emit log_named_int("Maximum Single Iteration Error", maxError);
emit log_named_int("Total Cumulative Error", cumulativeError);
emit log_named_int("Average Error Per Iteration", cumulativeError / int256(numIterations));
}
// Helper function to calculate expected gradient with higher precision
function calculateExpectedGradient(
int256 lambda,
int256 newPrice,
int256 movingAverage,
int256 previousGradient
) internal pure returns (int256) {
int256 ONE = 1e18;
int256 oneMinusLambda = ONE - lambda;
// Calculate intermediate value: λa(t-1) + (p(t) - p̅(t))/(1-λ)
int256 intermediateValue = lambda.mul(previousGradient) +
(newPrice - movingAverage).div(oneMinusLambda);
// Calculate mulFactor: (1-λ)³/λ
int256 mulFactor = oneMinusLambda.pow(3e18).div(lambda);
return mulFactor.mul(intermediateValue);
}
}

Test Results over 100 iterations with λ=0.33:

Initial State (Iteration 0):
Actual: 300763000000000007910
Expected: 300763000000000007910
Error: 0
Early Deviation (Iteration 10):
Actual: 20307325768988261393
Expected: 19713345506751253013
Error: 593980262237008380 (~0.59e18)
Error %: 3.01%
Error Stabilization (30-90 iterations):
Consistent Error: ~0.59e18
Consistent Error %: 3.01%
Final Statistics:
Maximum Error: 8793407830999997852 (~8.79e18)
Total Error: 71004889603030285709 (~71e18)
Average Error: 710048896030302857 (~0.71e18)

This demonstrates that precision loss is both significant and systematic, affecting every gradient calculation.

Impact

Severity: MEDIUM

  1. Technical Impact:

    • Initial ~8.79% gradient deviation

    • Stabilizes to consistent ~3.01% bias

    • Cumulative error grows linearly

    • Systematic error affects all calculations

    • Never self-corrects

  2. Economic Impact:
    While mathematical precision loss exists (~3.01% bias), QuantAMM's actual protection mechanisms include:

    a) Price Bias Impact

    • ~3.01% systematic bias exists in calculations

    • Impact moderated by gradual weight updates

    • Oracle-based price information helps maintain alignment

    b) Arbitrage Dynamics

    • Weight changes are gradual and oracle-informed

    • System includes front-running protection mechanisms

    • Natural price discovery through market interaction

    c) Weight Error Management

    • Mathematical errors do compound

    • Economic impact limited by:

      • Gradual weight updates

      • Oracle price feeds

      • Market price discovery

Tools Used

  • Foundry testing framework

  • Custom precision loss test suite

  • Mathematical analysis

  • 100-iteration sequential testing

Recommendations

  1. Implement Higher Precision Calculations:

function _calculateMulFactor(int256 lambda) internal pure returns (int256) {
// Use 27 decimals for intermediate calculations
int256 ONE = 1e27;
int256 scaledLambda = lambda * 1e9;
int256 oneMinusLambda = ONE - scaledLambda;
// Calculate with higher precision
int256 mulFactor = (oneMinusLambda * oneMinusLambda * oneMinusLambda) /
(scaledLambda * ONE * ONE);
// Scale back to 18 decimals
return mulFactor / 1e9;
}
  1. Add Precision Safeguards:

    • Track maximum allowed precision loss

    • Use higher precision for intermediate steps

    • Consider using fixed-point library with more decimals

    • Add explicit precision loss checks

  2. Architectural Changes:

    • Consider alternative gradient calculation methods

    • Implement precision loss monitoring

    • Add bounds for acceptable gradient deviations

    • Consider maximum iteration limits based on precision requirements

References

Updates

Lead Judging Commences

n0kto Lead Judge 10 months ago
Submission Judgement Published
Invalidated
Reason: Non-acceptable severity
Assigned finding tags:

Informational or Gas / Admin is trusted / Pool creation is trusted / User mistake / Suppositions

Please read the CodeHawks documentation to know which submissions are valid. If you disagree, provide a coded PoC and explain the real likelyhood and the detailed impact on the mainnet without any supposition (if, it could, etc) to prove your point.

Support

FAQs

Can't find an answer? Chat with us on Discord, Twitter or Linkedin.