Summary
The ChannelFollowingUpdateRule and DifferenceMomentumUpdateRule contracts each define a _getWeights function that calculates the updated weights and returns them to the caller. The calculation iterates over several arrays, but the loops are bounded by the length of the previous weights array while indexing into other arrays (the moving averages) whose lengths are never validated against that bound. If any of those arrays is shorter than the previous weights array, the access goes out of bounds and the call reverts.
Vulnerability Details
ChannelFollowingUpdateRule::_getWeights:
function _getWeights(
int256[] calldata _prevWeights,
int256[] memory _data,
int256[][] calldata _parameters,
QuantAMMPoolParameters memory _poolParameters
) internal override returns (int256[] memory newWeightsConverted) {
ChannelFollowingLocals memory locals;
locals.kappa = _parameters[0];
locals.width = _parameters[1];
locals.amplitude = _parameters[2];
locals.exponents = _parameters[3];
locals.inverseScaling = _parameters[4];
locals.preExpScaling = _parameters[5];
_poolParameters.numberOfAssets = _prevWeights.length;
locals.prevWeightLength = _prevWeights.length;
locals.useRawPrice = _parameters[6][0] == ONE;
locals.newWeights = _calculateQuantAMMGradient(_data, _poolParameters);
@> // missing: check that _poolParameters.movingAverage.length >= locals.prevWeightLength
for (locals.i = 0; locals.i < locals.prevWeightLength;) {
locals.denominator = _poolParameters.movingAverage[locals.i];
if (locals.useRawPrice) {
locals.denominator = _data[locals.i];
}
locals.newWeights[locals.i] = ONE.div(locals.denominator).mul(locals.newWeights[locals.i]);
unchecked {
++locals.i;
}
}
locals.signal = new int256[](locals.prevWeightLength);
...
}
DifferenceMomentumUpdateRule::_getWeights:
function _getWeights(
int256[] calldata _prevWeights,
int256[] memory _data,
int256[][] calldata _parameters,
QuantAMMPoolParameters memory _poolParameters
) internal override returns (int256[] memory newWeightsConverted) {
QuantAMMDifferenceMomentumLocals memory locals;
locals.kappaStore = _parameters[0];
_poolParameters.numberOfAssets = _prevWeights.length;
int128[] memory lambdaShort = new int128[](_parameters[1].length);
for (locals.i; locals.i < lambdaShort.length;) {
lambdaShort[locals.i] = int128(_parameters[1][locals.i]);
unchecked {
++locals.i;
}
}
int256[] memory currentShortMovingAverages =
_quantAMMUnpack128Array(shortMovingAverages[_poolParameters.pool], _poolParameters.numberOfAssets);
int256[] memory newShortMovingAverages = _calculateQuantAMMMovingAverage(
currentShortMovingAverages, _data, lambdaShort, _poolParameters.numberOfAssets
);
shortMovingAverages[_poolParameters.pool] = _quantAMMPack128Array(newShortMovingAverages);
for (uint256 i; i < newShortMovingAverages.length;) {
unchecked {
++i;
}
}
locals.prevWeightLength = _prevWeights.length;
@> // missing: checks that newShortMovingAverages.length and _poolParameters.movingAverage.length >= locals.prevWeightLength
locals.newWeights = new int256[](locals.prevWeightLength);
for (locals.i = 0; locals.i < locals.prevWeightLength;) {
locals.newWeights[locals.i] = newShortMovingAverages[locals.i] - _poolParameters.movingAverage[locals.i];
locals.denominator = _poolParameters.movingAverage[locals.i];
locals.newWeights[locals.i] = ONE.div(locals.denominator).mul(locals.newWeights[locals.i]);
if (locals.kappaStore.length == 1) {
locals.normalizationFactor += locals.newWeights[locals.i];
} else {
locals.normalizationFactor += (locals.newWeights[locals.i].mul(locals.kappaStore[locals.i]));
}
unchecked {
++locals.i;
}
}
newWeightsConverted = new int256[](locals.prevWeightLength);
...
}
The moving average array can be shorter than the previous weights array, yet the previous weights length is used as the loop bound when indexing into both the new short moving averages and the long moving averages. Nothing validates that these lengths match, so the loops below can attempt to read past the end of the shorter array:
for (locals.i = 0; locals.i < locals.prevWeightLength;) {
locals.denominator = _poolParameters.movingAverage[locals.i];
...
}
for (locals.i = 0; locals.i < locals.prevWeightLength;) {
locals.newWeights[locals.i] = newShortMovingAverages[locals.i] - _poolParameters.movingAverage[locals.i];
...
}
If either the short moving averages array or the long moving averages array is shorter than the previous weights array, the indexed access goes out of bounds. In Solidity 0.8+, an out-of-bounds read on a memory array reverts the call with Panic(0x32), aborting the weight update entirely.
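As a standalone illustration (a hypothetical demo contract, not part of the QuantAMM codebase), a loop bounded by one length while indexing a shorter memory array reverts as soon as the index passes the array's end. Here prevWeightLength plays the role of _prevWeights.length and movingAverage the role of _poolParameters.movingAverage:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical demo mirroring the report's loop shape.
contract OutOfBoundsDemo {
    function accumulate(int256[] memory movingAverage, uint256 prevWeightLength)
        external
        pure
        returns (int256 total)
    {
        // Loop bound comes from prevWeightLength, not movingAverage.length.
        for (uint256 i = 0; i < prevWeightLength; ++i) {
            // Reverts with Panic(0x32) once i >= movingAverage.length.
            total += movingAverage[i];
        }
    }
}
```

Calling accumulate with a 2-element array and prevWeightLength == 3 reverts on the third iteration, which is exactly the failure mode of the loops above.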
Impact
Due to the out-of-bounds revert, the protocol is exposed to a Denial of Service (DoS) attack vector: whenever a moving averages array is shorter than the previous weights array, every call that reaches _getWeights reverts.
The potential consequences of this DoS include:
Multiblock failures: weight updates fail across consecutive blocks, interrupting the protocol's operation.
Stalled weight updates: the update mechanism stops advancing, leaving pools trading on outdated weights.
Market exploitation: actors aware of the stalled updates could trade against the stale weights for profit, harming users and the protocol's reputation.
Impermanent loss: weights that cannot track prices expose liquidity providers to larger impermanent loss.
Tools Used
Manual review
Recommendations
We can implement an if or require check ensuring that each moving averages array is at least as long as the previous weights array before the loops run. This guard:
Prevents the loops from reading invalid indices, replacing an opaque Panic(0x32) revert with a descriptive error.
Keeps the weight calculation and update process stable and consistent.
Mitigates the Denial of Service vector described above.
We can make the following changes:
ChannelFollowingUpdateRule::_getWeights:
function _getWeights(
int256[] calldata _prevWeights,
int256[] memory _data,
int256[][] calldata _parameters,
QuantAMMPoolParameters memory _poolParameters
) internal override returns (int256[] memory newWeightsConverted) {
ChannelFollowingLocals memory locals;
locals.kappa = _parameters[0];
locals.width = _parameters[1];
locals.amplitude = _parameters[2];
locals.exponents = _parameters[3];
locals.inverseScaling = _parameters[4];
locals.preExpScaling = _parameters[5];
_poolParameters.numberOfAssets = _prevWeights.length;
locals.prevWeightLength = _prevWeights.length;
// the 7th parameter to determine if momentum should use the price or the average price as the denominator
// using the average price has shown greater performance and resilience due to greater smoothing
locals.useRawPrice = _parameters[6][0] == ONE;
// Calculate price gradients
locals.newWeights = _calculateQuantAMMGradient(_data, _poolParameters);
+ require(_poolParameters.movingAverage.length >= locals.prevWeightLength, "not enough moving averages");
for (locals.i = 0; locals.i < locals.prevWeightLength;) {
locals.denominator = _poolParameters.movingAverage[locals.i];
if (locals.useRawPrice) {
locals.denominator = _data[locals.i];
}
// 1/p(t) * ∂p(t)/∂t calculated and stored as used in multiple places
locals.newWeights[locals.i] = ONE.div(locals.denominator).mul(locals.newWeights[locals.i]);
unchecked {
++locals.i;
}
}
locals.signal = new int256[](locals.prevWeightLength);
...
}
DifferenceMomentumUpdateRule::_getWeights:
function _getWeights(
int256[] calldata _prevWeights,
int256[] memory _data,
int256[][] calldata _parameters,
QuantAMMPoolParameters memory _poolParameters
) internal override returns (int256[] memory newWeightsConverted) {
QuantAMMDifferenceMomentumLocals memory locals;
locals.kappaStore = _parameters[0];
_poolParameters.numberOfAssets = _prevWeights.length;
int128[] memory lambdaShort = new int128[](_parameters[1].length);
for (locals.i; locals.i < lambdaShort.length;) {
lambdaShort[locals.i] = int128(_parameters[1][locals.i]);
unchecked {
++locals.i;
}
}
int256[] memory currentShortMovingAverages =
_quantAMMUnpack128Array(shortMovingAverages[_poolParameters.pool], _poolParameters.numberOfAssets);
int256[] memory newShortMovingAverages = _calculateQuantAMMMovingAverage(
currentShortMovingAverages, _data, lambdaShort, _poolParameters.numberOfAssets
);
shortMovingAverages[_poolParameters.pool] = _quantAMMPack128Array(newShortMovingAverages);
for (uint256 i; i < newShortMovingAverages.length;) {
unchecked {
++i;
}
}
locals.prevWeightLength = _prevWeights.length;
+ require(newShortMovingAverages.length >= locals.prevWeightLength, "not enough short moving averages");
+ require(_poolParameters.movingAverage.length >= locals.prevWeightLength, "not enough moving averages");
locals.newWeights = new int256[](locals.prevWeightLength);
for (locals.i = 0; locals.i < locals.prevWeightLength;) {
locals.newWeights[locals.i] = newShortMovingAverages[locals.i] - _poolParameters.movingAverage[locals.i];
locals.denominator = _poolParameters.movingAverage[locals.i];
// (EWMA_short - EWMA_long) / EWMA_long calculated and stored as used in multiple places
locals.newWeights[locals.i] = ONE.div(locals.denominator).mul(locals.newWeights[locals.i]);
if (locals.kappaStore.length == 1) {
locals.normalizationFactor += locals.newWeights[locals.i];
} else {
locals.normalizationFactor += (locals.newWeights[locals.i].mul(locals.kappaStore[locals.i]));
}
unchecked {
++locals.i;
}
}
newWeightsConverted = new int256[](locals.prevWeightLength);
...
}
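Alternatively, the guard can be factored into a small shared helper that the update rules call once per array before entering their loops. The sketch below is illustrative only; the helper and error names are not from the QuantAMM codebase:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical shared guard for the update rule contracts.
abstract contract ArrayLengthGuard {
    error ArrayTooShort(uint256 actual, uint256 required);

    // Reverts with a descriptive error instead of an opaque Panic(0x32)
    // when an array is shorter than the loop bound that will index it.
    function _checkLength(uint256 actual, uint256 required) internal pure {
        if (actual < required) revert ArrayTooShort(actual, required);
    }
}
```

Each _getWeights would then call _checkLength(_poolParameters.movingAverage.length, locals.prevWeightLength) (and the short moving averages equivalent) before its loops, keeping the validation in one place.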