Dria
Swan contest (prize pool: 21,000 USDC)
Submission Details
Severity: high
Valid

Incorrect variance calculation in `Statistics` completely breaks protocol due to underflow

Summary

The standard deviation calculation in `Statistics` will almost always revert due to an underflow caused by an incorrect variance calculation.

Vulnerability Details

Once all validations are complete in `LLMOracleCoordinator` for a given task, they are finalized in `finalizeValidation`, which computes the standard deviation of all scores. There is an issue, however, with the way the variance is calculated: it only accounts for scores above the mean, not below it. Whenever a score is below the mean, the following line reverts due to underflow.

uint256 diff = data[i] - mean;
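To make the underflow concrete, here is a minimal TypeScript sketch (not the protocol's code) using `bigint` as a stand-in for `uint256`, with a hypothetical `checkedSub` helper mimicking Solidity 0.8's checked arithmetic:

```typescript
// Minimal sketch: bigint stand-ins for uint256, with Solidity 0.8's
// checked subtraction reproduced by hand, to show why the loop reverts
// whenever a score is below the mean.
function checkedSub(a: bigint, b: bigint): bigint {
  if (b > a) throw new Error("panic: arithmetic underflow"); // Solidity 0.8 reverts here
  return a - b;
}

// Mirrors the POC scores parseEther("0.2") and parseEther("0.3")
const data = [200000000000000000n, 300000000000000000n];
const mean = (data[0] + data[1]) / 2n; // 0.25e18

// data[0] < mean, so `data[i] - mean` underflows on the very first iteration
try {
  checkedSub(data[0], mean);
} catch (e) {
  console.log((e as Error).message); // the entire finalizeValidation call reverts
}
```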

POC

You can modify any test to use differing score values and it will revert. Currently, all tests use the same value for every score in the array, which is the only case in which this error does not occur.

describe("with validation of different scores", function () {
  const [numGenerations, numValidations] = [2, 2];
  // Make an array with two different scores
  const scores = [parseEther("0.2"), parseEther("0.3")];
  let generatorAllowancesBefore: bigint[];
  let validatorAllowancesBefore: bigint[];

  this.beforeAll(async () => {
    taskId++;
    generatorAllowancesBefore = await Promise.all(
      generators.map((g) => token.allowance(coordinatorAddress, g.address))
    );
    validatorAllowancesBefore = await Promise.all(
      validators.map((v) => token.allowance(coordinatorAddress, v.address))
    );
  });

  it("should validate with different scores", async function () {
    await safeRequest(coordinator, token, requester, taskId, input, models, {
      difficulty,
      numGenerations,
      numValidations,
    });
    for (let i = 0; i < numGenerations; i++) {
      await safeRespond(coordinator, generators[i], output, metadata, taskId, BigInt(i));
    }
    // First validator submits different scores
    await safeValidate(coordinator, validators[0], scores, metadata, taskId, 0n);
    // Second validator submits the same scores
    await safeValidate(coordinator, validators[1], scores, metadata, taskId, 1n);
  });
});

Impact

Critical: this completely breaks the protocol. All test suites currently use the same value for every score, which is the only case in which the calculation does not fail. In practice there will almost always be some variance among scores, so `finalizeValidation` will revert.

Tools Used

Manual Review

Recommendations

Account for scores on both sides of the mean by taking the absolute difference:

function variance(uint256[] memory data) internal pure returns (uint256 ans, uint256 mean) {
    mean = avg(data);
    uint256 sum = 0;
    for (uint256 i = 0; i < data.length; i++) {
        // Safe difference calculation: subtract the smaller value from the larger
        uint256 diff;
        if (data[i] >= mean) {
            diff = data[i] - mean;
        } else {
            diff = mean - data[i];
        }
        sum += diff * diff;
    }
    ans = sum / data.length;
}
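As a sanity check, the recommended fix can be sketched in TypeScript (a rough illustration, not the library's code, with `bigint` mirroring `uint256` integer semantics):

```typescript
// Sketch of the recommended fix: take the absolute difference from the
// mean so the subtraction can never underflow, regardless of side.
function variance(data: bigint[]): { ans: bigint; mean: bigint } {
  const mean = data.reduce((a, b) => a + b, 0n) / BigInt(data.length);
  let sum = 0n;
  for (const x of data) {
    const diff = x >= mean ? x - mean : mean - x; // safe on both sides of the mean
    sum += diff * diff;
  }
  return { ans: sum / BigInt(data.length), mean };
}

// With the POC scores the loop completes instead of reverting:
// mean = 0.25e18, each diff = 0.05e18, variance = (0.05e18)^2 = 2.5e33
const { ans, mean } = variance([200000000000000000n, 300000000000000000n]);
```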
Updates

Lead Judging Commences

inallhonesty Lead Judge 12 months ago
Submission Judgement Published
Validated
Assigned finding tags:

Underflow in computing variance
