The LLM Oracle system uses a validation mechanism where multiple validators score generation outputs. The final result is determined by these validation scores, and validators receive rewards for participating. The system is designed to be permissionless but requires validators to be registered.
The validate() function in LLMOracleCoordinator is vulnerable to front-running attacks that allow malicious actors to manipulate task results. When a task requires validation, any registered validator can submit scores until the required number of validations is reached. There is, however, no mechanism to prevent a malicious validator from monitoring the mempool and front-running legitimate validation transactions.
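The race described above can be sketched as a minimal simulation. Names here (Task, submit_validation) are illustrative stand-ins for the contract's behavior, not its actual API: validation slots are filled strictly in arrival order, so whoever lands transactions first wins.

```python
# Hypothetical model of first-come-first-served validation slot filling.
class Task:
    def __init__(self, num_validations):
        self.num_validations = num_validations  # slots required before finalization
        self.validations = []                   # (validator, scores) in arrival order

    def submit_validation(self, validator, scores):
        # Mirrors the assumed behavior of validate(): any registered
        # validator may fill a slot until the count is reached; there is
        # no validator assignment or ordering constraint.
        if len(self.validations) >= self.num_validations:
            raise RuntimeError("validation already complete")
        self.validations.append((validator, scores))

task = Task(num_validations=3)

# An honest validator broadcasts a validation transaction; the attacker
# observes it in the mempool and lands three higher-gas transactions
# first (from distinct sybil registrations), consuming every slot.
for sybil in ("attacker_1", "attacker_2", "attacker_3"):
    task.submit_validation(sybil, scores=[100, 1, 1])

try:
    task.submit_validation("honest_validator", scores=[40, 60, 55])
except RuntimeError as e:
    print(e)  # honest validation is rejected: all slots are taken
```

Once the slot count is reached, the honest validator's transaction simply reverts, so the attacker pays only gas to exclude everyone else.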
The root cause lies in the first-come-first-served nature of the validation process combined with the predictable proof-of-work difficulty in assertValidNonce(). When the difficulty parameter is set to a medium or low value, it becomes computationally feasible for an attacker to quickly generate valid nonces and front-run legitimate validators.
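To see why low difficulty makes this feasible, consider a sketch of a typical leading-zero-bits proof-of-work check (the exact digest inputs in assertValidNonce() are assumed here, not copied from the contract): expected work is roughly 2^difficulty hash attempts, which is trivial for small difficulty values.

```python
import hashlib

def is_valid_nonce(task_id, validator, nonce, difficulty):
    # Assumed shape of the check: a digest over task data and the nonce
    # must fall below a threshold that halves with each difficulty bit.
    data = f"{task_id}:{validator}:{nonce}".encode()
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest < (1 << (256 - difficulty))

def grind_nonce(task_id, validator, difficulty):
    # Brute-force search; expected attempts are about 2**difficulty.
    nonce = 0
    while not is_valid_nonce(task_id, validator, nonce, difficulty):
        nonce += 1
    return nonce

# At difficulty 10 this is ~1024 hashes -- effectively instant, so an
# attacker can grind nonces for several sybil validators within the
# time it takes one honest transaction to confirm.
nonce = grind_nonce(task_id=42, validator="attacker_1", difficulty=10)
print(nonce)
```

Because the work doubles with each difficulty bit, only a sufficiently high difficulty puts nonce grinding outside the front-running window.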
This attack becomes particularly severe when combined with the ability to manipulate validation scores. Since the attacker controls all validation slots, they can strategically assign scores to ensure their preferred generation wins. For example, they could assign extremely high scores to their chosen generation and minimal scores to others, guaranteeing their selection through the scoring mechanism in finalizeValidation() and getBestResponse(). The statistical validation meant to filter out anomalous scores becomes ineffective since the attacker controls the entire score distribution, allowing them to carefully craft scores that pass validation while still achieving their desired outcome.
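A small sketch illustrates why the statistical filter fails. The filter below (drop scores more than one standard deviation from the mean) is an assumed stand-in for the contract's check, not its exact logic; the point is that any outlier filter over attacker-chosen scores is vacuous, since the attacker can submit identical scores with zero variance.

```python
import statistics

def filter_outliers(scores):
    # Hypothetical statistical check: discard scores more than one
    # standard deviation from the mean, keeping all if none survive.
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)
    return [s for s in scores if abs(s - mean) <= stdev] or scores

# Attacker-controlled score matrix: rows are the three sybil validators
# (all slots), columns keyed by candidate generation.
attacker_scores = {
    "preferred_generation": [96, 96, 96],  # zero variance: nothing to filter
    "honest_generation":    [2, 2, 2],
}

for gen, scores in attacker_scores.items():
    assert filter_outliers(scores) == scores  # filter removes nothing

# The best-response selection (highest mean, as in getBestResponse's
# assumed role) then deterministically picks the attacker's choice.
best = max(attacker_scores, key=lambda g: statistics.mean(attacker_scores[g]))
print(best)  # preferred_generation
```

The filter only detects anomalies relative to the submitted distribution; when one party submits the whole distribution, there are no anomalies to detect.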
High. An attacker can completely control task outcomes by occupying every validation slot for a task, effectively centralizing what should be a decentralized oracle system. This undermines the protocol's core security assumption that multiple independent validators will provide honest scores.
Medium. The attack requires the attacker to:
- be a registered validator;
- have sufficient computational power to solve proof-of-work challenges quickly; and
- be able to monitor the mempool and front-run transactions.
These requirements are reasonably achievable for a motivated attacker, especially when the difficulty parameter is not set high enough.
Several measures could mitigate this vulnerability; the most direct is to enforce a minimum difficulty threshold so that proof-of-work challenges cannot be solved quickly enough to reliably front-run other validators.
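A minimal sketch of such a floor check follows; the parameter name and the specific floor value are assumptions for illustration, and the appropriate value depends on the attacker hash rate the protocol wants to price out.

```python
# Hypothetical minimum-difficulty guard for task requests. With a
# leading-bits proof of work, expected grinding cost is ~2**difficulty
# hashes, so the floor directly bounds how fast slots can be sniped.
MIN_DIFFICULTY = 20  # assumed floor, ~1M expected hashes per nonce

def assert_min_difficulty(difficulty):
    if difficulty < MIN_DIFFICULTY:
        raise ValueError(f"difficulty {difficulty} below floor {MIN_DIFFICULTY}")
    return difficulty

# Expected attempts double per difficulty bit.
for d in (10, 20, 24):
    print(d, 2 ** d)

assert_min_difficulty(24)  # accepted
```

Note that a difficulty floor only raises the attacker's cost; it does not remove the first-come-first-served race itself.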