The protocol uses a custom token ID encoding scheme to pack a collectionId and an itemId into a single uint256 value. This is implemented by left-shifting the collectionId by 128 bits and adding the itemId into the lower 128 bits:
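A representative implementation of this packing, reconstructed here for illustration (the function and parameter names are assumptions based on the description, not the protocol's verbatim source), looks like:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library TokenIdCodec {
    /// Packs collectionId into the upper 128 bits and itemId into the
    /// lower 128 bits of a single uint256 token ID.
    /// NOTE: no bounds checks — inputs wider than 128 bits corrupt the
    /// packed layout instead of reverting.
    function encodeTokenId(uint256 collectionId, uint256 itemId)
        internal
        pure
        returns (uint256)
    {
        return (collectionId << 128) + itemId;
    }
}
```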
Decoding is performed by shifting the token ID right by 128 bits to recover the collectionId, and by casting the token ID to uint128 to recover the itemId:
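A matching decode helper, again reconstructed for illustration under the same naming assumptions, would be:

```solidity
/// Splits a packed token ID back into its two components.
/// The uint128 cast silently discards any bits above the lower 128.
function decodeTokenId(uint256 tokenId)
    internal
    pure
    returns (uint256 collectionId, uint256 itemId)
{
    collectionId = tokenId >> 128;      // upper 128 bits
    itemId = uint256(uint128(tokenId)); // lower 128 bits only
}
```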
This design implicitly assumes that both collectionId and itemId fit within 128 bits. However, the functions accept full uint256 values as inputs and do not enforce any bounds on these parameters.
If either collectionId or itemId exceeds 2^128 - 1, information is silently lost. During encoding, the upper bits of an oversized collectionId are shifted out of the 256-bit word, while the upper bits of an oversized itemId spill into the slot reserved for collectionId. During decoding, the cast to uint128 discards any non-zero bits in the upper 128 bits of tokenId. In all of these cases the original values cannot be reconstructed from the packed token ID.
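As a concrete worked example (values chosen purely for illustration): encoding collectionId = 1 with itemId = 2^128 produces (1 << 128) + 2^128 = 2 << 128, which decodes to collectionId = 2 and itemId = 0 — both wrong, and no revert occurs:

```solidity
// encode(1, 2**128): the oversized itemId spills into the collectionId slot.
uint256 tokenId = (uint256(1) << 128) + (uint256(1) << 128);

uint256 collectionId = tokenId >> 128;      // 2, not the original 1
uint256 itemId = uint256(uint128(tokenId)); // 0, not the original 2**128
```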
This behavior does not cause a revert and may result in malformed token IDs that decode to unexpected collection or item identifiers, breaking internal assumptions across the protocol.
The issue does not require special permissions and affects public helper functions. While normal protocol flows may only use small, well-formed IDs, the lack of validation allows incorrect or malicious inputs to be passed, especially in future extensions, integrations, or edge cases. The silent nature of the truncation increases the likelihood of unnoticed misuse.
Incorrect encoding of token IDs can lead to logical inconsistencies, such as tokens being associated with the wrong collection or item number. This can break metadata resolution, ownership tracking, and collection-specific logic, potentially leading to user confusion or incorrect application behavior.
The following fuzz test demonstrates that the token ID encoding and decoding logic breaks when either collectionId or itemId exceeds the assumed 128-bit boundary. In such cases, decoding does not restore the original values due to silent truncation.
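A Foundry test along these lines (a sketch: the contract name, test name, and inlined helpers are assumptions — in the real codebase the encode/decode functions would be imported from the protocol's library rather than redeclared) could look like:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "forge-std/Test.sol";

contract TokenIdCodecTest is Test {
    // Local copies of the helpers as described in the finding.
    function encodeTokenId(uint256 collectionId, uint256 itemId)
        internal pure returns (uint256)
    {
        return (collectionId << 128) + itemId;
    }

    function decodeTokenId(uint256 tokenId)
        internal pure returns (uint256 collectionId, uint256 itemId)
    {
        collectionId = tokenId >> 128;
        itemId = uint256(uint128(tokenId));
    }

    /// Fuzz: the round trip never restores the inputs once itemId
    /// exceeds type(uint128).max.
    function testFuzz_OversizedItemIdBreaksRoundTrip(
        uint256 collectionId,
        uint256 itemId
    ) public pure {
        // Keep collectionId small enough that the checked addition
        // in encodeTokenId cannot overflow uint256.
        collectionId = bound(collectionId, 0, uint256(1) << 126);
        // Force itemId above the assumed 128-bit boundary.
        itemId = bound(itemId, uint256(type(uint128).max) + 1, uint256(1) << 200);

        uint256 tokenId = encodeTokenId(collectionId, itemId);
        (uint256 decodedCollectionId, uint256 decodedItemId) =
            decodeTokenId(tokenId);

        // itemId's upper bits were folded into the collection slot,
        // so both decoded values differ from the originals.
        assertTrue(decodedItemId != itemId);
        assertTrue(decodedCollectionId != collectionId);
    }
}
```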
To run the test, use the following Foundry command:
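A typical invocation has this shape (the `--match-test` pattern below is a placeholder; substitute the actual fuzz test's name):

```shell
forge test --match-test testFuzz_ -vvv
```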
Output:
As the test results show, oversized inputs decode incorrectly without reverting, confirming the vulnerability described above.
If exploited or triggered unintentionally, this issue can cause token IDs to decode incorrectly, resulting in NFTs being misattributed to the wrong collection or edition. Since the truncation happens silently without reverting, the protocol may continue operating with corrupted identifiers, making the issue difficult to detect and debug.
Add input validation in the encodeTokenId function to ensure that both collectionId and itemId do not exceed the maximum value of a 128-bit unsigned integer (type(uint128).max). If either value exceeds this limit, the function should revert the transaction. This prevents any truncation or incorrect decoding from occurring and ensures the integrity of token IDs.
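A minimal sketch of this fix, under the same naming assumptions as above:

```solidity
/// Recommended hardening: reject inputs wider than 128 bits up front,
/// so a token ID can never be produced that decodes to different values.
function encodeTokenId(uint256 collectionId, uint256 itemId)
    internal
    pure
    returns (uint256)
{
    require(collectionId <= type(uint128).max, "collectionId exceeds 128 bits");
    require(itemId <= type(uint128).max, "itemId exceeds 128 bits");
    return (collectionId << 128) + itemId;
}
```

With both operands bounds-checked, the shifted collectionId and itemId occupy disjoint bit ranges, so the addition is equivalent to a bitwise OR and decoding is guaranteed to round-trip.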