Beatland Festival

First Flight #44
Beginner Friendly, Foundry, Solidity, NFT
Submission Details
Impact: Low
Likelihood: Low
Invalid

L01. Unsafe Token ID Encoding and Decoding

Root + Impact

Description

The encodeTokenId() and decodeTokenId() functions combine a collectionId and itemId into a single uint256 token ID by shifting the collectionId left by 128 bits and adding the itemId in the lower 128 bits.

However, neither function validates that itemId actually fits into 128 bits. If an itemId of 2^128 or larger is encoded, the addition carries into the collectionId bits, and decodeTokenId() then returns a corrupted collectionId together with an itemId silently truncated to its lower 128 bits. This loses uniqueness and lets different (collectionId, itemId) pairs collide on the same token ID.

function encodeTokenId(uint256 collectionId, uint256 itemId) public pure returns (uint256) {
@>     return (collectionId << COLLECTION_ID_SHIFT) + itemId;
}

function decodeTokenId(uint256 tokenId) public pure returns (uint256 collectionId, uint256 itemId) {
@>     collectionId = tokenId >> COLLECTION_ID_SHIFT;
@>     itemId = uint256(uint128(tokenId)); // truncates itemId to 128 bits without check
}
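To make the collision concrete, below is a minimal standalone sketch (assuming COLLECTION_ID_SHIFT is 128, as the 128-bit split implies). It mirrors the vulnerable logic and shows how an oversized itemId carries one unit into the collectionId half, so two different (collectionId, itemId) pairs encode to the same token ID:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustration only: mirrors the vulnerable encode/decode logic shown above.
contract TokenIdCollisionDemo {
    uint256 constant COLLECTION_ID_SHIFT = 128; // assumed shift width

    function encodeTokenId(uint256 collectionId, uint256 itemId) public pure returns (uint256) {
        return (collectionId << COLLECTION_ID_SHIFT) + itemId;
    }

    function decodeTokenId(uint256 tokenId) public pure returns (uint256 collectionId, uint256 itemId) {
        collectionId = tokenId >> COLLECTION_ID_SHIFT;
        itemId = uint256(uint128(tokenId));
    }

    // encode(1001, 2^128 + 5) equals encode(1002, 5): the oversized itemId
    // carries one unit into the collectionId half, so the two pairs collide.
    function collision() external pure returns (bool) {
        uint256 a = encodeTokenId(1001, (uint256(1) << 128) + 5);
        uint256 b = encodeTokenId(1002, 5);
        return a == b; // true
    }
}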

Risk

Likelihood: Low

  • If an itemId that does not fit into 128 bits (2^128 or larger) is ever used, the carry and truncation described above occur.

  • Lack of input validation in token creation enables this scenario.

Impact: Low

  • Token ID collisions may happen due to truncated itemIds.

  • Incorrect URIs, duplicate token minting, and broken uniqueness guarantees.

  • This could undermine system correctness or trust in token authenticity.


Proof of Concept

uint256 collectionId = 1001;
uint256 itemId = uint256(type(uint128).max) + 1; // 2^128, does not fit into 128 bits
uint256 tokenId = encodeTokenId(collectionId, itemId);
(uint256 decodedCol, uint256 decodedItem) = decodeTokenId(tokenId);
// the carry shifts decodedCol to 1002 and decodedItem is truncated to 0
assert(decodedCol != collectionId);
assert(decodedItem != itemId);

Explanation:
In this example, an itemId one larger than the 128-bit maximum is encoded together with a valid collectionId. During encoding the extra bit carries into the collectionId half, and decoding then returns a collectionId of 1002 and an itemId of 0, neither of which matches the original inputs. This demonstrates how the current encode/decode logic silently loses information, which can cause token ID collisions and incorrect behavior.
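The scenario can be reproduced as a Foundry test. Below is a minimal sketch, assuming the two functions are exposed on a contract named FestivalPass at src/FestivalPass.sol (the contract name, path, and constructor are assumptions; adjust them to the actual codebase):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";
import {FestivalPass} from "../src/FestivalPass.sol"; // assumed path and contract name

contract TokenIdTruncationTest is Test {
    FestivalPass pass;

    function setUp() public {
        pass = new FestivalPass(); // constructor arguments omitted; add as required
    }

    function test_decodeCorruptsOversizedItemId() public view {
        uint256 collectionId = 1001;
        uint256 itemId = uint256(type(uint128).max) + 1; // 2^128, does not fit in 128 bits

        uint256 tokenId = pass.encodeTokenId(collectionId, itemId);
        (uint256 decodedCol, uint256 decodedItem) = pass.decodeTokenId(tokenId);

        // The carry corrupts the collection half and the item half is truncated to zero.
        assertEq(decodedCol, collectionId + 1);
        assertEq(decodedItem, 0);
    }
}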

Recommended Mitigation

  • Add a require statement to encodeTokenId that enforces itemId to fit within 128 bits, so an oversized value can never carry into the collectionId half or be truncated later during decoding.

  • Combine the two fields with a bitwise OR (|) instead of addition (+); OR is the appropriate operator for values stored in distinct bit ranges and cannot produce an accidental carry or overlap.

  • In decodeTokenId, extract the itemId with a bitmask (tokenId & ((1 << COLLECTION_ID_SHIFT) - 1)), which recovers the lower 128 bits without a narrowing cast.

  • Together these changes make encoding and decoding perfectly reversible and preserve the uniqueness and correctness of token IDs, as shown in the diff below.

- function encodeTokenId(uint256 collectionId, uint256 itemId) public pure returns (uint256) {
-     return (collectionId << COLLECTION_ID_SHIFT) + itemId;
- }
-
- function decodeTokenId(uint256 tokenId) public pure returns (uint256 collectionId, uint256 itemId) {
-     collectionId = tokenId >> COLLECTION_ID_SHIFT;
-     itemId = uint256(uint128(tokenId));
- }
+ function encodeTokenId(uint256 collectionId, uint256 itemId) public pure returns (uint256) {
+     require(itemId <= type(uint128).max, "Item ID exceeds 128 bits");
+     return (collectionId << COLLECTION_ID_SHIFT) | itemId;
+ }
+
+ function decodeTokenId(uint256 tokenId) public pure returns (uint256 collectionId, uint256 itemId) {
+     collectionId = tokenId >> COLLECTION_ID_SHIFT;
+     itemId = tokenId & ((1 << COLLECTION_ID_SHIFT) - 1);
+ }
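With the fix applied, the behavior can be checked with a Foundry sketch like the following (again assuming a FestivalPass contract exposing the two functions): an explicit test exercises the new revert, and a fuzz test confirms that encode followed by decode is an exact round trip for all 128-bit inputs.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";
import {FestivalPass} from "../src/FestivalPass.sol"; // assumed path and contract name

contract TokenIdRoundTripTest is Test {
    FestivalPass pass;

    function setUp() public {
        pass = new FestivalPass(); // constructor arguments omitted; add as required
    }

    // Oversized item IDs are now rejected up front instead of corrupting the encoding.
    function test_encodeRevertsOnOversizedItemId() public {
        vm.expectRevert(bytes("Item ID exceeds 128 bits"));
        pass.encodeTokenId(1001, uint256(type(uint128).max) + 1);
    }

    // Encoding followed by decoding returns the original pair for any 128-bit inputs.
    function testFuzz_encodeDecodeRoundTrip(uint128 collectionId, uint128 itemId) public view {
        uint256 tokenId = pass.encodeTokenId(collectionId, itemId);
        (uint256 decodedCol, uint256 decodedItem) = pass.decodeTokenId(tokenId);
        assertEq(decodedCol, collectionId);
        assertEq(decodedItem, itemId);
    }
}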
Updates

Lead Judging Commences

inallhonesty (Lead Judge), 4 months ago
Submission Judgement Published
Invalidated
Reason: Non-acceptable severity
