Part 4 — Module 3: Storage Deep Dive
Difficulty: Intermediate
Estimated reading time: ~40 minutes | Exercises: ~4-5 hours
📖 Table of Contents
The Storage Model
SLOAD & SSTORE — The Full Picture
Slot Computation — From Variables to Tries
- State Variables: Sequential Assignment
- Why keccak256: Collision Resistance in 2^256 Space
- Mapping Slot Computation
- Dynamic Array Slot Computation
- Nested Structures: Mappings of Mappings, Mappings of Structs
- The -1 Trick: Preimage Attack Prevention
Storage Packing in Assembly
Transient Storage in Assembly
Production Storage Patterns
- ERC-1967 Proxy Slots in Assembly
- ERC-7201 Namespaced Storage
- SSTORE2: Bytecode as Immutable Storage
- Storage Proofs and Reading Any Contract's Storage
- How to Study Storage-Heavy Contracts
Exercises
Wrap-Up
The Storage Model
In Module 2 you learned memory — a scratch pad that vanishes when the call ends. Now the permanent layer: storage. Every state variable you've ever written in Solidity lives here. Every token balance, every approval, every governance vote — it's all storage slots.
This section teaches what storage actually is at the EVM level, how it's organized under the hood, and why it costs what it costs.
💡 Concept: The 2^256 Key-Value Store
Why this matters: Understanding the storage model is the foundation for everything else in this module — slot computation, packing, and gas optimization all depend on knowing what you're working with.
Each contract has its own key-value store with 2^256 possible keys (called slots). Both keys and values are 32 bytes (256 bits). Every slot defaults to zero — this is why Solidity initializes state variables to zero for free (reading an unwritten slot returns 0x00...00).
This is NOT an array or contiguous memory. It's a sparse map. A contract with 3 state variables uses 3 slots out of 2^256. The storage trie only tracks non-zero slots, so unused slots cost nothing to maintain.
Contract Storage (conceptual model)
┌───────────────────────────────────────────────────┐
│ Slot 0         → 0x0000...002a (simpleValue = 42) │
│ Slot 1         → 0x0000...0000 (mapping base slot)│
│ Slot 2         → 0x0000...0003 (array length = 3) │
│ Slot 3         → 0x0000...0000 (nested mapping)   │
│ ...                                               │
│ Slot 2^256 - 1 → 0x0000...0000                    │
│                                                   │
│ 99.999...% of slots are zero (never written)      │
└───────────────────────────────────────────────────┘
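If it helps to see these semantics in executable form, here is a toy Python model of the sparse map (illustrative only — the class and method names are mine, and this is a mental model, not EVM code):

```python
class ContractStorage:
    """Toy model of EVM contract storage: a sparse map of 256-bit
    slot numbers to 256-bit values, where every slot defaults to zero."""

    def __init__(self):
        self._slots = {}  # only non-zero slots are tracked

    def sload(self, slot: int) -> int:
        return self._slots.get(slot, 0)  # unwritten slots read as 0

    def sstore(self, slot: int, value: int) -> None:
        assert 0 <= slot < 2**256 and 0 <= value < 2**256
        if value == 0:
            self._slots.pop(slot, None)  # zeroed slots leave the trie
        else:
            self._slots[slot] = value

s = ContractStorage()
s.sstore(0, 42)
print(s.sload(0))           # 42
print(s.sload(2**256 - 1))  # 0 -- never written
```

Note how storing zero removes the entry entirely — that mirrors why "99.999...% of slots" cost nothing to maintain.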
💻 Quick Try:
Read any deployed contract's storage with cast:
# Read WETH's slot 0 (name string pointer) on mainnet
cast storage 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 0 --rpc-url https://eth.llamarpc.com
Any slot you read will return a 32-byte hex value. Unwritten slots return
0x0000...0000.
🔍 Deep Dive: From Slot to World State (Merkle Patricia Trie)
Where do storage slots actually live? In Ethereum's world state — a tree structure that organizes all account data.
World State (Modified Merkle Patricia Trie)
│
├── Account 0xAbC...
│     ├── nonce
│     ├── balance
│     ├── codeHash
│     └── storageRoot ──→ Storage Trie
│                           │
│                           ├── keccak256(slot 0) → value
│                           ├── keccak256(slot 1) → value
│                           └── keccak256(slot N) → value
│
├── Account 0xDeF...
│     └── storageRoot ──→ (its own Storage Trie)
│
└── ... (millions of accounts)
How it works:
- Each account in the world state has a storageRoot — the root hash of its storage trie.
- The storage trie is a Modified Merkle Patricia Trie (MPT) where:
  - Path = keccak256(slot_number) (hashed to distribute keys evenly)
  - Leaf = RLP-encoded slot value
- Reading a slot means traversing the trie from root to leaf, following the path derived from the slot number.
- Each internal node is a 32-byte hash that points to the next level. A full traversal from root to leaf typically touches 7-8 nodes.
Why this matters for you:
- The trie structure explains why storage is expensive — it's a database lookup, not a RAM read.
- It explains Merkle proofs: you can prove a slot's value by providing the path from root to leaf.
- It explains why keccak256 is everywhere in storage — the trie itself uses hashed paths for even distribution.
🔍 Deep Dive: Why Cold Access Costs 2100 Gas
Module 1 showed you the numbers: SLOAD cold = 2100 gas, warm = 100 gas. Now you know why.
Cold access (2100 gas): The slot hasn't been accessed in this transaction. The EVM must traverse the storage trie from scratch, loading 7-8 nodes from the node database (LevelDB, PebbleDB, or similar). Each node requires a database I/O operation — reading from disk or SSD. The 2100 gas charge reflects this I/O cost.
Warm access (100 gas): The slot was already accessed earlier in this transaction. The trie nodes are cached in RAM from the first traversal. Now it's just a hash-table lookup in the access set — essentially free compared to disk I/O.
SSTORE new slot (20,000 gas): Writing to a never-used slot means creating new trie nodes, computing new hashes at every level, and eventually writing everything to disk. This is the most expensive single operation in the EVM.
The key insight: Gas costs map to real computational work — disk reads, hash computations, and database writes. They aren't arbitrary numbers.
Recap: See Module 1 — EIP-2929 Deep Dive for access lists (EIP-2930) and the full warm/cold model.
💼 Job Market Context
"How does EVM storage work?"
- Good: "It's a key-value store mapping 256-bit keys to 256-bit values, persisted in the state trie"
- Great: "Each contract has a 2^256 key-value store backed by a Merkle Patricia Trie in the world state. Cold access costs 2,100 gas because it requires loading trie nodes from disk. Warm access costs 100 gas because the node is cached. This is why OpenZeppelin's ReentrancyGuard uses 1→2 instead of 0→1→0 — the SSTORE from non-zero to non-zero avoids the 20,000 gas creation cost"
🚩 Red flag: Not knowing the cold/warm distinction, or thinking storage is like a regular database
Pro tip: Being able to explain why storage costs what it does (trie traversal, disk I/O) signals deep EVM understanding that sets you apart from "I memorize gas tables" candidates
💡 Concept: Verkle Trees — What's Changing
Ethereum plans to migrate from Merkle Patricia Tries to Verkle Trees (EIP-6800).
What changes:
- Proofs shrink dramatically — from ~1KB (Merkle) to ~150 bytes (Verkle). This uses polynomial commitments instead of hash-based proofs.
- Stateless clients become viable — a node can verify a block without storing the full state, just by checking the proof included with the block.
- Gas costs may be restructured — cold access might become cheaper because proof verification is more efficient.
What stays the same:
- The slot computation model (sequential assignment, keccak256 for mappings/arrays) is unchanged.
- Your Solidity and assembly code doesn't change. The sload/sstore opcodes work identically.
Bottom line: Verkle trees change the infrastructure under your contract, not the contract itself. But understanding that the trie exists — and that it's being actively redesigned — is part of having complete EVM knowledge.
SLOAD & SSTORE — The Full Picture
Module 1 showed you sload(0) to read the owner variable. Now we go deeper — the full cost model, the refund mechanics, and the write ordering patterns that production code uses.
💡 Concept: SLOAD & SSTORE in Yul
The opcodes:
assembly {
// Read: load 32 bytes from slot number `slot`
let value := sload(slot)
// Write: store 32 bytes at slot number `slot`
sstore(slot, newValue)
}
Both operate on raw 256-bit slot numbers. No type safety, no bounds checking, no Solidity-level protections. You can read or write ANY slot — including slots that "belong" to other state variables.
💻 Quick Try:
contract StorageBasic {
uint256 public counter; // slot 0
function increment() external {
assembly {
let current := sload(0) // read slot 0
sstore(0, add(current, 1)) // write slot 0
}
}
}
Deploy, call increment(), then check counter(). This is exactly what counter++ compiles to — an SLOAD, ADD, SSTORE sequence.
Verify with forge inspect:
forge inspect StorageBasic storageLayout
This shows the compiler's slot assignments. Use it to confirm your assumptions.
🔍 Deep Dive: The SSTORE Cost State Machine (EIP-2200 + EIP-3529)
SSTORE is not one gas cost — it's a state machine that depends on three values:
- Original value — what the slot held at the start of the transaction
- Current value — what the slot holds right now (may differ if already written in this tx)
- New value — what you're writing
SSTORE Cost State Machine (post-London, EIP-3529)
───────────────────────────────────────────────────
Is the slot warm?
├── No (cold) → Add 2,100 gas surcharge, then proceed as warm
└── Yes (warm) ↓

Is current == new? (no-op)
├── Yes → 100 gas (warm read cost only)
└── No ↓

Is current == original? (first write in tx)
├── Yes →
│    ├── original == 0 → 20,000 gas (CREATE: zero to nonzero)
│    └── original != 0 → 2,900 gas (UPDATE: nonzero to nonzero)
│
└── No → 100 gas (already dirty -- just update the journal)

Refund cases (credited at end of transaction):
─────────────────────────────────────────────────
• current != 0 AND new == 0 → +4,800 gas refund
• current != original AND new == original → restore refund:
  ├── original == 0 → +19,900 gas refund (20,000 - 100)
  └── original != 0 → +2,800 gas refund (2,900 - 100)
• original != 0, current == 0, new != 0 → the earlier 4,800 refund is revoked

Refund cap (EIP-3529): max refund = gas_used / 5
The four cases you need to internalize:
| Case | Example | Gas (warm) | Why |
|---|---|---|---|
| CREATE | 0 → 42 | 20,000 | New trie node created |
| UPDATE | 42 → 99 | 2,900 | Existing node modified |
| DELETE | 42 → 0 | 2,900 + 4,800 refund | Node removed from trie |
| NO-OP | 42 → 42 | 100 | Nothing changes |
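The whole state machine fits in a short function. Here is a Python sketch of the post-London rules (the function name and return shape are mine; the constants come from EIP-2929 and EIP-3529):

```python
COLD_SURCHARGE = 2_100   # EIP-2929 cold storage access surcharge
WARM_READ = 100          # warm storage read cost
SSTORE_SET = 20_000      # zero -> nonzero (CREATE)
SSTORE_RESET = 2_900     # nonzero -> nonzero, first write (UPDATE)
CLEAR_REFUND = 4_800     # nonzero -> zero refund (EIP-3529)

def sstore_gas(original: int, current: int, new: int, cold: bool):
    """Return (gas_charged, refund_delta) for one SSTORE, post-London."""
    gas = COLD_SURCHARGE if cold else 0
    refund = 0
    if current == new:                           # NO-OP
        return gas + WARM_READ, refund
    if current == original:                      # first write to slot this tx
        gas += SSTORE_SET if original == 0 else SSTORE_RESET
        if original != 0 and new == 0:           # DELETE
            refund += CLEAR_REFUND
        return gas, refund
    gas += WARM_READ                             # dirty slot: journal update
    if original != 0:
        if current == 0:
            refund -= CLEAR_REFUND               # un-delete: revoke refund
        elif new == 0:
            refund += CLEAR_REFUND
    if new == original:                          # restored to original value
        refund += (SSTORE_SET if original == 0 else SSTORE_RESET) - WARM_READ
    return gas, refund

print(sstore_gas(0, 0, 42, cold=False))   # (20000, 0) -- CREATE
print(sstore_gas(42, 42, 0, cold=False))  # (2900, 4800) -- DELETE
```

Feeding the four table rows through this function reproduces the table; the dirty-slot branches reproduce the refund cases in the diagram.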
🔗 DeFi Pattern Connection
The reentrancy guard optimization:
The naive pattern: set a flag from 0 to 1 at entry, back to 0 at exit. This means:
- Entry: 20,000 gas (zero → nonzero CREATE)
- Exit: 2,900 gas + 4,800 refund (nonzero → zero DELETE)
- Net: ~18,100 gas
OpenZeppelin's ReentrancyGuard keeps the slot nonzero instead: _status = 2 (1→2) at entry, _status = 1 (2→1) at exit:
- Entry: 2,900 gas (nonzero → nonzero UPDATE)
- Exit: 100 gas (dirty write) + 2,800 gas restore refund
- Net: ~200 gas (warm)
Savings: ~17,900 gas per call. This works because the slot is never zero after deployment.
EIP-3529 refund cap (post-London):
Before London, the max refund was 1/2 of gas used. Gas token schemes (CHI, GST2) exploited this: write to storage when gas is cheap, clear it when gas is expensive to reclaim refunds. EIP-3529 reduced the cap to 1/5, killing the economic viability of gas tokens.
📘 Intermediate Example: Write Ordering Strategy
When a function reads and writes multiple slots, order matters for clarity and potential optimization:
// Pattern: batch reads, then writes (illustrative sketch -- the slot
// constants and the seized/repaid amounts are assumed defined elsewhere)
function liquidate(address user) external {
    assembly {
        // --- READS (all sloads first) ---
        let collateral := sload(collateralSlot)
        let debt := sload(debtSlot)
        let price := sload(priceSlot)
        let factor := sload(factorSlot)
        // --- COMPUTE ---
        let health := div(mul(collateral, price), mul(debt, factor))
        // --- WRITES (all sstores last) ---
        sstore(collateralSlot, sub(collateral, seized))
        sstore(debtSlot, sub(debt, repaid))
    }
}
Why this pattern:
- Clarity: All state reads are grouped, making it easy to audit what state the function depends on.
- Gas: Once a slot is warm (first SLOAD at 2,100 gas), subsequent reads are 100 gas. Grouping reads doesn't change gas, but grouping writes after computation prevents accidentally reading stale values from a slot you just wrote.
- DeFi standard: Lending protocols (Aave, Compound) and AMMs (Uniswap) follow this read-compute-write pattern universally.
💼 Job Market Context
"Why does the SSTORE cost depend on the original value?"
- Good: "Gas reflects the work the trie must do — creating a node costs more than updating one."
- Great: "It's a three-state model: original, current, and new. The EVM tracks the original value per-transaction because restoring it (dirty → original) is cheaper than a fresh write. EIP-3529 capped refunds at 1/5 of gas used to kill gas token farming."
"What happened with gas tokens?"
- Good: "They exploited SSTORE refunds to bank gas when cheap and reclaim when expensive."
- Great: "CHI and GST2 wrote to storage (20,000 gas each) during low-gas periods, then cleared those slots during high-gas periods to claim refunds. Pre-London, the 50% refund cap made this profitable. EIP-3529 reduced it to 20%, making the scheme uneconomical."
⚠️ Common Mistakes
- Writing to a base slot when you meant a derived slot — sstore(0, value) overwrites slot 0 directly. If slot 0 is the base slot of a dynamic array, you've just corrupted its length. Always compute the derived slot with keccak256
- Misreading zero from sload — sload returns 0 for uninitialized slots AND for slots explicitly set to 0. You can't distinguish "never written" from "set to zero" without additional bookkeeping
- Forgetting SSTORE gas depends on current value — Writing the same value that's already stored still costs gas (warm access: 100 gas). But writing a new value to a slot that's already non-zero costs 2,900 (not 20,000). Understanding the state machine saves significant gas
Slot Computation — From Variables to Tries
This is the section Module 1 teased: how does the EVM know WHERE to store a mapping entry or an array element? The answer is keccak256 — and understanding the exact formulas unlocks the ability to read any contract's storage from the outside.
💡 Concept: State Variables — Sequential Assignment
State variables receive slots sequentially starting from slot 0, in declaration order:
contract Example {
uint256 public a; // slot 0
uint256 public b; // slot 1
address public owner; // slot 2
bool public paused; // slot 2 (packed with owner! see below)
uint256 public total; // slot 3
}
Packing rules: Variables smaller than 32 bytes share a slot if they fit. In the example above, owner (20 bytes) and paused (1 byte) together use 21 bytes, which fits in one 32-byte slot. Variables are right-aligned within the slot and packed in declaration order.
Slot 2 layout:
Byte  31 .............. 21   20   19 .............. 0
┌──────────────────────────┬──────┬──────────────────┐
│    unused (11 bytes)     │paused│ owner (20 bytes) │
│    0x000000000000...     │ 0x01 │  0xAbCd...1234   │
└──────────────────────────┴──────┴──────────────────┘
Note: bool takes 1 byte, address takes 20 bytes. Together they fit in one 32-byte slot. A uint256 after them starts a new slot because 21 + 32 > 32.
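The packing rule is easy to check mechanically. Here is a small Python sketch of the slot-assignment loop (the helper name is mine; it ignores structs and arrays, which always start a fresh slot):

```python
def assign_slots(sizes):
    """Assign (slot, byte_offset) to each variable, mimicking Solidity's
    rule: pack into the current slot if the value fits in the remaining
    bytes, otherwise start a new slot. sizes = byte widths in declaration
    order (uint256 = 32, address = 20, bool = 1, ...)."""
    layout, slot, offset = [], 0, 0
    for size in sizes:
        if offset + size > 32:        # doesn't fit: open a new slot
            slot, offset = slot + 1, 0
        layout.append((slot, offset))
        offset += size
    return layout

# uint256 a, uint256 b, address owner, bool paused, uint256 total
print(assign_slots([32, 32, 20, 1, 32]))
# [(0, 0), (1, 0), (2, 0), (2, 20), (3, 0)]
```

Running it on the Example contract above reproduces the layout in the comments: owner at (2, 0), paused packed at (2, 20), total pushed to slot 3.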
💻 Quick Try:
# Inspect any contract's storage layout
forge inspect Example storageLayout
This outputs JSON showing each variable's slot number and byte offset within the slot. Use it to verify your assumptions before writing assembly.
💡 Concept: Why keccak256 — Collision Resistance in 2^256 Space
Mappings and dynamic arrays can't use sequential slots — they have an unbounded number of entries. Instead, they use keccak256 to compute slot numbers.
The problem: A mapping(address => uint256) could have entries for any of 2^160 addresses. You can't reserve sequential slots for all possible keys.
The solution: Hash the key with the mapping's base slot to produce a deterministic but "random" slot number:
slot_for_key = keccak256(abi.encode(key, baseSlot))
Why this works: keccak256 distributes outputs uniformly across 2^256 space. The probability of two different (key, baseSlot) pairs producing the same slot is ~2^-128 (birthday bound) — astronomically unlikely. In practice, collisions don't happen.
Why NOT key + baseSlot? Arithmetic would create predictable, overlapping ranges. Mapping A at slot 1 with key 0 would produce slot 1. Mapping B at slot 0 with key 1 would also produce slot 1. Collision. Hashing eliminates this by "scrambling" the output.
💡 Concept: Mapping Slot Computation
For mapping(KeyType => ValueType) at base slot p:
slot(key) = keccak256(abi.encode(key, p))
Both key and p are left-padded to 32 bytes and concatenated (64 bytes total), then hashed.
Step-by-step example:
contract Token {
mapping(address => uint256) public balances; // slot 0
}
To read balances[0xBEEF]:
key = 0x000000000000000000000000000000000000BEEF (address, left-padded to 32 bytes)
slot = 0x0000000000000000000000000000000000000000000000000000000000000000 (base slot 0)
hash input = key ++ slot (64 bytes)
slot(0xBEEF) = keccak256(hash_input)
In Yul (using scratch space from Module 2):
assembly {
// Store key at scratch word 1, base slot at scratch word 2
mstore(0x00, key) // key in bytes 0x00-0x1f
mstore(0x20, 0) // base slot (0) in bytes 0x20-0x3f
let slot := keccak256(0x00, 0x40) // hash 64 bytes
let balance := sload(slot)
}
This pattern — store two 32-byte values in scratch space, hash 64 bytes — is the canonical way to compute mapping slots in assembly.
🔍 Deep Dive: Deriving the Mapping Formula
Why abi.encode(key, slot) and not abi.encodePacked(key, slot)?
abi.encodePacked for an address produces 20 bytes. For a uint256, 32 bytes. So encodePacked(address_key, uint256_slot) is 52 bytes, while encodePacked(uint256_key, uint256_slot) is 64 bytes. Different key types produce different-length inputs, which could create subtle collision scenarios.
abi.encode always pads to 32 bytes per value, so the hash input is always exactly 64 bytes regardless of key type. This consistency eliminates any ambiguity.
Why is the base slot the SECOND argument?
Convention, but it has a useful property: for nested mappings, the result of the first hash becomes the "base slot" for the next level. Putting the slot second means the chaining reads naturally:
// mapping(address => mapping(uint256 => bool)) at slot 5
level1 = keccak256(abi.encode(outerKey, 5)) // base slot for inner mapping
level2 = keccak256(abi.encode(innerKey, level1)) // final slot
The slot "flows" through the second position at each level.
⚠️ Common Mistakes
- Wrong argument order in keccak256 — For mappings, it's keccak256(abi.encode(key, baseSlot)) — key first, slot second. Reversing them computes a completely different (but valid) slot, leading to silent data corruption
- Using encodePacked instead of encode — Solidity uses abi.encode (32-byte padded) for slot derivation, not abi.encodePacked. If you use packed encoding in assembly, you'll compute wrong slots that don't match Solidity's getters
- Assuming mapping slots are sequential — Each mapping entry lives at keccak256(key, slot), scattered across the 2^256 space. There's no way to enumerate all keys without off-chain indexing (events)
💼 Job Market Context
"How do you compute a mapping's storage slot?"
- Good: "keccak256(abi.encode(key, baseSlot))"
- Great: "The slot is keccak256(abi.encode(key, mappingSlot)). The key goes first, the mapping's base slot second, both padded to 32 bytes. This scatters entries uniformly across the 2^256 space, making collisions astronomically unlikely. For nested mappings like mapping(address => mapping(uint => uint)), you apply the formula twice: first hash the outer key with the base slot, then hash the inner key with that result. This is how cast storage and block explorers read arbitrary mapping values"
🚩 Red flag: Not being able to derive the formula or confusing the argument order
Pro tip: Show you can use forge inspect Contract storageLayout and cast storage <address> <slot> to read any on-chain mapping — this is a practical skill auditors use daily
💡 Concept: Dynamic Array Slot Computation
For Type[] storage arr at base slot p:
- Length is stored at slot p itself: arr.length = sload(p)
- Element i is at slot keccak256(abi.encode(p)) + i
Dynamic Array Layout
────────────────────
Slot p: array length
        │
Slot keccak256(p) + 0: element 0
Slot keccak256(p) + 1: element 1
Slot keccak256(p) + 2: element 2
...
Slot keccak256(p) + n-1: element n-1
Why hash the base slot? Array elements need contiguous slots (for efficient iteration), but those slots must not conflict with other state variables' sequential slots (0, 1, 2...). Hashing the base slot "teleports" the element region to a random location in the 2^256 space, far from the sequential region.
In Yul:
assembly {
let length := sload(baseSlot) // array length
mstore(0x00, baseSlot) // hash input: base slot
let dataStart := keccak256(0x00, 0x20) // note: only 32 bytes, not 64
let element_i := sload(add(dataStart, i))
}
Note: Array slot computation hashes only 32 bytes (keccak256(abi.encode(p))), while mapping slot computation hashes 64 bytes (keccak256(abi.encode(key, p))). This is because the array base slot alone is sufficient — the index is added arithmetically.
💼 Job Market Context
"Where is a dynamic array's data stored?"
- Good: "The length is at the base slot, elements start at keccak256(baseSlot)"
- Great: "The base slot stores the array length. The first element lives at keccak256(abi.encode(baseSlot)), and element i is at that value plus i. This means arrays can overlap with mapping slots in theory, but the probability is negligible because both use keccak256. For bytes and string, short values (≤31 bytes) are packed into the base slot itself with twice the length in the lowest byte — this is the "short string optimization" that saves a full SLOAD for common cases"
🚩 Red flag: Not knowing the short string optimization, or thinking arrays are stored sequentially starting at their declaration slot
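The short string encoding is simple enough to reproduce exactly in Python (the helper names are mine): the bytes sit left-aligned in the 32-byte slot, and the lowest byte stores twice the length (an even low byte marks the short form; an odd one marks the long form).

```python
def encode_short_string(s: bytes) -> int:
    """Solidity storage encoding for string/bytes <= 31 bytes: data is
    left-aligned in the slot, and the lowest byte holds length * 2."""
    assert len(s) <= 31
    word = int.from_bytes(s.ljust(32, b"\x00"), "big")
    return word | (len(s) * 2)

def decode_short_string(word: int) -> bytes:
    length = (word & 0xFF) // 2        # low byte is length * 2
    return word.to_bytes(32, "big")[:length]

w = encode_short_string(b"WETH")
print(decode_short_string(w))  # b'WETH'
```

Reading WETH's slot 0 with cast (as in the Quick Try earlier) returns a word of exactly this shape, which you can decode with the second helper.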
💡 Concept: Nested Structures — Mappings of Mappings, Mappings of Structs
Mapping of mappings:
mapping(address => mapping(uint256 => uint256)) public nested; // slot 3
To read nested[0xCAFE][7]:
Step 1: Outer mapping
level1_slot = keccak256(abi.encode(0xCAFE, 3))
Step 2: Inner mapping (using level1_slot as the base)
final_slot = keccak256(abi.encode(7, level1_slot))
value = sload(final_slot)
In Yul:
assembly {
// Level 1: hash(outerKey, baseSlot)
mstore(0x00, outerKey)
mstore(0x20, 3) // base slot of outer mapping
let level1 := keccak256(0x00, 0x40)
// Level 2: hash(innerKey, level1)
mstore(0x00, innerKey)
mstore(0x20, level1)
let finalSlot := keccak256(0x00, 0x40)
let value := sload(finalSlot)
}
Mapping of structs:
struct UserData {
uint256 balance; // offset 0
uint256 debt; // offset 1
uint256 lastUpdate; // offset 2
}
mapping(address => UserData) public users; // slot 4
To read users[addr].debt:
base = keccak256(abi.encode(addr, 4)) // base slot for this user's struct
debt_slot = base + 1 // offset 1 within the struct
value = sload(debt_slot)
Struct fields occupy sequential slots from the computed base. Field 0 at base, field 1 at base+1, field 2 at base+2. The packing rules from sequential assignment apply within each struct too.
📘 Intermediate Example: Trace Aave V3's ReserveData Layout
Aave V3's core data structure is mapping(address => DataTypes.ReserveData) in the Pool contract. ReserveData is a struct with ~15 fields spanning multiple slots.
Let's trace how to read the liquidityIndex for WETH:
// From Aave V3 DataTypes.sol (simplified)
struct ReserveData {
ReserveConfigurationMap configuration; // offset 0 (1 slot, bitmap)
uint128 liquidityIndex; // offset 1 (packed with next field)
uint128 currentLiquidityRate; // offset 1 (packed in same slot)
uint128 variableBorrowIndex; // offset 2 (packed with next field)
uint128 currentVariableBorrowRate; // offset 2 (packed in same slot)
// ... more fields at offset 3, 4, ...
}
Step 1: Find the mapping's base slot. Use forge inspect or read Aave's code. Suppose the mapping is at slot 52.
Step 2: Compute the struct base for WETH (0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2):
structBase = keccak256(abi.encode(WETH_ADDRESS, 52))
Step 3: liquidityIndex is at offset 1. It's a uint128 packed in the low 128 bits of that slot:
slot = structBase + 1
packed = sload(slot)
liquidityIndex = and(packed, 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF) // low 128 bits
Step 4: Verify with cast storage:
# Compute the slot off-chain, then read it
cast storage <AAVE_POOL_ADDRESS> <computed_slot> --rpc-url https://eth.llamarpc.com
This is the power of understanding slot computation: you can read any protocol's internal state directly, without needing an ABI or getter function.
💡 Concept: The -1 Trick — Preimage Attack Prevention
Part 1 Module 6 introduced ERC-1967 proxy slots:
implementation_slot = keccak256("eip1967.proxy.implementation") - 1
// = 0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc
Why subtract 1?
If the slot were exactly keccak256("eip1967.proxy.implementation"), an attacker could observe that the slot has a known keccak256 preimage (the string "eip1967.proxy.implementation"). While this doesn't directly enable an attack, it creates a theoretical risk:
The Solidity compiler computes mapping slots as keccak256(abi.encode(key, baseSlot)). If a carefully crafted implementation contract has a mapping whose (key, baseSlot) combination happens to hash to the same value as keccak256("eip1967.proxy.implementation"), the mapping entry would collide with the proxyβs implementation slot.
Subtracting 1 eliminates this risk. The final slot is keccak256(X) - 1, which has no known preimage under keccak256. Finding a Y such that keccak256(Y) = keccak256(X) - 1 requires breaking keccak256's preimage resistance.
ERC-7201 uses the same principle (see below) with an additional hashing step.
💼 Job Market Context
"Why does ERC-7201 subtract 1 before hashing?"
- Good: "To prevent storage collisions between namespaces and regular variable slots"
- Great: "The -1 trick prevents preimage attacks. If you hash a namespace string directly, someone could craft a contract where a mapping entry's derived slot happens to equal keccak256(namespace). By subtracting 1 before the final hash, you force the result to be keccak256(string) - 1, which has no known preimage — making it impossible to construct a colliding derived slot. Vyper uses the same principle in its storage layout"
🚩 Red flag: Not knowing what a preimage attack is in this context
Pro tip: ERC-7201 is increasingly asked about in interviews for upgradeable contract positions — it's the modern replacement for unstructured storage
Storage Packing in Assembly
You know Solidity auto-packs small variables (sequential slots above). Now you'll do it by hand in assembly — the same patterns used by Aave V3's bitmap configuration, Uniswap V3's Slot0, and every gas-optimized protocol.
💡 Concept: Manual Pack/Unpack with Bit Operations
Packing two uint128 values into one 256-bit slot:
Bit 255                      128 127                        0
┌───────────────────────────────┬───────────────────────────┐
│         high (uint128)        │        low (uint128)      │
└───────────────────────────────┴───────────────────────────┘
Pack:
assembly {
let packed := or(shl(128, high), and(low, 0xffffffffffffffffffffffffffffffff))
sstore(slot, packed)
}
Unpack:
assembly {
let packed := sload(slot)
let low := and(packed, 0xffffffffffffffffffffffffffffffff) // mask low 128 bits
let high := shr(128, packed) // shift right 128 bits
}
You saw this concept in Part 1's BalanceDelta (two int128 values in one int256). Now you're implementing the raw assembly version.
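The same pack/unpack bit math, modeled in Python so you can check it outside the EVM (the function names are mine; each line mirrors the Yul above):

```python
MASK128 = (1 << 128) - 1  # 128 one-bits, same as 0xffff...ffff in the Yul

def pack_u128_pair(high: int, low: int) -> int:
    """high in bits 128-255, low in bits 0-127.
    Mirrors: or(shl(128, high), and(low, MASK128))."""
    return ((high & MASK128) << 128) | (low & MASK128)

def unpack_u128_pair(packed: int):
    """Mirrors: shr(128, packed) and and(packed, MASK128)."""
    return packed >> 128, packed & MASK128

p = pack_u128_pair(7, 42)
print(unpack_u128_pair(p))  # (7, 42)
```

Because Python integers are arbitrary precision, the masks here do the bounds enforcement that the EVM's 256-bit words give you implicitly.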
Packing address (20 bytes) + uint96 into one slot:
Bit 255             96 95                 0
┌─────────────────────┬───────────────────┐
│   address (160b)    │    uint96 (96b)   │
└─────────────────────┴───────────────────┘
assembly {
// Pack
let packed := or(shl(96, addr), and(value, 0xffffffffffffffffffffffff))
sstore(slot, packed)
// Unpack address (high 160 bits)
let addr := shr(96, sload(slot))
// Unpack uint96 (low 96 bits)
let val := and(sload(slot), 0xffffffffffffffffffffffff)
}
The address mask is 0xffffffffffffffffffffffff (24 hex chars = 96 bits). The address is shifted left by 96 bits to occupy the high 160 bits.
🔍 Deep Dive: Read-Modify-Write Pattern
The most important assembly storage pattern: updating one field without touching the others.
Goal: Update the "low" uint128 field, keep "high" unchanged.
Step 1: SLOAD → 0xAAAAAAAA_BBBBBBBB (high=AAAA, low=BBBB)
Step 2: CLEAR field → 0xAAAAAAAA_00000000 (AND with NOT mask)
mask for low 128 bits = 0xFFFFFFFF_FFFFFFFF (128 ones)
inverted mask = 0xFFFFFFFF_00000000 (128 ones, 128 zeros)
result = AND(packed, inverted_mask)
Step 3: SHIFT new → 0x00000000_CCCCCCCC (new value in position)
new_low already fits in low 128 bits, no shift needed
Step 4: OR together → 0xAAAAAAAA_CCCCCCCC (combined)
result = OR(cleared, shifted_new)
Step 5: SSTORE → written back to slot
Full Yul code for updating the low field:
assembly {
let packed := sload(slot)
// Clear the low 128 bits: AND with a mask that has 1s in the high 128, 0s in the low 128
let mask := not(0xffffffffffffffffffffffffffffffff) // = 0xFFFF...0000 (128 high bits set)
let cleared := and(packed, mask)
// OR in the new value (already in the low 128 bit position)
let updated := or(cleared, and(newLow, 0xffffffffffffffffffffffffffffffff))
sstore(slot, updated)
}
Common audit finding: Off-by-one in shift amounts or mask widths. If you clear 127 bits instead of 128, the highest bit of the low field "bleeds" into the high field. Always verify masks with small test values.
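To convince yourself the clear-then-OR sequence is actually required, here is the five-step sequence as a Python sketch (the function name is mine):

```python
MASK128 = (1 << 128) - 1

def update_low(packed: int, new_low: int) -> int:
    """Replace the low 128 bits of a packed word, preserving the high 128."""
    cleared = packed & ~MASK128            # steps 1-2: clear the target field
    return cleared | (new_low & MASK128)   # steps 3-4: OR in the new value

packed = (0xAAAA << 128) | 0xBBBB          # high = 0xAAAA, low = 0xBBBB
updated = update_low(packed, 0xCCCC)
assert updated == (0xAAAA << 128) | 0xCCCC  # high untouched, low replaced

# Skipping the clear step corrupts the field: OR-ing 0xCCCC over 0xBBBB
# leaves stale bits set (0xBBBB | 0xCCCC == 0xFFFF, not 0xCCCC).
assert (packed | 0xCCCC) != updated
```

The last assertion is exactly the "forgetting to clear before OR-ing" mistake listed below the audit note.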
⚠️ Common Mistakes
- Off-by-one in shift amounts — Packing a uint96 above an address (160 bits) requires shl(160, value), not shl(96, value). The shift amount is the position of the field, not its width. Draw the bit layout before writing the code
- Forgetting to clear before OR-ing — The read-modify-write pattern requires clearing the target bits first with and(slot, not(mask)). If you skip the clear step and just OR the new value, you'll get corrupted data whenever the new value has fewer set bits than the old one
- Inverted masks — not(shl(160, 0xffffffffffffffffffffffff)) clears bits 160-255. Getting the mask width or position wrong silently corrupts adjacent fields. Always verify with small test values
💡 Concept: Aave V3 ReserveConfiguration Case Study
Aave V3 packs an entire reserveβs configuration into a single uint256 bitmap:
Aave V3 ReserveConfigurationMap (first 64 bits shown)
Bit 63             48 47             32 31             16 15              0
┌────────────────────┬─────────────────┬─────────────────┬────────────────┐
│  decimals + flags  │ liq. bonus (16b)│ liq. thresh(16b)│    LTV (16b)   │
└────────────────────┴─────────────────┴─────────────────┴────────────────┘
The full 256-bit word contains: LTV, liquidation threshold, liquidation bonus, decimals, active flag, frozen flag, borrowable flag, stable rate flag, reserve factor, borrowing cap, supply cap, and more — all in one slot.
Part 2 Module 4 showed the Solidity-level configuration. Here's how to access it in assembly.
Reading LTV (bits 0-15):
assembly {
let config := sload(configSlot)
let ltv := and(config, 0xFFFF) // mask low 16 bits
}
Reading liquidation threshold (bits 16-31):
assembly {
let config := sload(configSlot)
let liqThreshold := and(shr(16, config), 0xFFFF) // shift right 16, mask 16 bits
}
Setting LTV (read-modify-write):
assembly {
let config := sload(configSlot)
let cleared := and(config, not(0xFFFF)) // clear bits 0-15
let updated := or(cleared, and(newLTV, 0xFFFF)) // set new LTV
sstore(configSlot, updated)
}
One SLOAD to read everything. That single storage read gives you access to 15+ configuration parameters. Without packing, this would be 15 separate SLOADs — up to 31,500 gas cold vs 2,100 gas for the packed read.
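Generic get/set helpers for such a bitmap are easy to model in Python (the helper names are mine; the two field positions follow the LTV and liquidation-threshold offsets used above):

```python
LTV_SHIFT, LTV_BITS = 0, 16    # bits 0-15
LT_SHIFT, LT_BITS = 16, 16     # bits 16-31 (liquidation threshold)

def get_field(config: int, shift: int, bits: int) -> int:
    """Mirrors: and(shr(shift, config), mask)."""
    return (config >> shift) & ((1 << bits) - 1)

def set_field(config: int, shift: int, bits: int, value: int) -> int:
    """Read-modify-write: clear the field, then OR in the new value."""
    mask = ((1 << bits) - 1) << shift
    return (config & ~mask) | ((value << shift) & mask)

config = 0
config = set_field(config, LTV_SHIFT, LTV_BITS, 8000)  # LTV = 80.00%
config = set_field(config, LT_SHIFT, LT_BITS, 8250)    # threshold = 82.50%
print(get_field(config, LTV_SHIFT, LTV_BITS))          # 8000
```

Note that setting the threshold leaves the LTV field untouched — the clear-then-OR step is what guarantees that.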
Production code: Aave V3 ReserveConfiguration.sol
🔍 Deep Dive: Gas Analysis — Packed vs Unpacked
| Scenario | Unpacked (5 separate slots) | Packed (1 slot + bit math) |
|---|---|---|
| Cold read all 5 | 5 x 2,100 = 10,500 gas | 1 x 2,100 + ~50 shifts = ~2,150 gas |
| Warm read all 5 | 5 x 100 = 500 gas | 1 x 100 + ~50 shifts = ~150 gas |
| Update 1 field | 1 x 2,900 = 2,900 gas | 1 x 2,100 (read) + 2,900 (write) + ~50 = ~5,050 gas |
The tradeoff is clear:
- Packing wins big for read-heavy data (configuration, parameters, metadata). Aave V3 reads reserve configuration on every borrow, repay, and liquidation — the savings compound.
- Packing costs more for write-heavy data where you update individual fields frequently, because every update requires read-modify-write (an extra SLOAD).
Rule of thumb: Pack data that's written rarely and read often (protocol configuration, token metadata, access control flags). Keep data that's written frequently in separate slots (balances, counters, timestamps).
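The table's numbers are simple arithmetic. A Python sketch (the function name is mine, and the ~50 gas of bit math is a rough assumption, as in the table):

```python
COLD, WARM = 2100, 100  # per-slot SLOAD costs (EIP-2929)
BIT_MATH = 50           # rough overhead of the mask/shift ops

def read_cost(n_fields: int, packed: bool, warm: bool) -> int:
    """Gas to read n_fields values: one slot if packed, one slot each if not."""
    per_slot = WARM if warm else COLD
    if packed:
        return per_slot + BIT_MATH   # single SLOAD plus unpacking
    return per_slot * n_fields       # one SLOAD per field

print(read_cost(5, packed=False, warm=False))  # 10500
print(read_cost(5, packed=True, warm=False))   # 2150
```

Varying n_fields shows why the savings compound: the packed read is flat while the unpacked read grows linearly.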
💼 Job Market Context
"Walk me through how you'd pack configuration data in a protocol."
- Good: Describe the mask/shift pattern for packing multiple fields into one uint256.
- Great: Discuss when packing is worth it (read-heavy config) vs when it's not (frequently-updated individual fields). Reference Aave V3 as the canonical example. Mention that packing also reduces cold access overhead for functions that need multiple config values.
Interview red flag: Packing everything blindly without considering write frequency.
Transient Storage in Assembly
You learned TLOAD/TSTORE conceptually in Part 1 and used the transient keyword. Now the assembly patterns — and why the flat 100 gas cost changes everything.
💡 Concept: TLOAD & TSTORE Yul Patterns
Syntax:
assembly {
tstore(slot, value) // write to transient slot
let val := tload(slot) // read from transient slot
}
Key differences from SLOAD/SSTORE:
| Property | SLOAD/SSTORE | TLOAD/TSTORE |
|---|---|---|
| Gas cost | 100-20,000+ (warm / cold / zero→nonzero write) | Always 100 |
| Cold/warm? | Yes (EIP-2929) | No |
| Refunds? | Yes (EIP-3529) | No |
| Persists? | Across transactions | Cleared at end of transaction |
| In storage trie? | Yes | No (separate transient map) |
Reentrancy guard in assembly:
function protectedFunction() external {
assembly {
if tload(0) { revert(0, 0) } // already entered? revert
tstore(0, 1) // set lock
}
// ... function body ...
assembly {
tstore(0, 0) // clear lock
}
}
Cost comparison: 200 gas total (set + clear) vs ~5,800+ gas with an SSTORE-based guard. That's a 29x reduction.
No refund on clearing — unlike SSTORE, where 1→0 gives 4,800 gas back. But the flat 100 gas makes the total cost predictable and much cheaper overall.
🔍 Uniswap V4 Assembly Walkthrough
Uniswap V4's PoolManager uses transient storage for flash accounting — tracking per-currency balance changes across a multi-step callback:
// Simplified from Uniswap V4 PoolManager
function _accountDelta(Currency currency, int256 delta) internal {
assembly {
// Compute transient slot for this currency's delta
mstore(0x00, currency)
mstore(0x20, CURRENCY_DELTA_SLOT)
let slot := keccak256(0x00, 0x40)
// Read current delta, add new delta
let current := tload(slot)
let updated := add(current, delta)
tstore(slot, updated)
}
}
The pattern:
- Compute a transient slot using the same keccak256 formula as mapping slots.
- Read the current delta with `tload`.
- Update and write back with `tstore`.
- At the end of the `unlock()` callback, verify all deltas are zero (settlement).
Why this only works with transient storage: A single swap touches multiple currencies. With SSTORE, each delta update would cost 2,900-20,000 gas. With TSTORE at 100 gas, tracking deltas per-currency per-swap is economically viable. This enables Uniswap V4's singleton architecture, where all pools share one contract.
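To make the settlement invariant concrete, here is a simplified Python model of flash accounting (the class and method names are invented, not Uniswap's API): deltas accumulate per currency in a transaction-scoped map and must net to zero before settlement succeeds.

```python
# Simplified model of flash accounting: per-currency deltas live in a
# transaction-scoped map (the transient-storage analogue) and must all
# net to zero before the unlock/settlement step succeeds.
class FlashAccounting:
    def __init__(self):
        self.deltas = {}  # currency -> signed delta (tload/tstore analogue)

    def account_delta(self, currency: str, delta: int) -> None:
        self.deltas[currency] = self.deltas.get(currency, 0) + delta

    def settle(self) -> None:
        # At the end of unlock(), every delta must be zero.
        unsettled = {c: d for c, d in self.deltas.items() if d != 0}
        if unsettled:
            raise RuntimeError(f"unsettled deltas: {unsettled}")
        self.deltas.clear()  # transient storage is wiped per transaction

acct = FlashAccounting()
acct.account_delta("ETH", -1_000)   # pool owes the user ETH
acct.account_delta("USDC", 2_000)   # user owes the pool USDC
acct.account_delta("ETH", 1_000)    # user settles the ETH side
acct.account_delta("USDC", -2_000)  # pool pays out, netting USDC to zero
acct.settle()                       # passes: all deltas net to zero
```
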
🔗 DeFi Pattern Connection
Transient storage use cases in production DeFi:
- Flash accounting (Uniswap V4) — track balance deltas across callback sequences
- Reentrancy locks — 29x cheaper than SSTORE-based guards
- Callback context — pass data between caller and callback without storage writes
- Temporary approvals — grant one-time permission within a transaction
💼 Job Market Context
"What's the difference between transient storage and regular storage?"
- Good: "Transient storage is cleared at the end of each transaction, so it costs less gas"
- Great: "TLOAD/TSTORE (EIP-1153) provide a key-value store that's transaction-scoped — it persists across internal calls within a transaction but is wiped when the transaction ends. It costs 100 gas for both read and write (no cold/warm distinction, no refund complexity). The primary use case is replacing storage-based reentrancy guards and enabling flash accounting patterns like Uniswap V4's delta tracking, where you need cross-call state without permanent storage costs"
🚩 Red flag: Confusing transient storage with memory, or not knowing it persists across internal calls
Pro tip: Uniswap V4's flash accounting (TSTORE deltas that must net to zero) is the canonical interview example — be ready to trace the flow
Production Storage Patterns
Now that you understand slot computation and packing, here are the production patterns that combine these primitives for real-world use.
💡 Concept: ERC-1967 Proxy Slots in Assembly
Part 1 Module 6 covered ERC-1967 conceptually. Here's how proxy contracts actually access these slots:
// From OpenZeppelin's Proxy.sol (simplified)
bytes32 constant IMPLEMENTATION_SLOT =
0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;
// = keccak256("eip1967.proxy.implementation") - 1
function _implementation() internal view returns (address impl) {
assembly {
impl := sload(IMPLEMENTATION_SLOT)
}
}
function _setImplementation(address newImpl) internal {
assembly {
sstore(IMPLEMENTATION_SLOT, newImpl)
}
}
The constant is precomputed — no keccak256 at runtime. The -1 subtraction happened off-chain when the standard was defined. At the EVM level, it's just an SLOAD/SSTORE at a specific slot number.
The proxy's fallback() function reads this slot to find the implementation, then uses delegatecall to forward the call. Module 5 covers the delegatecall pattern in detail.
💼 Job Market Context
"How do proxy contracts store the implementation address?"
- Good: "At a specific storage slot defined by ERC-1967"
- Great: "ERC-1967 defines the implementation slot as `bytes32(uint256(keccak256('eip1967.proxy.implementation')) - 1)`. The -1 prevents preimage attacks (same trick as ERC-7201). In assembly, the proxy reads this with `sload(IMPLEMENTATION_SLOT)` and delegates with `delegatecall`. The slot is constant and known, which is how block explorers detect and display proxy implementations automatically"
🚩 Red flag: Not knowing the slot constant or why it uses keccak256-minus-1
Pro tip: Write `sload(0x360894...)` from memory in interviews — it shows you've actually worked with proxy assembly, not just used OpenZeppelin's wrapper
💡 Concept: ERC-7201 Namespaced Storage
Part 1 Module 6 mentioned ERC-7201 briefly. Here's the full picture — this is the modern replacement for __gap patterns.
The problem with __gap:
contract StorageV1 {
uint256 public value;
uint256[49] private __gap; // reserve 49 slots for future upgrades
}
Gaps are fragile. If you add 3 variables and forget to reduce the gap by 3, all subsequent slots shift and you get silent storage corruption. This has caused real exploits (Audius governance, ~$6M).
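The failure mode is easy to model: slot assignment is sequential, so a gap that isn't shrunk shifts every later variable. A small Python sketch (the layouts here are illustrative, not Audius's actual contracts):

```python
# Modeling the __gap mistake: slots are assigned sequentially, so a gap
# that isn't shrunk to compensate for new fields shifts every later slot.
def assign_slots(layout):
    """layout: list of (name, slot_count) in declaration order."""
    slots, cursor = {}, 0
    for name, size in layout:
        slots[name] = cursor
        cursor += size
    return slots

v1 = assign_slots([("value", 1), ("__gap", 49), ("childVar", 1)])

# Correct upgrade: add 3 fields AND shrink the gap by 3 -> childVar stays put.
good = assign_slots([("value", 1), ("a", 1), ("b", 1), ("c", 1),
                     ("__gap", 46), ("childVar", 1)])
assert good["childVar"] == v1["childVar"] == 50

# Buggy upgrade: gap left at 49 -> childVar silently moves to slot 53,
# so old reads/writes now hit the wrong slot.
bad = assign_slots([("value", 1), ("a", 1), ("b", 1), ("c", 1),
                    ("__gap", 49), ("childVar", 1)])
assert bad["childVar"] == 53
```
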
ERC-7201's solution: namespaced storage
Instead of sequential slots with gaps, each module gets its own deterministic base slot computed from a namespace string:
Formula:
keccak256(abi.encode(uint256(keccak256("namespace.id")) - 1)) & ~bytes32(uint256(0xff))
Step-by-step derivation:
1. Hash the namespace: h = keccak256("openzeppelin.storage.ERC20")
2. Subtract 1: h' = h - 1 (preimage attack prevention)
3. Encode as uint256: encoded = abi.encode(uint256(h'))
4. Hash again: slot = keccak256(encoded)
5. Clear last byte: slot = slot & ~0xFF
Why clear the last byte?
The struct's fields occupy sequential slots: slot, slot+1, slot+2...
Clearing the last byte (zeroing bits 0-7) means the base slot is aligned
to a 256-slot boundary. This guarantees that up to 256 fields won't
overflow into another namespace's region.
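The alignment property can be checked off-chain. The sketch below uses Python's SHA3-256 as a stand-in hash; real ERC-7201 uses keccak256, which pads differently, so this will not reproduce on-chain constants. It only demonstrates steps 1-5 and the 256-slot alignment:

```python
import hashlib

# Stand-in for keccak256: NIST SHA3-256 has different padding, so this
# does NOT reproduce real ERC-7201 constants -- it only demonstrates the
# derivation steps and the alignment effect of the `& ~0xff` mask.
def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big")

MOD = 1 << 256
ns = b"example.storage.Namespace"      # hypothetical namespace string

inner = (h(ns) - 1) % MOD              # steps 1-2: hash, subtract 1
encoded = inner.to_bytes(32, "big")    # step 3: abi.encode(uint256) = 32 bytes
slot = h(encoded) & ~0xFF              # steps 4-5: hash again, clear last byte

assert slot % 256 == 0                 # base slot is 256-slot aligned
# Fields at slot, slot+1, ..., slot+255 all stay inside the same region:
assert (slot + 255) // 256 == slot // 256
```
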
OpenZeppelin's pattern:
/// @custom:storage-location erc7201:openzeppelin.storage.ERC20
struct ERC20Storage {
mapping(address => uint256) _balances;
mapping(address => mapping(address => uint256)) _allowances;
uint256 _totalSupply;
}
// Precomputed: keccak256(abi.encode(uint256(keccak256("openzeppelin.storage.ERC20")) - 1)) & ~0xff
bytes32 private constant ERC20_STORAGE_LOCATION =
0x52c63247e1f47db19d5ce0460030c497f067ca4cebf71ba98eeadabe20bace00;
function _getERC20Storage() private pure returns (ERC20Storage storage $) {
assembly {
$.slot := ERC20_STORAGE_LOCATION
}
}
Why this is better than __gap:
- Each module's storage is at a deterministic, non-colliding location.
- Adding fields to a struct doesn't shift other modules' slots.
- No gap math to maintain — no risk of miscalculation.
- The `@custom:storage-location` annotation lets tools verify the layout automatically.
💼 Job Market Context
"How does ERC-7201 prevent storage collisions in upgradeable contracts?"
- Good: "It uses a hash-based namespace so different facets don't clash"
- Great: "ERC-7201 computes a base slot as `keccak256(abi.encode(uint256(keccak256(namespace_id)) - 1)) & ~bytes32(uint256(0xff))`. The inner hash maps the namespace string to a unique seed, subtracting 1 prevents preimage attacks, the outer hash creates the actual base slot, and the `& ~0xff` mask aligns it to a 256-slot boundary so that sequential struct fields can follow naturally. All struct members are at `base + offset`, making the layout predictable and collision-free across independent storage namespaces"
🚩 Red flag: Using string-based storage slots without understanding the collision prevention mechanism
Pro tip: OpenZeppelin's upgradeable contracts use ERC-7201 by default since v5 — knowing the formula derivation step-by-step is interview gold for any upgradeable contract role
💡 Concept: SSTORE2 — Bytecode as Immutable Storage
SSTORE2 (implemented in libraries such as Solady) is an alternative storage pattern: deploy data as contract bytecode, then read it back with EXTCODECOPY.
The insight: Contract bytecode is immutable. EXTCODECOPY costs 3 gas per 32-byte word (after the base cost). Compare to SLOAD at 2,100 gas cold per 32 bytes.
Write (one-time):
// Deploy a contract whose bytecode IS the data
// CREATE opcode: deploy code that returns the data as runtime code
address pointer = SSTORE2.write(data);
Read:
// Read the data back from the contract's bytecode
bytes memory data = SSTORE2.read(pointer);
// Under the hood: EXTCODECOPY(pointer, destOffset, dataOffset, size)
Gas comparison for reading 1KB:
| Method | Cost |
|---|---|
| 32 separate SLOADs (cold) | 32 x 2,100 = 67,200 gas |
| EXTCODECOPY 1KB | ~2,600 (base) + 32 x 3 = ~2,700 gas |
25x cheaper reads for large immutable data.
When to use:
- Merkle trees for airdrops (large, written once, read many times)
- Lookup tables, configuration blobs, static metadata
- Any data that's immutable after deployment
When NOT to use: Data that needs to change. Bytecode is immutable — you can't update it.
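Plugging the approximate costs quoted above into a few lines of Python reproduces the 1KB comparison and shows roughly where the read-cost crossover sits (cold reads only; write amortization is ignored):

```python
# Read-cost comparison using the approximate figures quoted above:
# cold SLOAD = 2,100 gas per 32-byte word;
# EXTCODECOPY = ~2,600 gas base + 3 gas per word copied.
SLOAD_COLD = 2_100
EXTCODECOPY_BASE = 2_600
COPY_PER_WORD = 3

def sload_read(num_bytes: int) -> int:
    words = -(-num_bytes // 32)  # ceiling division
    return words * SLOAD_COLD

def sstore2_read(num_bytes: int) -> int:
    words = -(-num_bytes // 32)
    return EXTCODECOPY_BASE + words * COPY_PER_WORD

assert sload_read(1024) == 67_200   # matches the table: 32 x 2,100
assert sstore2_read(1024) == 2_696  # ~2,700 in the table

# For cold reads alone, SSTORE2 already wins at two words of data:
assert sload_read(64) > sstore2_read(64)
assert sload_read(32) < sstore2_read(32)
```
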
Production code: Solady SSTORE2
💼 Job Market Context
"When would you use SSTORE2 instead of regular storage?"
- Good: "When you need to store large immutable data cheaply"
- Great: "SSTORE2 deploys data as a contract's bytecode using CREATE, then reads it with EXTCODECOPY. Writing costs contract deployment gas (~200 gas/byte), but reading is only 2,600 base + 3 gas/word via EXTCODECOPY vs. 2,100 per 32-byte SLOAD. For data larger than ~96 bytes that never changes, SSTORE2 is cheaper to read. Implementations like Solady's are used for on-chain metadata, Merkle trees, and any large blob storage. The trade-off: data is immutable once deployed"
🚩 Red flag: Not knowing that SSTORE2 data is immutable (it's bytecode, not storage)
Pro tip: SSTORE2 is a favorite interview topic because it tests understanding of the CREATE opcode, bytecode structure, and gas economics simultaneously
💡 Concept: Storage Proofs and Reading Any Contract's Storage
eth_getProof is a JSON-RPC method that returns a Merkle proof for a specific storage slot. Given the proof, anyone can verify the slot's value without trusting the RPC node.
Why this matters for DeFi:
- L2 bridges verify L1 state by checking storage proofs submitted on-chain.
- Optimistic rollups use proofs in fraud challenges.
- Cross-chain oracles prove that a value exists in another chain's storage.
Practical tools for reading storage:
# Read any slot from any contract
cast storage <address> <slot> --rpc-url <url>
# Read with a storage proof
cast proof <address> <slot> --rpc-url <url>
# Inspect a contract's slot layout (compiled contract)
forge inspect <Contract> storageLayout
Combining them: Use forge inspect to find the slot number for a variable, then cast storage to read the live value from mainnet. This is how auditors and researchers read protocol state without relying on getter functions.
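Once `cast storage` hands you the raw 32-byte word, extracting packed fields is ordinary bit math. A Python sketch, using a hypothetical layout (uint96 in the high bits, address in the low 160 bits; the field order is an assumption for illustration):

```python
# Decoding a packed word read with `cast storage`. The packing layout here
# (uint96 amount in the high 96 bits, address in the low 160 bits) is a
# made-up example, not any specific protocol's layout.
packed = (12_345 << 160) | 0xDEADBEEF   # pretend this slot value came from cast storage
raw = "0x" + format(packed, "064x")     # the 64-hex-char word cast would print

value = int(raw, 16)
addr = value & ((1 << 160) - 1)         # low 160 bits: the address
amount = value >> 160                   # high 96 bits: the uint96

assert addr == 0xDEADBEEF
assert amount == 12_345
```
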
📘 How to Study Storage-Heavy Contracts
- Start with `forge inspect storageLayout` — map out all slots and their byte offsets within slots.
- Identify packed slots — look for multiple variables sharing one slot (variables smaller than 32 bytes).
- Trace mapping/array formulas — for each mapping, note the base slot and compute example entries with `cast keccak`.
- Draw the packing diagram — for packed slots, sketch which bits hold which fields.
- Read the assembly getters/setters — now you understand what every shift, mask, and hash is doing.
- Verify with `cast storage` — spot-check your computed slots against live chain data.
Don't get stuck on: Trie internals. Focus on slot computation and packing first — that's what you need for reading and writing assembly. The trie exists to give you the mental model for gas costs.
🎯 Build Exercise: SlotExplorer
Workspace: src/part4/module3/exercise1-slot-explorer/SlotExplorer.sol | test/.../SlotExplorer.t.sol
Compute and read storage slots for variables, mappings, arrays, and nested mappings using inline assembly. The contract has pre-populated state — your assembly must find and read the correct slots.
What you'll implement:
- `readSimpleSlot()` — read a uint256 state variable at slot 0 via `sload`
- `readMappingSlot(address)` — compute a mapping slot with `keccak256` in scratch space and `sload`
- `readArraySlot(uint256)` — compute a dynamic array element slot and `sload`
- `readNestedMappingSlot(address, uint256)` — chain two `keccak256` computations for a nested mapping
- `writeToMappingSlot(address, uint256)` — compute a mapping slot and `sstore`, verifiable via the Solidity getter
🎯 Goal: Internalize the slot computation formulas so deeply that you can read any contract's storage layout.
Run: FOUNDRY_PROFILE=part4 forge test --match-contract SlotExplorerTest -vvv
🎯 Build Exercise: StoragePacker
Workspace: src/part4/module3/exercise2-storage-packer/StoragePacker.sol | test/.../StoragePacker.t.sol
Pack, unpack, and update fields within packed storage slots using bit operations in assembly. Practice the read-modify-write pattern that production protocols use for gas-efficient configuration storage.
What you'll implement:
- `packTwo(uint128, uint128)` — pack two uint128 values into one slot using `shl`/`or`
- `readLow()` / `readHigh()` — extract individual fields using `and`/`shr`
- `updateLow(uint128)` / `updateHigh(uint128)` — update one field without corrupting the other (read-modify-write)
- `packMixed(address, uint96)` / `readAddr()` / `readUint96()` — address + uint96 packing
- `initTriple(...)` / `incrementCounter()` — increment a packed uint64 counter without corrupting adjacent fields
🎯 Goal: Build the muscle memory for bit-level storage manipulation that Aave V3, Uniswap V3, and every gas-optimized protocol uses.
Run: FOUNDRY_PROFILE=part4 forge test --match-contract StoragePackerTest -vvv
📝 Summary: Storage Deep Dive
✅ The Storage Model:
- Each contract has a 2^256 sparse key-value store backed by a Merkle Patricia Trie
- Cold access (2100 gas) = trie traversal from disk; warm access (100 gas) = cached in RAM
- Verkle trees will change the trie structure but not slot computation
✅ SLOAD & SSTORE:
- `sload(slot)` reads, `sstore(slot, value)` writes — raw 256-bit operations
- SSTORE cost depends on the original/current/new value state machine (EIP-2200)
- Refund cap: max 1/5 of gas used (EIP-3529)
- Batch reads before writes for clarity and gas efficiency
✅ Slot Computation:
- State variables: sequential from slot 0 (with packing for sub-32-byte types)
- Mappings: `keccak256(abi.encode(key, baseSlot))` — 64 bytes hashed
- Dynamic arrays: length at `baseSlot`, elements at `keccak256(abi.encode(baseSlot)) + index`
- Nested: chain the hash formulas; structs use sequential offsets from the computed base
- The `-1` trick prevents preimage attacks on proxy storage slots
✅ Storage Packing:
- Pack: `shl` + `or` to combine fields into one slot
- Unpack: `shr` + `and` to extract individual fields
- Read-modify-write: load -> clear with inverted mask -> shift new value -> or -> store
- Pack read-heavy data (config, parameters); keep write-heavy data in separate slots
✅ Transient Storage:
- `tload`/`tstore`: always 100 gas, no warm/cold, no refunds, cleared per transaction
- 29x cheaper reentrancy guards; enables flash accounting patterns
✅ Production Patterns:
- ERC-1967: constant proxy slots accessed via `sload`/`sstore`
- ERC-7201: namespaced storage eliminates `__gap` fragility
- SSTORE2: immutable data stored as bytecode — 25x cheaper reads for large data
- Storage proofs: `eth_getProof` enables trustless cross-chain state verification
Key formulas to remember:
- Mapping: `keccak256(abi.encode(key, baseSlot))`
- Array element: `keccak256(abi.encode(baseSlot)) + index`
- ERC-7201: `keccak256(abi.encode(uint256(keccak256("ns")) - 1)) & ~bytes32(uint256(0xff))`
Next: Module 4 — Control Flow & Functions — if/switch/for in Yul, function dispatch, and Yul functions.
📚 Resources
Essential References
- Solidity Docs — Storage Layout — Official specification for slot assignment, packing, and mapping/array formulas
- Solidity Docs — Layout of Mappings and Arrays — Detailed formulas with examples
- evm.codes — SLOAD | SSTORE | TLOAD | TSTORE — Interactive opcode reference
EIPs Referenced
- EIP-1967 — Standard proxy storage slots
- EIP-2200 — SSTORE gas cost state machine (Istanbul)
- EIP-2929 — Cold/warm access costs (Berlin)
- EIP-3529 — Reduced SSTORE refunds (London)
- EIP-7201 — Namespaced storage layout
- EIP-6800 — Verkle trees (proposed)
Production Code
- Aave V3 ReserveConfiguration.sol — Production bitmap packing
- Solady SSTORE2 — Bytecode as immutable storage
- OpenZeppelin StorageSlot.sol — Typed storage slot access
- Uniswap V4 PoolManager — Transient storage flash accounting
Deep Dives
- EVM Deep Dives: Storage — Noxx — Excellent visual walkthrough of slot computation
- EVM Storage Layout — RareSkills — Detailed guide with examples
Tools
- `cast storage` — Read any slot from any contract
- `forge inspect` — Examine compiled storage layout
- `cast proof` — Get a Merkle storage proof
Navigation: Previous: Module 2 — Memory & Calldata | Next: Module 4 — Control Flow & Functions