Module 5: Foundry Workflow & Testing
Difficulty: Beginner
Estimated reading time: ~40 minutes | Exercises: ~4-5 hours
📚 Table of Contents
Foundry Essentials
- Why Foundry
- Setup
- Core Cheatcodes for DeFi Testing
- Configuration
- Build Exercise: Cheatcodes and Fork Tests
Fuzz Testing and Invariant Testing
Fork Testing and Gas Optimization
- Fork Testing for DeFi
- Gas Optimization Workflow
- Foundry Scripts for Deployment
- Build Exercise: Fork Testing and Gas
💡 Foundry Essentials for DeFi Development
💡 Concept: Why Foundry
Why this matters: Nearly every major DeFi protocol launched after 2023 uses Foundry. Uniswap V4, Morpho Blue, MakerDAO's newer contracts—all built and tested with Foundry. If you want to contribute to or understand modern DeFi codebases, Foundry fluency is mandatory, not optional.
Created by Paradigm, Foundry is now the de facto standard for Solidity development. The Foundry Book is the canonical reference.
📊 Why it replaced Hardhat:
| Feature | Foundry | Hardhat |
|---|---|---|
| Test language | Solidity (same as contracts) ✨ | JavaScript (context switching) |
| Fuzzing | Built-in, powerful | Requires external tools |
| Fork testing | Seamless, fast | Slower, more setup |
| Gas snapshots | forge snapshot built-in | Manual tracking |
| Speed | Rust-based, parallelized | Node.js-based |
| EVM cheatcodes | vm.prank, vm.deal, etc. | Limited |
If you’ve used Hardhat, the key mental shift: everything happens in Solidity. Your tests, your deployment scripts, your interactions—all Solidity.
🔍 Deep dive: Read the Foundry Book - Projects section to understand the full project structure and how git submodules work for dependencies.
🔗 DeFi Pattern Connection
Where Foundry dominates in DeFi:
- Protocol Development — Every major protocol launched since 2023 uses Foundry:
  - Uniswap V4 — 1000+ tests, invariant suites, gas snapshots
  - Aave V3.1 (aave-v3-origin) — Fork tests against live markets, Foundry-native
  - Morpho Blue — Formal verification + Foundry fuzz testing
  - Euler V2 — Modular vault architecture tested entirely in Foundry
- Security Auditing — Top audit firms require Foundry fluency:
  - Trail of Bits — Uses Foundry + Echidna for invariant testing
  - Spearbit — All audit PoCs written in Foundry
  - Cantina — Competition PoCs must be Foundry-based
  - Exploit reproduction — post-mortems routinely include a Foundry PoC
- On-chain Testing & Simulation — Fork testing is the standard for:
  - Governance proposal simulation (Compound, MakerDAO)
  - Liquidation bot testing against live oracle prices
  - MEV strategy backtesting against historical blocks
The pattern: If you’re building, auditing, or researching DeFi — Foundry is the language you speak.
💼 Job Market Context
What DeFi teams expect you to know:
- "What testing framework do you use?"
  - Good answer: "Foundry — I write Solidity tests with fuzz and invariant testing"
  - Great answer: "Foundry for everything — unit tests, fuzz tests, invariant suites with handlers, fork tests against mainnet, and gas snapshots in CI. I use Hardhat only when I need JavaScript integration tests for frontend"
- "How do you test DeFi composability?"
  - Good answer: "Fork testing against mainnet"
  - Great answer: "I pin fork tests to specific blocks for determinism, test against multiple market conditions, and use `deal()` instead of impersonating whales. For critical paths, I test against both mainnet and L2 forks"
Interview Red Flags:
- 🚩 Only knowing Hardhat/JavaScript testing in 2025+
- 🚩 Not understanding `vm.prank` vs `vm.startPrank` semantics
- 🚩 No experience with fuzz or invariant testing
Pro tip: When applying for DeFi roles, having a GitHub repo with well-written Foundry tests (fuzz + invariant + fork) is worth more than most take-home assignments. It demonstrates real protocol development experience.
🏗️ Setup
# Install/update Foundry
curl -L https://foundry.paradigm.xyz | bash
foundryup
# Create a new project
forge init my-project
cd my-project
# Project structure
# src/ — contract source files
# test/ — test files (*.t.sol)
# script/ — deployment/interaction scripts (*.s.sol)
# lib/ — dependencies (git submodules)
# foundry.toml — configuration
# Install OpenZeppelin
forge install OpenZeppelin/openzeppelin-contracts --no-commit
# Add remappings (tells compiler where to find imports)
echo '@openzeppelin/=lib/openzeppelin-contracts/' >> remappings.txt
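With that remapping in place, imports resolve through lib/. A quick illustrative check that the setup works (the MyToken contract here is a hypothetical example, not part of the module's exercises):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Resolves to lib/openzeppelin-contracts/contracts/token/ERC20/ERC20.sol
// via the @openzeppelin/ remapping added above
import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

// Hypothetical token just to confirm the import path compiles
contract MyToken is ERC20 {
    constructor() ERC20("MyToken", "MTK") {
        _mint(msg.sender, 1_000_000e18);
    }
}
```

If `forge build` succeeds, your remappings are wired correctly.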
💡 Concept: Core Foundry Cheatcodes for DeFi Testing
Why this matters: Cheatcodes let you manipulate the EVM state (time, balances, msg.sender) in ways impossible on a real chain. This is how you test time-locked vaults, simulate whale swaps, and verify liquidation logic.
The cheatcodes you’ll use constantly:
// ✅ Impersonate an address (critical for fork testing)
vm.prank(someAddress);
someContract.doSomething(); // msg.sender == someAddress (for one call)
// ✅ Persistent impersonation
vm.startPrank(someAddress);
// ... multiple calls as someAddress
vm.stopPrank();
// ✅ Set block timestamp (essential for time-dependent DeFi logic)
vm.warp(block.timestamp + 1 days);
// ✅ Set block number
vm.roll(block.number + 100);
// ✅ Deal ETH or tokens to an address
deal(address(token), user, 1000e18); // Give user 1000 tokens
deal(user, 100 ether); // Give user 100 ETH
// ✅ Expect a revert with specific error
vm.expectRevert(CustomError.selector);
vm.expectRevert(abi.encodeWithSelector(CustomError.selector, arg1, arg2));
// ✅ Expect event emission (all 4 booleans: indexed1, indexed2, indexed3, data)
vm.expectEmit(true, true, false, true);
emit ExpectedEvent(indexed1, indexed2, data);
someContract.doSomething(); // Must emit the event
// ✅ Create labeled addresses (shows up in traces as "alice" not 0x...)
address alice = makeAddr("alice");
(address bob, uint256 bobKey) = makeAddrAndKey("bob");
// ✅ Sign messages (for EIP-712 (https://eips.ethereum.org/EIPS/eip-712), permit, etc.)
(uint8 v, bytes32 r, bytes32 s) = vm.sign(privateKey, digest);
// ✅ Snapshot and revert state (useful for testing multiple scenarios)
uint256 snapshot = vm.snapshot();
// ... modify state ...
vm.revertTo(snapshot); // Back to snapshot state
// (Note: In recent Foundry versions, renamed to `vm.snapshotState()` and `vm.revertToState()`)
⚡ Common pitfall: `vm.prank` only affects the next call. If you need multiple calls, use `vm.startPrank`/`vm.stopPrank`. Forgetting this leads to "hey, why is msg.sender wrong?" debugging sessions.
💻 Quick Try:
Create a file test/CheatcodePlayground.t.sol and run it:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import "forge-std/Test.sol";
contract CheatcodePlayground is Test {
function test_TimeTravel() public {
uint256 now_ = block.timestamp;
vm.warp(now_ + 365 days);
assertEq(block.timestamp, now_ + 365 days);
// You just jumped one year into the future!
}
function test_Impersonation() public {
address vitalik = 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045;
deal(vitalik, 1000 ether);
vm.prank(vitalik);
// Next call's msg.sender is Vitalik
(bool ok,) = address(this).call{value: 1 ether}("");
assertTrue(ok);
}
receive() external payable {}
}
Run with forge test --match-contract CheatcodePlayground -vvv and watch the traces. Feel how cheatcodes manipulate the EVM.
🔗 DeFi Pattern Connection
Where cheatcodes are essential in DeFi testing:
- Time-dependent logic (`vm.warp`):
  - Vault lock periods and vesting schedules
  - Oracle staleness checks
  - Interest accrual in lending protocols (→ Part 2 Module 4)
  - Governance timelocks and voting periods
- Access control testing (`vm.prank`):
  - Testing admin-only functions (pause, upgrade, fee changes)
  - Simulating multi-sig signers
  - Testing permit/signature flows with `vm.sign` (← Module 3)
  - Account abstraction validation with `vm.prank(entryPoint)` (← Module 4)
- State manipulation (`deal`):
  - Funding test accounts with exact token amounts
  - Simulating whale positions for liquidation testing
  - Setting up pool reserves for AMM testing (→ Part 2 Module 2)
- Event verification (`vm.expectEmit`):
  - Verifying Transfer/Approval events for token standards
  - Checking protocol-specific events (Deposit, Withdraw, Swap)
  - Critical for integration testing: "did the downstream protocol emit the right event?"
💼 Job Market Context
What DeFi teams expect you to know:
- "Walk me through how you'd test a time-locked vault"
  - Good answer: "Use `vm.warp` to advance past the lock period, test both before and after"
  - Great answer: "I'd test at key boundaries — 1 second before unlock, exact unlock time, and after. I'd also fuzz the lock duration and test with `vm.roll` for block-number-based locks. For production, I'd add invariant tests ensuring no withdrawals are possible before the lock expires across random deposit/warp/withdraw sequences"
- "How do you test signature-based flows?"
  - Good answer: "Use `makeAddrAndKey` to create signers, then `vm.sign` for EIP-712 digests"
  - Great answer: "I create deterministic test signers with `makeAddrAndKey`, construct EIP-712 typed data hashes matching the contract's `DOMAIN_SEPARATOR`, sign with `vm.sign`, and test both valid signatures and invalid ones (wrong signer, expired deadline, replayed nonce). For EIP-1271, I test both EOA and contract signers"
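The boundary testing described above looks roughly like this in practice (a sketch only — `vault`, its `unlockTime()` accessor, and the `StillLocked` error are hypothetical, not from the exercises):

```solidity
// Sketch: test a hypothetical time-locked vault at its unlock boundaries
function test_WithdrawBoundaries() public {
    vm.startPrank(alice);
    vault.deposit{value: 1 ether}();
    uint256 unlockTime = vault.unlockTime(alice);

    // 1 second before unlock: must revert
    vm.warp(unlockTime - 1);
    vm.expectRevert(StillLocked.selector);
    vault.withdraw();

    // Exact unlock time: must succeed
    vm.warp(unlockTime);
    vault.withdraw();
    vm.stopPrank();
}
```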
Interview Red Flags:
- 🚩 Using `vm.assume` instead of `bound()` for constraining fuzz inputs
- 🚩 Not knowing `vm.expectRevert` with custom error selectors (Module 1 pattern)
- 🚩 Hardcoding block.timestamp instead of using `vm.warp` for time-dependent tests
Pro tip: Master vm.sign + EIP-712 digest construction — it’s the most asked-about Foundry skill in DeFi interviews. Permit flows and meta-transactions are everywhere, and having a reusable EIP-712 test helper in your toolkit signals production experience.
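A reusable helper along those lines might look like this — a sketch assuming an ERC-2612-style token, written inside a contract inheriting forge-std's Test (the `signPermit` name and parameter list are illustrative):

```solidity
// ERC-2612 Permit type hash, as defined by the standard
bytes32 constant PERMIT_TYPEHASH = keccak256(
    "Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)"
);

// Build the EIP-712 digest for a permit and sign it with a test key
function signPermit(
    uint256 ownerKey,
    address owner,
    address spender,
    uint256 value,
    uint256 nonce,
    uint256 deadline,
    bytes32 domainSeparator // read from the token's DOMAIN_SEPARATOR()
) internal returns (uint8 v, bytes32 r, bytes32 s) {
    bytes32 structHash = keccak256(
        abi.encode(PERMIT_TYPEHASH, owner, spender, value, nonce, deadline)
    );
    // EIP-712 final digest: "\x19\x01" || domainSeparator || structHash
    bytes32 digest = keccak256(abi.encodePacked("\x19\x01", domainSeparator, structHash));
    (v, r, s) = vm.sign(ownerKey, digest);
}
```

Reading `DOMAIN_SEPARATOR()` from the live contract (rather than reconstructing it) keeps the helper robust across chain IDs and token versions.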
🏗️ Real usage:
Uniswap V4 test suite extensively uses these cheatcodes. Read any test file to see production patterns.
🏗️ Configuration (foundry.toml)
[profile.default]
src = "src"
out = "out"
libs = ["lib"]
solc = "0.8.28" # Latest stable
evm_version = "cancun" # or "prague" for Pectra features
optimizer = true
optimizer_runs = 200 # Balance deployment cost vs runtime cost
via_ir = false # Enable when hitting stack-too-deep errors (slower compile)
[profile.default.fuzz]
runs = 256 # Increase for production: 10000+
max_test_rejects = 65536 # How many invalid inputs before giving up
[profile.default.invariant]
runs = 256 # Number of random call sequences
depth = 15 # Max calls per sequence
fail_on_revert = false # Don't fail just because a call reverts
[rpc_endpoints]
mainnet = "${MAINNET_RPC_URL}"
arbitrum = "${ARBITRUM_RPC_URL}"
optimism = "${OPTIMISM_RPC_URL}"
[etherscan]
mainnet = { key = "${ETHERSCAN_API_KEY}" }
🔍 Deep dive: Foundry Book - Configuration has all available options.
🎯 Build Exercise: Cheatcodes and Fork Tests
Workspace: workspace/test/part1/module5/ — base setup: BaseTest.sol, fork tests: UniswapV2Fork.t.sol, ChainlinkFork.t.sol
Set up the project structure you’ll use throughout Part 2:
- Initialize a Foundry project with OpenZeppelin and Permit2 as dependencies:
  forge init defi-protocol
  cd defi-protocol
  forge install OpenZeppelin/openzeppelin-contracts --no-commit
  forge install Uniswap/permit2 --no-commit
- Create a base test contract (BaseTest.sol) with common setup:

  // test/BaseTest.sol
  import "forge-std/Test.sol";

  abstract contract BaseTest is Test {
      // Mainnet addresses (save typing in every test)
      address constant WETH = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2;
      address constant USDC = 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48;
      address constant DAI = 0x6B175474E89094C44Da98b954EedeAC495271d0F;
      address constant PERMIT2 = 0x000000000022D473030F116dDEE9F6B43aC78BA3;

      // Test users with private keys (for signing)
      address alice;
      uint256 aliceKey;
      address bob;
      uint256 bobKey;

      function setUp() public virtual {
          // Fork mainnet
          vm.createSelectFork("mainnet");

          // Create test users
          (alice, aliceKey) = makeAddrAndKey("alice");
          (bob, bobKey) = makeAddrAndKey("bob");

          // Fund them with ETH
          deal(alice, 100 ether);
          deal(bob, 100 ether);
      }
  }
- Write a simple fork test that interacts with Uniswap V2 on mainnet:

  contract UniswapV2ForkTest is BaseTest {
      IUniswapV2Pair constant WETH_USDC_PAIR =
          IUniswapV2Pair(0xB4e16d0168e52d35CaCD2c6185b44281Ec28C9Dc);

      function testGetReserves() public view {
          (uint112 reserve0, uint112 reserve1,) = WETH_USDC_PAIR.getReserves();
          assertGt(reserve0, 0);
          assertGt(reserve1, 0);
      }
  }
- Write a fork test that reads Chainlink price feed data:

  contract ChainlinkForkTest is BaseTest {
      AggregatorV3Interface constant ETH_USD_FEED =
          AggregatorV3Interface(0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419);

      function testPriceFeed() public view {
          (, int256 price,,, uint256 updatedAt) = ETH_USD_FEED.latestRoundData();
          assertGt(price, 0);
          assertLt(block.timestamp - updatedAt, 1 hours); // Not stale
      }
  }
🎯 Goal: Have a battle-ready test harness before you start Part 2. The BaseTest pattern saves you from rewriting setup in every test file.
📋 Summary: Foundry Essentials
✓ Covered:
- Why Foundry — Solidity tests, built-in fuzzing, fast execution
- Project setup — dependencies, remappings, configuration
- Core cheatcodes — `vm.prank`, `vm.warp`, `deal`, `vm.expectRevert`, `vm.sign`
- BaseTest pattern — reusable test setup for fork testing
Next: Fuzz testing and invariant testing for DeFi
💡 Fuzz Testing and Invariant Testing
💡 Concept: Fuzz Testing
Why this matters: Manual unit tests check specific cases. Fuzz tests check properties across all possible inputs. The Euler Finance hack ($197M) involved donateToReserves + self-liquidation – a fuzz test targeting the invariant “liquidation should not be profitable with 0 collateral” could have flagged the vulnerability path.
How it works:
Fuzz testing generates random inputs for your test functions. Instead of testing specific cases, you define properties that should hold for ALL valid inputs, and the fuzzer tries to break them.
// ❌ Unit test: specific case
function testSwapExact() public {
uint256 amountOut = pool.getAmountOut(1e18, reserveIn, reserveOut);
assertGt(amountOut, 0);
}
// ✅ Fuzz test: property for ALL inputs
function testFuzz_SwapAlwaysPositive(uint256 amountIn) public {
amountIn = bound(amountIn, 1, type(uint112).max); // Constrain to valid range
uint256 amountOut = pool.getAmountOut(amountIn, reserveIn, reserveOut);
assertGt(amountOut, 0);
}
The bound() helper:
bound(value, min, max) is your main tool for constraining fuzz inputs to valid ranges without skipping too many random values (which would trigger max_test_rejects and fail your test).
// ❌ BAD: discards most inputs
function testBad(uint256 amount) public {
vm.assume(amount > 0 && amount < 1000e18); // Rejects 99.99% of inputs
// ...
}
// ✅ GOOD: transforms inputs to valid range
function testGood(uint256 amount) public {
amount = bound(amount, 1, 1000e18); // Maps all inputs to [1, 1000e18]
// ...
}
🔍 Deep dive: Read Foundry Book - Fuzz Testing for advanced techniques like stateful fuzzing. Cyfrin - Fuzz and Invariant Tests Full Explainer provides comprehensive coverage with DeFi examples.
Best practices for DeFi fuzz testing:
- ✅ Use `bound()` to constrain inputs to realistic ranges (token amounts, timestamps, interest rates)
- ✅ Test mathematical properties: swap output ≤ reserve, interest ≥ 0, shares ≤ total supply
- ✅ Test edge cases explicitly: zero amounts, maximum values, minimum values
- ⚠️ Use `vm.assume()` sparingly — it discards inputs, `bound()` transforms them
⚠️ Common Mistakes
See "The bound() helper" above for the `bound()` vs `vm.assume()` pattern.
// ❌ WRONG: Testing only the happy path with fuzzing
function testFuzz_swap(uint256 amountIn) public {
amountIn = bound(amountIn, 1, 1e24);
uint256 out = pool.swap(amountIn);
assertTrue(out > 0); // Too weak — doesn't verify the math
}
// ✅ CORRECT: Test mathematical properties
function testFuzz_swap(uint256 amountIn) public {
    amountIn = bound(amountIn, 1, 1e24);
    uint256 reserveBefore = pool.reserve();
    uint256 kBefore = pool.k(); // Capture k before the swap
    uint256 out = pool.swap(amountIn);
    assertTrue(out > 0, "Output must be positive");
    assertTrue(out < reserveBefore, "Output must be less than reserve");
    // Verify constant product: k should not decrease
    assertTrue(pool.k() >= kBefore, "k must not decrease");
}
💡 Concept: Invariant Testing
Why this matters: The Curve pool exploits (July 2023, $70M+ at risk) from a Vyper compiler reentrancy bug would have been detectable by invariant testing that checked “re-entering a pool cannot change its total value.” Fuzz tests check individual functions. Invariant tests check system-wide properties across arbitrary sequences of operations.
How it works:
Instead of testing individual functions, you define system-wide invariants—properties that must ALWAYS be true regardless of any sequence of operations—and the fuzzer generates random sequences of calls trying to violate them.
The Handler Pattern (Essential):
Without a handler, the fuzzer calls your contract with completely random calldata, which almost always reverts (wrong function selectors, invalid parameters). Handlers constrain the fuzzer to valid operation sequences while still exploring random states.
// Target contract: the system under test
// Handler: constrains how the fuzzer interacts with the system
contract VaultHandler is Test {
Vault public vault;
MockToken public token;
// Ghost variables: track cumulative state for invariants
uint256 public ghost_depositSum;
uint256 public ghost_withdrawSum;
constructor(Vault _vault, MockToken _token) {
vault = _vault;
token = _token;
}
function deposit(uint256 amount) public {
amount = bound(amount, 1, token.balanceOf(address(this)));
token.approve(address(vault), amount);
vault.deposit(amount);
ghost_depositSum += amount; // Track total deposits
}
function withdraw(uint256 shares) public {
shares = bound(shares, 1, vault.balanceOf(address(this)));
uint256 assets = vault.withdraw(shares);
ghost_withdrawSum += assets; // Track total withdrawals
}
}
contract VaultInvariantTest is Test {
Vault vault;
MockToken token;
VaultHandler handler;
function setUp() public {
token = new MockToken();
vault = new Vault(token);
handler = new VaultHandler(vault, token);
// Fund the handler
token.mint(address(handler), 1_000_000e18);
// Tell Foundry which contract to call randomly
targetContract(address(handler));
}
// ✅ This must ALWAYS be true, no matter what sequence of deposits/withdrawals
function invariant_totalAssetsMatchBalance() public view {
assertEq(
vault.totalAssets(),
token.balanceOf(address(vault)),
"Vault accounting broken"
);
}
function invariant_solvency() public view {
// Vault must have enough tokens to cover all shares
uint256 totalShares = vault.totalSupply();
uint256 totalAssets = vault.totalAssets();
uint256 sharesValue = vault.convertToAssets(totalShares);
assertGe(totalAssets, sharesValue, "Vault insolvent");
}
function invariant_conservation() public view {
// Total deposited - total withdrawn ≤ vault balance (accounting for rounding)
uint256 netDeposits = handler.ghost_depositSum() - handler.ghost_withdrawSum();
uint256 vaultBalance = token.balanceOf(address(vault));
assertApproxEqAbs(vaultBalance, netDeposits, 10, "Value leaked");
}
}
📊 Key invariant testing patterns for DeFi:
- Conservation invariants: Total assets in ≥ total assets out (accounting for fees)
- Solvency invariants: Contract balance ≥ sum of user claims
- Monotonicity invariants: Share price never decreases (for non-rebasing vaults)
- Supply invariants: Sum of user balances == total supply
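As a concrete sketch of the monotonicity pattern — assuming a vault/handler setup like the one above, where `ghost_lastSharePrice` is a hypothetical ghost variable the handler updates after every call:

```solidity
// Sketch: share-price monotonicity for a non-rebasing vault.
// `vault` and `handler` come from an invariant test setUp();
// ghost_lastSharePrice is an assumed handler-maintained ghost variable.
function invariant_sharePriceMonotonic() public view {
    // Asset value of 1e18 shares serves as a proxy for the share price
    uint256 price = vault.convertToAssets(1e18);
    assertGe(price, handler.ghost_lastSharePrice(), "Share price decreased");
}
```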
⚡ Common pitfall: Setting `fail_on_revert = true`. Many valid operations revert (withdraw with 0 balance, swap with 0 input). Set it to `false` and only care about invariant violations, not individual reverts.
🏗️ Real usage:
Morpho Blue invariant tests are the gold standard. Study their handler patterns and ghost variable usage.
🔍 Deep dive: Cyfrin - Invariant Testing: Enter The Matrix explains advanced handler patterns. RareSkills - Invariant Testing in Solidity covers ghost variables and metrics. Cyfrin Updraft - Handler Tutorial provides step-by-step handler implementation.
🔍 Deep Dive: Advanced Invariant Patterns
Beyond the basic handler pattern, production protocols use several advanced techniques:
1. Multi-Actor Handlers
Real DeFi protocols have many users interacting simultaneously. A single-actor handler misses concurrency bugs:
contract MultiActorHandler is Test {
address[] public actors;
address internal currentActor;
modifier useActor(uint256 actorSeed) {
currentActor = actors[bound(actorSeed, 0, actors.length - 1)];
vm.startPrank(currentActor);
_;
vm.stopPrank();
}
function deposit(uint256 amount, uint256 actorSeed) public useActor(actorSeed) {
amount = bound(amount, 1, token.balanceOf(currentActor));
// ... deposit as random actor
}
}
Why this matters: The Euler Finance hack involved multiple actors interacting in a specific sequence. Single-actor invariant tests wouldn’t have caught it.
2. Time-Weighted Invariants
Many DeFi invariants only hold after time passes (interest accrual, oracle updates):
function handler_advanceTime(uint256 timeSkip) public {
timeSkip = bound(timeSkip, 1, 7 days);
vm.warp(block.timestamp + timeSkip);
ghost_timeAdvanced += timeSkip;
}
// Invariant: interest only increases over time
// (the handler records pool.totalDebt() into ghost_previousDebt after each call)
function invariant_interestMonotonicity() public view {
    assertGe(pool.totalDebt(), ghost_previousDebt, "Debt decreased without repayment");
}
3. Ghost Variable Accounting
Track what should be true alongside what is true:
┌─────────────────────────────────────────┐
│ Ghost Variable Pattern │
│ │
│ Handler tracks: │
│ ├── ghost_totalDeposited (cumulative) │
│ ├── ghost_totalWithdrawn (cumulative) │
│ ├── ghost_userDeposits[user] (per-user)│
│ └── ghost_callCount (metrics) │
│ │
│ Invariant checks: │
│ ├── vault.balance == deposits - withdrawals │
│ ├── Σ userDeposits == ghost_totalDeposited │
│ └── vault.totalShares >= 0 │
└─────────────────────────────────────────┘
Ghost variables are your parallel accounting system — if the contract’s state diverges from your ghost tracking, you’ve found a bug.
🔗 DeFi Pattern Connection
Where fuzz and invariant testing catch real bugs:
- AMM Invariants (→ Part 2 Module 2):
  - `x * y >= k` after every swap (constant product)
  - No tokens can be extracted without providing the other side
  - LP share value never decreases from swaps (fees accumulate)
- Lending Protocol Invariants (→ Part 2 Module 4):
  - Total borrows ≤ total supplied (solvency)
  - Health factor < 1 → liquidatable (always)
  - Interest index only increases (monotonicity)
- Vault Invariants (→ Part 2 Module 7):
  - `convertToShares(convertToAssets(shares)) <= shares` (no free shares — rounding in protocol's favor)
  - Total assets ≥ sum of all redeemable assets (solvency)
  - First depositor can't steal from subsequent depositors (inflation attack)
- Governance Invariants:
  - Vote count ≤ total delegated power
  - Executed proposals can't be re-executed
  - Timelock delay is always enforced
The pattern: For every DeFi protocol, ask “what must ALWAYS be true?” — those are your invariants.
💼 Job Market Context
What DeFi teams expect you to know:
- "How do you approach testing a new DeFi protocol?"
  - Good answer: "Unit tests for individual functions, fuzz tests for properties, invariant tests for system-wide correctness"
  - Great answer: "I start by identifying the protocol's invariants — solvency, conservation of value, monotonicity of share price. Then I build handlers that simulate realistic user behavior (deposits, withdrawals, swaps, liquidations), use ghost variables to track expected state, and run invariant tests with high depth. I also write targeted fuzz tests for mathematical edge cases like rounding and overflow boundaries"
- "What's the difference between fuzz testing and invariant testing?"
  - Good answer: "Fuzz tests random inputs to one function, invariant tests random sequences of calls"
  - Great answer: "Fuzz testing verifies properties of individual functions across all inputs — like 'swap output is always positive for positive input.' Invariant testing verifies system-wide properties across arbitrary call sequences — like 'the pool is always solvent regardless of what operations happened.' The key insight is that bugs often emerge from sequences of valid operations, not from any single call"
- "Have you ever found a bug with fuzz/invariant testing?"
  - This is increasingly common in DeFi interviews. Having a real example (even from your own learning exercises) is powerful
Interview Red Flags:
- 🚩 Only writing unit tests with hardcoded values (no fuzzing)
- 🚩 Not knowing the handler pattern for invariant testing
- 🚩 Using `fail_on_revert = true` (shows lack of invariant testing experience)
- 🚩 Can't articulate what invariants a vault or AMM should have
Pro tip: The #1 skill that separates junior from senior DeFi developers is the ability to identify and test protocol invariants. If you can articulate “these 5 things must always be true about this protocol” and write tests proving it, you’re already ahead of most candidates.
⚠️ Common Mistakes
// ❌ WRONG: Not using a handler — fuzzer calls functions with random args directly
// This causes constant reverts and wastes 90% of test runs
// ✅ CORRECT: Use a handler to guide the fuzzer
contract VaultHandler is Test {
    Vault vault;
    MockToken token; // Token the vault accepts

    function deposit(uint256 amount) external {
        amount = bound(amount, 1, token.balanceOf(address(this)));
        token.approve(address(vault), amount);
        vault.deposit(amount);
    }
    // Handler ensures valid state transitions
}
// ❌ WRONG: Setting fail_on_revert = true in foundry.toml
// Invariant tests SHOULD hit reverts — that's the fuzzer exploring
// fail_on_revert = true makes your test fail on every revert, hiding real bugs
// ✅ CORRECT: Use fail_on_revert = false (default for invariant tests)
// [profile.default.invariant]
// fail_on_revert = false
// ❌ WRONG: Testing implementation details instead of invariants
function invariant_totalSupplyEquals1000() public {
assertEq(vault.totalSupply(), 1000); // Not an invariant — it changes!
}
// ✅ CORRECT: Test properties that must ALWAYS hold
function invariant_solvency() public {
assertGe(
token.balanceOf(address(vault)),
vault.totalAssets(),
"Vault must always be solvent"
);
}
🎯 Build Exercise: Vault Invariants
Workspace: workspace/src/part1/module5/ — vault: SimpleVault.sol, tests: SimpleVault.t.sol, handler: VaultHandler.sol, invariants: VaultInvariant.t.sol
- Build a simple vault (accepts one ERC-20 token, issues shares proportional to deposit size):
  Your vault should implement:
  - `deposit(uint256 assets)` — calculates shares, transfers tokens in, mints shares
  - `withdraw(uint256 shares)` — burns shares, transfers assets back
  - `totalAssets()`, `convertToShares()`, `convertToAssets()`
The share math follows the standard pattern:
- First deposit: shares = assets (1:1)
- Subsequent: shares = (assets * totalSupply) / totalAssets
See the scaffold in `SimpleVault.sol` for the full TODO list.
- Write fuzz tests for the deposit and withdraw functions individually:

  function testFuzz_Deposit(uint256 amount) public {
      amount = bound(amount, 1, 1000000e18);
      deal(address(token), alice, amount);

      vm.startPrank(alice);
      token.approve(address(vault), amount);
      vault.deposit(amount);
      vm.stopPrank();

      assertEq(vault.balanceOf(alice), vault.convertToShares(amount));
  }
- Write a Handler contract and invariant tests for the vault:
  - `invariant_solvency`: vault token balance ≥ what all shareholders could withdraw
  - `invariant_supplyConsistency`: sum of all share balances == totalSupply
  - `invariant_noFreeMoney`: total withdrawals ≤ total deposits
- Run with high iterations and see if the fuzzer finds any violations:
  forge test --match-test invariant -vvv
- Intentionally break an invariant (e.g., remove `_burn` from withdraw) and verify the fuzzer catches it
🎯 Goal: Invariant testing is how real DeFi auditors find bugs. Getting comfortable with the handler pattern now pays off enormously in Part 2 when you’re testing AMMs, lending pools, and CDPs.
📋 Summary: Fuzz and Invariant Testing
✓ Covered:
- Fuzz testing — property-based testing for all inputs
- `bound()` helper — constraining inputs without rejecting them
- Invariant testing — system-wide properties across call sequences
- Handler pattern — constraining fuzzer to valid operations
- Ghost variables — tracking cumulative state for invariants
Next: Fork testing and gas optimization
📖 How to Study Production Test Suites
Production DeFi test suites can be overwhelming (Uniswap V4 has 100+ test files). Here’s a strategy:
Step 1: Start with the simplest test file
Find a basic unit test (not invariant or fork). In Uniswap V4, start with test/PoolManager.t.sol basic swap tests, not the complex hook tests.
Step 2: Read the base test contract
Every production suite has a BaseTest or TestHelper. This shows:
- How they set up fork state
- What helper functions they use
- How they create test users and fund them
- Common assertions they reuse
Step 3: Study the handler contracts
Handlers reveal what the team considers "valid operations." Look at:
- Which functions are exposed (the attack surface)
- How inputs are bounded (what ranges are realistic)
- What ghost variables they track (what they think can go wrong)
Step 4: Read the invariant definitions
These are the protocol's core properties in code form:
Uniswap V4: "Pool reserves satisfy x*y >= k after every swap"
Aave V3: "Total borrows never exceed total deposits"
Morpho: "Sum of all user balances equals contract balance"
Step 5: Look for edge case tests
Search for tests with names like test_RevertWhen_*, test_EdgeCase_*, testFuzz_*. These reveal the bugs the team found and patched.
Don’t get stuck on: Complex multi-contract integration tests or deployment scripts initially. Build up to those after understanding the unit and fuzz tests.
Recommended study order:
- Solmate tests — Clean, minimal, great for learning patterns
- OpenZeppelin tests — Comprehensive, well-documented
- Uniswap V4 tests — Production DeFi complexity
- Morpho Blue invariant tests — Gold standard for invariant testing
💡 Fork Testing and Gas Optimization
💡 Concept: Fork Testing for DeFi
Why this matters: You can’t test DeFi composability in isolation. Your protocol will interact with Uniswap, Chainlink, Aave—you need to test against real deployed contracts with real liquidity. Fork testing makes this trivial.
What fork testing does:
Runs your tests against a snapshot of a real network’s state. This lets you:
- ✅ Interact with deployed protocols (swap on Uniswap, borrow from Aave)
- ✅ Test with real token balances and oracle prices
- ✅ Verify that your protocol composes correctly with existing DeFi
- ✅ Reproduce real exploits on forked state (for security research)
# Run tests against mainnet fork
forge test --fork-url $MAINNET_RPC_URL
# Pin to a specific block (deterministic results)
forge test --fork-url $MAINNET_RPC_URL --fork-block-number 19000000
# Multiple forks in the same test
uint256 mainnetFork = vm.createFork("mainnet");
uint256 arbitrumFork = vm.createFork("arbitrum");
vm.selectFork(mainnetFork); // Switch to mainnet
// ... test on mainnet ...
vm.selectFork(arbitrumFork); // Switch to arbitrum
// ... test on arbitrum ...
🔍 Deep dive: Foundry Book - Forking covers advanced patterns like persisting fork state and cheatcodes.
Best practices:
- ✅ Always pin to a specific block number for deterministic tests
- ✅ Use `deal()` to fund test accounts rather than impersonating whale addresses (which can break if they change)
- ✅ Cache fork data locally to avoid rate-limiting your RPC provider — Foundry automatically caches fork state
- ✅ Test against multiple blocks to ensure your protocol works across different market conditions
⚡ Common pitfall: Forgetting to set `MAINNET_RPC_URL` in `.env`. Fork tests will fail with "RPC endpoint not found." Use Alchemy or Infura for reliable RPC endpoints.
⚠️ Common Mistakes
```solidity
// ❌ WRONG: Fork tests without pinning a block number
function setUp() public {
    vm.createSelectFork("mainnet"); // Non-deterministic! Different results each run
}

// ✅ CORRECT: Always pin to a specific block
function setUp() public {
    vm.createSelectFork("mainnet", 19_000_000); // Deterministic and cacheable
}
```

```solidity
// ❌ WRONG: Impersonating whale addresses for token balances
vm.prank(0xBEEF...); // This whale might move their tokens!
token.transfer(alice, 1000e18);

// ✅ CORRECT: Use deal() to set balances directly
deal(address(token), alice, 1000e18); // Always works, no dependencies
```
💡 Concept: Gas Optimization Workflow
Why this matters: Every 100 gas you save is $0.01+ per transaction at 100 gwei (gas prices vary significantly; L2s can be 100-1000x cheaper). For a protocol processing 100k transactions/day (like Uniswap), that’s $1M+/year in user savings. Gas optimization is a competitive advantage.
```bash
# Gas report for all tests
forge test --gas-report

# Example output:
# | Function | min   | avg   | max   |
# |----------|-------|-------|-------|
# | deposit  | 45123 | 50234 | 55345 |
# | withdraw | 38956 | 42123 | 48234 |

# Gas snapshots — save current gas usage, then compare after optimization
forge snapshot        # Creates .gas-snapshot
# ... make changes ...
forge snapshot --diff # Shows increase/decrease

# Specific function gas usage
forge test --match-test testSwap -vvvv # 4 v's shows gas per opcode
```
📊 Gas optimization patterns you’ll use in Part 2:
| Pattern | Savings | Example |
|---|---|---|
| `unchecked` blocks | ~20 gas/operation | Loop counters |
| Packing storage variables | ~15,000 gas/slot saved | `uint128 a; uint128 b;` in one slot |
| `calldata` vs `memory` | ~300 gas | Read-only arrays |
| Custom errors | ~24 gas/revert | vs `require` strings |
| Cache storage reads | ~100 gas/read | Local variable vs storage |
Examples:
```solidity
// ✅ 1. unchecked blocks for proven-safe arithmetic
unchecked { ++i; } // Saves ~20 gas per loop iteration

// ✅ 2. Packing storage variables (multiple values in one slot)
// BAD: 3 storage slots (3 * ~20k gas for cold writes)
uint256 a;
uint256 b;
uint256 c;

// GOOD: 1 storage slot if types fit
uint128 a;
uint64 b;
uint64 c;

// ✅ 3. Using calldata instead of memory for read-only function parameters
function process(uint256[] calldata data) external { // calldata: no copy
    // vs
    // function process(uint256[] memory data) external { // memory: copies
}

// ✅ 4. Caching storage reads in local variables
// BAD: reads totalSupply from storage 3 times
function bad() public view returns (uint256) {
    return totalSupply + totalSupply + totalSupply;
}

// GOOD: reads once, reuses the local variable
function good() public view returns (uint256) {
    uint256 supply = totalSupply;
    return supply + supply + supply;
}
```
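The table above also lists custom errors (~24 gas per revert, plus smaller bytecode); a minimal sketch of that pattern — the `balances` mapping is an assumed piece of state for illustration:

```solidity
// ✅ 5. Custom errors instead of require strings
error InsufficientBalance(uint256 available, uint256 requested);

mapping(address => uint256) balances; // assumed state for illustration

// BAD: the revert string is stored in bytecode and copied to memory on revert
// require(balances[msg.sender] >= amount, "Insufficient balance");

// GOOD: a 4-byte selector plus ABI-encoded arguments — cheaper and smaller
function withdraw(uint256 amount) external {
    uint256 balance = balances[msg.sender];
    if (balance < amount) revert InsufficientBalance(balance, amount);
    balances[msg.sender] = balance - amount;
}
```

Foundry decodes the error selector and arguments in traces, so custom errors are also easier to debug than bare revert strings.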
🔍 Deep dive: Rareskills Gas Optimization Guide is the comprehensive resource. Alchemy - 12 Solidity Gas Optimization Techniques provides a practical checklist. Cyfrin - Advanced Gas Optimization Tips covers advanced techniques. 0xMacro - Gas Optimizations Cheat Sheet is a quick reference.
📖 How to Study Gas Optimization in Production Code
When you encounter a gas-optimized DeFi contract and want to understand the optimizations:
1. **Run `forge test --gas-report` first** — establish a baseline
   - Look at the `avg` column — that’s what matters for real users
   - `min` and `max` show edge cases (empty pools vs full pools)
   - Sort mentally by “which function is called most” × “gas cost”
2. **Identify the expensive operations** — run with `-vvvv` (4 v’s)
   - Traces show gas cost per opcode
   - Look for: `SLOAD` (~2,100 cold), `SSTORE` (~5,000-20,000), `CALL` (~2,600 cold)
   - These three dominate gas costs in DeFi — everything else is noise
3. **Read the code looking for storage patterns**
   - Count how many times each storage variable is read per function
   - Look for: caching into local variables, packed structs, transient storage usage
   - Compare with the unoptimized version if available (tests often have both)
4. **Use `forge snapshot` for before/after comparison**

   ```bash
   forge snapshot        # Baseline
   # ... make changes ...
   forge snapshot --diff # Shows delta
   ```

   - Any function that got MORE expensive → investigate (likely a regression)
   - Focus on functions called in hot paths (swaps, transfers, not admin functions)
5. **Study the protocol’s gas benchmarks**
   - Many protocols maintain `.gas-snapshot` files in their repos
   - Example: Uniswap V4’s gas snapshots track gas per operation
   - These tell you what the team considers “acceptable” gas costs
Don’t get stuck on: Micro-optimizations like `unchecked` `++i` vs `i++` (~20 gas). Focus on storage access patterns — a single eliminated `SLOAD` saves more gas than 100 unchecked increments.
💡 Concept: Foundry Scripts for Deployment
Why this matters: Deployment scripts in Solidity (not JavaScript) mean you can test your deployments before running them on-chain. You can also reuse the same scripts for local testing and production deployment.
```solidity
// script/Deploy.s.sol
import "forge-std/Script.sol";

contract DeployScript is Script {
    function run() public {
        uint256 deployerKey = vm.envUint("PRIVATE_KEY");
        vm.startBroadcast(deployerKey);
        MyContract c = new MyContract(constructorArg);
        vm.stopBroadcast();
        console.log("Deployed at:", address(c));
    }
}
```
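Because the script is ordinary Solidity, you can smoke-test it before ever broadcasting — a hedged sketch (`DeployScript` refers to the example above; `vm.setEnv` supplies a throwaway key so `vm.envUint` succeeds in the test environment):

```solidity
// test/Deploy.t.sol — sketch: exercise the deploy script inside a test
import {Test} from "forge-std/Test.sol";
import {DeployScript} from "../script/Deploy.s.sol";

contract DeployScriptTest is Test {
    function test_RunDeployScript() public {
        // Throwaway key for the simulation only — never a real key
        vm.setEnv("PRIVATE_KEY", "1");
        new DeployScript().run(); // Broadcast calls are simulated in tests
    }
}
```

If `run()` reverts here, it would have failed on-chain too — that is the payoff of Solidity deployment scripts.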
```bash
# Dry run (simulation)
forge script script/Deploy.s.sol --rpc-url $RPC_URL

# Actual deployment + etherscan verification
forge script script/Deploy.s.sol --rpc-url $RPC_URL --broadcast --verify

# Resume failed broadcast (e.g., if etherscan verification failed)
forge script script/Deploy.s.sol --rpc-url $RPC_URL --resume
```
⚡ Common pitfall: Forgetting to fund the deployer address with ETH before broadcasting. The script will simulate successfully but fail when you try to broadcast.
🎓 Intermediate Example: Differential Testing
Differential testing compares two implementations of the same function to find discrepancies. This is how auditors verify optimized code matches the reference implementation.
```solidity
contract DifferentialTest is Test {
    /// @dev Reference implementation: clear, readable, obviously correct
    function mulDivReference(uint256 x, uint256 y, uint256 d) public pure returns (uint256) {
        return (x * y) / d; // Overflows for large values!
    }

    /// @dev Optimized implementation: handles the full 512-bit intermediate
    function mulDivOptimized(uint256 x, uint256 y, uint256 d) public pure returns (uint256) {
        // ... (Module 1's FullMath.mulDiv pattern)
        return FullMath.mulDiv(x, y, d);
    }

    /// @dev Fuzz: both implementations agree for non-overflowing inputs
    function testFuzz_MulDivEquivalence(uint256 x, uint256 y, uint256 d) public pure {
        d = bound(d, 1, type(uint256).max);
        // Only test where the reference won't overflow
        unchecked {
            if (y != 0 && (x * y) / y != x) return; // Would overflow
        }
        assertEq(
            mulDivReference(x, y, d),
            mulDivOptimized(x, y, d),
            "Implementations disagree"
        );
    }
}
```
Why this matters in DeFi:
- Verifying gas-optimized swap math matches the readable version
- Comparing your oracle integration against a reference implementation
- Ensuring an upgraded contract produces identical results to the old one
Production example: Uniswap V3 uses differential testing to verify their TickMath and SqrtPriceMath libraries match reference implementations.
🔗 DeFi Pattern Connection
Where fork testing and gas optimization matter in DeFi:
- **Exploit Reproduction & Prevention:**
  - Every major hack post-mortem includes a Foundry fork test PoC
  - Pin to the block before the exploit, then replay the attack
  - Example: Reproduce the Euler hack by forking at the pre-attack block
  - Security teams run fork tests against their own protocols to find similar vectors
- **Oracle Integration Testing (→ Part 2 Module 3):**
  - Fork test Chainlink feeds with real price data
  - Test staleness checks: `vm.warp` past the heartbeat interval
  - Simulate oracle manipulation by forking at blocks with extreme prices
- **Composability Verification:**
  - “Does my vault work when Aave V3 changes interest rates?”
  - “Does my liquidation bot handle Uniswap V3 tick crossing?”
  - Fork both protocols, simulate realistic sequences, verify no breakage
- **Gas Benchmarking for Protocol Competitiveness:**
  - Uniswap V4 hooks: gas overhead determines viability
  - Lending protocols: gas cost of liquidation determines MEV profitability
  - Aggregators (1inch, Cowswap): route selection depends on gas estimates
  - `forge snapshot --diff` in CI prevents gas regressions
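The oracle staleness item above can be turned into a concrete fork test — a sketch, where the feed address is Chainlink’s mainnet ETH/USD aggregator and `MAX_STALENESS` is an assumed protocol parameter:

```solidity
import {Test} from "forge-std/Test.sol";

interface AggregatorV3Interface {
    function latestRoundData()
        external
        view
        returns (uint80, int256, uint256, uint256, uint80);
}

contract OracleStalenessTest is Test {
    uint256 constant MAX_STALENESS = 1 hours; // assumed protocol parameter

    function test_WarpPastHeartbeatMakesPriceStale() public {
        vm.createSelectFork("mainnet", 19_000_000); // pinned for determinism
        AggregatorV3Interface feed = AggregatorV3Interface(
            0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419 // ETH/USD
        );
        (, , , uint256 updatedAt, ) = feed.latestRoundData();

        // Warp past the heartbeat: any staleness check must now reject the price
        vm.warp(updatedAt + MAX_STALENESS + 1);
        assertGt(block.timestamp - updatedAt, MAX_STALENESS);
    }
}
```

In a real suite you would assert that your protocol’s consumer contract reverts under these conditions, not just that the timestamp math holds.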
💼 Job Market Context
What DeFi teams expect you to know:
- **“How would you reproduce a DeFi exploit?”**
  - Good answer: “Fork mainnet at the block before the exploit, replay the transactions”
  - Great answer: “I’d fork at `block - 1`, use `vm.prank` to impersonate the attacker, replay the exact call sequence, and verify the stolen amount matches the post-mortem. Then I’d write a test that proves the fix prevents the attack. I keep a library of exploit reproductions — it’s the best way to learn DeFi security patterns”
- **“How do you approach gas optimization?”**
  - Good answer: “Use `forge snapshot` to measure and compare”
  - Great answer: “I establish a baseline with `forge snapshot`, then use `forge test -vvvv` to identify the expensive opcodes. I focus on storage operations first (`SLOAD`/`SSTORE` dominate gas costs), then calldata optimizations, then arithmetic. I always run the full invariant suite after optimization to ensure correctness wasn’t sacrificed. In CI, I use `forge snapshot --check` to catch regressions”
- **“Walk me through testing a protocol integration”**
  - Good answer: “Fork test against the deployed protocol”
  - Great answer: “I pin to a specific block for determinism, set up realistic token balances with `deal()`, test the happy path first, then systematically test edge cases — what happens when the external protocol pauses? What happens during extreme market conditions? I test against multiple blocks to catch time-dependent behavior, and I test on both mainnet and relevant L2 forks”
Interview Red Flags:
- 🚩 Never having reproduced an exploit (shows no security awareness)
- 🚩 Optimizing gas without measuring first (“premature optimization”)
- 🚩 Not pinning fork tests to specific block numbers (non-deterministic tests)
- 🚩 Not knowing the difference between `forge snapshot` and `forge test --gas-report`
Pro tip: Maintain a personal repository of exploit reproductions as Foundry fork tests. It’s the most effective way to learn DeFi security, and it’s impressive in interviews. Start with DeFiHackLabs — they have 200+ reproductions.
🎯 Build Exercise: Fork Testing and Gas
Workspace: workspace/test/part1/module5/ — fork tests: UniswapSwapFork.t.sol, gas optimization: GasOptimization.sol and GasOptimization.t.sol
1. **Write a fork test that performs a full Uniswap V2 swap:**

   ```solidity
   function testUniswapV2Swap() public {
       // Fork mainnet at specific block
       vm.createSelectFork("mainnet", 19000000);
       IUniswapV2Router02 router =
           IUniswapV2Router02(0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D);

       // Deal WETH to alice
       deal(WETH, alice, 10 ether);

       vm.startPrank(alice);

       // Approve router
       IERC20(WETH).approve(address(router), 10 ether);

       // Swap WETH → USDC
       address[] memory path = new address[](2);
       path[0] = WETH;
       path[1] = USDC;
       uint256[] memory amounts = router.swapExactTokensForTokens(
           1 ether,
           0, // No slippage protection (test only!)
           path,
           alice,
           block.timestamp
       );

       vm.stopPrank();
       assertGt(IERC20(USDC).balanceOf(alice), 0);
   }
   ```

2. **Write a fork test that reads Chainlink price feed data and verifies staleness:**

   ```solidity
   function testChainlinkPrice() public {
       vm.createSelectFork("mainnet");
       AggregatorV3Interface feed = AggregatorV3Interface(
           0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419 // ETH/USD
       );
       (, int256 price,, uint256 updatedAt,) = feed.latestRoundData();
       assertGt(price, 1000e8); // ETH > $1000
       assertLt(block.timestamp - updatedAt, 1 hours); // Not stale
   }
   ```

3. **Create a gas optimization exercise:**
   - Write a token transfer function two ways: one with `require` strings, one with custom errors
   - Run `forge snapshot` on both and compare:

     ```bash
     forge snapshot --match-test testWithRequireStrings
     # Edit to use custom errors
     forge snapshot --diff
     ```

4. **Write a simple deployment script for any contract you’ve built this module**
🎯 Goal: You should be completely fluent in Foundry before starting Part 2. Fork testing and gas optimization are skills you’ll use in every single module.
📋 Summary: Fork Testing and Gas Optimization
✓ Covered:
- Fork testing — testing against real deployed contracts and liquidity
- Gas optimization workflow — snapshots, reports, opcode-level analysis
- Optimization patterns — unchecked, packing, calldata, caching
- Foundry scripts — Solidity deployment scripts
Key takeaway: Foundry is your primary tool for building and testing DeFi. Master it before Part 2.
🔗 Cross-Module Concept Links
Backward references (← concepts from earlier modules):
| Module | Concept | How It Connects |
|---|---|---|
| ← M1 Modern Solidity | Custom errors | Tested with vm.expectRevert(CustomError.selector) — verify revert selectors |
| ← M1 Modern Solidity | UDVTs | Type-safe test assertions — unwrap for comparison, wrap for inputs |
| ← M1 Modern Solidity | Transient storage | Verified with cheatcodes — vm.load at transient slots, cross-call state |
| ← M2 EVM Changes | Flash accounting | vm.expectRevert for lock violations, settlement verification |
| ← M2 EVM Changes | EIP-7702 delegation | vm.etch for code injection, delegation target testing |
| ← M3 Token Approvals | EIP-2612 permits | vm.sign + EIP-712 digest construction for permit flows |
| ← M3 Token Approvals | Permit2 integration | deal() for token balances, approval chain testing |
| ← M4 Account Abstraction | ERC-4337 validation | vm.prank(entryPoint) for validateUserOp testing |
| ← M4 Account Abstraction | EIP-1271 signatures | Fork tests against real deployed smart wallets |
Forward references (→ concepts you’ll use later):
| Module | Concept | How It Connects |
|---|---|---|
| → M6 Proxy Patterns | Upgradeable testing | Verify storage layout compatibility, test initializers vs constructors |
| → M6 Proxy Patterns | Fork test upgrades | Test proxy upgrades against live deployments |
| → M7 Deployment | Foundry scripts | Deterministic deployment scripts, CREATE2 address prediction tests |
| → M7 Deployment | Multi-chain verification | Cross-chain deployment consistency checks |
Part 2 connections:
| Part 2 Module | Foundry Technique | Application |
|---|---|---|
| M2: AMMs | Invariant testing | x * y = k preservation, price bounds, LP share accounting |
| M3: Oracles | vm.warp + vm.roll | Time manipulation for oracle staleness, TWAP testing |
| M4: Lending | Fork testing + fuzz testing | Test against live Aave/Compound pools, randomized health factor scenarios |
| M5: Flash Loans | Fork testing + scripts | Flash loan PoCs against real pools, arbitrage scripts |
| M6: Stablecoins | Invariant testing | CDP solvency, peg stability, liquidation thresholds |
| M7: Vaults | Fuzz testing | Share/asset conversion edge cases, yield strategy invariants |
| M8: Security | Exploit reproduction | DeFiHackLabs-style fork tests reproducing real attacks |
| M9: Integration | Full test suite | All techniques combined — capstone integration testing |
📖 Production Study Order
Study these test suites in this order — each builds on skills from the previous:
| # | Repository | Why Study This | Key Files |
|---|---|---|---|
| 1 | Solmate tests | Clean, minimal — learn Foundry idioms | ERC20.t.sol, ERC4626.t.sol |
| 2 | OZ test suite | Industry-standard patterns, comprehensive coverage | ERC20.test.js → Foundry equivalents |
| 3 | Uniswap V4 basic tests | State-of-the-art DeFi testing patterns | PoolManager.t.sol, Swap.t.sol |
| 4 | Uniswap V4 handlers | Invariant testing with handler contracts | invariant/ directory |
| 5 | Morpho Blue invariant tests | Complex protocol invariant testing | Handler patterns for lending |
| 6 | DeFiHackLabs | Exploit reproduction with fork tests | src/test/ — real attack PoCs |
Reading strategy: Start with Solmate to learn clean Foundry patterns, then OZ for coverage standards. Move to V4 for DeFi-specific testing, then Morpho Blue for invariant handler patterns. Finish with DeFiHackLabs to understand exploit reproduction — the ultimate fork testing skill.
📚 Resources
Foundry Documentation
- Foundry Book — official docs (read cover-to-cover)
- Foundry GitHub — source code and examples
- Foundry cheatcodes reference — all `vm.*` functions
Testing Best Practices
- Foundry Book - Testing — basics
- Foundry Book - Fuzz Testing — property-based testing
- Foundry Book - Invariant Testing — advanced fuzzing
Production Examples
- Uniswap V4 test suite — state-of-the-art testing patterns
- Morpho Blue invariant tests — handler patterns
- Solmate tests — clean, minimal examples
Gas Optimization
- Rareskills Gas Optimization — comprehensive guide
- EVM Codes — opcode gas costs
- Solidity gas optimization tips — from Solidity team
RPC Providers
Navigation: ← Module 4: Account Abstraction | Module 6: Proxy Patterns →