
Thinking Cheat Sheet

Judgment scales; AI commoditizes execution.

| Model | Core Idea |
|---|---|
| First principles | Strip away assumptions; build from bedrock truths |
| Inversion | Instead of “how do I succeed?” ask “how would I fail?” |
| Second-order effects | Then what? And then what after that? |
| Opportunity cost | Choosing X means not choosing Y |
| Reversibility | Two-way doors (cheap to undo) vs one-way doors (commit carefully) |
| Leverage | Small inputs, large outputs — where’s the fulcrum? |
| Model | Core Idea |
|---|---|
| CAP theorem | Distributed systems: pick two of consistency, availability, partition tolerance |
| Amdahl’s law | Speedup is limited by the part you can’t parallelize |
| Premature optimization | Measure first; optimize the bottleneck |
| Leaky abstractions | All abstractions fail; know what’s underneath |
| Worse is better | Simple and working beats perfect and unshipped |
| Gall’s law | Complex systems that work evolved from simple systems that worked |
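Amdahl’s law has a closed form worth internalizing: if a fraction p of the work parallelizes across n workers, the serial remainder 1 − p bounds the overall speedup. A minimal sketch (function and parameter names are illustrative):

```python
# Amdahl's law: overall speedup for a program whose fraction p
# parallelizes perfectly across n workers.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A 90%-parallel program on 8 workers gains less than 5x...
print(round(amdahl_speedup(0.90, 8), 2))          # → 4.71
# ...and even a million workers can't beat the 1 / (1 - p) = 10x ceiling.
print(round(amdahl_speedup(0.90, 1_000_000), 2))  # → 10.0
```

The takeaway matches the table: the 10% you can’t parallelize, not the worker count, sets the limit.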
| Model | Core Idea |
|---|---|
| Regret minimization | Which choice will 80-year-old you regret less? |
| Eisenhower matrix | Urgent vs important — different axes |
| Satisficing | “Good enough” beats endless optimization |
| Reversible defaults | When uncertain, choose what’s easiest to undo |

Work that keeps paying dividends vs. work that’s consumed once.

| Linear Value | Compound Value |
|---|---|
| Shipping a feature | Establishing a pattern others follow |
| Fixing a bug | Creating monitoring that catches bugs early |
| Writing code | Writing documentation that trains 10 engineers |
| Solving today’s problem | Preventing tomorrow’s category of problems |
| Answering a question | Writing docs so 50 people don’t ask it |
| Helping one person | Building tools that help everyone |

The test: When you leave, does your impact persist? If it requires your ongoing presence to maintain value, it’s linear. If it continues without you, it compounds.

Levels of help:

  1. 1:1 — Answer a colleague’s question
  2. 1:many — Write documentation so 50 people don’t ask
  3. Systemic — Build tools that prevent the question from arising

How to shift: Each time you solve a problem, ask: “Can I create something that solves this for everyone?” The answer isn’t always yes—but asking changes what you build.

Feedback loops — Outputs become inputs

  • Reinforcing: growth spirals (or death spirals)
  • Balancing: self-correcting toward equilibrium

Stocks and flows — Accumulations and rates of change

  • Don’t confuse the bathtub (stock) with the faucet (flow)

Delays — Effects lag behind causes

  • Systems overshoot because feedback arrives late
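The stock, flow, and delay ideas above can be watched in a toy simulation: a tub filled by a controller that only sees a stale level reading. All numbers here are illustrative, not calibrated to anything.

```python
# Stock-and-flow with a delayed balancing loop: the controller adjusts
# the inflow toward a target level, but reacts to a reading `delay`
# steps old, so the stock overshoots before settling.
def simulate(target=100.0, steps=60, delay=4, gain=0.2):
    level = 0.0                                 # the stock (bathtub)
    history = [level]
    for t in range(steps):
        observed = history[max(0, t - delay)]   # stale feedback
        inflow = gain * (target - observed)     # the flow (faucet)
        level += inflow                         # stock accumulates flow
        history.append(level)
    return history

levels = simulate()
print(max(levels) > 100)  # → True: late feedback drives the level past the target
```

Shrink `delay` to 0 and the overshoot disappears — the lag, not the goal, causes the oscillation.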

Emergent behavior — Whole > sum of parts

  • Local optimization ≠ global optimization

Questions to ask of any system:

  • Where are the feedback loops?
  • What’s the delay between action and effect?
  • Who are the stakeholders I’m not seeing?
  • What happens if this succeeds beyond expectations?
  • What’s the system optimizing for? Is that what we want?
| Trap | Description |
|---|---|
| Local optimization | Improving your part while harming the whole |
| Metric fixation | Goodhart’s law: the measure becomes the target and ceases to be useful |
| Linear thinking | Assuming proportional cause/effect in nonlinear systems |
| Ignoring delays | Impatience leads to over-correction |

Different questions serve different purposes. Choose deliberately.

| Type | Purpose | Example |
|---|---|---|
| Clarifying | Surface hidden assumptions | “What would have to be true for this to work?” |
| Reframing | Shift the problem itself | “Are we solving the right problem?” |
| Aligning | Build consensus | “What concerns need addressing to move forward?” |
| Unlocking | Develop others | “What do you think should be done?” |
| Preventing | Stop expensive mistakes early | “What would we see if this were failing?” |
| Diagnostic | Isolate the root cause | “What changed? Can I reproduce it?” |

The expert knows which questions to ask, not just how to answer.

  • What problem are we actually solving?
  • Who has this problem? How do they cope today?
  • What would “done” look like?
  • How will we know if it worked?

  • What’s the simplest thing that could possibly work?
  • What would have to be true for this to be the right choice?
  • What are we optimizing for? What are we sacrificing?
  • Is this a one-way or two-way door?

  • What do we believe that might be wrong?
  • Who disagrees? Why?
  • What do I actually know vs assume?
  • What question am I not asking?

  • What would I try if I couldn’t fail?
  • What would a beginner do?
  • What would I advise a friend in this situation?

  • What changed?
  • What did I expect? What happened instead?
  • Can I reproduce it?
  • What’s the smallest case that shows the bug?
  • Am I solving the symptom or the cause?
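Finding the smallest case that shows the bug can even be automated. A sketch of greedy input shrinking, where `fails` is a hypothetical stand-in for “does this input reproduce the bug?”:

```python
# Greedy shrink: find a smaller input that still triggers the bug
# by repeatedly dropping chunks that aren't needed to reproduce it.
def shrink(data, fails):
    chunk = len(data) // 2
    while chunk >= 1:
        i = 0
        while i < len(data):
            candidate = data[:i] + data[i + chunk:]
            if fails(candidate):
                data = candidate      # smaller input still fails; keep it
            else:
                i += chunk            # this chunk is needed; move on
        chunk //= 2
    return data

# Hypothetical bug: any input containing both 3 and 7 crashes.
buggy = lambda xs: 3 in xs and 7 in xs
print(shrink(list(range(10)), buggy))  # → [3, 7]
```

Ten candidate causes collapse to two, which is usually enough to see the root cause directly.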
Immediate task → "Deploy the feature"
Project goal → "Increase user engagement"
Business goal → "Grow revenue"
Actual goal → "Build something people want"
Life goal → "Do meaningful work"

Working on the wrong level wastes effort. Zoom out regularly.

| Trap | Example |
|---|---|
| Task fixation | Finishing the ticket vs solving the user’s problem |
| Vanity metrics | Optimizing for lines of code, story points, commits |
| Sunk cost | Continuing because you’ve already invested, not because it’s right |
| Cargo culting | Copying practices without understanding why they work |
| Resume-driven development | Choosing tech for your CV, not the problem |
  • “What would success look like if this feature didn’t exist?”
  • “If we couldn’t build this, how else might we achieve the goal?”
  • “What’s the user actually trying to accomplish?”
  • “Will this matter in a week? A year?”
| AI handles well | Humans handle better |
|---|---|
| Syntax, boilerplate | Architecture, design |
| Lookup, recall | Judgment, taste |
| First draft | Final edit |
| “How to X in Y” | “Should we X at all?” |
| Speed | Direction |
  • Verification — Can you tell if the output is correct?
  • Decomposition — Breaking problems into AI-sized chunks
  • Iteration — Refining output through dialogue
  • Integration — Combining AI output into coherent systems
  • Knowing when to go deep — Some things need human understanding
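For the verification skill in particular, properties you can state independently beat line-by-line review. A sketch, where `ai_sort` is a hypothetical stand-in for assistant-written code:

```python
import random
from collections import Counter

def ai_sort(xs):              # hypothetical AI-generated function under test
    return sorted(xs)

def check_sort(fn, trials=200):
    """Spot-check fn against properties of sorting, not its implementation."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        out = fn(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
        assert Counter(out) == Counter(xs)                # same multiset of elements
    return True

print(check_sort(ai_sort))  # → True
```

You never read the generated code’s internals here; you check what must be true of any correct answer.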

Do:

  • State context and constraints upfront
  • Ask for reasoning, not just answers
  • Verify against your mental model
  • Iterate — first answer is rarely best
  • Use AI to explore options, then decide yourself

Don’t:

  • Trust without verification
  • Outsource understanding
  • Skip building mental models
  • Accept complexity you can’t maintain
  • Let AI make one-way-door decisions

You must know enough to verify AI output, but AI reduces the need to memorize.

Resolution: Learn principles deeply, details just-in-time.

  1. Before starting: What’s the real goal here?
  2. When stuck: What question am I not asking?
  3. After finishing: What would I do differently?
  4. Weekly: What did I learn? What’s still fuzzy?

“The formulation of a problem is often more essential than its solution.” — Einstein