
Knowledge Design

How to structure knowledge for transfer, not storage. The difference between knowing something and teaching it is architecture — choosing what comes first, what connects to what, and when to name things.

| Skill | Core Question |
| --- | --- |
| Taxonomy & Classification | "What groups exist and how do they relate?" |
| Cognitive Task Analysis | "What does the expert do that the learner can’t see?" |
| Mental Modeling | "What does the learner currently believe?" |
| Semantic Labeling | "When does this concept earn a name?" |
| Visual Communication | "How do I make the structure visible?" |

These five skills turn subject-matter expertise into learnable sequences. Each addresses a different failure mode in knowledge transfer.

Taxonomy & Classification

Grouping concepts, identifying parent/child relationships, and sequencing prerequisites.

| Pattern | Structure | Best For | Example |
| --- | --- | --- | --- |
| Hierarchical | Tree — parent/child | Domains with clear containment | Animal kingdom, file systems |
| Faceted | Multiple independent axes | Domains with cross-cutting traits | Recipes (cuisine × diet × time) |
| Sequential | Ordered chain | Domains with prerequisite ordering | Math curriculum, language levels |
  1. List everything — dump every concept, skill, and fact
  2. Group by affinity — what belongs together? Name the groups
  3. Identify prerequisites — which concepts require others first?
  4. Draw the tree — if you can’t draw it, you don’t understand the subject
  5. Test with outsiders — does someone unfamiliar agree with the groupings?
  • False peers — concepts at the same level that differ in complexity (“variables” and “closures” side by side)
  • Missing parents — leaf concepts with no containing category
  • Circular prerequisites — A requires B requires A (usually means both need a shared foundation)
  • Overly deep trees — more than four levels signals over-splitting
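Steps 3–4 of the process, and the circular-prerequisite pitfall, map directly onto a topological sort: a valid teaching order exists exactly when the prerequisite graph has no cycles. A minimal sketch using Python's standard-library `graphlib`; the concept names are hypothetical:

```python
from graphlib import TopologicalSorter, CycleError

# Prerequisite graph: concept -> concepts it requires (hypothetical example)
prereqs = {
    "closures": {"functions", "scope"},
    "scope": {"variables"},
    "functions": {"variables"},
    "variables": set(),
}

def lesson_order(prereqs):
    """Return an order where every prerequisite precedes its dependents."""
    try:
        return list(TopologicalSorter(prereqs).static_order())
    except CycleError as e:
        # A requires B requires A: both need a shared foundation instead
        raise ValueError(f"circular prerequisites: {e.args[1]}")

print(lesson_order(prereqs))  # "variables" first, "closures" last
```

A cycle here is a design signal, not just an error: it usually means two concepts share an unstated foundation that should become its own node.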

Heuristic: If you can’t draw the tree, you don’t understand the subject yet.

Cognitive Task Analysis

Decomposing expert intuition into learnable steps. Experts chunk so aggressively that they skip steps unconsciously — the “curse of knowledge.”

An expert debugging a production outage “just knows” where to look. They’ve internalized hundreds of pattern matches that a novice hasn’t built yet. CTA makes those invisible steps visible.

  1. Observe — watch an expert perform the task, noting every action
  2. Elicit — interview: “What were you thinking when you did X?” “What would you check if that didn’t work?”
  3. Decompose — break each step into sub-steps until a novice could follow
  4. Sequence — order by prerequisite, not by habit

The output is a lesson progression — the order in which concepts should be taught so each builds on the last.

```text
Expert sees:     "The service is OOMing"

CTA decomposes:  1. Check pod status (kubectl get pods)
                 2. Read restart count (is it cycling?)
                 3. Check memory limits (resource.requests vs actual)
                 4. Read logs for allocation patterns
                 5. Profile heap if needed

→ Each step is teachable; the expert skipped 1–4
```
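The same decomposition can be made explicit as a checklist a novice can follow. A sketch, with a hypothetical pod-status dict standing in for the real `kubectl` output:

```python
def decompose_oom_diagnosis(pod):
    """Walk the novice-visible steps the expert chunks into 'it's OOMing'."""
    steps = []
    steps.append(f"1. pod status: {pod['status']}")  # kubectl get pods
    cycling = "cycling" if pod["restarts"] > 3 else "stable"
    steps.append(f"2. restart count: {pod['restarts']} ({cycling})")
    over = pod["memory_used_mb"] > pod["memory_limit_mb"]
    steps.append(f"3. memory: {pod['memory_used_mb']}MB used vs "
                 f"{pod['memory_limit_mb']}MB limit"
                 + (" -> over limit" if over else ""))
    if over:
        steps.append("4. read logs for allocation patterns")
        steps.append("5. profile heap if needed")
    return steps

# Hypothetical snapshot of a failing pod
pod = {"status": "CrashLoopBackOff", "restarts": 12,
       "memory_used_mb": 900, "memory_limit_mb": 512}
for step in decompose_oom_diagnosis(pod):
    print(step)
```

The point is not the code itself but the shape: each branch the expert takes silently becomes an explicit, orderable check.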

When you are the expert, decompose your own intuition:

  • Solve a problem slowly, narrating each decision
  • Ask: “What did I check that I didn’t consciously notice?”
  • Write the steps down before they re-chunk into intuition

Heuristic: If your lesson plan has fewer steps than a novice would need, you’ve skipped the CTA.

Mental Modeling

The learner already has a mental model — it’s just wrong, incomplete, or shaped by a different domain. Teaching starts with seeing their current map and building a bridge to the target map.

| Failure Type | Description | Example |
| --- | --- | --- |
| Missing concept | No node exists for this idea | Learner has no concept of "ownership" (Rust) |
| Wrong relationship | Nodes exist but edges are wrong | "HTTP is TCP" instead of "HTTP uses TCP" |
| Overgeneralization | One model stretched to cover unrelated territory | "Everything is an object" applied to Go |
| False analogy | Prior domain maps poorly to new one | "Git branches are copies" (from SVN mental model) |
| Invisible layer | An abstraction hides a critical mechanism | "The network is reliable" (from local-only experience) |
  • Anchored analogy — connect to what they know, then show where the analogy breaks (“Channels are like pipes, except they block when full”)
  • Progressive refinement — start with the simplified model, add complexity as they’re ready (“First, think of memory as a big array. Later, we’ll add the stack and heap distinction”)
  • Misconception-first teaching — surface the wrong model explicitly, then correct it (“You might think git pull fetches changes. It actually does two things…”)
  • Contrast pairs — show two similar things side by side to highlight the difference (“mutex vs channel — both synchronize, different tradeoffs”)
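The contrast-pair strategy can be shown in code. A sketch in Python, with `threading.Lock` standing in for a mutex and `queue.Queue` for a channel — both synchronize, but the lock guards shared state while the queue transfers it:

```python
import threading
import queue

# Mutex style: threads share a counter and take turns mutating it.
counter = 0
lock = threading.Lock()

def add_with_lock(n):
    global counter
    for _ in range(n):
        with lock:       # exclusive access to shared state
            counter += 1

# Channel style: threads share nothing; values move through the queue.
q = queue.Queue()

def add_with_queue(n):
    for _ in range(n):
        q.put(1)         # hand the value off instead of sharing it

threads = [threading.Thread(target=add_with_lock, args=(1000,)) for _ in range(4)]
threads += [threading.Thread(target=add_with_queue, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total_from_queue = sum(q.get() for _ in range(q.qsize()))
print(counter, total_from_queue)  # both 4000: same result, different tradeoffs
```

Placing the two side by side makes the tradeoff the lesson, rather than leaving the learner to infer it from two separate examples.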

Heuristic: If the learner nods but can’t solve the problem, they have the words but not the model.

Semantic Labeling

Choosing precise but accessible terminology — and introducing it at the right moment. Jargon is a power tool: essential for experts, dangerous for beginners.

The “Name It When You Need It” Principle


Introduce a term only when the learner has a concept that needs a name:

```text
Bad:  "Today we'll learn about monads, functors, and applicatives."
      (Three names for concepts the learner can't anchor)

Good: "You've been chaining these operations with .then(). That pattern
      has a name: it's a monad. Now you can search for it."
      (Name arrives when the concept has a home)
```
| Phase | Vocabulary Level | Example |
| --- | --- | --- |
| Introduction | Plain language, no jargon | "A box that holds a value" |
| Familiarity | Introduce the term alongside plain form | "This box — called an Option — …" |
| Fluency | Use the term, define it in glossary | "Option wraps a nullable value" |
| Expertise | Assume the term, use in compound forms | "Option::map chains transformations" |
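The table's running example can be sketched in Python. `Option` here is a hypothetical minimal class for illustration, not a standard-library type:

```python
class Option:
    """Introduction phase: 'a box that holds a value' -- possibly empty."""

    def __init__(self, value=None):
        self.value = value

    def map(self, fn):
        """Expertise phase: Option.map chains transformations,
        skipping empty boxes."""
        if self.value is None:
            return Option()          # empty stays empty
        return Option(fn(self.value))

# Fluency-phase reading: Option wraps a nullable value.
result = Option(3).map(lambda x: x * 2).map(str)
print(result.value)   # "6"
empty = Option().map(lambda x: x * 2)
print(empty.value)    # None
```

A learner can use the box long before hearing "Option" — the class works the same either way, which is exactly why the name can wait.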
  • Premature jargon — terms before concepts (learner memorizes without understanding)
  • Jargon avoidance — refusing to name things (learner can’t search, can’t communicate with peers)
  • Inconsistent naming — same concept called different things in different lessons
  • Overloaded terms — same word meaning different things in different contexts without flagging the ambiguity

Heuristic: If the learner can use the jargon but not explain it in plain language, the label arrived before the concept.

Visual Communication

Translating abstract hierarchies into spatial relationships. A diagram communicates structure that prose cannot — but only if the spatial choices carry meaning.

| Visual Property | Meaning | Example |
| --- | --- | --- |
| Proximity | Relatedness | Grouped boxes = same category |
| Lines/arrows | Dependency or flow | A → B means A feeds B |
| Containment | Scope or ownership | Box inside box = part of whole |
| Position (Y) | Hierarchy or time | Top = abstract, bottom = concrete |
| Position (X) | Sequence or alternatives | Left-to-right = temporal flow |
| Size | Importance or volume | Larger = more significant |
Diagram types:

| Type | Best For | Structure |
| --- | --- | --- |
| Concept map | Showing how ideas relate | Nodes + labeled edges |
| Flowchart | Decision logic, process steps | Boxes + branching |
| Sequence | Interactions over time | Vertical timelines |
| Hierarchy | Classification, org structure | Tree |
| State machine | Lifecycle, transitions | States + events |
| ER diagram | Data relationships | Entities + connections |
  • 7±2 nodes per diagram — more and it needs splitting
  • One idea per diagram — if the title needs “and”, make two diagrams
  • Label everything — unlabeled arrows are ambiguous
  • Direction = flow — left-to-right for time, top-to-bottom for hierarchy
  • Don’t decorate — every visual element should encode information
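The first three practices are mechanical enough to check automatically. A sketch that lints a hypothetical node/edge diagram description; the function and field names are illustrative, not a real tool:

```python
def lint_diagram(title, nodes, edges):
    """Flag violations of the diagram practices (sketch, not a real linter)."""
    warnings = []
    if len(nodes) > 9:                       # 7±2 rule: split beyond nine nodes
        warnings.append(f"{len(nodes)} nodes: split the diagram")
    if " and " in title.lower():             # one idea per diagram
        warnings.append("title contains 'and': make two diagrams")
    for src, dst, label in edges:            # label everything
        if not label:
            warnings.append(f"unlabeled arrow {src} -> {dst}")
    return warnings

print(lint_diagram(
    "Auth and Billing",
    ["client", "gateway", "auth", "billing"],
    [("client", "gateway", "request"), ("gateway", "auth", "")],
))
```

Even without tooling, the same three questions work as a manual review checklist before a diagram ships.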

See Diagramming for syntax and tool reference.

Failure Modes

| Failure | Symptom | Root Cause |
| --- | --- | --- |
| Premature jargon | Learner memorizes terms, can’t apply them | Labels before concepts |
| Flat curriculum | Everything taught at same depth and pace | Missing taxonomy, no prerequisites |
| Missing prerequisites | Learner stuck mid-lesson on assumed concept | Incomplete CTA |
| Expert blind spots | "It’s obvious" — but only to the expert | No self-CTA performed |
| Wrong mental model | Learner confident but incorrect | Didn’t surface existing model |
| Wall of text | Concepts described but never visualized | No visual communication |
| Taxonomy by familiarity | Grouped by what expert learned first | Expert bias, not logical structure |