AI CLI Cheat Sheet
Last updated: 2026-02-06
Patterns for AI coding assistants in and around the terminal: Claude Code, GitHub Copilot CLI, Cursor, and Cline.
Context Management
CLAUDE.md Structure
Recommended sections:
- Project Overview — 1-2 sentences: what this is, primary language/framework
- Architecture — Key directories, entry points, data flow
- Commands — Build, test, lint, deploy commands
- Conventions — Naming patterns, error handling, testing expectations
- Gotchas — The weird auth module, special headers, files to avoid
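Put together, a minimal CLAUDE.md following these sections might look like this (the project details below are invented for illustration):

```markdown
# Project Overview
Internal billing dashboard. TypeScript + React front end, Node/Express API.

# Architecture
- src/api/: Express routes (entry point: src/api/server.ts)
- src/web/: React app (entry point: src/web/main.tsx)
- Data flow: web -> api -> Postgres (src/api/db.ts)

# Commands
- Build: pnpm build
- Test: pnpm test
- Lint: pnpm lint

# Conventions
- Throw AppError subclasses for failures; never swallow errors silently
- Tests live next to source files as *.test.ts

# Gotchas
- src/api/legacy-auth.ts is fragile; ask before refactoring it
```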
File Locations
| File | Scope | Git-tracked |
|---|---|---|
| ~/.claude/CLAUDE.md | All projects | No |
| ./CLAUDE.md | Project root | Yes |
| ./CLAUDE.local.md | Project (personal) | No |
| ./src/CLAUDE.md | Subdirectory (loaded on demand) | Yes |
| .cursorrules | Cursor (legacy) | Yes |
| .cursor/rules/*.mdc | Cursor (scoped rules) | Yes |
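Because ./CLAUDE.local.md holds personal notes, it belongs in .gitignore. A one-line sketch, assuming the repo root as working directory:

```shell
# Ignore the personal memory file; the append is idempotent.
grep -qx "CLAUDE.local.md" .gitignore 2>/dev/null || echo "CLAUDE.local.md" >> .gitignore
```

The same pattern works for any other personal, untracked config file.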
What to Include
Good — prevents mistakes:
- “Use pnpm, not npm”
- “All API routes require auth middleware”
- “Never modify migrations after merge”
Bad — use linters instead:
- “Use 2-space indentation”
- “Always add semicolons”
- “Sort imports alphabetically”
Keep under 300 lines. Every line should prevent a specific mistake.
Prompting Patterns
Plan First, Code Second
```
# Ask for a plan before implementation
"Before writing code, outline your approach for adding
user authentication. What files will change?"

# Review plan, then approve
"That approach looks good. Proceed with step 1."
```
Context Packing
```
# Provide constraints upfront
"Add pagination to the users API. Constraints:
- Use cursor-based pagination (not offset)
- Match existing endpoints in src/api/posts.ts
- Return max 50 items per page
- Include total count in response"

# Include examples of desired output
"Format error responses like this:
{ error: { code: 'INVALID_INPUT', message: '...' } }"
```
Chunked Implementation
```
# Break large tasks into steps
"Let's implement the checkout flow in steps:
1. First, create the cart summary component
2. Then add the payment form
3. Finally, wire up the order submission

Start with step 1."

# Verify before continuing
"Step 1 looks good. Proceed to step 2."
```
Negative Constraints
```
# Tell the AI what NOT to do
"Add form validation. Do NOT:
- Add new dependencies
- Modify the existing API
- Change the form layout"
```
Workflow Modes
Exploratory (Vibe Coding)
Good for prototypes, learning, throwaway code.
```
# Open-ended generation
"Build a CLI tool that converts markdown to HTML"

# Iterate on results
"Add syntax highlighting for code blocks"
"Make it watch for file changes"
```
Production (AI-Assisted Engineering)
Good for code that will be maintained.
```
# Spec-driven development
"Implement the user service according to this spec:
[paste spec or reference file]"

# Test-first
"Write failing tests for the user service first,
then implement to make them pass"

# Incremental changes
"Add email validation to the signup form.
Show me the diff before applying."
```
Debugging
```
# Provide full context
"This test is failing:
[paste test output]

The relevant code is in src/auth/token.ts.
What's causing the failure?"

# Ask for hypotheses first
"Before fixing, list 3 possible causes ranked by likelihood"
```
Verification Checklist
Always Check
- Logic correctness — AI-generated code has 1.75× more logic errors than human-written code
- Edge cases — Empty inputs, nulls, boundary values
- Error handling — Failures should be graceful, not silent
- Security — 45% of AI code has security flaws (auth, injection, XSS)
Before Merging
```
# Run the full test suite
npm test

# Check types
npm run typecheck

# Run linter
npm run lint
```
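These gates can be wrapped in one small script that stops at the first failure. A sketch; the npm script names are assumptions and should match your package.json:

```shell
# Run each verification gate in order; stop at the first failure.
run_checks() {
  for cmd in "$@"; do
    echo "==> $cmd"
    if ! sh -c "$cmd"; then
      echo "FAILED: $cmd" >&2
      return 1
    fi
  done
  echo "All checks passed"
}

# Typical pre-merge invocation (script names assumed):
# run_checks "npm test" "npm run typecheck" "npm run lint"
```

The exit status is non-zero on the first failing gate, so it composes with CI jobs or a git pre-push hook.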
```
# Manual smoke test
# "Click through the UI yourself"
```
Red Flags in AI Output
```
// Overly clever solutions
// AI loves unnecessary abstractions

// Inconsistent patterns
// Different approach than existing code

// Missing error handling
try {
  doThing(); // No catch, no finally
}

// Hardcoded values that should be config
const API_URL = "http://localhost:3000"; // Should be env var

// Commented-out code or TODOs
// TODO: implement proper validation
```
Tool Selection
Section titled “Tool Selection”When to Use CLI Agents (Claude Code, Copilot CLI)
- Multi-file refactoring
- Running tests and fixing failures iteratively
- Exploring unfamiliar codebases
- Complex tasks requiring tool use (git, npm, etc.)
When to Use IDE Copilots (Copilot, Cursor)
- Line-by-line completions while typing
- Quick boilerplate generation
- Tab-completing known patterns
- Real-time suggestions
When to Go Manual
- Security-critical code (auth, crypto, payments)
- Complex business logic requiring domain knowledge
- When you can’t explain what the AI wrote
- Debugging AI-generated bugs (irony is real)
Version Control Discipline
Commit Granularly
```
# Commit after each successful AI edit
git add -p                # Review changes
git commit -m "Add user validation"

# Use commits as save points
git stash                 # Before risky AI operation
# ... AI makes changes ...
git diff                  # Review what changed
git checkout -- file.ts   # Revert if needed
```
Isolate Experiments
```
# Use branches for AI experiments
git checkout -b ai/experiment-auth

# Or worktrees for parallel exploration
git worktree add ../project-experiment feature
```
Anti-Patterns
Blind Trust
```
# Bad: accept without review
"Generate the authentication system" → merge

# Good: review everything
"Generate the authentication system" → review → test → iterate
```
Prompt and Pray
```
# Bad: vague request
"Make it better"

# Good: specific request
"Reduce the function complexity by extracting
the validation logic into a separate function"
```
Context Starvation
```
# Bad: no context
"Fix the bug"

# Good: full context
"Fix the null pointer in handleSubmit (src/form.ts:45).
The form data is undefined when the user double-clicks.
Here's the error: [paste error]"
```
Skipping Tests
```
# Bad: assume AI code works
"Generate the API endpoint" → deploy

# Good: verify behavior
"Generate the API endpoint" → write tests → verify → deploy
```
Quick Reference
| Pattern | Command/Approach |
|---|---|
| Start session | Review CLAUDE.md, state current goal |
| Request plan | "Outline your approach before coding" |
| Chunk work | "Let's do this in steps. Start with X" |
| Add constraints | "Do NOT modify Y or add dependencies" |
| Verify output | Run tests, lint, manual check |
| Review diff | "Show me the changes before applying" |
| Iterate | "That's close, but change X to Y" |
| Save progress | git commit after each successful change |
| Escape hatch | git checkout -- file to revert AI changes |
See Also
- Claude Code Extensibility — Agents, hooks, plugins, MCP, memory, configuration
- Git — Version control for tracking AI changes
- Shell — Commands AI agents execute
- Debugging — Systematic approach to fixing AI bugs
- Thinking — Mental models for evaluating AI suggestions
- AI Adoption
- Technical Writing Lesson Plan