Commands, flags, and workflows for debugging, profiling, and performance
analysis.
# Drop into debugger at this line
import pdb; pdb.set_trace()

# Built-in equivalent (Python 3.7+)
breakpoint()
# Post-mortem: debug after an unhandled exception
import pdb; pdb.pm()

# Run script under pdb from the start
python -m pdb script.py

# Post-mortem on crash (drops into pdb on exception)
python -m pdb -c continue script.py
| Command | Short | Effect |
|---|---|---|
| next | n | Execute next line (step over) |
| step | s | Step into function call |
| continue | c | Continue until next breakpoint |
| return | r | Continue until current function returns |
| break N | b N | Set breakpoint at line N |
| break fn | b fn | Set breakpoint at function |
| tbreak N | | Temporary breakpoint (fires once) |
| clear N | cl N | Clear breakpoint number N |
| list | l | Show source around current line |
| longlist | ll | Show full source of current function |
| print expr | p expr | Evaluate and print expression |
| pp expr | | Pretty-print expression |
| display expr | | Watch expression (print on change) |
| undisplay | | Remove watched expression |
| where | w | Print stack trace |
| up | u | Move up one stack frame |
| down | d | Move down one stack frame |
| quit | q | Exit debugger |
(Pdb) b 42, x > 100 # Break at line 42 only when x > 100
(Pdb) b utils.py:10, len(items) == 0
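The same effect is available from code by guarding the built-in breakpoint() call. A minimal sketch, mirroring the len(items) == 0 condition above (process and its data are illustrative):

```python
def process(items):
    total = 0
    for item in items:
        if len(item) == 0:   # same condition as the pdb breakpoint above
            breakpoint()     # drops into the debugger only when it holds
        total += len(item)
    return total

# process(["a", "bc"]) runs normally; process(["a", ""]) stops in pdb
```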
# ipdb — pdb with IPython features (tab completion, syntax highlighting)
# Use ipdb as the default breakpoint() handler
import os
os.environ["PYTHONBREAKPOINT"] = "ipdb.set_trace"
breakpoint()  # Now opens ipdb
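breakpoint() simply calls sys.breakpointhook, which by default consults PYTHONBREAKPOINT, so any callable can stand in for the debugger. A sketch with a custom hook (logging_hook is hypothetical, not part of any library):

```python
import sys

def logging_hook(*args, **kwargs):
    # Stand-in debugger: report where breakpoint() fired instead of stopping
    frame = sys._getframe(1)  # the frame that called breakpoint()
    print(f"breakpoint() hit at {frame.f_code.co_filename}:{frame.f_lineno}")

# Same effect as PYTHONBREAKPOINT=mymodule.logging_hook
sys.breakpointhook = logging_hook
breakpoint()  # prints a location line instead of starting pdb
```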
# pudb — full TUI debugger with variable inspector
python -m pudb script.py
# Trace specific syscall categories
strace -e trace=network ./program
strace -e trace=file ./program
strace -e trace=open,read,write ./program
# Attach to running process
strace -p 1234

# Count syscalls and show summary
strace -c ./program
# Write output to file (stderr is normal output)
strace -o trace.log ./program
strace -t ./program # Timestamp each line (wall clock)
strace -T ./program # Time spent in each syscall
# macOS equivalent of strace (requires SIP adjustments)
sudo dtruss ./program
# Trace specific syscalls
sudo dtruss -f -t open ./program
# Count syscalls by process name
sudo dtrace -n 'syscall:::entry { @[execname] = count(); }'
# Trace file opens by a specific process
sudo dtrace -n 'syscall::open*:entry /execname == "python3"/ {
    printf("%s", copyinstr(arg0));
}'
# Profile user-space stacks at 99 Hz
sudo dtrace -n 'profile-99 /pid == 1234/ { @[ustack()] = count(); }'
# Launch program under debugger
lldb -- ./program --flag arg
# Attach to running process
lldb -p 1234
(lldb) breakpoint set -f main.c -l 42 # Break at file:line
(lldb) b main # Break at function
(lldb) br list # List breakpoints
(lldb) run # Start execution
(lldb) continue # Continue
(lldb) frame variable # Show local variables
(lldb) p expression # Evaluate expression
(lldb) memory read 0x1000 # Examine memory
(lldb) register read # Show registers
(lldb) watchpoint set variable x # Break on write to x
(gdb) break main.c:42 # Set breakpoint
(gdb) run # Start program
(gdb) next / step / continue # Navigation
(gdb) info locals # Show local variables
(gdb) print expr # Evaluate expression
(gdb) watch variable # Watchpoint
(gdb) x/16xw 0x1000 # Examine 16 words at address
| Task | lldb | gdb |
|---|---|---|
| Set breakpoint | b main | break main |
| Run | run | run |
| Backtrace | bt | bt |
| Print variable | p var | print var |
| Local variables | frame variable | info locals |
| Examine memory | memory read addr | x addr |
| Watch variable | watchpoint set variable x | watch x |
| Attach to PID | process attach -p 1234 | attach 1234 |
# Live top-like view of a running process
py-spy top --pid 1234
# Record a flame graph (SVG output)
py-spy record -o flame.svg --pid 1234
py-spy record -o flame.svg -- python script.py
# Record with specific format
py-spy record --format flamegraph -o flame.svg -- python script.py
py-spy record --format speedscope -o profile.json -- python script.py
# Sample rate (default 100 Hz)
py-spy record --rate 250 -o flame.svg --pid 1234
# Include native C extensions
py-spy record --native -o flame.svg --pid 1234
py-spy record --subprocesses -o flame.svg -- python script.py
# Run profiler and save stats
python -m cProfile -o profile.prof script.py
# Sort by cumulative time (direct output)
python -m cProfile -s cumtime script.py
# Visualize with snakeviz (opens browser)
snakeviz profile.prof
# Profile a specific section
import cProfile, pstats
with cProfile.Profile() as pr:
    ...  # code to profile
stats = pstats.Stats(pr)
stats.sort_stats("cumulative")
stats.print_stats(20)  # Top 20 functions
pip install line_profiler
# Decorate functions to profile (kernprof injects @profile)
@profile
def compute():
    total = sum(range(1000000))

kernprof -l -v script.py
# -l line-by-line profiling
# -v show results immediately
pip install memory_profiler
from memory_profiler import profile
# Run and show line-by-line memory usage
python -m memory_profiler script.py
# Track memory over time (generates plot data)
mprof run script.py
mprof plot  # Opens matplotlib graph
# Compare two commands
hyperfine 'fd . /tmp' 'find /tmp'
# Warmup runs (prime caches)
hyperfine --warmup 3 'command'
hyperfine --runs 50 'command'
hyperfine --parameter-scan threads 1 8 \
'sort --parallel={threads} data.txt'
hyperfine --parameter-list lang python3,ruby,node \
  '{lang} script.py'
hyperfine --export-json results.json 'command'
hyperfine --export-markdown results.md 'command'
# Preparation command (run before each timing)
hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' \
  'command'
hyperfine --show-output 'echo hello'
# Bash builtin (real/user/sys)
time ./program
/usr/bin/time -l ./program # macOS: includes memory stats
/usr/bin/time -v ./program # Linux: verbose resource usage
/usr/bin/time -f "%e real, %U user, %S sys, %M maxRSS(KB)" ./program
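The same counters /usr/bin/time reports can be read from inside a Python process via the standard resource module (Unix only; note ru_maxrss is in bytes on macOS but kilobytes on Linux):

```python
import resource

# CPU time and peak memory for the current process
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"user: {usage.ru_utime:.3f}s  sys: {usage.ru_stime:.3f}s  maxRSS: {usage.ru_maxrss}")
```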
python -m timeit 'sum(range(1000))'
python -m timeit -n 10000 -r 5 'sum(range(1000))'
# -n number of executions per run
# -r number of runs (best of r is reported)
import timeit
elapsed = timeit.timeit('sum(range(1000))', number=10000)
# With setup code ('sorted(data)' is an illustrative statement)
elapsed = timeit.timeit('sorted(data)', number=1000,
    setup='import random; data = random.sample(range(10000), 1000)')
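For stabler numbers, timeit.repeat returns one total per run, and the minimum is the usual summary statistic (the run and loop counts here are arbitrary):

```python
import timeit

# 5 runs of 10,000 executions each; each element is total seconds for one run
runs = timeit.repeat("sum(range(1000))", number=10_000, repeat=5)
best = min(runs) / 10_000  # best-case seconds per execution
print(f"{best * 1e6:.2f} µs per loop")
```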
- width = time spent (wider = more time)
- depth = call stack (bottom = entry point, top = leaf function)
- color = arbitrary (usually random or by category)
- Wide bars at top → hot functions (optimize these)
- Tall narrow towers → deep call stacks (check recursion)
- Plateaus → single function dominating runtime
py-spy record --format flamegraph -o flame.svg -- python script.py
# Open flame.svg in a browser (interactive: click to zoom)
# Linux: sample with perf, then convert with Brendan Gregg's scripts
perf record -F 99 -g -- ./program
perf script | stackcollapse-perf.pl | flamegraph.pl > flame.svg
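stackcollapse-perf.pl emits the folded format that flamegraph.pl consumes: one `frame;frame;frame count` line per unique stack. Collapsing raw samples can be sketched as (the sample data is made up):

```python
from collections import Counter

# Each sample is one captured call stack, root first (hypothetical data)
samples = [
    ("main", "parse", "tokenize"),
    ("main", "parse", "tokenize"),
    ("main", "render"),
]

# Join frames with ";" and count identical stacks
folded = Counter(";".join(stack) for stack in samples)
for stack, count in sorted(folded.items()):
    print(stack, count)
```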
# Profile from command line using xctrace
xcrun xctrace record --template 'Time Profiler' \
--launch ./program --output profile.trace
# Open in Instruments GUI
open profile.trace
Instruments provides a native flame graph (“Call Tree” view inverted) plus
memory allocations, disk I/O, and energy impact profilers.
# Capture all traffic on default interface
sudo tcpdump
# Specific interface and port
sudo tcpdump -i en0 port 443
sudo tcpdump host 192.168.1.1
# Show packet contents in ASCII
sudo tcpdump -A
# Save to file for Wireshark analysis
sudo tcpdump -w capture.pcap
sudo tcpdump 'tcp port 80 and host example.com'
sudo tcpdump 'udp and port 53' # DNS only
sudo tcpdump -n 'icmp' # Ping/ICMP only
# Show request/response headers
curl -v https://example.com
# Full trace (hex + ASCII)
curl --trace trace.log https://example.com
curl --trace-ascii trace.log https://example.com
# Timing breakdown
curl -o /dev/null -s -w "\
Connect: %{time_connect}s
TTFB: %{time_starttransfer}s
" https://example.com
# Follow redirects with verbose
curl -vL https://short.url/abc
# nslookup — simple DNS query
nslookup example.com
nslookup -type=MX example.com

# dig — detailed DNS query
dig example.com
dig +short example.com # Just the answer
dig +trace example.com # Full delegation chain
dig @8.8.8.8 example.com # Query specific nameserver

# host — concise DNS lookup
host example.com
host -t AAAA example.com # IPv6 records
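To see what the system resolver returns from inside Python (useful when dig and the application disagree), socket.getaddrinfo is the call most clients go through. A small helper sketch (resolve is a hypothetical name):

```python
import socket

def resolve(host, port=443):
    # Same resolution path most TCP clients use
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [(socket.AddressFamily(family).name, sockaddr[0])
            for family, _, _, _, sockaddr in infos]

# resolve("example.com") returns (address family, address) pairs
```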
import logging
logging.basicConfig(level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s")
logger = logging.getLogger(__name__)

# Structured output with extra fields
logger.info("Request processed", extra={"user_id": 42})
# JSON log formatter for machine parsing
import json, logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_entry = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        if hasattr(record, "user_id"):
            log_entry["user_id"] = record.user_id
        return json.dumps(log_entry)
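Wiring a formatter like this into a logger might look as follows (a minimal sketch; the handler and logger names are arbitrary, and the formatter is repeated so the snippet stands alone):

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    # One JSON object per log record
    def format(self, record):
        entry = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        if hasattr(record, "user_id"):
            entry["user_id"] = record.user_id
        return json.dumps(entry)

handler = logging.StreamHandler()  # writes to stderr by default
handler.setFormatter(JSONFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Request processed", extra={"user_id": 42})
```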
grep -i "timeout" app.log
grep -C 3 "Exception" app.log # 3 lines context
grep -oP 'ERROR: \K[^:]+' app.log | sort | uniq -c | sort -rn
# Parse JSON logs with jq
cat app.log | jq 'select(.level == "ERROR")'
cat app.log | jq 'select(.duration_ms > 1000) | {time, message}'
cat app.log | jq -r '[.time, .level, .message] | @tsv'
# Group errors by message
cat app.log | jq -r 'select(.level == "ERROR") | .message' \
| sort | uniq -c | sort -rn
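When jq isn't available, the same error grouping is a few lines of Python over newline-delimited JSON (the sample records here are made up):

```python
import json
from collections import Counter

# Hypothetical NDJSON log lines, one record each
lines = [
    '{"level": "ERROR", "message": "timeout"}',
    '{"level": "ERROR", "message": "timeout"}',
    '{"level": "INFO", "message": "ok"}',
]

# Count ERROR records by message, most frequent first
counts = Counter(rec["message"]
                 for rec in map(json.loads, lines)
                 if rec["level"] == "ERROR")
for message, n in counts.most_common():
    print(n, message)
```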
# Follow log file in real time
tail -f app.log | grep --line-buffered "ERROR"
# Follow with highlighting (grep color)
tail -f app.log | grep --line-buffered --color=always -E "ERROR|WARNING|$"
# The final "$" alternative matches every line, but only real matches are colored
| I need to… | Tool | Command |
|---|---|---|
| Debug Python interactively | pdb | breakpoint() in code |
| Debug with better UI | pudb | python -m pudb script.py |
| Profile Python CPU usage | py-spy | py-spy top --pid 1234 |
| Generate a flame graph | py-spy | py-spy record -o flame.svg -- python app.py |
| Profile function call counts | cProfile | python -m cProfile -s cumtime script.py |
| Profile line-by-line | line_profiler | kernprof -l -v script.py |
| Profile memory usage | memory_profiler | python -m memory_profiler script.py |
| Benchmark shell commands | hyperfine | hyperfine 'cmd1' 'cmd2' |
| Benchmark Python snippets | timeit | python -m timeit 'expr' |
| Trace system calls (Linux) | strace | strace -e trace=file ./program |
| Trace system calls (macOS) | dtruss | sudo dtruss ./program |
| Debug native binary (macOS) | lldb | lldb ./program |
| Debug native binary (Linux) | gdb | gdb ./program |
| Capture network traffic | tcpdump | sudo tcpdump -i en0 port 443 |
| Debug HTTP requests | curl | curl -v https://example.com |
| Debug DNS resolution | dig | dig +trace example.com |
| Find which commit broke something | git bisect | git bisect start && git bisect bad |
| Watch logs in real time | tail | tail -f app.log \| grep ERROR |
| Parse JSON logs | jq | jq 'select(.level == "ERROR")' app.log |