_Dimitri_
Product and Topic Expert

Claude Code Best Practices

A practical guide to working effectively with Claude Code. Covers workflow principles, planning strategies, model selection, and the full tooling ecosystem.

Who This Is For

Developers — from first-time Claude Code users to power users. Part 1 covers principles everyone should follow. Part 2 is a reference for specific tools and skills you'll adopt over time.


Table of Contents

Part 1: Core Workflow & Best Practices

  1. The Golden Rule — Always Plan Before Coding
  2. Working with Plan Mode — Superpowers vs. feature-dev
  3. Model Selection — Opus vs. Sonnet vs. Haiku
  4. Verification — Never Claim Done Without Proof
  5. CLAUDE.md — Teaching Claude Your Project
  6. settings.json — Configuring Claude Code
  7. Built-in Commands — No Plugins Needed
  8. Subagents & Agent Teams — Parallel Work Patterns

Part 2: Tooling & Skills Reference

  1. Superpowers Plugin
  2. feature-dev:feature-dev
  3. pr-review-toolkit:review-pr
  4. Agent Teams
  5. Graphify
  6. Chrome CDP
  7. tmux & ccc — Parallel Agent Visibility

Part 1: Core Workflow & Best Practices

1. The Golden Rule — Always Plan Before Coding

Why This Matters

The single most expensive mistake when working with Claude Code is letting it start coding before you've agreed on what to build. Claude is fast — it can write 200 lines in seconds — but if those 200 lines solve the wrong problem, you've lost more time than you saved.

The cost asymmetry is stark: pausing to plan costs near zero. Wrong edits cost hours of rework, debugging, and reverting.

The Decision Gate

Before starting ANY task, answer two questions:

  1. Is the task unambiguous? — clear bug with symptoms, explicit edit request with exact intent
  2. Is it trivial? — fewer than 3 steps, no architectural decisions, touches a single file

| Both true? | Action |
|---|---|
| Yes | Proceed directly — just do it |
| No | Enter plan mode first |
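The gate above can be sketched as a tiny predicate. This is a hypothetical illustration — the two questions are answered by you, not by code — but it makes the AND explicit:

```shell
# Hypothetical sketch of the decision gate: proceed directly only when
# the task is BOTH unambiguous AND trivial; otherwise plan first.
decision_gate() {
  unambiguous=$1   # "yes" or "no"
  trivial=$2       # "yes" or "no"
  if [ "$unambiguous" = "yes" ] && [ "$trivial" = "yes" ]; then
    echo "proceed directly"
  else
    echo "enter plan mode first"
  fi
}

decision_gate yes yes   # a clear, single-file typo fix
decision_gate yes no    # clear intent, but architectural decisions involved
```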

What "Plan Mode" Means in Practice

You don't need a 10-page document. "Plan mode" means:

  • Tell Claude what you want to achieve (not how)
  • Let it research the codebase and propose an approach
  • Review the approach before saying "go ahead"

That's it. Three messages instead of one. But those three messages prevent the "undo everything and start over" scenario.

Examples

Trivial — proceed directly:

You: "Fix the typo on line 42 of README.md"
Claude: [fixes it]

Non-trivial — plan first:

You: "Add authentication to the API"
Claude: "Let me explore the codebase and propose an approach..."
Claude: [proposes JWT with middleware pattern, explains trade-offs]
You: "Sounds good, go ahead"
Claude: [implements]

Vague — ask questions first:

You: "Make it faster"
Claude: "I need more context. What's slow? The API response time,
         the build, or the frontend rendering?"

The Rule for Claude

This is what we put in CLAUDE.md to enforce this behavior:

When in doubt, default to research + recommendation. The cost of pausing is near zero; the cost of wrong edits is high.

2. Working with Plan Mode — Superpowers vs. feature-dev

Why This Matters

Planning isn't just "think before you act." We have two structured pipelines that turn ideas into implemented, tested, reviewed code. Each produces concrete artifacts that the next stage consumes. Skip a stage and you're flying blind.

The key question: how clear are your requirements?

When to Use Which

| Situation | Workflow | Why |
|---|---|---|
| Requirements unclear, need exploration | Superpowers | Brainstorming phase explores options, surfaces unknowns |
| TDD / test-first approach | Superpowers | test-driven-development skill enforces write-test-first discipline |
| Requirements clear, well-defined feature | feature-dev | Guided 7-phase flow, no brainstorming needed |
| Unfamiliar codebase, need to learn patterns | feature-dev | Built-in code-explorer + code-architect agents |
| Multiple competing approaches to evaluate | Superpowers | Brainstorming produces 2-3 options with trade-offs |
| Straightforward "add X to Y" with clear spec | feature-dev | Faster — skips spec writing, goes straight to architecture |

Path A: Superpowers Pipeline (Requirements Unclear / TDD)

Idea → Brainstorm → Spec → Write Plan → Execute → Verify → Finish

Each stage has a dedicated skill that enforces discipline:

| Stage | Skill | What it produces |
|---|---|---|
| Design | superpowers:brainstorming | Design spec with 2-3 approaches and trade-offs |
| Planning | superpowers:writing-plans | Bite-sized implementation tasks (2-5 min each) |
| Execution | superpowers:subagent-driven-development | Implemented code via fresh agent per task |
| Verification | superpowers:verification-before-completion | Evidence that it works |
| Completion | superpowers:finishing-a-development-branch | Merged PR or clean branch |

How it works:

Step 1: Brainstorm — You describe what you want. Claude asks clarifying questions (one at a time), proposes 2-3 approaches with trade-offs, and writes a design spec.

You: "I need to add rate limiting to our API"
Claude: [asks about scope, limits, storage backend]
Claude: [proposes: Redis-based vs. in-memory vs. API gateway]
Claude: [writes spec to docs/superpowers/specs/2026-04-29-rate-limiting-design.md]

Step 2: Review the spec — Read it. Challenge assumptions. Ask "what about X?" This is cheap. Changing the spec costs nothing; changing the code costs hours.

Step 3: Write the plan — Claude converts the spec into an ordered list of tiny tasks. Each task has exact file paths, code to write, and commands to verify.

Step 4: Review the plan — This is your last chance to catch issues before code gets written. Check: are the tasks in the right order? Is anything missing? Are there unnecessary steps?

Step 5: Execute — Claude dispatches a fresh subagent per task. Each agent implements, tests, and commits. A reviewer agent checks each task against the spec.

Step 6: Verify & Finish — Run the full test suite. Create a PR or merge.


Path B: feature-dev Pipeline (Requirements Clear)

Discovery → Explore Codebase → Architecture → Clarify → Implement → Review → Complete

| Phase | What happens | Agent spawned |
|---|---|---|
| 1. Discovery | Clarifies what to build, identifies constraints | |
| 2. Codebase Exploration | Understands existing patterns and conventions | code-explorer |
| 3. Architecture Design | Designs the solution, proposes structure | code-architect |
| 4. Clarification | Asks remaining questions before implementation | |
| 5. Implementation | Builds the feature | |
| 6. Review | Quality check against conventions and best practices | code-reviewer |
| 7. Completion | Final verification | |

How to invoke:

/feature-dev:feature-dev Add WebSocket support for real-time notifications

Or without arguments (it will ask interactively):

/feature-dev:feature-dev

What makes it different from superpowers:

  • No brainstorming phase — assumes you know what you want
  • Built-in codebase exploration via specialized agents
  • Architecture design is guided, not open-ended
  • Single flow — you don't compose individual skills
  • Faster for well-defined tasks

Critical Rules (Both Workflows)

  • ALWAYS review the plan/architecture before approving execution. Don't just say "looks good" — actually read it.
  • If something goes wrong mid-execution: STOP. Don't patch a broken plan. Re-plan from the point of failure.
  • Specs and plans are saved to version control. They're documentation for future developers (including future you).

3. Model Selection — Opus vs. Sonnet vs. Haiku

Why This Matters

Different tasks need different levels of reasoning depth. Using Opus for everything is slow and expensive. Using Haiku for everything produces shallow results on complex tasks. Match the model to the job.

The Models

| Model | Reasoning Depth | Speed | Best For |
|---|---|---|---|
| Opus | Deepest — handles ambiguity, multi-step reasoning, architectural decisions | Slower | Planning, complex debugging, writing specs, multi-file refactoring |
| Sonnet | Strong — good for well-defined tasks with clear requirements | Fast | Implementation, code review, feature development, standard work |
| Haiku | Adequate for focused, simple tasks | Fastest | File searches, simple edits, subagent grunt work, lookups |

The Rule of Thumb

Opus for thinking. Sonnet for doing. Haiku for subagent tasks.

Practical Guidance

| Situation | Model | Why |
|---|---|---|
| Brainstorming session, writing a design spec | Opus | Needs to reason about trade-offs, propose alternatives |
| Implementing a well-defined task from a plan | Sonnet | Requirements are clear, just needs to execute |
| Spawning 5 agents to search the codebase | Haiku | Simple lookup work, saves cost without losing quality |
| Debugging a complex multi-service issue | Opus | Needs to hold multiple systems in mind, reason about interactions |
| Writing a single unit test | Sonnet | Straightforward, requirements are explicit |
| Code review of a large PR | Sonnet/Opus | Sonnet for standard PRs, Opus for complex architectural changes |

How to Switch

# In Claude Code, use the /model command
/model                    # Shows current model and options
/model opus               # Switch to Opus
/model sonnet             # Switch to Sonnet
/model haiku              # Switch to Haiku

You can also set the default in ~/.claude/settings.json:

{
  "model": "anthropic--claude-opus-latest"
}

4. Verification — Never Claim Done Without Proof

Why This Matters

Claude is confident. It will say "tests pass" without running them. It will say "the bug is fixed" without verifying. This is the single most common failure mode — and it wastes your time when you discover the claim was wrong.

We enforce a simple rule: evidence before claims, always.

The Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

The Verification Gate

Before claiming ANY status (tests pass, bug fixed, feature works), follow this sequence:

  1. IDENTIFY — What command proves this claim?
    • Test suite: npm test, pytest, go test ./...
    • Build: npm run build
    • Lint: npm run lint
    • Manual: screenshot, curl output, log entry
  2. RUN — Execute the full command. Fresh. Complete. Not cached results from 10 minutes ago.
  3. READ — Full output. Check exit code. Count failures. Don't skim.
  4. VERIFY — Does the output actually confirm the claim?
    • "3 passed, 0 failed" → yes, tests pass
    • "47 passed, 2 failed" → NO, tests do NOT pass
  5. ONLY THEN — Make the claim, with evidence.
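As a sketch, the RUN/READ/VERIFY steps reduce to trusting only a fresh exit code. `verify_claim` is a hypothetical helper, and `true`/`false` stand in for your real proving command (e.g. `npm test`):

```shell
# Hypothetical helper: run the proving command FRESH and report based
# only on its exit code — never on a remembered or cached result.
verify_claim() {
  "$@"                         # RUN: execute the full command, fresh
  status=$?                    # READ: the exit code is the evidence
  if [ "$status" -eq 0 ]; then
    echo "VERIFIED: '$*' exited 0"
  else
    echo "NOT DONE: '$*' exited $status"
  fi
  return "$status"
}

# 'true' and 'false' stand in for a passing and a failing test suite.
verify_claim true
verify_claim false || :   # in a real session: stop here and fix first
```

The point of the sketch: the verdict is derived from the exit code of a run that just happened, so a stale "it passed earlier" can never leak into the claim.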

Common Failures

| What Claude says | The problem |
|---|---|
| "Tests should pass now" | Didn't run them |
| "All tests pass" (after seeing 2 failures) | Didn't read the output carefully |
| "Fixed the bug" | Didn't reproduce and verify the fix |
| "Build succeeds" (from 5 minutes ago) | Results are stale — changes happened since |

The Staff Engineer Test

Before marking work as done, ask yourself:

"Would a staff engineer approve this based on the evidence I have right now?"

If the answer is "they'd ask me to run the tests first" — run the tests first.

5. CLAUDE.md — Teaching Claude Your Project

Why This Matters

Every new Claude Code conversation starts with zero context about your project. Without guidance, Claude will guess at conventions, make assumptions about architecture, and potentially write code that doesn't match your patterns.

CLAUDE.md solves this. It's a file Claude reads at the start of every conversation — persistent instructions that teach it your project's rules, conventions, and constraints.

Where It Goes

| Location | Scope | Use For |
|---|---|---|
| ~/.claude/CLAUDE.md | Global (all projects) | Personal workflow rules, code standards you always follow |
| <project-root>/CLAUDE.md | Project-specific | Architecture decisions, tech stack, testing requirements, project conventions |

Both are loaded — global first, then project-specific.

What to Put In It

High-value entries:

  • Coding standards and conventions ("use signals, not observables")
  • Architecture decisions ("all API calls go through the gateway service")
  • Testing requirements ("every public method needs a unit test")
  • "Don't do X" rules ("never use eslint-disable as a fix")
  • Tech stack context ("NestJS backend, Angular frontend, HANA database")
  • Build/test commands ("use nx test <project> not npm test")

Don't bother with:

  • Things Claude already knows (language syntax, common patterns)
  • Things that change frequently (current sprint tickets, WIP features)
  • Long-form documentation (link to it instead)

Structure: The Tiered Approach

Structure your CLAUDE.md with the most important rules first. Claude pays more attention to content near the top.

Our Global CLAUDE.md (Reference)

This is what we use globally across all projects. Copy what's relevant to your setup:

# Global Development Rules

---

## TIER 1: HARD RULES — Check Before Every Action

These are non-negotiable. Violating any of these is a workflow failure.

### Pre-Action Checkpoint

Before touching ANY file, answer these two questions:

1. **Is the task unambiguous?** (clear bug with symptoms, explicit edit request with exact intent)
2. **Is it trivial?** (< 3 steps, no architectural decisions, single file)

If BOTH are true → proceed directly.
If EITHER is false → **enter plan mode first**.

- State assumptions explicitly. If uncertain, ask.
- If multiple interpretations exist, present them — don't pick silently.
- If a simpler approach exists, say so. Push back when warranted.
- If something is unclear, stop. Name what's confusing. Ask.

### Bug Fix vs. Ambiguous Task

| Situation | Action |
|-----------|--------|
| Bug report with clear symptoms | Fix autonomously — find root cause, fix it, verify |
| User says "add X" / "change Y" without exact spec | Research first, recommend approach, wait for "go ahead" |
| Vague intent ("make it better", "fix the flow") | Ask clarifying questions, do NOT edit |

When in doubt, default to research + recommendation.

### Never Speculate About Unread Code

- **Never make claims about code without opening and reading the files first**
- **Read before answering** — always investigate files before responding
- If you haven't read it, you don't know what it does

---

## TIER 2: Workflow — How to Execute

### Plan Mode
- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately
- Write detailed specs upfront to reduce ambiguity

### Task Management
1. Write plan to tasks/todo.md with checkable items
2. **Check in before starting implementation** — don't just start coding
3. Mark items complete as you go

### Subagent Strategy
- Use subagents liberally to keep main context window clean
- Offload research, exploration, and parallel analysis to subagents
- One task per subagent for focused execution

### Goal-Driven Execution
- Transform tasks into verifiable goals before starting:
  - "Add validation" → "Write tests for invalid inputs, then make them pass"
  - "Fix the bug" → "Write a test that reproduces it, then make it pass"
  - "Refactor X" → "Ensure tests pass before and after"

### Verification Before Done
- Never mark a task complete without proving it works
- Run tests, check logs, demonstrate correctness
- Ask yourself: "Would a staff engineer approve this?"

---

## TIER 3: Code Standards — What the Code Should Look Like

### Core Principles
- **Simplicity First**: Minimum code that solves the problem.
- **Iterate**: Enhance existing code unless fundamental changes are clearly justified
- **Focus**: Stick strictly to defined tasks. No features beyond what was asked.
- **No Laziness**: Find root causes. No temporary fixes.
- **Minimal Impact**: Changes should only touch what's necessary

### Surgical Changes
- Don't "improve" adjacent code, comments, or formatting
- Don't refactor things that aren't broken
- Match existing style, even if you'd do it differently
- Every changed line should trace directly to the user's request

### Code Style
- Clean imports only — no dynamic imports in functions, all imports at top level
- No one-time scripts committed to production repos
- Files under 300 lines; proactively refactor

Memory System

Claude also has a persistent memory system. When you correct it ("don't use mocks in these tests"), it saves that feedback to memory files in ~/.claude/projects/<project>/memory/. Next session, it remembers.

You don't need to manage this manually — just correct Claude when it does something wrong, and it will learn.

6. settings.json — Configuring Claude Code

What It Is

~/.claude/settings.json is Claude Code's global configuration file. It controls model selection, environment variables, experimental features, plugins, hooks, and UI behavior. Project-level overrides go in .claude/settings.json within the repo.

Location

| File | Scope |
|---|---|
| ~/.claude/settings.json | Global — applies to all projects |
| <project>/.claude/settings.json | Project — overrides global for this repo |
| <project>/.claude/settings.local.json | Local (gitignored) — personal overrides |

Key Settings

Environment Variables (env)

{
  "env": {
    "ANTHROPIC_MODEL": "anthropic--claude-opus-latest",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "anthropic--claude-sonnet-latest",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "anthropic--claude-haiku-latest",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "anthropic--claude-opus-latest",
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1",
    "CLAUDE_CODE_NO_FLICKER": "1"
  }
}

| Variable | What it does |
|---|---|
| ANTHROPIC_MODEL | Default model for new sessions. Format: anthropic--claude-{tier}-latest |
| ANTHROPIC_DEFAULT_SONNET_MODEL | Model used when you switch to Sonnet via /model sonnet |
| ANTHROPIC_DEFAULT_HAIKU_MODEL | Model used when you switch to Haiku via /model haiku |
| ANTHROPIC_DEFAULT_OPUS_MODEL | Model used when you switch to Opus via /model opus |
| CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS | Enables the agent teams feature (required for /team-* commands) |
| CLAUDE_CODE_NO_FLICKER | Eliminates screen flicker during rapid tool-call output updates |

Top-Level Settings

{
  "model": "opus[1m]",
  "alwaysThinkingEnabled": false,
  "showClearContextOnPlanAccept": true,
  "includeCoAuthoredBy": false,
  "editorMode": "vim",
  "gitAttribution": false,
  "skipDangerousModePermissionPrompt": true
}

| Setting | What it does | Recommended |
|---|---|---|
| model | Default model with optional token budget suffix (e.g., opus[1m] = Opus with 1M token context) | "opus[1m]" for deep work |
| alwaysThinkingEnabled | Forces extended thinking on every response (adds latency) | false — let Claude decide when thinking helps |
| showClearContextOnPlanAccept | Shows a "Clear context?" prompt when you accept a plan — lets you start implementation with a clean slate | true — fresh context for execution |
| includeCoAuthoredBy | Adds "Co-authored-by: Claude" to git commits | Personal preference |
| editorMode | Keybinding style for the input editor (vim, emacs, or default) | Your preference |
| gitAttribution | Adds Claude attribution metadata to commits | false unless required |
| skipDangerousModePermissionPrompt | Skips the confirmation when entering dangerous/bypass mode | Only if you know what you're doing |

Hooks

Hooks are shell commands that fire on specific Claude Code events. They enable integrations like ccc (tmux visibility), Telegram notifications, and custom automation.

{
  "hooks": {
    "Notification": [...],
    "PermissionRequest": [...],
    "PostToolUse": [...],
    "PreToolUse": [...],
    "Stop": [...],
    "UserPromptSubmit": [...],
    "SessionStart": [...],
    "SessionEnd": [...],
    "SubagentStart": [...],
    "PostToolUseFailure": [...],
    "PreCompact": [...]
  }
}

| Event | Fires when | Common use |
|---|---|---|
| Notification | Claude sends a notification (task done, needs input) | Telegram/Slack alerts, ccc integration |
| PermissionRequest | Claude asks for permission to run a tool | Auto-approve patterns, remote notifications |
| PreToolUse | Before any tool executes | Inject context, gate checks |
| PostToolUse | After a tool completes | Logging, visibility dashboards |
| Stop | Claude's turn ends | Notify that output is ready |
| UserPromptSubmit | User sends a message | Remote monitoring, session tracking |
| SessionStart / SessionEnd | Session lifecycle | Service registration, cleanup |
| SubagentStart | A subagent is spawned | Track parallel work |
| PostToolUseFailure | A tool call fails | Error alerting |
| PreCompact | Context is about to be compacted | Preserve state before compression |

Each hook entry has:

  • matcher — regex filter (empty = match all, "Bash" = only Bash tool, "Grep|Glob|Bash" = multiple)
  • type — always "command"
  • command — shell command to execute
  • timeout — milliseconds before kill (default: 10000)
  • async — true to fire without blocking Claude's execution
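Putting those fields together, a minimal Notification hook might look like this — a sketch of one plausible shape, where an empty matcher matches all notifications and `notify-send` is just an illustrative command (swap in your own script):

```json
{
  "hooks": {
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "notify-send 'Claude Code' 'Session needs attention'",
            "async": true
          }
        ]
      }
    ]
  }
}
```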

How to Edit

# Via skill (recommended — validates structure)
/update-config

# Manually
vim ~/.claude/settings.json

After editing manually, restart Claude Code for changes to take effect.

7. Built-in Commands — No Plugins Needed

Why This Matters

Claude Code ships with powerful slash commands out of the box. You don't need plugins for session management, context control, or basic navigation. Knowing these saves time and prevents lost work.

Session Management

| Command | What it does |
|---|---|
| /resume | Resume a previous conversation — pick from recent sessions and continue where you left off |
| /clear | Clear the current context window — start fresh without ending the session |
| /compact | Compress conversation history to free up context space while preserving key information |

Model & Configuration

| Command | What it does |
|---|---|
| /model | Show the current model or switch (/model opus, /model sonnet, /model haiku) |
| /config | Open interactive configuration (theme, model, permissions) |
| /permissions | View and manage tool permission settings |
| /cost | Show token usage and cost for the current session |

Navigation & Context

| Command | What it does |
|---|---|
| /help | Show all available commands and keybindings |
| /init | Generate a CLAUDE.md for the current project (analyzes the codebase) |
| /review | Review a pull request (built-in, no plugin needed) |
| /tasks | List background tasks (subagents, running commands) |

Workflow

| Command | What it does |
|---|---|
| /fast | Toggle fast mode (faster output, same model tier) |
| /vim | Toggle vim keybindings for the input editor |
| !command | Run a shell command in-session (output lands in conversation context) |
| # | Auto-incorporate learnings into CLAUDE.md |

The Most Important Ones

/resume — The single most useful command. If your session crashes, context gets compacted, or you come back the next day — /resume picks up exactly where you left off. No lost work.

/compact — When Claude starts forgetting earlier context or responses get shallow, compact before it auto-compresses. You control what gets preserved.

/clear — After finishing one task and starting another unrelated task. Fresh context = better reasoning.

8. Subagents & Agent Teams — Parallel Work Patterns

Why This Matters

Claude Code offers two models for parallel work: subagents (lightweight, fire-and-forget workers) and agent teams (full independent sessions coordinated through a shared task list). Choosing the right model for your situation is the difference between clean parallel execution and a coordination nightmare.

Both solve the same root problem — Claude's context window fills up. Every file read, every search result, every exploration output eats into that budget. Parallel workers keep your main session focused on coordination rather than drowning in details.


Subagents

Subagents are isolated Claude instances you spawn for a single focused task. They get only the context you give them, do their job, report back a summary, then disappear. Your main session stays clean.

+-------------------------------------------------------+
|                   MAIN AGENT                          |
|          (orchestrator - stays clean)                 |
+-------------------------------------------------------+
|                                                       |
|  spawn --+--> [Subagent 1] --> summary --+            |
|          |                               |            |
|  spawn --+--> [Subagent 2] --> summary --+-> synth.   |
|          |                               |   results  |
|  spawn --+--> [Subagent 3] --> summary --+            |
|                                                       |
|  Each subagent:                                       |
|  - Gets ONLY what you pass it                         |
|  - Has its own isolated context                       |
|  - Cannot ask follow-up questions                     |
|  - Returns results, then disappears                   |
+-------------------------------------------------------+

When to use subagents:

| Scenario | Why subagents help |
|---|---|
| Research & exploration | "Find all files that handle auth" — results stay in the subagent, only the summary comes back |
| Parallel investigation | 3 test files failing for different reasons — dispatch 3 agents simultaneously |
| Focused implementation | "Implement this one function per spec" — the agent gets the spec + target file, nothing else |
| Code review | "Review this PR for security" — the agent reads the diff in isolation |

Rules:

  1. One task per subagent — "Research auth AND implement the fix" is two agents, not one.
  2. Give full context upfront — file paths, requirements, constraints, relevant code. They can't ask follow-up questions.
  3. You synthesize, they execute — subagents report findings; you decide what to do with them.

When NOT to use subagents:

  • Tightly coupled tasks where task B needs the exact output of task A
  • Tasks requiring your entire conversation history for context
  • Work where agents need to coordinate with each other mid-task

Agent Teams

Agent teams are full independent Claude Code sessions coordinated by a team-lead agent. Unlike subagents, teammates persist, communicate with each other, and share a task list. They're real parallel workers — not fire-and-forget.

+-----------------------------------------------------------+
|                     TEAM LEAD                             |
|    (decomposes work, assigns tasks, synthesizes)          |
+-----------------------------------------------------------+
|                                                           |
| +--------------+ +--------------+ +--------------+        |
| |  Teammate A  | |  Teammate B  | |  Teammate C  |        |
| |  (backend)   | |  (frontend)  | |  (tests)     |        |
| |              | |              | |              |        |
| |  owns:       | |  owns:       | |  owns:       |        |
| |  src/api/    | |  src/ui/     | |  tests/      |        |
| +------+-------+ +------+-------+ +------+-------+        |
|        |                |                |                |
|        +-------+--------+-------+--------+                |
|                |                |                         |
|                v                v                         |
|      +------------------------------------+               |
|      |        SHARED TASK LIST            |               |
|      |   (pending / in_progress / done)   |               |
|      |   + file ownership boundaries      |               |
|      |   + dependency tracking            |               |
|      +------------------------------------+               |
|                                                           |
| Communication:                                            |
| - Teammates message each other directly                   |
| - Team lead assigns/reassigns tasks                       |
| - Idle teammates can be woken with new work               |
| - Plan approval gates before risky changes                |
+-----------------------------------------------------------+

Key concepts:

  • File ownership — each teammate gets exclusive ownership of specific files. No two agents modify the same file. This prevents conflicts.
  • Shared task list — all teammates read/write tasks. They claim unblocked work, mark it done, and check for what's next.
  • Messaging — teammates communicate via SendMessage. The team lead coordinates, but peers can talk directly.
  • Plan approval — team lead can require approval before teammates execute risky changes.
  • Display modes — in tmux, each teammate appears as a visible pane. You watch them work in real-time.

When to use agent teams:

| Scenario | Why teams help |
|---|---|
| Multi-layer features | Backend + frontend + tests need to coordinate interface contracts |
| Competing-hypothesis debugging | 3 debuggers investigate different root causes simultaneously |
| Multi-dimensional code review | Security, performance, and architecture reviewers run in parallel |
| Large refactors | Multiple implementers handle different modules with file ownership |

Comparison

| Dimension | Subagents | Agent Teams |
|---|---|---|
| Context | Isolated — no shared state | Shared task list, can message each other |
| Communication | One-way: main → subagent → summary | Bidirectional: teammates ↔ teammates ↔ lead |
| Coordination | You synthesize manually | Team lead orchestrates automatically |
| Lifespan | Single task, then gone | Persist across multiple tasks |
| Best for | Research, focused implementation, parallel investigation | Cross-layer features, coordinated refactors, multi-step debugging |
| Token cost | Low — minimal overhead | Higher — team infrastructure + messaging |
| Setup | Zero — just spawn | Requires CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 + plugin |

Decision Flowchart

+---------------------------------------------------+
|            Do workers need to                     |
|            talk to each other?                    |
|            YES / NO                               |
|             |      |                              |
|             v      v                              |
| +-------------------+ +-------------------+       |
| | Do they need      | | SUBAGENTS         |       |
| | file ownership    | |                   |       |
| | boundaries?       | | One task each,    |       |
| +--------+----------+ | report back,      |       |
|    YES / NO           | disappear.        |       |
|     |      |          +-------------------+       |
|     v      v                                      |
| +---------------+ +---------------+               |
| | AGENT TEAMS   | | SEQUENTIAL    |               |
| |               | | SUBAGENTS     |               |
| | Full parallel | |               |               |
| | coordination  | | Chain outputs |               |
| | w/ ownership  | | manually      |               |
| +---------------+ +---------------+               |
+---------------------------------------------------+

The Skills

Subagents: Use superpowers:dispatching-parallel-agents when you have 2+ independent problems. It handles spawning, parallelism, and result collection.

Agent Teams (requires agent-teams plugin):

# Add the marketplace
/plugin marketplace add wshobson/agents

# Install the plugin
/plugin install agent-teams@claude-code-workflows

Also requires in ~/.claude/settings.json:

{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}

| Command | Purpose |
|---|---|
| /team-spawn | Spawn from presets (review, debug, feature, fullstack, research, security, migration) |
| /team-feature | Parallel feature development with file ownership |
| /team-debug | Competing-hypothesis debugging |
| /team-review | Multi-dimensional code review |
| /team-delegate | Task delegation dashboard |
| /team-status | Check team progress |
| /team-shutdown | Graceful cleanup |

Git Worktrees — File Isolation for Parallel Work

Subagents and agent teams coordinate work. Worktrees isolate files. Without worktrees, two parallel Claude sessions editing the same repo will overwrite each other's changes. A worktree gives each session its own working directory and branch while sharing the same Git history.

+--------------------------------------------------------+
|                   YOUR REPOSITORY                      |
+--------------------------------------------------------+
|                                                        |
| main checkout (your terminal)                          |
| +-- .claude/worktrees/                                 |
| |   +-- feature-auth/ <- session 1 (own branch)        |
| |   +-- bugfix-123/   <- session 2 (own branch)        |
| |   +-- pr-456/       <- session 3 (PR branch)         |
| |                                                      |
| All share the same .git - commits, remotes, history    |
| Each has its own working directory - no conflicts      |
+--------------------------------------------------------+

Starting a Worktree Session

# Named worktree — creates branch worktree-feature-auth
claude --worktree feature-auth

# Auto-named (e.g. bright-running-fox)
claude --worktree

# From a PR (fetches pull/<n>/head)
claude --worktree "#1234"

# During an existing session — ask Claude
"Work in a worktree for this feature"

Base Branch Configuration

By default, worktrees branch from origin/HEAD (clean remote state). To branch from your current local HEAD instead (useful when subagents need in-progress work):

// ~/.claude/settings.json
{
  "worktree": {
    "baseRef": "head"
  }
}

Worktrees + Subagents

Add isolation: worktree to a custom subagent's frontmatter — each invocation gets a temporary worktree that auto-cleans when done:

---
name: safe-refactorer
description: Refactors code in an isolated worktree
isolation: worktree
model: sonnet
---

Or ask Claude mid-session: "use worktrees for your agents."

Worktrees + Multiple Claude Sessions (Manual)

# Terminal 1 — feature work
git worktree add ../my-app-feature -b feature-notifications
cd ../my-app-feature && claude

# Terminal 2 — bug fix (parallel, isolated)
git worktree add ../my-app-bugfix -b fix-login-race
cd ../my-app-bugfix && claude

# Check status
git worktree list

# Clean up when done
git worktree remove ../my-app-feature
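
The manual commands above can be exercised end-to-end in a throwaway repository before trying them on real work. A minimal, self-contained sketch (all paths and branch names are illustrative):

```shell
# Create a scratch repo with one commit (throwaway; safe to delete)
base=$(mktemp -d)
git init -q -b main "$base/my-app"
cd "$base/my-app"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Two isolated worktrees, each on its own branch
git worktree add -q "$base/my-app-feature" -b feature-notifications
git worktree add -q "$base/my-app-bugfix" -b fix-login-race
git worktree list                        # main checkout + two worktrees

# Both share the same history...
git -C "$base/my-app-feature" log --oneline

# ...but edits in one never appear in the other
echo "wip" > "$base/my-app-feature/notes.txt"
ls -A "$base/my-app-bugfix"              # no notes.txt here

# Clean up (--force because the feature worktree has an untracked file)
git worktree remove --force "$base/my-app-feature"
git worktree remove "$base/my-app-bugfix"
```

Note that `git worktree remove` refuses to delete a worktree with uncommitted or untracked changes unless you pass `--force` — the same safety Claude's "keep or remove?" prompt gives you interactively.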

Copying Gitignored Files (.worktreeinclude)

Worktrees are fresh checkouts — .env and secrets won't be there. Add a .worktreeinclude file (uses .gitignore syntax) to auto-copy them:

# .worktreeinclude
.env
.env.local
config/secrets.json
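
Claude Code performs this copy for you when it creates the worktree. If you create a worktree by hand and want the same behavior, a rough shell equivalent (a sketch that handles plain file paths only, not the full .gitignore glob syntax the real feature supports; the setup lines just build a demo source tree):

```shell
# Demo setup: a source tree with gitignored secrets and a .worktreeinclude
src=$(mktemp -d)
dest=$(mktemp -d)                        # stands in for the new worktree
cd "$src"
mkdir -p config
echo "API_KEY=demo" > .env
echo "{}" > config/secrets.json
printf '.env\nconfig/secrets.json\n' > .worktreeinclude

# Copy each listed file into the worktree, preserving directory structure
while IFS= read -r path; do
  [ -f "$path" ] || continue             # skip missing files in this sketch
  mkdir -p "$dest/$(dirname "$path")"
  cp "$path" "$dest/$path"
done < .worktreeinclude

ls -A "$dest"                            # .env and config/ now present
```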

Cleanup

| Situation | What happens |
| --- | --- |
| No changes made | Worktree and branch auto-removed |
| Changes or commits exist | Claude prompts: keep or remove? |
| Non-interactive (-p flag) | Must clean up manually: git worktree remove <path> |
| Orphaned subagent worktrees | Auto-removed at startup after cleanupPeriodDays |

When to Use Worktrees

| Scenario | Use worktrees? |
| --- | --- |
| Multiple Claude sessions on same repo | Yes — prevents file collisions |
| Subagents that edit files | Yes — add isolation: worktree |
| Subagents that only read/search | No — read-only work doesn't conflict |
| Agent teams | Usually handled by file ownership boundaries, but worktrees add extra safety |
| Single session, single feature | No — just use a regular branch |
Tip: Add .claude/worktrees/ to your .gitignore so worktree contents don't appear as untracked files in your main checkout.

Part 2: Tooling & Skills Reference

9. Superpowers Plugin

What It Is

The discipline framework for Claude Code. Superpowers enforces structured workflows — it won't let you skip planning, skip tests, or claim something works without evidence. Think of it as guardrails that prevent the most common AI-assisted development mistakes.

Installation

/plugin install superpowers@claude-plugins-official

The Complete Skill Chain

| Skill | When to invoke | What it does |
| --- | --- | --- |
| superpowers:brainstorming | Before any creative/design work | Asks clarifying questions, proposes 2-3 approaches, writes a design spec |
| superpowers:writing-plans | After a spec is approved | Converts spec into bite-sized implementation tasks (2-5 min each) |
| superpowers:executing-plans | Ready to implement (new session) | Implements plan task-by-task with review checkpoints |
| superpowers:subagent-driven-development | Ready to implement (same session) | Dispatches fresh subagent per task + two-stage review (spec compliance + code quality) |
| superpowers:test-driven-development | During implementation | Enforces: write failing test → minimal code → pass. No exceptions. |
| superpowers:systematic-debugging | Any bug or unexpected behavior | Forces root cause investigation before any fix attempts |
| superpowers:dispatching-parallel-agents | 2+ independent problems | One agent per problem domain, working concurrently |
| superpowers:verification-before-completion | Before claiming done | Requires fresh test run + evidence before any success claims |
| superpowers:finishing-a-development-branch | All tasks done and verified | Verify tests → present options (merge/PR/cleanup) → execute |
| superpowers:using-git-worktrees | Need workspace isolation | Creates isolated git worktree for feature work |

Example: Full Workflow for "Add a New REST Endpoint"

1. /superpowers:brainstorming
   → Claude asks: what resource? what operations? auth needed?
   → Proposes: controller pattern vs. decorator pattern vs. generated
   → You pick one, Claude writes spec

2. You review spec, approve it

3. /superpowers:writing-plans
   → Produces 8 tasks: create route, add validation, write tests, etc.
   → Each task has exact file paths and code

4. You review plan, approve it

5. /superpowers:subagent-driven-development
   → Dispatches agents: Task 1 agent writes test → reviewer checks
   → Task 2 agent implements → reviewer checks
   → ...continues until all tasks done

6. /superpowers:verification-before-completion
   → Runs full test suite, confirms all pass

7. /superpowers:finishing-a-development-branch
   → Creates PR with standardized format

When to Use Which Execution Skill

| Situation | Skill | Why |
| --- | --- | --- |
| Many independent tasks in same session | subagent-driven-development | Fresh context per task, fast iteration |
| Want to review between every task | executing-plans | More human-in-the-loop control |
| Multiple unrelated failures to investigate | dispatching-parallel-agents | Concurrent investigation |

10. feature-dev:feature-dev

What It Is

A structured 7-phase feature development workflow with specialized agents. Unlike superpowers (where you compose skills manually), feature-dev is an all-in-one guided experience — it handles discovery, exploration, architecture, implementation, and review in a single flow.

Installation

/plugin install feature-dev@claude-plugins-official

The 7 Phases

| Phase | What happens | Agent spawned |
| --- | --- | --- |
| 1. Discovery | Clarifies what to build, identifies constraints | |
| 2. Codebase Exploration | Understands existing patterns and conventions | code-explorer |
| 3. Architecture Design | Designs the solution, proposes structure | code-architect |
| 4. Clarification | Asks remaining questions before implementation | |
| 5. Implementation | Builds the feature | |
| 6. Review | Quality check against conventions and best practices | code-reviewer |
| 7. Completion | Final verification | |

When to Use

  • Building a new feature in a codebase you don't fully know yet
  • You want a guided end-to-end workflow without manually invoking individual skills
  • The feature requires understanding existing patterns before implementing

How to Invoke

/feature-dev:feature-dev Add WebSocket support for real-time notifications

Or without arguments (it will ask you interactively):

/feature-dev:feature-dev

feature-dev vs. superpowers — When to Use Which

| Aspect | feature-dev | superpowers |
| --- | --- | --- |
| Control | Guided, opinionated flow | Modular, you compose the workflow |
| Best for | Unfamiliar codebases, new features | Any task, maximum flexibility |
| Agents | Built-in specialized agents | You choose agent strategy |
| Plan artifacts | Internal to the flow | Saved as reusable documents |
| Learning curve | Lower — just invoke and follow | Higher — need to know which skill when |

Use feature-dev when you want to be guided through building something new.
Use superpowers when you know exactly what workflow you need and want fine-grained control.

11. pr-review-toolkit:review-pr

What It Is

A comprehensive PR review system that dispatches 6 specialized agents in parallel — each focused on a different quality dimension. You get back structured findings with severity ratings and file:line citations.

Installation

/plugin install pr-review-toolkit@claude-plugins-official

The 6 Review Agents

| Agent | Focus | What it catches |
| --- | --- | --- |
| comment-analyzer | Comment accuracy | Outdated comments, misleading documentation, comment rot |
| pr-test-analyzer | Test coverage | Missing tests, weak assertions, untested edge cases |
| silent-failure-hunter | Error handling | Swallowed errors, empty catch blocks, inappropriate fallbacks |
| type-design-analyzer | Type design | Poor encapsulation, missing invariants, leaky abstractions |
| code-reviewer | Code quality | Style violations, logic errors, convention adherence |
| code-simplifier | Maintainability | Unnecessary complexity, opportunities to simplify |

How to Use

# Review the current PR (auto-detects branch and diff)
/pr-review-toolkit:review-pr

What You Get

Each agent reports findings in a structured format:

  • Severity — Critical / Warning / Suggestion
  • Location — Exact file:line reference
  • Finding — What's wrong
  • Recommendation — How to fix it

Workflow

  1. Finish your implementation
  2. Run /pr-review-toolkit:review-pr
  3. Review findings — focus on Critical and Warning items
  4. Fix critical issues
  5. Create/merge PR with confidence

When to Use

  • Before merging any significant PR (more than a few lines changed)
  • After implementing a feature, before requesting human review
  • When reviewing others' PRs — use it as a first pass to catch mechanical issues

12. Agent Teams

What It Is

Multi-agent orchestration — multiple Claude instances working simultaneously on different aspects of a task. One agent handles database changes while another handles the API layer while a third writes tests. They coordinate through a team-lead agent that manages file ownership and dependencies.

Installation

# Add the marketplace
/plugin marketplace add wshobson/agents

# Install the plugin
/plugin install agent-teams@claude-code-workflows

Prerequisites

Add to your ~/.claude/settings.json:

{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}

For best visibility, run Claude Code in a tmux session. Agents will appear as panes you can watch in real time.

Available Commands

| Command | Purpose | Example |
| --- | --- | --- |
| /team-spawn | Spawn from presets | /team-spawn review |
| /team-feature | Parallel feature development | /team-feature "Add RBAC" --team-size 3 --plan-first |
| /team-debug | Competing hypothesis debugging | /team-debug "API 500 on user creation" |
| /team-review | Multi-dimensional code review | /team-review |
| /team-delegate | Task delegation dashboard | /team-delegate |
| /team-status | Check progress | /team-status |
| /team-shutdown | Graceful cleanup | /team-shutdown |

Agent Roles

| Agent | Role | What it does |
| --- | --- | --- |
| team-lead | Orchestrator | Decomposes work, assigns file ownership, manages lifecycle, synthesizes results |
| team-implementer | Builder | Implements code within strict file ownership boundaries |
| team-reviewer | Reviewer | Reviews one assigned dimension (security, performance, architecture, testing, accessibility) |
| team-debugger | Investigator | Investigates one hypothesis, gathers evidence to confirm or falsify |

Key Concept: File Ownership

Each implementer gets exclusive ownership of specific files. No two agents modify the same file. This prevents merge conflicts and ensures clean parallel work.

Example: building "Role-Based Access Control" with 3 agents:

  • Agent 1 owns: src/auth/roles.ts, src/auth/guards.ts
  • Agent 2 owns: src/api/middleware/rbac.ts, src/api/decorators/roles.ts
  • Agent 3 owns: tests/auth/, tests/api/rbac.spec.ts
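
File ownership is a convention the team-lead enforces, but nothing stops you from double-checking it yourself. A quick, hypothetical sanity check that two ownership lists share no files (the list files and paths are illustrative, not part of the plugin):

```shell
cd "$(mktemp -d)"
# Hypothetical per-agent ownership lists, one path per line
printf 'src/auth/roles.ts\nsrc/auth/guards.ts\n' > agent1.txt
printf 'src/api/middleware/rbac.ts\nsrc/api/decorators/roles.ts\n' > agent2.txt

sort agent1.txt > a.sorted
sort agent2.txt > b.sorted
overlap=$(comm -12 a.sorted b.sorted)    # lines common to both lists

if [ -z "$overlap" ]; then
  echo "ownership OK: no shared files"
else
  echo "CONFLICT: $overlap"
fi
```

With the lists above, this prints "ownership OK: no shared files"; any path appearing in both lists would be flagged instead.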

Presets

Available presets for /team-spawn:

| Preset | Agents | Use case |
| --- | --- | --- |
| review | Multiple reviewers | Parallel code review across dimensions |
| debug | Multiple debuggers | Competing hypothesis investigation |
| feature | Lead + implementers | Parallel feature development |
| fullstack | Frontend + backend + test | Full-stack feature with coordination |
| research | Multiple explorers | Parallel codebase research |
| security | Security-focused reviewers | Comprehensive security audit |
| migration | Multiple implementers | Coordinated codebase migration |

Example: Debugging with Competing Hypotheses

/team-debug "API returns 500 on user creation"

What happens:

  1. Team-lead analyzes the bug, proposes 3 hypotheses:
    • H1: Database constraint violation
    • H2: Validation middleware rejecting payload
    • H3: Auth token expired mid-request
  2. Three debugger agents investigate in parallel
  3. Each gathers evidence (logs, code traces, test reproductions)
  4. Team-lead synthesizes: "H1 confirmed — unique constraint on email, no error handling"
  5. You get a root cause + suggested fix

Example: Parallel Feature Development

/team-feature "Add user notifications system" --team-size 3 --plan-first

What happens:

  1. Team-lead decomposes into 3 streams:
    • Stream 1: Database schema + service layer
    • Stream 2: API endpoints + WebSocket handler
    • Stream 3: Frontend notification component + tests
  2. Each implementer works on their assigned files
  3. Team-lead coordinates interface contracts between streams
  4. Result: fully implemented feature from all 3 agents

13. Graphify

📦 GitHub Repository

What It Is

Turns any folder of files — code, docs, papers, images, notes — into a navigable knowledge graph with community detection, an honest audit trail, and interactive visualization. Think of it as "make this codebase searchable and connected" in one command.

Why Use It (What Claude Alone Can't Do)

  1. Persistent graph — relationships stored in graphify-out/graph.json survive across sessions. Ask questions weeks later without re-reading everything.
  2. Honest audit trail — every edge is tagged EXTRACTED (found in source), INFERRED (reasoned about), or AMBIGUOUS. You know what was found vs. invented.
  3. Community detection — automatically finds clusters of related concepts you might not have noticed.

Common Commands

# Full pipeline on current directory
/graphify

# Full pipeline on specific path
/graphify ~/projects/my-app

# Deep mode — thorough extraction, richer relationships
/graphify ~/projects/my-app --mode deep

# Incremental update — only process new/changed files
/graphify ~/projects/my-app --update

# Build a wiki agents can crawl
/graphify ~/projects/my-app --wiki

# Write to Obsidian vault
/graphify ~/projects/my-app --obsidian

# Query the graph (BFS — broad context)
/graphify query "How does authentication work?"

# Query the graph (DFS — trace one specific path)
/graphify query "What calls the payment service?" --dfs

# Find shortest path between two concepts
/graphify path "AuthModule" "Database"

# Get plain-language explanation of a concept
/graphify explain "SwinTransformer"

What It Produces

| Output | Location | Use |
| --- | --- | --- |
| Interactive HTML graph | graphify-out/index.html | Visual exploration in browser |
| Graph data (JSON) | graphify-out/graph.json | Programmatic access, agent queries |
| Report | graphify-out/GRAPH_REPORT.md | God nodes, communities, key relationships |

When to Use

  • Onboarding to a new codebase — run /graphify --mode deep and explore the graph
  • Understanding dependencies — "what depends on this module?"
  • Finding hidden connections — community detection reveals unexpected coupling
  • Building a knowledge base — drop papers, notes, and code into one folder
  • Before refactoring — understand what's connected before you change it

14. Chrome CDP

📦 GitHub Repository

What It Is

A lightweight Chrome DevTools Protocol skill that lets Claude interact with your local Chrome browser — list tabs, take screenshots, read accessibility trees, run JavaScript, click elements. No Puppeteer dependency, works with 100+ tabs, instant connection.

Prerequisites

  1. Open Chrome
  2. Go to chrome://inspect/#remote-debugging
  3. Toggle the switch to enable remote debugging

Key Commands

# List all open tabs (shows targetId prefix for each)
scripts/cdp.mjs list

# Take a screenshot of a specific tab
scripts/cdp.mjs shot <targetId> [output-file]

# Get accessibility tree snapshot (great for understanding page structure)
scripts/cdp.mjs snap <targetId>

# Run JavaScript in the page context
scripts/cdp.mjs eval <targetId> "document.title"
scripts/cdp.mjs eval <targetId> "document.querySelector('.btn').textContent"

# Click at coordinates
scripts/cdp.mjs click <targetId> 150 300

# Navigate to a URL
scripts/cdp.mjs nav <targetId> "https://example.com"

The <targetId> is a prefix from the list output (e.g., 6BE827FA).

When to Use

| Scenario | How CDP helps |
| --- | --- |
| Testing UI changes | Take screenshot, verify layout without switching windows |
| Debugging frontend | Inspect accessibility tree, run JS to check state |
| Extracting data | Pull content from web apps that don't have APIs |
| Verifying visual regressions | Screenshot before/after comparison |

15. tmux & ccc — Parallel Agent Visibility

Why This Matters

When Claude spawns subagents or agent teams, they run as separate processes. Without tmux, they're invisible — you can't see what they're doing until they report back. With tmux, each agent gets its own pane. You watch them work in real time, catch issues early, and understand what's happening.

The Pattern

Start Claude Code inside a tmux session → agents spawn as new panes in the same session → you see everything.

ccc (Claude Code Companion)

A companion tool that wraps Claude Code in tmux. The primary use case is running Claude in tmux so agent teams get visible panes.

What it does:

  • Launches Claude Code inside tmux sessions automatically
  • Hook integration (handles Claude's notifications and permissions)
  • Session management (start, continue, kill)
  • Optional: Telegram bot for remote control (not required)

Setup:

# 1. Install ccc (Go binary — follow instructions at the repo)
# https://github.com/kidandcat/ccc

# 2. Setup without Telegram (tmux-only mode)
ccc setup

# 3. Verify everything works
ccc doctor

ccc doctor checks: tmux installed, claude CLI found, config exists, hooks installed.
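
Under the hood those are ordinary "is it installed / does it exist" checks. A rough, illustrative shell equivalent (not ccc's actual implementation — just the shape of what it verifies):

```shell
# Report which prerequisites are present (informational; always exits 0)
for cmd in tmux claude; do
  if command -v "$cmd" >/dev/null 2>&1; then
    line="ok: $cmd found"
  else
    line="missing: $cmd"
  fi
  echo "$line"
done
if [ -f "$HOME/.claude/settings.json" ]; then
  echo "ok: config exists"
else
  echo "missing: ~/.claude/settings.json"
fi
```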

Daily usage:

# Start a new session in the current directory
ccc

# Continue previous session (preserves conversation history)
ccc -c

Why This Matters for Agent Teams

| Without tmux | With tmux (via ccc) |
| --- | --- |
| Agents run invisibly | Each agent gets a visible pane |
| You wait for the final report | You watch progress in real time |
| Hard to catch early mistakes | See issues as they happen |
| No sense of what's happening | Full visibility into parallel work |

This is especially valuable when using /team-feature or /team-debug — you can watch 3-4 agents working simultaneously, each in their own pane.
