The most common misunderstanding about `context: fork` is its scope. Fork does not create a sandbox. It does not isolate file operations. It isolates exactly one thing: the conversation context — what the model remembers.
A forked skill with Write access creates real files that persist in the project after the fork ends. The verbose analysis that led to those files stays in the fork. The files stay on disk.
What Fork Actually Does
When a skill runs with context: fork, Claude Code spins up a sub-agent with its own conversation context. The sub-agent performs the skill’s work — reading files, running analysis, generating output. When it finishes, only a summary returns to the main session. The detailed output stays in the fork’s context and is not available to the main session.
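Concretely, a forked skill is an ordinary skill file whose frontmatter opts into the separate context. A minimal sketch (the `name` and `description` fields are the usual skill frontmatter; the body wording is illustrative):

```markdown
---
name: analyze-codebase
description: Deep structural analysis of the repository
context: fork   # run in a sub-agent with its own conversation context
---

Explore the repository structure, map module dependencies, and
return a concise summary of the findings to the main session.
```

The sub-agent executes the body in its own context window; only its final summary crosses back to the main session.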
This means:
| Operation | Isolated? | Why |
|---|---|---|
| Conversation context | Yes | Fork is a separate context window |
| File reads | No | Reads the real filesystem |
| File writes | No | Writes to the real filesystem |
| Bash commands | No | Executes in the real shell |
A skill that explores the codebase (3,000 tokens of analysis) and then writes refactored code: the analysis stays in the fork, the refactored files persist on disk. Both requirements are met with one forked skill — no need to split it into separate explore and modify skills.
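A sketch of such a dual-purpose skill, assuming the skill-file format above (all names illustrative):

```markdown
---
name: refactor-module
description: Analyze and refactor a module in one pass
context: fork
---

1. Read the target module and analyze its structure. This verbose
   analysis stays in the fork's context.
2. Write the refactored files with the Write tool. These writes hit
   the real filesystem and persist after the fork ends.
3. Return a summary listing the files changed and why.
```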
The Tradeoff: Context Efficiency vs Follow-Up Questions
Before fork: a codebase analysis skill consumes 65% of main context. Only 35% remains for subsequent work.
After fork: the skill’s summary consumes 5% of main context. 95% remains for work.
But developers report they can no longer ask detailed follow-up questions. “Tell me more about the vulnerability in auth.ts” gets “I don’t have information about that module” — because the detailed findings exist only in the fork’s context, which the main session cannot access.
The Solution: Better Summaries
The tradeoff is not binary. A well-designed summary bridges both needs.
One team surveyed developers about their forked security scan:
- 70%: Summary too vague — “Found 3 vulnerabilities” without details
- 20%: Summary about right — key findings with file locations
- 10%: Summary too detailed — defeats the purpose of forking
The 20% who said “about right” describe the target format. Fix the 70% by adding instructions to the skill body that specify what the summary must include:
- Finding count and severity
- Affected file paths
- Brief remediation hints
The summary becomes actionable without being verbose. Developers can follow up on summary-level details (“Fix the high-severity issue in auth.ts”) without needing the full analysis in context.
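One way to encode these requirements in the skill body (the wording is illustrative, not a fixed syntax):

```markdown
## Summary requirements

When you finish, return a summary that includes:

- The number of findings, grouped by severity
- The file path and approximate line range of each finding
- A one-line remediation hint per finding

Do not include full code excerpts or your exploration notes.
```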
This is controlled by skill instructions, not fork configuration. Fork doesn’t have a “summary verbosity” setting. What the sub-agent returns depends on what the skill tells it to summarize.
When to Fork
Fork when:
- Output exceeds ~5,000 tokens and the main session only needs a summary
- Verbose exploration, brainstorming, or scanning where detailed output is intermediate work
- Skills that both explore (verbose) and modify (persistent) — fork handles both correctly
Do not fork when:
- The skill produces minimal output (200 tokens, a pass/fail result)
- Developers need to ask detailed follow-up questions about the output
- The skill establishes session context (architecture notes, conventions) that subsequent commands need
- Quick operations where fork overhead is disproportionate (2-second lint check)
The decision criterion is not just token count. A 200-token skill that sets up critical session context should not fork. A 4,000-token brainstorming skill where the developer picks one approach from eight should fork — the other seven approaches are noise once one is chosen.
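By contrast, a skill that exists to seed the main session with context should simply omit `context: fork`, so its output lands where subsequent commands can see it (sketch, names illustrative):

```markdown
---
name: load-conventions
description: Load project architecture notes and conventions
# no context: fork — the output must stay in the main conversation
---

Read ARCHITECTURE.md and CONVENTIONS.md, then state the key rules
this session should follow.
```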
Data: Before and After Fork
From a team that added fork to their codebase analysis skill:
| Metric | Before fork | After fork |
|---|---|---|
| Main context consumed | 65% | 5% |
| Context left for work | 35% | 95% |
| Follow-up detail available | Full | Summary only |
The 60-percentage-point context savings transformed the team’s workflow — subsequent tasks that previously failed from context exhaustion now had ample room. The summary quality problem was solved separately through better skill instructions.
/compact Is Not the Answer
After a verbose, non-forked skill fills the context, /compact compresses the conversation. This is a manual, lossy workaround:
- Must be run after every verbose skill invocation
- May discard useful context alongside skill output
- Developer must remember to do it
Fork is the architectural fix. It prevents the problem structurally — no manual step, no information loss in the main context, no developer discipline required.
One-liner: Fork isolates what Claude remembers (conversation context), not what it does (filesystem operations) — and the quality of the summary it returns depends on your skill instructions, not fork settings.