Sub-agents are independent sessions. They do not inherit the coordinator’s conversation history, tool results, or any context from prior turns. If the coordinator doesn’t explicitly include information in the sub-agent’s task prompt, the sub-agent doesn’t know it exists. This is the single most common source of multi-agent bugs.
The misconception that causes bugs
“The sub-agent will see what the coordinator already found.” No. Each sub-agent starts as a fresh session. The coordinator found 5 key papers through search? The sub-agent doesn’t know they exist unless the coordinator includes them in the task prompt. The coordinator established coding standards? The sub-agent will ignore them unless they’re in its prompt.
Classic symptom: a sub-agent duplicates work the coordinator already did (re-searching for papers, re-fetching customer data it already has). Root cause: the task prompt said "analyze the research findings" without including the actual findings.
What to pass and how
Explicit, structured, task-specific context. Not the coordinator’s entire 25,000-token history — only the facts relevant to the sub-agent’s task:
- Search sub-agent: query + scope (500 tokens)
- Analysis sub-agent: papers + criteria (3,000 tokens)
- Synthesis sub-agent: all findings (8,000 tokens)
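The per-task curation above can be sketched as a simple field-selection step: the coordinator keeps everything it has learned, but each sub-agent prompt gets only the slice its task needs. All names here (`coordinator_state`, `TASK_FIELDS`, `build_prompt`) are illustrative, not Agent SDK APIs.

```python
# Hypothetical sketch: per-task context curation before dispatching a sub-agent.
coordinator_state = {
    "query": "LLM context compression",
    "scope": "papers from 2022 onward",
    "papers": ["Paper A", "Paper B", "Paper C"],
    "criteria": "methodology rigor, reproducibility",
    "findings": {"Paper A": "strong results", "Paper B": "weak baseline"},
}

# Each task type maps to the only fields that task needs.
TASK_FIELDS = {
    "search": ["query", "scope"],
    "analysis": ["papers", "criteria"],
    "synthesis": ["papers", "findings"],
}

def build_prompt(task: str, instruction: str) -> str:
    # Select the task-relevant slice of coordinator state.
    context = {k: coordinator_state[k] for k in TASK_FIELDS[task]}
    lines = [f"{k}: {v}" for k, v in context.items()]
    return instruction + "\n\nContext:\n" + "\n".join(lines)

print(build_prompt("analysis", "Evaluate each paper against the criteria."))
```

The analysis prompt carries the papers and criteria but not the search query, which that sub-agent never needs.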
One production system was passing 25,000 tokens to each of 3 sub-agents (75,000 total) when only 11,500 tokens were actually needed across all three. Task-specific curation cut tokens by 85%.
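The arithmetic behind that claim, using the numbers from the text:

```python
# Token math for the production example above.
full_history = 25_000
num_subagents = 3
naive_total = full_history * num_subagents   # 75,000 tokens: full history to each
curated_total = 500 + 3_000 + 8_000          # 11,500 tokens: task-specific slices
savings = 1 - curated_total / naive_total
print(f"naive={naive_total} curated={curated_total} savings={savings:.0%}")
# → naive=75000 curated=11500 savings=85%
```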
Quality correlates with context completeness
Data from a customer support system:
- Billing sub-agent: 92% quality (received full customer + order context)
- Shipping sub-agent: 71% quality (received only order ID, missing address and delivery preferences)
- Account sub-agent: 65% quality (received only customer name, missing account tier and history)
Same architecture, same model, different context completeness → different quality. The fix: enrich the coordinator’s context passing for shipping (add address + preferences) and account (add tier + history).
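The fix amounts to widening the context dict the coordinator builds for the under-performing sub-agents. A minimal before/after sketch, with illustrative field names (not the real system's schema):

```python
# Coordinator-side data the support system already has.
customer = {
    "name": "Jane Doe",
    "tier": "premium",
    "history": ["order 1001", "order 1002"],
    "address": "123 Main St",
    "delivery_prefs": "leave at door",
}
order = {"id": "1002", "items": ["widget"], "total": 19.99}

# Before: shipping got only the order ID; account got only the name.
shipping_ctx_before = {"order_id": order["id"]}
account_ctx_before = {"name": customer["name"]}

# After: include the fields each task actually needs.
shipping_ctx_after = {
    "order_id": order["id"],
    "address": customer["address"],
    "delivery_prefs": customer["delivery_prefs"],
}
account_ctx_after = {
    "name": customer["name"],
    "tier": customer["tier"],
    "history": customer["history"],
}
```

Note this is still curation, not dumping: the shipping sub-agent gets the address and preferences it was missing, but not the account tier or order history.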
Include rules AND format
Two things commonly missing from sub-agent prompts:
- Domain rules: custom security rules, coding standards, business policies — anything the coordinator knows but the sub-agent needs
- Output format: how findings should be structured so the coordinator can aggregate them
Without domain rules, the sub-agent applies generic standards instead of project-specific ones. Without output format, the coordinator struggles to merge results from different sub-agents.
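One way to make both hard to forget is a prompt template with mandatory slots for rules and output format. The template structure and field names below are a sketch, not a prescribed SDK convention:

```python
# Hypothetical sub-agent prompt template: rules and output format are
# required slots, so .format() fails loudly if the coordinator omits them.
SUBAGENT_PROMPT = """\
Task: {task}

Context:
{context}

Domain rules (follow these, not generic defaults):
{rules}

Output format: respond with JSON only:
{{"summary": "...", "findings": ["..."], "confidence": 0.0}}
"""

prompt = SUBAGENT_PROMPT.format(
    task="Review the payment module for security issues.",
    context="Module: payments/charge.py (uses Stripe test keys)",
    rules="- Never log card numbers\n- Flag any use of eval()",
)
```

Fixing the output format as JSON with known keys is what lets the coordinator parse and merge results from many sub-agents mechanically instead of re-reading free-form prose.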
No shared state needed
The Agent SDK’s Task tool is designed for explicit context passing through prompts. Adding Redis, shared databases, or message queues introduces infrastructure complexity for a problem that prompt-based context passing already solves. There are no race conditions when each sub-agent receives its context in its prompt.
One-liner: Sub-agents are independent sessions with zero inherited context — the coordinator must curate and pass task-specific context in each prompt, and quality directly correlates with context completeness (92% vs 65% in production data).