The Task tool delegates work to sub-agents. Getting the invocation right means understanding three required inputs, the response format the coordinator receives back, and the surprisingly common bug where the coordinator ignores sub-agent results.
Three required input parameters
description — a short 3-5 word summary identifying the task. “review auth security,” “search academic papers,” “run tests.” This is for logging and monitoring, not for the sub-agent. When reviewing logs with dozens of delegations, a concise description lets you identify what was delegated at a glance. Don’t write a paragraph. Don’t copy the prompt. Don’t leave it empty or generic (“task”).
prompt — the detailed task instructions with all context the sub-agent needs. This is where you include curated context (see K1.2.2), specific requirements, and expected output format. The prompt is the sub-agent’s primary input.
subagent_type — the string key matching an agent in the agents configuration dictionary. If you configured agents={"code-reviewer": AgentDefinition(...)}, then subagent_type="code-reviewer".
All three are required. Omitting any one causes a validation error — the SDK does not infer missing parameters from context, and does not auto-select agents.
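The shape of a valid invocation can be sketched as a plain dict. The `validate_task_input` helper below is hypothetical, written only to illustrate the validation behavior described above; it is not part of the SDK.

```python
# Hypothetical helper illustrating the three required parameters.
REQUIRED = ("description", "prompt", "subagent_type")

def validate_task_input(tool_input: dict) -> None:
    """Raise if any required Task tool parameter is missing or empty."""
    missing = [key for key in REQUIRED if not tool_input.get(key)]
    if missing:
        raise ValueError(f"Task tool input missing: {', '.join(missing)}")

validate_task_input({
    "description": "review auth security",   # 3-5 word summary, for logs
    "prompt": "Review src/auth.py for injection and session-handling "
              "issues. Report each finding with file, line, severity.",
    "subagent_type": "code-reviewer",        # key in the agents dict
})
```

Omitting any field, e.g. passing only `description` and `prompt`, raises immediately rather than falling back to a default agent.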
Dynamic subagent_type selection
The subagent_type is a regular string parameter that the coordinator model can set dynamically at runtime. A research coordinator with agents ["academic-researcher", "web-researcher", "financial-analyst"] can examine each query and choose the most appropriate specialist — routing academic queries to academic-researcher and financial queries to financial-analyst.
This is not hardcoded at development time. The model reads agent descriptions and selects the best match for each task, just as it selects tools by description. Specialized agents outperform generalists on domain-specific tasks.
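To make the runtime selection concrete, here is an illustration only: in the real system the coordinator model reads agent descriptions and chooses, just as it chooses tools. The deterministic keyword matcher below is a stand-in for that model-driven choice, showing that `subagent_type` is simply a string decided per task.

```python
# Stand-in for model-driven agent selection. Descriptions are invented
# for illustration; the real choice is made by the coordinator model.
AGENTS = {
    "academic-researcher": "peer-reviewed papers, citations, arxiv",
    "web-researcher": "general web search, news, blogs",
    "financial-analyst": "earnings, filings, market data",
}

def pick_subagent(query: str) -> str:
    """Crude keyword overlap standing in for description-based routing."""
    q = query.lower()
    scores = {
        name: sum(term in q for term in desc.split(", "))
        for name, desc in AGENTS.items()
    }
    return max(scores, key=scores.get)

pick_subagent("Summarize recent arxiv papers on retrieval")
# -> "academic-researcher"
```

Each query yields a different `subagent_type` string at runtime; nothing about the routing is fixed at development time.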
Task tool response format
The Task tool returns four fields:
{
  "result": "Customer has a $50 credit from return #RET-456",
  "usage": {"prompt_tokens": 1200, "completion_tokens": 450},
  "total_cost_usd": 0.003,
  "duration_ms": 8000
}
result — the sub-agent’s output text. This is what the coordinator should incorporate into its own reasoning and response.
usage — token counts for the delegation. Useful for prompt optimization.
total_cost_usd — the dollar cost of this delegation. Critical for cost monitoring.
duration_ms — execution time. Critical for performance monitoring and timeout tuning.
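A minimal sketch of consuming these four fields, using the example response above; the 10-second threshold is an assumed value, not an SDK default:

```python
# Handling a Task tool response (field names from the example above).
response = {
    "result": "Customer has a $50 credit from return #RET-456",
    "usage": {"prompt_tokens": 1200, "completion_tokens": 450},
    "total_cost_usd": 0.003,
    "duration_ms": 8000,
}

answer = response["result"]                # feed into coordinator reasoning
tokens = sum(response["usage"].values())   # total tokens for this delegation
if response["duration_ms"] > 10_000:       # assumed alerting threshold
    print("slow delegation; revisit timeout settings")
```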
The ignored-result bug
A coordinator delegates a billing investigation. The Task tool returns result: "Customer has a $50 credit". The coordinator responds: “I was unable to find any billing credits.” Why?
The coordinator’s prompt had no instructions about how to handle Task tool responses. Without guidance to incorporate the result field into its reasoning, the coordinator defaults to its own (uninformed) answer. Fix: add explicit instructions in the coordinator’s prompt about reading and using Task tool results.
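One way to apply the fix is to append explicit result-handling rules to the coordinator's system prompt. The wording below is illustrative, not canonical:

```python
# Hypothetical result-handling instructions for a coordinator prompt.
COORDINATOR_PROMPT = """\
You coordinate specialist sub-agents via the Task tool.

When a Task tool call returns:
1. Read the `result` field in full before responding.
2. Treat `result` as the authoritative findings for that sub-task.
3. Base your answer on those findings; never substitute your own
   assumptions when a sub-agent has already investigated.
4. If `result` is empty or inconclusive, say so explicitly instead
   of guessing.
"""
```

With rules like these in place, the billing coordinator above would report the $50 credit instead of claiming it found nothing.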
Response metadata for operations
The non-result fields are not just for billing reports:
- total_cost_usd per delegation, aggregated by subagent_type → identifies which agent types drive the most spend. One system found API costs grew 3x while volume only grew 1.5x — the image extractor at $0.05/invocation was 25x more expensive than other agents.
- duration_ms per delegation → identifies performance bottlenecks. An image extractor averaging 25,000ms consumed 84% of a 30-second SLA budget. The PDF and text extractors combined for only 4,500ms.
- usage token counts → guides prompt optimization. An agent using 8,000 prompt tokens per invocation may have overly verbose context injection.
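The aggregations above can be sketched over a log of Task responses. The log format here is assumed: each record pairs the response metadata with the subagent_type that was invoked; the figures mirror the extractor example.

```python
from collections import defaultdict

# Assumed log format: one record per delegation, tagged with its agent type.
delegations = [
    {"subagent_type": "image-extractor", "total_cost_usd": 0.05,  "duration_ms": 25_000},
    {"subagent_type": "pdf-extractor",   "total_cost_usd": 0.002, "duration_ms": 3_000},
    {"subagent_type": "text-extractor",  "total_cost_usd": 0.001, "duration_ms": 1_500},
]

cost_by_agent = defaultdict(float)
time_by_agent = defaultdict(int)
for d in delegations:
    cost_by_agent[d["subagent_type"]] += d["total_cost_usd"]
    time_by_agent[d["subagent_type"]] += d["duration_ms"]

# Flag agents consuming over half of a 30-second SLA budget.
SLA_MS = 30_000
hogs = {a: t / SLA_MS for a, t in time_by_agent.items() if t / SLA_MS > 0.5}
```

Here `hogs` surfaces the image extractor (over 80% of the SLA budget), while `cost_by_agent` makes the 25x cost gap visible without waiting for the monthly bill.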
Use all four fields: result for decisions, duration_ms for timeout tuning, total_cost_usd for budget alerts, usage for prompt optimization.
One-liner: The Task tool takes description (3-5 word summary), prompt (detailed instructions), and subagent_type (dynamically selectable agent key) — and returns result, usage, cost, and duration metadata that the coordinator must be explicitly prompted to use.