When the model decides to call a tool, it doesn’t execute anything. It produces a tool_use content block inside its response — a structured request saying “please run this tool with these parameters.”
Three fields in every tool_use block
id — A unique identifier for this specific tool call (e.g., "toolu_abc123"). You’ll need this later when sending back the result.
name — Which tool the model wants called. Matches the name from your tool definition.
input — The parameters the model generated, conforming to the tool’s input_schema.
{
"type": "tool_use",
"id": "toolu_01A09q90qw90lq917835lq9",
"name": "get_weather",
"input": {"location": "San Francisco", "unit": "celsius"}
}
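A minimal Python sketch of the extraction step, using plain dicts in the same shape as the JSON above (the handler table and the get_weather implementation are hypothetical stand-ins, not part of any SDK):

```python
# Hypothetical handlers keyed by tool name (assumption for this example).
TOOL_HANDLERS = {
    "get_weather": lambda location, unit="celsius": f"18 degrees {unit} in {location}",
}

def handle_tool_use(block):
    """Pull id, name, and input out of a tool_use block and run the matching handler."""
    assert block["type"] == "tool_use"
    handler = TOOL_HANDLERS[block["name"]]
    result = handler(**block["input"])
    return block["id"], result  # keep the id so the result can be linked back

tool_use = {
    "type": "tool_use",
    "id": "toolu_01A09q90qw90lq917835lq9",
    "name": "get_weather",
    "input": {"location": "San Francisco", "unit": "celsius"},
}
call_id, result = handle_tool_use(tool_use)
```

Note that the block itself carries everything needed to execute: the name selects the handler, and the input unpacks directly into its parameters.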
The id is a linking mechanism
The id field exists for one purpose: connecting the request to its result. When you send back a tool_result, its tool_use_id must exactly match the id from the tool_use block. This pairing tells the model which result belongs to which request — critical when multiple tool calls happen in the same turn.
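The pairing can be sketched like this, with two tool calls in one turn. The tool_result shape mirrors what the Messages API expects in the next user message; the helper itself is illustrative:

```python
def make_tool_results(tool_use_blocks, executor):
    """Build a user message pairing each tool_use id with its own result."""
    content = []
    for block in tool_use_blocks:
        content.append({
            "type": "tool_result",
            "tool_use_id": block["id"],  # must exactly match the tool_use block's id
            "content": executor(block["name"], block["input"]),
        })
    return {"role": "user", "content": content}

# Two parallel calls: each result carries the id of the request it answers.
calls = [
    {"type": "tool_use", "id": "toolu_aaa", "name": "get_weather", "input": {"location": "Tokyo"}},
    {"type": "tool_use", "id": "toolu_bbb", "name": "get_weather", "input": {"location": "Paris"}},
]
msg = make_tool_results(calls, lambda name, inp: f"ran {name} for {inp['location']}")
```

Without the tool_use_id field, the model would have no way to tell which of the two weather readings answers which request.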
input is the instance, input_schema is the blueprint
The tool definition’s input_schema describes the structure — types, required fields, constraints. The tool_use block’s input is the actual instance — concrete values the model generated following that structure. The schema says “ticker must be a string.” The input says {"ticker": "AAPL"}.
Don’t modify the tool_use block in conversation history
When you pass the conversation back to the API on the next turn, the assistant message containing the tool_use block should remain unchanged. You can execute the tool with modified inputs if needed (for safety, sanitization, etc.), but the tool_use block stored in the conversation history should reflect what the model originally requested. Altering it creates a mismatch between what the model thinks it asked for and what the history shows.
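The distinction can be sketched as follows: sanitize what you execute, but store the assistant message exactly as received (the path-stripping sanitizer and read_file tool are hypothetical examples):

```python
def run_turn(assistant_message, execute, sanitize):
    """Execute tools with sanitized inputs; store the assistant message untouched."""
    history_entry = assistant_message  # appended to history exactly as the model sent it
    results = []
    for block in assistant_message["content"]:
        if block.get("type") == "tool_use":
            safe_input = sanitize(block["input"])  # modify only what you execute
            results.append({
                "type": "tool_result",
                "tool_use_id": block["id"],
                "content": execute(block["name"], safe_input),
            })
    return history_entry, {"role": "user", "content": results}

assistant = {
    "role": "assistant",
    "content": [{"type": "tool_use", "id": "toolu_xyz", "name": "read_file",
                 "input": {"path": "../etc/passwd"}}],
}
# Hypothetical sanitizer: strip leading path-traversal characters before executing.
stored, result_msg = run_turn(
    assistant,
    execute=lambda name, inp: f"{name}({inp['path']})",
    sanitize=lambda inp: {**inp, "path": inp["path"].lstrip("./")},
)
```

The stored history still shows the original "../etc/passwd" request, while the tool actually ran against the sanitized path, so the model's view of what it asked for stays accurate.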
One-liner: The tool_use block is a structured request from the model — extract name and input to execute the tool, and save the id to link the result back.