Most agentic loop failures come from three mistakes: using text content as a completion signal, using iteration limits as primary control, and failing to pass tool results back to the model. All three share one root cause: not trusting stop_reason as the authoritative control mechanism.
Anti-pattern 1: Text content as completion signal
The bug: if response.has_text_content: stop_loop = True
Why it breaks: text blocks and tool_use blocks coexist in the same response. The model regularly sends explanatory text (“I’ll check the billing records now…”) alongside tool call requests. Checking for text presence terminates the agent mid-task.
The mirror bug: if response.has_text_content: continue_loop = True — this loops forever because the final response also contains text: the closing summary arrives with stop_reason already end_turn, yet it still trips the text check and keeps the loop running.
The intermittent version: if has_text OR stop_reason == 'end_turn': stop — sometimes works (when end_turn happens to coincide with the correct stop point) and sometimes causes premature termination (when text accompanies tool calls). The OR logic masks the problem by occasionally producing the right behavior.
The fix: remove all text content checks. Use stop_reason exclusively.
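A minimal sketch of the wrong check next to the correct one. The SimulatedResponse class and its field names are a hypothetical stand-in that mirrors the response shape (content blocks plus stop_reason); it is a simulation, not the SDK object:

```python
from dataclasses import dataclass

@dataclass
class SimulatedResponse:
    content: list     # blocks like {"type": "text"} / {"type": "tool_use"}
    stop_reason: str  # "tool_use" or "end_turn"

def should_stop_buggy(response):
    # Anti-pattern: any text block is read as "done".
    return any(block["type"] == "text" for block in response.content)

def should_stop_correct(response):
    # Fix: stop_reason alone decides.
    return response.stop_reason == "end_turn"

# A mid-task response: explanatory text alongside a tool call.
mid_task = SimulatedResponse(
    content=[{"type": "text", "text": "I'll check the billing records now."},
             {"type": "tool_use", "name": "lookup_billing", "input": {}}],
    stop_reason="tool_use",
)

print(should_stop_buggy(mid_task))    # True -> terminates the agent mid-task
print(should_stop_correct(mid_task))  # False -> loop correctly continues
```

The buggy check stops on the very response shown in the anti-pattern above, because the explanatory text block satisfies it even though stop_reason is still tool_use.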
Anti-pattern 2: Iteration limit as primary control
The bug: max_iterations = 15 as the primary termination mechanism.
Why it breaks: complex tasks legitimately need 20, 30, or 50+ tool calls. An aggressive iteration limit cuts off the agent mid-work. One production system saw 7% of customer tickets terminated incomplete because the limit of 20 wasn’t enough for complex billing disputes.
The nuance: iteration limits are fine as a safety net (set generously at 100) to catch genuine infinite loops. They become an anti-pattern when they’re the primary termination mechanism — when normal successful completion relies on hitting the limit rather than the model signaling end_turn.
The fix: use stop_reason: "end_turn" for normal termination. Keep a generous iteration limit (100+) as a backstop for infinite loops only.
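The recommended control flow can be sketched as follows, with a scripted fake_model in place of the real API call (a hypothetical stand-in that issues 30 tool calls before signaling end_turn, i.e. more than a tight limit of 15 or 20 would allow):

```python
MAX_ITERATIONS = 100  # generous safety net for genuine infinite loops only

def fake_model(turn):
    # Hypothetical stand-in for the API: 30 tool calls, then end_turn.
    return {"stop_reason": "tool_use" if turn < 30 else "end_turn"}

def run_loop():
    for turn in range(MAX_ITERATIONS):
        response = fake_model(turn)
        if response["stop_reason"] == "end_turn":
            return ("completed", turn + 1)   # normal termination path
        # ... execute requested tools, append results, continue ...
    return ("aborted", MAX_ITERATIONS)       # backstop tripped: investigate

print(run_loop())  # -> ('completed', 31)
```

With max_iterations = 15 as primary control, this same task would be cut off mid-work; with the generous backstop, the model completes normally and the limit never fires.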
Anti-pattern 3: Not passing tool results
The bug: executing the tool but not appending the tool_result to the messages array before the next API call.
Why it breaks: the API is stateless. Without the tool_result in the history, the model’s next turn sees a conversation where it requested a tool but never received the output. Naturally, it requests the same tool again. Infinite loop.
How to spot it: the model consistently repeats the exact same tool call it already made. Not intermittently — every single time. If you see this pattern, check whether tool results are being appended to the messages array.
Related bug: sending only the latest tool_result without prior conversation history. The model receives the result but has no context about why it was requested or what the overall task is. Each turn becomes an isolated, context-free interaction.
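The bookkeeping both bugs break can be sketched in a few lines. The message shapes follow the tool-use format (an assistant tool_use block, then a user tool_result referencing its id); execute_tool and the tool name are hypothetical:

```python
def execute_tool(name, tool_input):
    # Hypothetical local executor: run the tool, return its output.
    return "42.17"

messages = [{"role": "user", "content": "What is the outstanding balance?"}]

# Suppose the model's response contained this content:
assistant_content = [
    {"type": "text", "text": "Checking the billing records."},
    {"type": "tool_use", "id": "toolu_01", "name": "lookup_billing", "input": {}},
]

# 1. Append the assistant turn exactly as received.
messages.append({"role": "assistant", "content": assistant_content})

# 2. Execute the tool and append its result as the next user turn,
#    referencing the tool_use id it answers.
result = execute_tool("lookup_billing", {})
messages.append({
    "role": "user",
    "content": [{"type": "tool_result", "tool_use_id": "toolu_01",
                 "content": result}],
})

# The NEXT request sends the full `messages` list, not just the result.
print(len(messages))  # 3 turns: request, tool call, tool result
```

Skip step 1 or 2 and the model's next turn sees an unanswered tool request and repeats it; send only the last message and it sees an answer with no question.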
The compound failure
These anti-patterns compound. A loop that checks text content AND has a tight iteration limit will either stop too early (text check triggers) or spin until the limit kills it (text check doesn’t trigger because the model sends tool_use without text). The fix for all three is the same: trust stop_reason as the sole authoritative loop control, send full history every request, and keep iteration limits as a generous safety net.
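Putting the three fixes together, a skeleton of the full loop. Here fake_api and run_tool are hypothetical stand-ins for the real API call and tool executor; the fake model requests a tool until a tool_result appears in the history, then signals end_turn:

```python
MAX_ITERATIONS = 100  # backstop only, never the primary control

def fake_api(messages):
    # Hypothetical stand-in for the model endpoint.
    got_result = any(
        isinstance(m["content"], list)
        and any(b.get("type") == "tool_result" for b in m["content"])
        for m in messages
    )
    if not got_result:
        return {"stop_reason": "tool_use",
                "content": [{"type": "tool_use", "id": "toolu_01",
                             "name": "lookup_billing", "input": {}}]}
    return {"stop_reason": "end_turn",
            "content": [{"type": "text", "text": "Balance is 42.17."}]}

def run_tool(block):
    return "42.17"  # hypothetical tool executor

def agent_loop(user_message):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(MAX_ITERATIONS):               # safety net, not control
        response = fake_api(messages)             # full history every request
        messages.append({"role": "assistant", "content": response["content"]})
        if response["stop_reason"] == "end_turn":  # the sole stop signal
            return response["content"][0]["text"]
        results = [{"type": "tool_result", "tool_use_id": b["id"],
                    "content": run_tool(b)}
                   for b in response["content"] if b["type"] == "tool_use"]
        messages.append({"role": "user", "content": results})
    raise RuntimeError("iteration backstop hit: likely an infinite loop")

print(agent_loop("What is the outstanding balance?"))  # Balance is 42.17.
```

Note what is absent: no check for text content anywhere, no tight iteration cap, and every tool result lands in the history before the next request.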
One-liner: The three killers: text content as completion signal (text coexists with tool calls), iteration limits as primary control (complex tasks need many calls), and missing tool results (stateless API loses context) — fix all three by trusting stop_reason.