Should the model decide what to do, or should you hardcode the path in advance? The answer depends on whether the task has one predictable path or many context-dependent variations.
Model-driven: let the model choose tools
The Agent SDK’s core design principle: provide Claude with tool descriptions and let it select the right tools based on context. The model examines the situation (customer inquiry type, PR characteristics, document format) and autonomously decides which tools to call and in what order.
This works because the model can assess context that no hardcoded tree can anticipate. A customer support agent sees a billing dispute that also involves a product return — it adapts its tool sequence to handle both. A code analysis agent sees a Rust PR and skips the Python linter. A research coordinator recognizes a fast-moving topic and prioritizes recent news over academic papers.
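The pattern can be sketched without the SDK itself. Here `model_select_tools` is a hypothetical stand-in for the model's judgment; in a real agent, Claude reads the tool descriptions plus the PR context and makes this decision itself:

```python
# Hypothetical sketch of model-driven tool selection (not the real Agent SDK API).
# Tools are described to the model; the model picks a subset based on context.

TOOLS = {
    "python_linter": "Lint Python source files for style and errors",
    "rust_clippy": "Run Clippy lints on Rust code",
    "security_scan": "Scan changed code for known vulnerability patterns",
}

def model_select_tools(pr_context: dict) -> list[str]:
    """Stand-in for the model's judgment call: which tools fit this PR?"""
    langs = pr_context["languages"]
    chosen = []
    if "python" in langs:
        chosen.append("python_linter")
    if "rust" in langs:
        chosen.append("rust_clippy")
    if pr_context.get("touches_code", True):
        chosen.append("security_scan")
    return chosen

# A Rust-only PR: the Python linter is never invoked.
print(model_select_tools({"languages": ["rust"], "touches_code": True}))
```

The point is not the `if` statements (a real model generalizes past them) but the shape: tool choice is an output of examining the input, not a constant.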
Decision trees: when the path is fixed
Hardcoded decision trees work for deterministic processes with no variation. Consider identity verification that must follow a strict 3-step regulatory sequence (name → DOB → security question): no exceptions, no judgment calls, and compliance penalties for deviation. This is exactly where a decision tree is correct: the path is known, fixed, and legally required.
The key question: is there ambiguity that requires judgment? If yes → model-driven. If no (the sequence is fixed and any deviation is a risk) → decision tree.
The data tells the story
A comparison between hardcoded extraction (Version A) and model-driven extraction (Version B):
- Standard invoices: 97% vs 96% — essentially equal
- Non-standard documents (handwritten, multi-language, unusual formats): 61% vs 89% — model-driven wins by 28 points
Decision trees match model-driven performance on predictable cases. They fall apart on non-standard cases where the hardcoded path doesn’t account for the variation. The 28-point gap is where adaptability matters.
The anti-pattern: running everything on everything
Hardcoding a fixed sequence that runs ALL tools on EVERY input regardless of context is the worst of both worlds. Running a Python linter on a Rust PR wastes resources. Running a security scanner on a docs-only change adds latency without value. This is the non-agentic automation script pattern — it ignores context entirely.
The right hybrid
Most real systems combine both approaches:
- Decision trees for fixed processes: API authentication sequences, rate limiting, data format validation, regulatory compliance steps
- Model-driven for open-ended processes: research strategy, source evaluation, code analysis decisions, customer issue resolution
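A minimal sketch of the split, with hypothetical guardrail checks (`valid_api_key`, `rate_limited`) and a `resolve` callback standing in for the model-driven agent:

```python
# Hybrid sketch: deterministic guardrails run first; the model handles
# only the open-ended part. All names here are illustrative.

def valid_api_key(request: dict) -> bool:
    return request.get("api_key") == "expected-key"

def rate_limited(request: dict) -> bool:
    return request.get("requests_this_minute", 0) > 60

def handle_request(request: dict, resolve) -> dict:
    # Decision-tree territory: fixed checks, no judgment involved.
    if not valid_api_key(request):
        return {"error": "unauthorized"}
    if rate_limited(request):
        return {"error": "rate_limited"}
    # Model-driven territory: open-ended issue resolution.
    return resolve(request)

result = handle_request(
    {"api_key": "expected-key", "requests_this_minute": 3, "issue": "billing dispute"},
    resolve=lambda req: {"resolution": f"model handled: {req['issue']}"},
)
print(result)
```

The guardrails never ask the model whether to run; the model never hardcodes how to resolve the issue. Each paradigm stays on its side of the boundary.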
Match the approach to the process type at design time. Don’t let the model decide at runtime which paradigm to use — that’s knowable in advance.
Transitioning from decision tree to model-driven
When converting a rigid pipeline (lint → test → build → deploy) to an intelligent agent, don’t replace everything at once. Start with a model-driven pre-analysis step that examines the change and recommends which steps to run, with human approval initially. This provides model intelligence while maintaining pipeline reliability during the transition.
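The transition can be sketched as a thin wrapper around the existing pipeline. Here `recommend_steps` is a hypothetical stand-in for the model's pre-analysis, and `approve` is the human gate:

```python
# Transitional sketch: the model recommends a subset of the fixed pipeline,
# a human approves, and a rejected recommendation falls back to the full run.

PIPELINE = ["lint", "test", "build", "deploy"]

def recommend_steps(change: dict) -> list[str]:
    """Stand-in for a model pre-analysis of the change."""
    if change.get("docs_only"):
        return ["lint"]  # no need to test/build/deploy a docs-only change
    return list(PIPELINE)

def run_pipeline(change: dict, approve) -> list[str]:
    recommended = recommend_steps(change)
    # Human approval gate: reliability is preserved because rejection
    # falls back to the original, fully hardcoded pipeline.
    return recommended if approve(recommended) else list(PIPELINE)

print(run_pipeline({"docs_only": True}, approve=lambda steps: True))
```

Once the recommendations earn trust, the approval gate can be relaxed; until then, the worst case is the pipeline you already had.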
One-liner: Use model-driven decisions for tasks with variability and ambiguity, decision trees for fixed deterministic sequences — and the data shows model-driven wins by 28 points on non-standard cases where adaptability matters.