Using dynamic decomposition for everything sounds smart — “it can adapt when needed.” But on fixed-pipeline tasks, dynamic adds 30% latency overhead with zero quality improvement. Strategy matching — chaining for fixed, dynamic for exploratory — optimizes both efficiency and quality.
The data: strategy matching matters
A system uses dynamic decomposition for all tasks:
| Task type | Dynamic overhead | Quality gain | Volume |
|---|---|---|---|
| Fixed-pipeline (code review) | +30% latency | 0% | 60% |
| Exploratory (research) | minimal | +25% | 40% |
Total wasted overhead: 30% overhead × 60% of traffic = 18% of overall latency. Strategy matching eliminates this waste while preserving the 25% quality gain on exploratory tasks.
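The blended overhead is easy to verify with a few lines. This is a minimal check using the numbers from the table above; the variable names are illustrative, not from any real system.

```python
# Blended latency overhead when dynamic decomposition is applied to ALL tasks.
# Volumes are fractions of total traffic, overheads are relative latency cost
# (numbers taken from the table above).
fixed_volume, fixed_overhead = 0.60, 0.30          # fixed-pipeline: +30% latency
exploratory_volume, exploratory_overhead = 0.40, 0.0  # exploratory: ~no overhead

blended = fixed_volume * fixed_overhead + exploratory_volume * exploratory_overhead
print(f"{blended:.0%}")  # 18% of overall latency wasted on fixed-pipeline tasks
```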
Prompt chaining: fixed, predictable steps
Security scan → performance audit → style check → integration summary. Always the same steps, same order, same outputs feeding forward. Prompt chaining provides consistency, auditability, and efficiency.
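The chain above can be sketched in a few lines. This is a hedged sketch, not a real implementation: `call_llm` is a hypothetical stand-in for your model client, and the step prompts are placeholders.

```python
# Minimal prompt-chain sketch: fixed steps in a fixed order, each step
# consuming the previous step's output. `call_llm` is a hypothetical
# placeholder for an actual model client.
def call_llm(prompt: str) -> str:
    return f"[analysis of: {prompt[:40]}...]"  # stub response for illustration

STEPS = [
    "Security scan: list vulnerabilities in this diff:\n{input}",
    "Performance audit: given the security findings, flag hot paths:\n{input}",
    "Style check: given prior findings, note style issues:\n{input}",
    "Integration summary: merge all findings into one report:\n{input}",
]

def run_chain(diff: str) -> str:
    output = diff
    for template in STEPS:
        output = call_llm(template.format(input=output))  # output feeds forward
    return output
```

Because the step list is data, every run executes the same sequence, which is exactly the auditability property the text describes.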
A rigid chain that prevents mid-execution plan changes is correct for this task — the steps don’t need to change. Introducing dynamic decomposition here adds analysis overhead (the agent evaluates whether to adapt, never does) and reduces predictability (might unnecessarily deviate between runs).
Dynamic decomposition: discovery-dependent planning
“Understand this legacy codebase and identify risky areas.” The agent doesn’t know the structure upfront. It explores, discovers modules, identifies coupling, and adjusts its focus based on findings. A fixed plan can’t account for what the agent hasn’t discovered yet.
When a research agent finds a groundbreaking paper in step 1 of a 4-step chain, a rigid chain forces step 2 to proceed as originally planned — missing the paper’s implications. Dynamic decomposition lets the agent restructure its approach.
Hybrid: chaining + dynamic in one pipeline
A CI pipeline has two phases:
- Phase 1 (code review): fixed 4-step chain (security → performance → style → integration)
- Phase 2 (improvement suggestions): exploratory, depends on what review found
Use chaining for phase 1 (predictable, auditable). Switch to dynamic for phase 2 (adaptive, discovery-driven). Each phase gets the optimal strategy.
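The two-phase pipeline can be sketched as one function per phase. All helpers here are hypothetical placeholders; the point is the structural split, not the internals.

```python
# Hybrid sketch: a fixed, auditable chain for review (phase 1), then adaptive
# exploration seeded by the review output (phase 2). `call_llm` is a stub.
def call_llm(prompt: str) -> str:
    return f"[{prompt[:30]}...]"  # placeholder model response

def run_review(diff: str) -> str:
    output = diff
    for step in ("security scan", "performance audit",
                 "style check", "integration summary"):
        output = call_llm(f"{step}: {output}")   # phase 1: same steps every run
    return output

def run_suggestions(review: str, max_rounds: int = 3) -> list[str]:
    suggestions: list[str] = []
    for _ in range(max_rounds):                  # phase 2: each round can build
        suggestions.append(call_llm(             # on what earlier rounds found
            f"suggest one improvement given {review} and {suggestions}"))
    return suggestions

def ci_pipeline(diff: str) -> list[str]:
    return run_suggestions(run_review(diff))     # chain feeds the dynamic phase
```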
Semi-structured tasks: chain then adapt
Debugging follows a pattern: first 2 steps are fixed (reproduce → isolate), then the approach adapts based on the root cause found. Chain the fixed steps, then switch to dynamic for the adaptive phase.
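A sketch of that handoff, assuming hypothetical `reproduce`/`isolate` helpers and a made-up pair of root causes; the shape to notice is fixed steps followed by a runtime-selected branch.

```python
# Semi-structured sketch for debugging: two fixed steps, then an adaptive
# phase keyed on the root cause found. Root causes and handlers are
# illustrative, not exhaustive.
def reproduce(bug: str) -> str:
    return f"repro steps for {bug}"           # fixed step 1: always runs

def isolate(repro: str) -> str:               # fixed step 2: always runs
    return "race condition" if "concurrency" in repro else "logic error"

ADAPTIVE_HANDLERS = {                          # adaptive phase: chosen at runtime
    "race condition": lambda: "add locking / audit shared state",
    "logic error": lambda: "trace data flow / add assertions",
}

def debug(bug: str) -> str:
    repro = reproduce(bug)
    root_cause = isolate(repro)
    return ADAPTIVE_HANDLERS[root_cause]()
```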
Extension points: predictability + adaptability
A CI system needs both auditable steps (reviewers expect consistent structure) AND adaptation (some findings need deeper investigation). Fix: a prompt chain backbone with extension points — places where the agent can optionally invoke deeper investigation if findings warrant it. The chain ensures consistent base behavior; extensions allow targeted adaptation without disrupting the overall structure.
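One way to sketch an extension point is a gate after each base step. The helper names and the "critical" trigger are assumptions for illustration, not a real CI API.

```python
# Extension-point sketch: a fixed chain where each step may optionally trigger
# deeper investigation without changing the base sequence.
def base_step(name: str, data: str) -> str:
    return f"{name} findings for {data}"       # consistent, auditable base step

def needs_deep_dive(findings: str) -> bool:
    return "critical" in findings              # gate: escalate only if warranted

def deep_dive(findings: str) -> str:
    return findings + " + deep investigation"  # optional targeted adaptation

def run_with_extensions(data: str) -> list[str]:
    report = []
    for name in ("security", "performance", "style"):
        findings = base_step(name, data)
        if needs_deep_dive(findings):          # extension point
            findings = deep_dive(findings)
        report.append(findings)
    return report
```

Reviewers always see the same three sections in the same order; the deep dive appears only inside a section, so the overall structure is never disrupted.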
“Dynamic subsumes chaining” is wrong
A colleague claims: “Always use dynamic — it replicates chaining when the task is fixed.” While technically true, dynamic adds overhead to fixed tasks: unnecessary analysis, reduced predictability, harder debugging. A Swiss Army knife can turn screws, but a screwdriver is better for the job.
The decision framework
1. Does the task have predictable, fixed steps? → Prompt chaining.
2. Does the task require adaptation based on discoveries? → Dynamic decomposition.
3. Does it have both fixed and adaptive phases? → Hybrid.
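The framework reduces to a two-flag decision. A minimal sketch, assuming the flags come from upfront task analysis:

```python
# Decision-framework sketch: map observed task traits to a strategy.
# `fixed_steps` and `needs_adaptation` are assumed outputs of task analysis.
def choose_strategy(fixed_steps: bool, needs_adaptation: bool) -> str:
    if fixed_steps and needs_adaptation:
        return "hybrid"                  # both phases present
    if needs_adaptation:
        return "dynamic decomposition"   # discovery-driven
    return "prompt chaining"             # predictable pipeline (and the
                                         # cheapest default for trivial tasks)

print(choose_strategy(True, False))      # prompt chaining
```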
The nature of the task determines the strategy. Analyze characteristics before choosing — don’t default to either.
One-liner: Prompt chaining for fixed pipelines (saves 30% overhead), dynamic decomposition for exploratory tasks (+25% quality), hybrid for mixed tasks — match strategy to task type, don’t use dynamic for everything.