When an extraction fails validation, re-sending the same prompt produces the same result. The model made a deliberate extraction choice, not a random mistake. It will make the same choice again without targeted information about what went wrong.
The Three-Component Retry Prompt
Effective retry requires three inputs:
- Original document — the source data
- Failed extraction — what was produced
- Specific validation errors — what needs fixing
Example: “Line items sum to $450 but total field says $500. $50 discrepancy. The source document shows a $50 shipping charge on line 4 that was not extracted.”
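A minimal sketch of assembling those three components into one retry prompt. The function name and prompt wording are illustrative, not from any specific library:

```python
def build_retry_prompt(document: str, failed_output: str, errors: list[str]) -> str:
    """Combine the source document, the failed extraction, and the
    specific validation errors into a single retry prompt."""
    error_lines = "\n".join(f"- {e}" for e in errors)
    return (
        "The previous extraction failed validation.\n\n"
        f"Validation errors:\n{error_lines}\n\n"
        f"Previous (incorrect) extraction:\n{failed_output}\n\n"
        f"Original document:\n{document}\n\n"
        "Return a corrected extraction that resolves every error listed above."
    )
```

The ordering is deliberate: the model sees the errors first, then the output that produced them, then the source it should re-read.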
Feedback Specificity Drives Success
Three levels tested on 150 extraction failures:
| Feedback level | Correction rate |
|---|---|
| “Extraction failed, please retry” | 11% |
| “customer_name field is empty” | 54% |
| “customer_name is empty but document line 1 says ‘Bill To: John Smith’” | 87% |
Each specificity layer — error signal, field identification, source data location — adds substantial value. The most specific feedback resolves 8x more failures than a generic retry message.
Blind retry (identical prompt, no feedback) corrects 12% after 3 attempts. One retry with specific error feedback corrects 73%. Two retries with feedback: 89%.
Include ALL Errors in One Retry
When multiple validation errors occur simultaneously, include all of them in a single retry prompt. Claude can address multiple issues at once. Sequential single-error retries waste API calls and may introduce new errors while fixing old ones.
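A sketch of the retry loop, assuming caller-supplied `extract_fn` (one model call) and `validate_fn` (returns a list of error strings, empty when valid). All errors from a failed attempt go into the next prompt together:

```python
def extract_with_retry(document, extract_fn, validate_fn, max_retries=2):
    """Extract, validate, and retry with ALL errors in one feedback message."""
    prompt = f"Extract the fields from this document:\n{document}"
    for _ in range(max_retries + 1):
        output = extract_fn(prompt)
        errors = validate_fn(output, document)
        if not errors:
            return output
        # Every error in a single retry prompt, plus document and failed output.
        error_lines = "\n".join(f"- {e}" for e in errors)
        prompt = (
            f"Validation errors:\n{error_lines}\n\n"
            f"Previous extraction:\n{output}\n\n"
            f"Original document:\n{document}\n\n"
            "Fix every error listed above."
        )
    raise ValueError(f"Extraction still failing after {max_retries} retries: {errors}")
```

Note the loop runs `max_retries + 1` times: one initial attempt plus the feedback-driven retries.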
Build Validation First
The first implementation step is not the retry loop. It is the validation layer that identifies specific errors and formats them as feedback. Without knowing what went wrong, no retry prompt can provide targeted information.
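A sketch of such a validation layer for the invoice example used above. The field names and checks are assumptions for illustration; the point is that each error string carries exact values and a source location, ready to paste into the retry prompt:

```python
def validate_invoice(extraction: dict, document: str) -> list[str]:
    """Return specific, feedback-ready error strings (empty list = valid)."""
    errors = []
    # Field-level check: name the empty field AND quote where the source has it.
    if not extraction.get("customer_name"):
        first_line = document.splitlines()[0]
        errors.append(
            f"customer_name is empty but document line 1 says '{first_line}'"
        )
    # Cross-field check: report both values and the exact discrepancy.
    line_sum = sum(item["amount"] for item in extraction.get("line_items", []))
    total = extraction.get("total", 0)
    if round(line_sum, 2) != round(total, 2):
        errors.append(
            f"Line items sum to ${line_sum:.2f} but total field says "
            f"${total:.2f}. ${abs(total - line_sum):.2f} discrepancy."
        )
    return errors
```

Each check produces the most specific feedback tier from the table above: error signal, field identification, and source data location in one string.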
What Does Not Work
Higher temperature increases randomness, not accuracy. Format and value errors need targeted correction, not variation.
“Try again and be more careful” is not actionable. The model already tried its best — it needs to know specifically what was wrong.
Retry without feedback produces the same result because the model has no new information to change its behavior.
One-liner: Include the original document, the failed output, and specific validation errors with exact values in each retry — this corrects 87% of failures versus 11% for generic “please retry” messages.