The agent trusts its conversation history implicitly. If a file was rewritten since the last session, the agent’s cached Read result shows the old version — and it has no signal that anything changed. Stale data detection is the developer’s responsibility, not the agent’s.
The agent can’t detect stale data
A developer resumes a 3-day-old session. The agent recommends extracting a function from auth.js — but auth.js was completely rewritten 2 days ago. The agent analyzes the old version as if it’s current because its conversation history contains the old Read result.
The agent has no “last modified” comparison, no change detection, no file freshness verification. It sees its prior tool results as truth.
First step: git diff
Before deciding between resume and fresh, check what changed: `git diff --stat HEAD~N` or `git log --oneline --since="yesterday"`. This takes 30 seconds and prevents hours of wasted work based on stale analysis.
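The check can be scripted so it runs before every session decision. A minimal sketch in Python (assumes a git repo; `changed_files` and its defaults are illustrative, not part of any agent API):

```python
import subprocess

def changed_files(since_ref: str = "HEAD~1", repo: str = ".") -> list[str]:
    """List files changed since `since_ref` in the repo at `repo`.

    Feed the result into the resume-vs-fresh decision: an empty list
    means the cached context is still valid for that code.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", since_ref],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]
```

Point `since_ref` at roughly the last session's commit (`HEAD~N`), or swap in a time-based `git log --oneline --since=...` check if commits are easier to reason about by date.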
The four-scenario decision matrix
| Scenario | Code changed? | Focus changed? | Strategy |
|---|---|---|---|
| A: Same branch, 0 changes, same focus | No | No | Resume (context fully valid) |
| B: Same branch, 3 unrelated files changed | Minor | No | Resume + notify (“utils.js modified, tests/new.js added — re-analyze these two”) |
| C: Different branch merged, 30 files changed | Major | No | New session + inject summary |
| D: Same branch, 0 changes, different focus | No | Yes | New session + inject summary (old focus’s detailed analysis may bias new direction) |
Context validity depends on what changed (code AND focus), not clock time. A 2-hour-old session is stale after a major merge; a 3-day-old session is valid if nothing changed.
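The matrix reduces to two inputs: how much code changed, and whether the focus changed. A sketch of the mapping (the `minor_threshold` of 5 files is an illustrative cutoff, not a rule from the matrix):

```python
from enum import Enum

class Strategy(Enum):
    RESUME = "resume"
    RESUME_NOTIFY = "resume, then notify the agent which files changed"
    NEW_SESSION = "new session + inject insights-only summary"

def pick_strategy(files_changed: int, focus_changed: bool,
                  minor_threshold: int = 5) -> Strategy:
    """Map (code change, focus change) to a session strategy.

    Context validity depends on what changed, not clock time, so
    elapsed time is deliberately not a parameter.
    """
    if focus_changed:
        return Strategy.NEW_SESSION    # Scenario D: old focus may bias
    if files_changed == 0:
        return Strategy.RESUME         # Scenario A: context fully valid
    if files_changed <= minor_threshold:
        return Strategy.RESUME_NOTIFY  # Scenario B: refresh changed files
    return Strategy.NEW_SESSION        # Scenario C: major change
```

Note that scenarios C and D land on the same strategy for different reasons: C because the cached code analysis is invalid, D because it is valid but biasing.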
The injected summary: insights, not data
When starting a new session after changes, inject a summary of prior findings. Include:
- Architectural understanding — module boundaries, dependency patterns
- Identified priorities — what areas need attention and why
- Key decisions — what was decided and the reasoning
- Tracked issues — known problems being monitored
Do NOT include:
- File contents — they’re stale; force the agent to re-read current versions
- Tool results — outdated Read/Grep output from prior sessions
- Specific line references — line numbers may have shifted
The summary carries forward insights while forcing fresh tool calls for specifics.
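One way to keep the summary honest is to make the allowed fields explicit, so file contents and line numbers have nowhere to go. A sketch (the field names and prompt wording are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SessionSummary:
    """Insights carried across sessions.

    Deliberately has no field for file contents, tool results, or
    line numbers: specifics must come from fresh tool calls.
    """
    architecture: list[str]    # module boundaries, dependency patterns
    priorities: list[str]      # areas needing attention, and why
    decisions: list[str]       # what was decided, with reasoning
    tracked_issues: list[str]  # known problems being monitored

    def to_prompt(self) -> str:
        sections = [
            ("Architectural understanding", self.architecture),
            ("Identified priorities", self.priorities),
            ("Key decisions", self.decisions),
            ("Tracked issues", self.tracked_issues),
        ]
        lines = ["Summary of prior analysis (re-read files for specifics):"]
        for title, items in sections:
            lines.append(f"{title}:")
            lines += [f"- {item}" for item in items]
        return "\n".join(lines)
```

The leading instruction in `to_prompt` reinforces the contract: insights inform direction, but every specific claim gets re-verified against current code.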
Incremental change handling
2 files changed out of 50 analyzed. Resume + targeted notification:
“Since our last session, utils.js was modified (bug fix in line 42) and tests/new-test.js was added. Please re-analyze these two files and update your findings.”
The agent keeps valid analysis of 48 files (saving significant time) while specifically refreshing the 2 changed files. This is optimal for small changes — much more efficient than starting fresh.
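The notification itself can be generated straight from the diff. A minimal sketch (the wording mirrors the example above; `incremental_notice` is a hypothetical helper):

```python
def incremental_notice(modified: list[str], added: list[str]) -> str:
    """Build a targeted re-analysis request from a small diff."""
    parts = []
    if modified:
        parts.append(f"{', '.join(modified)} was modified")
    if added:
        parts.append(f"{', '.join(added)} was added")
    count = len(modified) + len(added)
    return (
        "Since our last session, " + " and ".join(parts) + ". "
        f"Please re-analyze these {count} files and update your findings."
    )
```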
Daily CI: new session + summary every day
A CI workflow runs analysis daily on an evolving codebase. Pattern:
- Each day: new session + injected summary of yesterday’s key findings, tracked issues, and analysis priorities
- The summary provides continuity (yesterday’s insights inform today’s priorities)
- The new session ensures freshness (all tool calls read current code)
Always resuming yesterday’s session risks overnight changes making tool results stale. Always starting fresh loses accumulated understanding. The summary bridges both constraints.
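The daily pattern as a loop, with the agent call stubbed out (`analyze` stands in for whatever runs the agent and is assumed to return that day's summary):

```python
def run_daily_pipeline(days: list[str], analyze) -> list[dict]:
    """Each day: a NEW session seeded with yesterday's insights.

    The seed gives continuity; the new session guarantees every tool
    call reads current code. Never resume yesterday's session here.
    """
    summary = None  # no prior insights on the first run
    results = []
    for day in days:
        result = analyze(day, prior_summary=summary)  # fresh session
        summary = result["summary"]  # insights only, carried forward
        results.append(result)
    return results
```

Only the summary string crosses the day boundary; tool results and file contents from yesterday's run are discarded by construction.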
The “always resume” fallacy
“Always resume — if the agent encounters outdated info, it’ll detect the discrepancy.”
Wrong. The agent doesn’t detect discrepancies in its own cached data. It analyzes old file contents as if they’re current. A function that was renamed, a module that was split, a dependency that was removed — the agent references the old versions without knowing they changed.
One-liner: Check git diff before deciding: no changes → resume, minor changes → resume + notify, major changes or focus shift → new session with insights-only summary (no file contents). The agent trusts its cached data implicitly and cannot detect stale context.