Two sub-agents report EV market share. One says 35%, the other says 42%. The coordinator averages them: 38.5%. A reviewer notes that 38.5% appears in no source document. It is a fabricated number — mathematically derived, attributable to nothing, verifiable against nothing.
The correct output: “Source A reports 35% (2023 industry consortium), Source B reports 42% (2024 government statistics).” Both values preserved. Both sources cited. The reader can evaluate the conflict themselves.
This is the core principle: when sources conflict, preserve both with attribution. Do not silently resolve the conflict by averaging, choosing one, or fabricating a compromise.
Why Conflicts Happen — and Why That Is Normal
Conflicting data from credible sources is not an error to fix. It is a signal to preserve. Sources conflict because:
- Different time periods: 35% in 2023 vs 42% in 2024 is a trend, not a contradiction
- Different methodologies: One benchmark tests on 4 cores, another on 8 cores — both results are real
- Different contexts: Use async/await (Node 16+), use callbacks (Node 12), avoid async in hot paths (low-latency) — each is correct in its context
- Different scopes: One subsidiary counts direct revenue, another includes channel revenue
Resolving these conflicts silently destroys information. The reader needs to know the conflict exists to make the right decision for their context.
The Anti-Patterns
Averaging
The most common and most dangerous resolution. 35% and 42% become 38.5%. $4.2M and $5.1M become $4.65M. 10,000 ops/sec and 15,000 ops/sec become 12,500 ops/sec. Every averaged figure is fabricated data that cannot be traced to any source.
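The contrast can be sketched in a few lines. This is an illustrative toy, not any real system's code; the helper names are hypothetical, and the figures come from the market-share example above.

```python
# Anti-pattern vs. correct pattern for two conflicting sourced values.

def average_resolution(values):
    """Anti-pattern: collapses conflicting values into a fabricated number."""
    return sum(v for v, _ in values) / len(values)

def preserve_resolution(values):
    """Correct pattern: keeps every value paired with its source."""
    return [f"{value}% ({source})" for value, source in values]

reports = [(35, "2023 industry consortium"), (42, "2024 government statistics")]

print(average_resolution(reports))   # 38.5: appears in no source document
print(preserve_resolution(reports))  # both values, both sources, traceable
```

The averaged figure cannot be checked against anything; the preserved pair can be verified line by line.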
One research system’s audit found 25% of conflict resolutions used averaging. Combined with 45% that silently chose the newer value and 10% that randomly selected a source, only 20% of conflicts were handled correctly (both values preserved with attribution). The 35% user complaint rate on reports containing conflicts correlated directly with the 80% incorrect-resolution rate.
Recency bias
“Use whichever value was updated more recently.” A customer support system applied this rule to pricing: promotional prices were entered, then nightly catalog updates refreshed standard prices with a later timestamp. The recency rule systematically overwrote valid promotions with standard prices. 60% of pricing complaints traced back to this pattern.
Recency does not equal accuracy. The most recent update could be an automated bulk refresh, an error, or a different data scope. It tells you when something was written, not whether it is correct.
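The pricing failure above reduces to a few lines. This is a minimal sketch with hypothetical records and field names: a nightly bulk refresh carries a later timestamp than a valid promotion, so "pick the newest" silently discards the promo price.

```python
from datetime import datetime

records = [
    {"price": 79.00, "source": "promo entry",
     "updated": datetime(2024, 6, 1, 14, 0)},
    {"price": 99.00, "source": "nightly catalog refresh",
     "updated": datetime(2024, 6, 2, 2, 0)},
]

# Recency rule: newest timestamp wins, so the bulk refresh (99.00)
# silently overwrites the valid promotion (79.00).
newest = max(records, key=lambda r: r["updated"])

# Preserving pattern: surface both values with sources and flag them.
conflict = {
    "values": [(r["price"], r["source"], r["updated"].isoformat())
               for r in records],
    "needs_resolution": True,
}
```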
Choosing the “authoritative” source
When two databases both say they are authoritative — the product database and the warranty database both claiming to hold the definitive warranty period — choosing one is a gamble. The product database may have been updated with a policy change the warranty database has not yet received. Or vice versa. The agent is not in a position to determine which is correct.
Using the “conservative” value
A proposal to always use the higher revenue figure “to be conservative” gets it backwards. Conservative revenue reporting uses the lower figure. But the deeper problem is that systematically choosing one side of a conflict creates directional bias. Always using the higher value inflates revenue. Always using the lower understates it. Both are wrong.
The Correct Pattern
Present both values with:
- The value itself
- The source (document, database, URL)
- Context that explains why the values might differ (time period, methodology, scope)
- A flag indicating the conflict for human resolution
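The four elements above can be captured in a small record type. This is a minimal sketch under stated assumptions; `SourcedValue` and `ConflictRecord` are hypothetical names, not a real library's API.

```python
from dataclasses import dataclass

@dataclass
class SourcedValue:
    value: str
    source: str    # document, database, URL
    context: str   # time period, methodology, scope

@dataclass
class ConflictRecord:
    field_name: str
    values: list                          # list of SourcedValue
    needs_human_resolution: bool = True   # flag for escalation

    def render(self) -> str:
        """Present every value with its source and context."""
        lines = [f"{self.field_name}:"]
        lines += [f"- {v.value} ({v.source}; {v.context})" for v in self.values]
        lines.append("Note: conflict flagged for human resolution.")
        return "\n".join(lines)

record = ConflictRecord(
    "EV market share",
    [SourcedValue("35%", "2023 Industry Consortium Report", "unit sales"),
     SourcedValue("42%", "2024 Government Statistics Bureau", "registration data")],
)
print(record.render())
```

The flag defaults to true: a conflict is escalated unless a human explicitly clears it, never silently resolved.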
For customer-facing systems, this means escalating to a human agent: “Our system shows two different warranty periods for your product — 12 months in one database and 24 months in another. Let me connect you with someone who can verify the correct terms.”
For research reports, this means a comparison:
EV market share:
- 35% (2023 Industry Consortium Report, based on unit sales)
- 42% (2024 Government Statistics Bureau, based on registration data)
Note: Different time periods and methodologies.
For technical documentation, this means presenting all relevant recommendations with their contexts, not filtering to a single “best” answer that may not apply to the reader’s situation.
Context-Dependent Conflicts Are Not Contradictions
Four sources give different advice about async patterns:
- “Use async/await” (Node.js 16+ guide)
- “Use callbacks” (Node.js 12 legacy docs)
- “Use Promise.all for parallel” (performance blog)
- “Avoid async in hot paths” (low-latency systems guide)
These are not conflicting — they are context-dependent. Each is correct within its context. Synthesizing into a single unified recommendation forces a false consensus that cannot correctly address all contexts. Filtering by recency loses the legacy and performance-specific advice. The correct presentation includes all four with their contexts, letting the developer select the approach that matches their situation.
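One way to sketch that presentation: return the matching recommendation when the reader's context is known, and all four with their contexts when it is not. The context keys and selector are illustrative assumptions, not a real tool.

```python
from typing import Optional

recommendations = {
    "node-16-plus":   "Use async/await",
    "node-12-legacy": "Use callbacks",
    "parallel-io":    "Use Promise.all for parallel work",
    "low-latency":    "Avoid async in hot paths",
}

def advise(context: Optional[str] = None) -> list:
    """Return the recommendation for a known context, or every
    recommendation with its context when no context is given."""
    if context in recommendations:
        return [f"{recommendations[context]} ({context})"]
    return [f"{advice} ({ctx})" for ctx, advice in recommendations.items()]
```

The default branch is the important one: absent a known context, the system presents all four rather than guessing a single "best" answer.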
Financial Data: Higher Stakes, Same Principle
When two subsidiaries report different revenue for the same product line, the stakes of silent resolution are higher. Choosing the higher value creates upward bias in revenue data. Choosing the lower creates downward bias. Averaging produces a fabricated figure. Different rules for different audiences (higher for internal, lower for regulatory) create inconsistency.
The only safe approach: preserve both values with their sources and flag the discrepancy for human reconciliation. The underlying cause may be a legitimate accounting difference (different revenue recognition methods) that requires human judgment to resolve.
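A minimal sketch of that flagging step, with hypothetical subsidiary names and figures: record both values, measure the gap, and mark the record for reconciliation rather than picking a number.

```python
# Both figures preserved with their sources; nothing resolved silently.
reports = {"Subsidiary A": 4_200_000, "Subsidiary B": 5_100_000}

low, high = min(reports.values()), max(reports.values())
discrepancy = {
    "figures": reports,                      # both values, both sources
    "spread": high - low,                    # size of the gap to reconcile
    "status": "needs human reconciliation",  # e.g. revenue recognition differences
}
```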
Dependency Conflicts
A CI/CD agent cross-references npm, GitHub advisories, Snyk, and an internal audit log. npm says package v2.1 is safe. Snyk says it has a critical CVE. Averaging severity scores is meaningless. Using npm as the sole authority creates security blind spots. Always using the most restrictive assessment causes alert fatigue from false positives.
Present all assessments with sources. The development team evaluates based on their context — a critical CVE flagged by Snyk but not npm may reflect different detection timelines, and the team’s risk tolerance determines the response.
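A sketch of that presentation: list every feed's assessment per package and flag disagreement for team review. The feed names and record fields are illustrative, not real API responses.

```python
assessments = [
    {"source": "npm audit", "package": "example-pkg",
     "version": "2.1", "finding": "no known issues"},
    {"source": "Snyk", "package": "example-pkg",
     "version": "2.1", "finding": "critical CVE"},
]

def summarize(findings):
    """Show every source's assessment; flag when sources disagree."""
    lines = [f"{a['package']}@{a['version']}: {a['finding']} (per {a['source']})"
             for a in findings]
    if len({a["finding"] for a in findings}) > 1:
        lines.append("Sources disagree; flagged for team review.")
    return lines
```

No severity scores are averaged and no single feed is treated as authoritative; the team sees both views and applies its own risk tolerance.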
One-liner: When sources conflict, present both values with their sources and context — averaging fabricates data, choosing one loses information, and only preserving both gives the reader what they need to decide.