KB article
Narrative-Ready Models: Designing for Text Explanations
Narrative‑ready models provide the context and structure AI needs for clear explanations.
Tags: arf-kb, analytical-explainability, metadata-density, explanation-template, drivers
TL;DR
- Narratives need structure, not just numbers.
- Metadata and driver measures make narratives reliable.
The problem
- AI narratives are generic because the model lacks context.
- Metrics are not annotated with business meaning.
Why it matters
- Narratives are only useful when they reflect business reality.
- Clear explanations reduce manual analyst effort.
Symptoms
- AI outputs vague text like “revenue increased,” with no magnitude, period, or cause.
- Narratives omit drivers and caveats.
Root causes
- Low metadata density.
- No driver measures or explanation template.
What good looks like
- Measures include definitions, units, and caveats.
- Driver measures are available for key KPIs.
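As a sketch of what a metadata-dense measure could look like (the field names `definition`, `unit`, `caveats`, and `drivers` are illustrative, not a specific tool's schema):

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """A measure annotated with the context a narrative generator needs."""
    name: str
    definition: str                                   # business meaning, not just the formula
    unit: str                                         # e.g. "USD", "%", "orders"
    caveats: list[str] = field(default_factory=list)  # known limitations to surface in text
    drivers: list[str] = field(default_factory=list)  # driver measures for this KPI

# Example: a KPI with definition, unit, caveats, and drivers all recorded.
revenue = Measure(
    name="Net Revenue",
    definition="Invoiced sales minus returns and discounts, recognized at ship date.",
    unit="USD",
    caveats=["Excludes intercompany transfers.", "Current month is preliminary."],
    drivers=["Units Sold", "Average Selling Price", "Discount Rate"],
)
```

With this density, an explanation can state what the number means, in what unit, what moves it, and what to watch out for, instead of guessing.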
How to fix
- Add descriptive metadata to measures.
- Define narrative templates for KPIs.
- Use driver measures in explanations.
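The three steps above can be sketched as a fixed narrative template that pulls in drivers and a caveat; the KPI name, figures, and driver names below are made-up illustrations:

```python
def narrate(kpi, current, previous, driver_changes, caveat=None):
    """Render a KPI narrative from a fixed template.

    driver_changes maps driver-measure name -> percent change; the
    largest absolute change is reported as the primary driver.
    """
    pct = (current - previous) / previous * 100
    direction = "increased" if pct >= 0 else "decreased"
    top = max(driver_changes, key=lambda d: abs(driver_changes[d]))
    text = (
        f"{kpi} {direction} {abs(pct):.1f}% "
        f"({previous:,.0f} -> {current:,.0f}), "
        f"driven primarily by {top} ({driver_changes[top]:+.1f}%)."
    )
    if caveat:
        text += f" Caveat: {caveat}."
    return text

print(narrate(
    "Net Revenue", 1_150_000, 1_000_000,
    {"Units Sold": 12.0, "Average Selling Price": 2.5},
    caveat="current month is preliminary",
))
```

The template constrains the AI to the structure “metric, direction, magnitude, driver, caveat,” which is exactly what a generic model omits when left to “figure it out.”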
Pitfalls
- Over‑reliance on AI to “figure it out.”
- Generic narratives without context.
Checklist
- Metadata density improved.
- Narrative templates defined.
- Drivers and caveats included.
Framework placement
Primary ARF layer: Analytical Explainability. Diagnostic bridge: semantic-reliability, execution-reliability, change-reliability.