KB article

Your Model Can Calculate, But Can It Explain?

Explainability requires more than calculations; it requires drivers and context.

Tags: arf-kb, analytical-explainability, explainability, drivers, lineage

TL;DR

  • Calculations answer “what,” not “why.”
  • Explainability requires drivers, lineage, and caveats.

The problem

  • Models are optimized for totals and KPIs, not for explanations.
  • AI can’t justify changes without supporting measures.

Why it matters

  • Without explanations, users distrust results.
  • AI answers without context can be misleading.

Symptoms

  • Users ask “why did this change?” and get vague answers.
  • AI responses omit drivers or cite irrelevant factors.

Root causes

  • No driver measures or decomposition logic.
  • Missing metadata for assumptions.

What good looks like

  • KPI measures paired with driver measures.
  • Explainability is part of the model design, not an afterthought.
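As an illustration of pairing a KPI with driver measures, here is a minimal sketch of a price–volume decomposition. All names and numbers are hypothetical; the point is that the model exposes the "why" (price, volume, mix effects) alongside the "what" (total change):

```python
# Hypothetical price-volume decomposition: explains a revenue KPI change
# as the sum of a price effect, a volume effect, and an interaction term.

def revenue(price, volume):
    return price * volume

def revenue_drivers(price0, volume0, price1, volume1):
    """Decompose the change in revenue between two periods into drivers."""
    total_change = revenue(price1, volume1) - revenue(price0, volume0)
    price_effect = (price1 - price0) * volume0            # price change at old volume
    volume_effect = (volume1 - volume0) * price0          # volume change at old price
    mix_effect = (price1 - price0) * (volume1 - volume0)  # interaction of both changes
    return {"total": total_change, "price": price_effect,
            "volume": volume_effect, "mix": mix_effect}

# Example: price 10 -> 11, volume 100 -> 120
d = revenue_drivers(10, 100, 11, 120)
# total = 320, price = 100, volume = 200, mix = 20 (drivers sum to the total)
```

The drivers always sum exactly to the KPI change, which is what lets a downstream AI assistant cite them without guessing.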

How to fix

  • Add driver measures for top KPIs.
  • Create standard explanation templates.
  • Embed caveats in metadata.
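The three steps above can be sketched together: driver measures feed a standard explanation template, and documented caveats are rendered with every answer. A minimal sketch, with all names and values hypothetical:

```python
# Hypothetical explanation template: renders a KPI change as a ranked
# list of driver contributions plus the caveats stored in metadata.

def explain(kpi_name, period, drivers, caveats):
    lines = [f"{kpi_name} changed by {sum(drivers.values()):+,.0f} in {period}."]
    # Largest absolute contribution first, so the answer leads with the main driver.
    for name, value in sorted(drivers.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  - {name}: {value:+,.0f}")
    for caveat in caveats:
        lines.append(f"  ! {caveat}")
    return "\n".join(lines)

print(explain(
    "Revenue", "2024-Q2",
    {"price effect": 100, "volume effect": 200, "mix effect": 20},
    ["Excludes returns booked after period close."],
))
```

Because the template is standard, every KPI answers "why did this change?" in the same shape, and the caveat line travels with the number instead of living in someone's head.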

Pitfalls

  • Assuming AI can infer drivers from raw data.
  • Ignoring outliers and null semantics.
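The null-semantics pitfall is easy to demonstrate. In this small sketch (values are illustrative), coercing a missing value to zero silently shifts an average, so any driver narrative built on the "wrong" number would mislead:

```python
# Why null semantics matter: None means "not reported", not zero.

values = [100, 120, None, 110]

# Coercing nulls to 0 drags the average down and fabricates a "decline".
wrong = sum(v or 0 for v in values) / len(values)        # 330 / 4 = 82.5

# Excluding nulls keeps the average faithful to the reported data.
non_null = [v for v in values if v is not None]
right = sum(non_null) / len(non_null)                    # 330 / 3 = 110.0
```

A model that documents its null semantics lets both humans and AI pick the right denominator.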

Checklist

  • Top KPIs have driver measures.
  • Explanation templates exist.
  • Caveats documented in metadata.

Framework placement

Primary ARF layer: Analytical Explainability. Diagnostic bridge: semantic-reliability, execution-reliability, change-reliability.