Knowledge base
Operational reference articles behind ARF.
These entries turn the framework into something you can apply. They remain product-agnostic while staying grounded in model design, determinism, reliability, and explainability.
AI-Readable Schemas: What It Means in Practice
AI-readable schemas have clear names, relationships, and metadata.
An AI Readiness Scorecard You Can Run Monthly
A simple scorecard tracks progress across metadata, context, and explainability.
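One way to picture such a scorecard is a small monthly record of 0-5 scores per dimension; the dimensions, months, and scores below are invented for illustration, not prescribed by the framework.

```python
# Hypothetical monthly scorecard: each readiness dimension gets a 0-5
# score, and the overall score is tracked month over month.
SCORECARD = {
    "2025-05": {"metadata": 2, "context": 3, "explainability": 1},
    "2025-06": {"metadata": 3, "context": 3, "explainability": 2},
}

def overall(month):
    """Average the dimension scores for one month."""
    scores = SCORECARD[month].values()
    return sum(scores) / len(scores)

# Month-over-month comparison shows whether readiness is improving.
assert overall("2025-06") > overall("2025-05")
```

Keeping the same dimensions every month is what makes the trend line meaningful.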
Ambiguous Relationships: The Silent Context Killer
Ambiguous relationships create multiple filter paths, leading to unpredictable answers.
Assumptions and Caveats: Making Answers Trustworthy
Explicit assumptions and caveats keep AI answers honest and reliable.
Bidirectional Filtering: Convenience vs Predictability
Bidirectional filters can make models easier to use, but less predictable for AI.
Business Definitions vs Calculation Logic
A metric is not just a formula; it is a business definition with boundaries.
Your Model Can Calculate, But Can It Explain?
Explainability requires more than calculations; it requires drivers and context.
Calculation Groups Without Chaos
Calculation groups can simplify models, but they need clear rules and naming.
Canonical Metrics: One Definition, Many Views
Canonical metrics standardize meaning while allowing flexible reporting views.
Cohorts and Segmentation: Explainability at the Right Level
Segmentation and cohort analysis provide context for why metrics move.
A Context Test Harness for Power BI Models
A test harness validates that key questions return stable results.
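A minimal sketch of that idea: pin each key question to a fixed filter context and an expected value, then fail on any drift. `run_query` is a hypothetical stand-in for whatever interface executes queries against the model; the measures and numbers are invented.

```python
# Each key question = (measure, frozen filter context) -> expected value.
EXPECTED = {
    ("Total Sales", frozenset({("Year", 2024)})): 1_250_000,
    ("Total Sales", frozenset({("Year", 2024), ("Region", "EMEA")})): 410_000,
}

def run_query(measure, filters):
    # Placeholder: in practice this would execute the query against the model.
    return EXPECTED[(measure, filters)]

def check_context_stability(tolerance=0.0):
    """Return the questions whose answers drifted from the pinned values."""
    failures = []
    for (measure, filters), expected in EXPECTED.items():
        actual = run_query(measure, filters)
        if abs(actual - expected) > tolerance:
            failures.append((measure, dict(filters), expected, actual))
    return failures

# An empty failure list means the model still answers the key questions
# the same way it did when the expectations were pinned.
assert check_context_stability() == []
```

Run it after every model change, the same way you would run unit tests after a code change.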
Context Volatility: Hidden Interactions Between Slicers and Measures
Volatile context is caused by slicer interactions, hidden filters, and ambiguous paths.
Contribution Analysis: Turning Totals Into Reasons
Contribution analysis breaks totals into components that explain change.
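The core mechanic can be shown in a few lines: split the total change between two periods into per-component deltas that sum exactly to the total. The regions and figures are made up for illustration.

```python
def contributions(prev, curr):
    """Split the total change between two periods into per-component deltas."""
    keys = set(prev) | set(curr)
    return {k: curr.get(k, 0) - prev.get(k, 0) for k in keys}

prev = {"EMEA": 400, "AMER": 500, "APAC": 100}
curr = {"EMEA": 380, "AMER": 560, "APAC": 110}
delta = contributions(prev, curr)

# The component deltas reconcile exactly to the total change, so every
# point of movement is attributed to a region, with nothing left over.
assert sum(delta.values()) == sum(curr.values()) - sum(prev.values())
```

The reconciliation check is the important part: an explanation whose pieces do not add up to the total is a story, not an analysis.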
Default Aggregation: When SUM Is the Wrong Assumption
Default aggregations can distort results when a sum is not meaningful.
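A quick numeric illustration of why: averaging per-region averages ignores volume, while the volume-weighted figure does not. The numbers are contrived to make the gap obvious.

```python
region_avgs = {"A": 1000.0, "B": 10.0}   # per-region average order value
region_counts = {"A": 1, "B": 99}        # orders per region

# Naive: treat the two region averages as equally important.
naive = sum(region_avgs.values()) / len(region_avgs)

# Weighted: weight each region's average by its order count.
weighted = (sum(region_avgs[r] * region_counts[r] for r in region_avgs)
            / sum(region_counts.values()))

assert naive == 505.0      # dominated by the single large order
assert weighted == 19.9    # the true average across all 100 orders
```

The same trap applies to ratios, percentages, balances, and headcounts: anything where a plain SUM or AVERAGE over the visible rows is not the business-meaningful aggregate.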
Deterministic Slices: Designing Questions AI Can Ask Reliably
Deterministic slices constrain questions so answers stay consistent and explainable.
Dimensional Grain: Preventing Apples-to-Oranges Comparisons
Grain defines the level of detail; without it, AI compares incompatible data.
Drivers vs Correlations: Explaining Without Overclaiming
Drivers explain causes; correlations only show association. AI must distinguish them.
Evaluation: How to Test AI Answers Against Your Model
Evaluation compares AI answers against expected model outputs to detect errors.
Explainability Metrics: Consistency, Coverage, and Confidence
Measure explainability to track progress and reliability over time.
Filter Context in Plain English
Filter context determines what data a calculation sees; it must be predictable.
From KPI to Story: A Repeatable Explanation Template
A consistent template makes AI explanations easier to generate and trust.
Governance for AI Analytics: Change Control for Semantics
Governance ensures semantic changes are intentional and traceable.
Grounding: Preventing Confidently Wrong Answers
Grounding anchors AI answers in model facts and metadata.
Inactive Relationships and USERELATIONSHIP: When Intent Gets Lost
Inactive relationships require explicit activation, which AI often misses.
Interoperability: Aligning Power BI With the Rest of Your Stack
Interoperability ensures consistent semantics across BI, data platforms, and AI tools.
A Lightweight Metric Dictionary That Actually Gets Used
A simple metric dictionary helps teams align without heavy governance overhead.
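One lightweight shape such a dictionary can take: a small record per metric holding the business definition, grain, owner, and caveats, living in version control next to the model. The fields and the sample entry are illustrative, not a mandated schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    definition: str      # business meaning, not just the formula
    grain: str           # level of detail at which the metric is valid
    owner: str
    caveats: tuple = ()

DICTIONARY = {
    "net_revenue": Metric(
        name="Net Revenue",
        definition="Invoiced revenue minus returns and discounts.",
        grain="order line",
        owner="finance",
        caveats=("Excludes intercompany transfers",),
    ),
}

assert DICTIONARY["net_revenue"].grain == "order line"
```

Because the entries are plain data, the same file can feed documentation, model descriptions, and the metadata handed to an AI layer.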
Lineage: Tracing a Number Back to Its Sources
Lineage makes every number auditable by tracing it to sources.
Many-to-Many Relationships and AI: What Can Go Wrong
Many-to-many relationships can produce unexpected filter behavior for AI queries.
Measure Branching Done Right: Reuse Without Confusion
Branching keeps complex measures readable and consistent when done carefully.
Measure Singularity: Reducing Metric Sprawl Without Losing Flexibility
Measure singularity keeps one true metric while allowing controlled variants.
Metadata Density: Why Descriptions Matter More Than You Think
Metadata density makes models interpretable by AI and humans.
Naming Measures So Humans and AI Agree
Consistent naming helps AI select the right measure and reduces ambiguity.
Narrative-Ready Models: Designing for Text Explanations
Narrative-ready models provide the context and structure AI needs for clear explanations.
Outliers and Null Semantics: When ‘Missing’ Means Something
Outliers and nulls can be meaningful; AI must interpret them correctly.
A Practical Explainability Checklist for Power BI
A practical checklist ensures AI explanations are reliable and auditable.
Prompting vs Modeling: Where to Fix the Problem
Most AI answer issues are model issues, not prompting issues.
Retrieval Patterns for BI: Getting the Right Context to the Model
Retrieval patterns define which metadata and filters should be provided to AI.
Role-Playing Dimensions: Dates, Regions, and Other Multipliers
Role-playing dimensions require clear naming and explicit usage to avoid confusion.
Row-Level Security and AI: What You Must Validate
RLS affects AI answers and must be validated with realistic AI queries.
Security and Privacy: What Not to Expose
AI access must respect security boundaries and avoid exposing sensitive data.
Semantic Contracts: Setting Expectations for Questions and Answers
Semantic contracts define what questions are valid and how answers should be interpreted.
Semantic Drift: How Definitions Quietly Change Over Time
Semantic drift happens when metric meaning changes without clear communication.
Time Intelligence: Why ‘Last Month’ Is Harder Than It Sounds
Time intelligence depends on clean date tables and clear definitions of time.
Tooling Interfaces: SQL, DAX, and the Translation Layer
Different tools expose different query layers; AI must align with them.
Units, Currency, and Time: The Hidden Semantics That Cause Bad Answers
Units, currency, and time basis are often implicit, but AI needs them explicit.
Variance Decomposition for Business Users
Variance decomposition explains changes using business-friendly components.
Why AI Answers Change When Your Data Didn’t
Inconsistent context, not data changes, often causes fluctuating AI answers.
Why Multiple Measures for the Same Metric Break AI Answers
Multiple measures for the same metric create conflicting answers and undermine trust.