KB article

Prompting vs Modeling: Where to Fix the Problem

Most AI answer-quality issues are data-model issues, not prompting issues.

Tags: arf-kb · ai-readiness-interoperability · semantic-contract · metadata-density · grounding

TL;DR

  • Prompting can’t fix ambiguous data models.
  • Fix the model first, then refine prompts.

The problem

  • Teams try to patch model issues with prompts.
  • AI still returns inconsistent answers.

Why it matters

  • Prompts are fragile; model changes are durable.
  • Model fixes improve all tools at once.

Symptoms

  • Prompt tweaks help one question but break another.
  • The AI still misinterprets measure and metric definitions despite prompt guidance.

Root causes

  • Ambiguous measures and weak metadata.
  • Lack of semantic contracts.
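A "semantic contract" can be as simple as requiring every measure to carry a complete definition before it is exposed to AI tooling. The sketch below illustrates the idea; the field names (`definition`, `grain`, `unit`) and the `MeasureContract` class are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a semantic contract for a measure.
# Field names are illustrative, not a standard schema.
@dataclass
class MeasureContract:
    name: str
    definition: str          # unambiguous business definition
    grain: str               # level of aggregation, e.g. "order", "day"
    unit: str                # e.g. "USD", "count"
    synonyms: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # Weak metadata = any required field left empty.
        return all([self.name, self.definition, self.grain, self.unit])

# An ambiguous measure fails the contract; a well-defined one passes.
vague = MeasureContract("revenue", definition="", grain="", unit="USD")
clear = MeasureContract(
    "net_revenue",
    definition="Gross sales minus returns and discounts, excluding tax",
    grain="order",
    unit="USD",
)
```

A check like `is_complete()` can gate which measures are surfaced to AI tools, making "weak metadata" a detectable condition rather than a vague complaint.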

What good looks like

  • Model definitions are clear, prompts are simple.
  • Prompting is used for formatting, not semantics.

How to fix

  • Identify root model issues.
  • Improve definitions, metadata, and context stability.
  • Use prompts for output structure only.
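The steps above imply a clean split: semantics come from model metadata, and the prompt carries only output structure. A minimal sketch of that split, assuming a hypothetical `CONTRACT` record and `FORMAT_PROMPT` template (both names are illustrative):

```python
# Semantics live in the model's metadata, not in prompt wording.
CONTRACT = {
    "name": "net_revenue",
    "definition": "Gross sales minus returns and discounts, excluding tax",
    "unit": "USD",
}

# The prompt template specifies only output structure; the definition
# is injected verbatim from the contract rather than paraphrased.
FORMAT_PROMPT = (
    "Answer using the supplied definition verbatim.\n"
    "Definition of {name}: {definition}\n"
    'Respond as JSON: {{"measure": "{name}", "value": <number>, "unit": "{unit}"}}'
)

prompt = FORMAT_PROMPT.format(**CONTRACT)
```

Because the definition is data, fixing it once in the model updates every prompt built this way, which is the durability advantage the article describes.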

Pitfalls

  • Over‑engineering prompts as a substitute for model fixes.
  • Ignoring evaluation results.

Checklist

  • Model issues addressed first.
  • Prompting used for formatting.
  • Evaluation shows improved consistency.

Framework placement

Primary ARF layer: AI Readiness & Interoperability. Diagnostic bridge: data-movement-reliability, semantic-reliability, execution-reliability, change-reliability.