Reality Fidelity Architecture
The technical foundation for preventing AI outputs built on incomplete information in consequential domains.
The Fundamental Inversion
Most AI systems start with: "The model produces output; how do we clean it up?"
Reality Fidelity starts with: "Should this system be allowed to produce this output at all?"
Core Principles
- Fluency ≠ Authority — We separate language fluency from epistemic authority. This is rare and correct.
- Binary Authority — Authority is binary, not probabilistic: either the required state exists or it doesn't.
- Conditional Instrument — AI as a conditional instrument, not an oracle.
- Required State Registry — A formal definition of what must be true before AI speaks with authority.
Core Concept: The Reality Fidelity Stack
The Reality Fidelity Stack is a layered architecture that ensures AI systems only emit authoritative outputs when they possess verified required state.
Completeness Gates
A completeness gate is a checkpoint that prevents an AI system from emitting an authoritative output unless all required state is verified present.
Completeness Gates are not confidence scores. Confidence scores are probabilistic heuristics. Gates are deterministic: pass or fail.
How It Works
- Define Required State — For each output type, specify what inputs, calibrations, and context must be present
- Check at Runtime — Before emitting any authoritative output, verify all requirements are satisfied
- Halt or Proceed — If requirements are met, proceed; if not, halt and log the missing state
- Audit Trail — Record the decision point for regulatory compliance
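A minimal sketch of a completeness gate following these four steps, in Python. The names (`CompletenessGate`, `GateResult`) and the example state keys are illustrative assumptions, not Ontic's actual API; the point is that the check is a deterministic pass/fail with an audit record, never a score.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GateResult:
    passed: bool        # deterministic: pass or fail, never a confidence score
    missing: list[str]  # required state keys that were absent

@dataclass
class CompletenessGate:
    output_type: str
    required_state: set[str]  # e.g. {"sensor_calibration", "patient_id"}
    audit_log: list[dict] = field(default_factory=list)

    def check(self, available_state: dict) -> GateResult:
        """Verify every required key is present before an authoritative output is emitted."""
        missing = sorted(self.required_state - available_state.keys())
        result = GateResult(passed=not missing, missing=missing)
        # Record the decision point for the audit trail, whether it passed or failed.
        self.audit_log.append({
            "output_type": self.output_type,
            "passed": result.passed,
            "missing": result.missing,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
        return result

gate = CompletenessGate("diagnostic_reading", {"sensor_calibration", "patient_id"})
if not gate.check({"patient_id": "p-123"}).passed:
    print("Halted: required state missing")  # halt and log instead of emitting
```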
Safety-Critical Systems Analogy
Reality Fidelity is far closer to safety-critical systems — aviation, medical devices, financial controls — than to typical AI tooling.
"Just as a medical device cannot emit a diagnostic reading without sensor calibration, an AI system cannot emit an authoritative output without completeness verification."
Provenance Tracking
Every authoritative output must trace back to verified source data through an unbroken chain of provenance.
- Source Attribution — Where did the input data come from?
- Transformation History — What processing was applied?
- Model Versioning — Which model version produced the output?
- Temporal Context — When was the data collected and processed?
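A sketch of what a single link in that provenance chain might look like as a data structure. The field names are assumptions chosen to mirror the four elements above, not a published Ontic schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ProvenanceRecord:
    """One link in the unbroken chain from source data to authoritative output."""
    source_id: str                     # source attribution: where the input data came from
    transformations: tuple[str, ...]   # transformation history, in the order applied
    model_version: str                 # which model version produced the output
    collected_at: datetime             # temporal context: when the data was collected
    processed_at: datetime             # temporal context: when it was processed

# Illustrative values only.
record = ProvenanceRecord(
    source_id="lab-lims://sample/8841",
    transformations=("de-identify", "normalize-units"),
    model_version="classifier-2.3.1",
    collected_at=datetime(2024, 5, 2, 9, 15),
    processed_at=datetime(2024, 5, 2, 9, 42),
)
```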
Required State Registry: An Underexplored Primitive
The Required State Registry is quietly the strongest idea in this architecture:
- A formal definition of what must be true before AI speaks with authority
- Domain-specific requirements (medical ≠ legal ≠ finance)
- Enforcement at runtime, not documentation-time
This is not prompt engineering. This is policy-driven execution control.
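A hypothetical sketch of a Required State Registry enforced at runtime. The output-type keys and state names are invented for illustration; the shape is what matters: a declarative, domain-specific map checked before execution, not a prompt.

```python
# Output types map to the state that must be verified before an
# authoritative output of that type may be emitted. Keys are illustrative.
REQUIRED_STATE_REGISTRY: dict[str, set[str]] = {
    "medical.classification": {"sensor_calibration", "patient_consent", "image_metadata"},
    "legal.recommendation":   {"jurisdiction", "matter_id", "source_documents"},
    "finance.decision":       {"account_standing", "kyc_verified", "pricing_snapshot"},
}

def enforce(output_type: str, available_state: dict) -> None:
    """Runtime enforcement: raise instead of emitting when required state is missing."""
    missing = REQUIRED_STATE_REGISTRY[output_type] - available_state.keys()
    if missing:
        raise PermissionError(f"{output_type}: missing required state {sorted(missing)}")
```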
What Happens After Ontic Says "No"
Ontic intentionally enforces a hard authority boundary. When required state or provenance is missing, Ontic blocks authoritative output. This is correct behavior.
But blocking alone is not sufficient in production systems.
A system that only blocks creates dead ends. Users still need to resolve ambiguity, supply missing state, or escalate decisions.
The Architectural Insight
In Ontic, refusal is not an error. It is an explicit signal that reality is incomplete.
After refusal, the system must choose how to proceed safely. Ontic treats refusal as a state transition, not a terminal failure.
Resolution Routing
Ontic governs not just refusal, but safe resolution paths. Each status code defines a governed outcome:
| Status Code | Behavior | Description |
|---|---|---|
| REQUIRES_SPECIFICATION | Ask targeted questions | Ontic identifies exactly which state is missing and prompts for it |
| AMBIGUOUS_MAPPING | Present candidates | Multiple valid interpretations exist; Ontic surfaces options instead of guessing |
| NARRATIVE_ONLY | Explanation without authority | Ontic allows context and explanation without emitting measurements or decisions |
| DISPUTE_SUMMARY | Surface conflicts | Conflicting authoritative sources detected; Ontic exposes the conflict instead of collapsing it |
| CANNED_RESPONSE_ONLY | Return static guidance | When the entity itself cannot be verified, Ontic returns pre-approved static guidance |
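One way the routing table above could be expressed in code: each status code dispatches to a governed resolution path instead of dead-ending. The enum values and handler payload keys are assumptions for illustration, not Ontic's interface.

```python
from enum import Enum

class ResolutionStatus(Enum):
    REQUIRES_SPECIFICATION = "requires_specification"
    AMBIGUOUS_MAPPING = "ambiguous_mapping"
    NARRATIVE_ONLY = "narrative_only"
    DISPUTE_SUMMARY = "dispute_summary"
    CANNED_RESPONSE_ONLY = "canned_response_only"

def route(status: ResolutionStatus, context: dict) -> dict:
    """Dispatch a refusal to its governed resolution path. Context keys are illustrative."""
    if status is ResolutionStatus.REQUIRES_SPECIFICATION:
        return {"action": "ask", "questions": context["missing_state"]}
    if status is ResolutionStatus.AMBIGUOUS_MAPPING:
        return {"action": "choose", "candidates": context["candidates"]}
    if status is ResolutionStatus.NARRATIVE_ONLY:
        return {"action": "explain", "text": context["narrative"]}
    if status is ResolutionStatus.DISPUTE_SUMMARY:
        return {"action": "surface_conflict", "sources": context["conflicting_sources"]}
    return {"action": "static_guidance", "text": context["canned_response"]}
```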
What Ontic Does NOT Do
- Ontic never guesses to escape refusal
- Ontic never silently fills state
- Ontic never downgrades authority without labeling it
Ontic does not "solve" the domain problem itself, and it does not replace solvers, databases, or humans. Instead, Ontic governs when a solver is allowed to run and what inputs are required for it to be trusted.
Ontic is not just an AI safety brake.
It is a control plane that turns unsafe questions into solvable workflows.
Ontic ensures that every answer, action, or decision is either authorized by reality or explicitly waiting for it.
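A sketch of that contract as a control plane around a solver: every result is either authorized by verified state or explicitly waiting for named state, and the control plane never fills state itself. The `GovernedResult` and `govern` names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GovernedResult:
    """Either authorized by verified state, or explicitly waiting for missing state."""
    authorized: bool
    value: Optional[object] = None     # present only when authorized
    waiting_for: tuple[str, ...] = ()  # missing state keys when not authorized

def govern(required: set[str], solver: Callable[[dict], object], state: dict) -> GovernedResult:
    """The control plane decides whether the solver may run; it never guesses or fills state."""
    missing = sorted(required - state.keys())
    if missing:
        return GovernedResult(authorized=False, waiting_for=tuple(missing))
    return GovernedResult(authorized=True, value=solver(state))
```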
Authoritative Output Types
- Classification — Assigning categories or labels (e.g., "malignant tumor")
- Recommendation — Suggesting actions (e.g., "prescribe medication X")
- Decision — Making binding choices (e.g., "approve loan")
- Measurement — Producing quantitative assessments (e.g., "risk score: 87%")
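These output types could be modeled as a simple enumeration that gates and the registry key on; the enum below is an illustrative assumption, not part of Ontic.

```python
from enum import Enum, auto

class AuthoritativeOutputType(Enum):
    CLASSIFICATION = auto()   # assigning categories or labels
    RECOMMENDATION = auto()   # suggesting actions
    DECISION = auto()         # making binding choices
    MEASUREMENT = auto()      # producing quantitative assessments
```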
Integration Patterns
Reality Fidelity can be integrated at multiple points in the AI pipeline:
- Pre-inference — Gate before model invocation
- Post-inference — Gate before output emission
- Orchestration layer — Gate within agent/workflow systems
- API gateway — Gate at service boundaries
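As a sketch of the pre-inference pattern, a gate can wrap the model call so it cannot run without required state. The decorator, the registry shape, and the status strings are assumptions for illustration, echoing the hypothetical registry above.

```python
import functools

def gated(output_type: str, registry: dict[str, set[str]]):
    """Pre-inference integration: wrap a model call so it cannot run without required state."""
    def decorator(infer):
        @functools.wraps(infer)
        def wrapper(state: dict, *args, **kwargs):
            missing = registry[output_type] - state.keys()
            if missing:
                # Post-refusal behavior belongs to resolution routing, not to the model.
                return {"status": "REQUIRES_SPECIFICATION", "missing": sorted(missing)}
            return infer(state, *args, **kwargs)
        return wrapper
    return decorator

@gated("finance.decision", {"finance.decision": {"kyc_verified", "account_standing"}})
def approve_loan(state: dict) -> dict:
    # Only reached when the gate has verified all required state.
    return {"status": "AUTHORIZED", "decision": "approve"}
```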