Authority enforcement for AI systems

Ontic SDK

Simulators propose. Reality vetoes.

Ontic is not a chatbot, firewall, or prompt wrapper. It is a runtime governance layer that sits inside your application and decides whether an AI output is permitted to become authoritative.

The Authority Layer for AI

Reality Fidelity Infrastructure

Ontic is a control-plane SDK that prevents AI systems from emitting unauthorized authoritative output. It enforces completeness, provenance, and refusal semantics inside your application runtime.

What Ontic Is

  • A runtime SDK embedded directly into your application
  • A deterministic gate between AI output and system action
  • A control plane for authority, not content
  • Not a hosted AI service
  • Not a RAG replacement
  • Not a prompt or policy wrapper

Ontic decides whether an answer is allowed — not how the model generates it.

What Ontic Does

Ontic enforces Reality Fidelity at the moment an AI output would otherwise be committed.

Completeness Gate

Blocks authoritative output until all required state is explicitly specified.

Identity Triage

Determines which real-world entity is being referenced and what dimensions of reality apply.
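Below is a minimal sketch of how identity triage could surface through the same authorize call used in the code example further down; the entity value, the state fields, and the candidates field on the response are illustrative assumptions, not documented API.

import { Ontic } from '@ontic/sdk';

const ontic = new Ontic({ domain: 'healthcare' });

// A reference that could map to more than one real-world entity
const triage = await ontic.authorize({
  entity: 'medication',
  state: { name: 'Tylenol' } // brand name; multiple formulations exist
});

// Hypothetical envelope when several mappings remain valid:
// { status: 'AMBIGUOUS_MAPPING', candidates: ['acetaminophen_325mg', 'acetaminophen_500mg'] }
if (triage.status === 'AMBIGUOUS_MAPPING') {
  console.log('Ask the caller to choose:', triage.candidates);
}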

Provenance or Veto

Requires an external, auditable reference before measurements or classifications are emitted.
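A sketch of how a provenance requirement might look in the same call shape; the provenance field and its contents are assumptions used only to illustrate the idea of an external, auditable reference.

import { Ontic } from '@ontic/sdk';

const ontic = new Ontic({ domain: 'healthcare' });

// Measurement-bearing claims carry an external, auditable reference
const withSource = await ontic.authorize({
  entity: 'lab_reference_range',
  state: { analyte: 'potassium', unit: 'mmol/L' },
  provenance: { source: 'hospital_lab_catalog', recordId: 'K-2024-01' } // assumed field
});

// Hypothetical outcomes:
// with a verifiable reference  -> { status: 'AUTHORIZED', ... }
// without one                  -> { status: 'NARRATIVE_ONLY' } or an outright veto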

Refusal Semantics

Returns structured refusal states instead of silently guessing.

If any of these conditions is not met, Ontic refuses. That refusal is the product.

How Teams Use Ontic

1. Models generate proposals.
2. Ontic evaluates authority.
3. Only authorized outputs commit.

Without Ontic, an LLM answers. With Ontic, the system decides whether answering is allowed.
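A sketch of that loop in application code, assuming the authorize call shown in the example below; generateProposal, commitToRecord, and handleResolution are placeholders for your own model call, system action, and refusal handling, not part of the SDK.

import { Ontic } from '@ontic/sdk';

// Placeholders for your own model call and system action (not part of the SDK)
declare function generateProposal(query: string): Promise<{ entity: string; state: Record<string, unknown> }>;
declare function commitToRecord(proposal: unknown): Promise<void>;
declare function handleResolution(decision: unknown): void;

const ontic = new Ontic({ domain: 'healthcare' });

export async function answer(query: string) {
  // 1. The model proposes
  const proposal = await generateProposal(query);

  // 2. Ontic evaluates authority
  const decision = await ontic.authorize({ entity: proposal.entity, state: proposal.state });

  // 3. Only authorized outputs commit
  if (decision.status === 'AUTHORIZED') {
    await commitToRecord(proposal);
    return proposal;
  }

  // Anything else is a structured, inspectable resolution path, never a silent guess
  handleResolution(decision);
  return decision;
}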

Code Example: Completeness Gate

// Wrap your LLM output with Ontic's completeness gate
import { Ontic } from '@ontic/sdk';

const ontic = new Ontic({ domain: 'healthcare' });

const result = await ontic.authorize({
  entity: 'drug_interaction',
  state: {
    drug_a: 'aspirin',
    drug_b: 'warfarin',
    patient_weight: undefined // Missing!
  }
});

// Returns: { status: 'REQUIRES_SPECIFICATION', missing: ['patient_weight'] }
// The LLM proposed an answer — Ontic vetoed it.
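For contrast, a sketch of the same call once the missing field is supplied, reusing the ontic instance from the example above; the AUTHORIZED envelope in the closing comment is an illustrative assumption, not a documented response shape.

const complete = await ontic.authorize({
  entity: 'drug_interaction',
  state: {
    drug_a: 'aspirin',
    drug_b: 'warfarin',
    patient_weight: 72 // kilograms, now explicitly specified
  }
});

// Illustrative envelope once all required state is present:
// { status: 'AUTHORIZED', ... }
// Only now is the interaction answer allowed to commit downstream.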

Authorization Envelope

Every Ontic response is a discriminated union with explicit status:

Status                 | Meaning                                          | Action
AUTHORIZED             | All required state present, provenance verified  | Output may commit
REQUIRES_SPECIFICATION | Missing required state                           | Return missing fields to caller
AMBIGUOUS_MAPPING      | Multiple valid interpretations exist             | Present candidate options to caller
NARRATIVE_ONLY         | Can explain but not emit authoritative claims    | Surface explanation, block measurements
DISPUTE_SUMMARY        | Conflicting authoritative sources detected       | Surface conflict, do not collapse
REFUSAL                | Domain not authorized for AI decision            | Block output entirely

Ontic never guesses to escape refusal. Every status code is a governed resolution path. See the full Resolution Routing architecture →
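A sketch of what a caller-side resolution routine could look like, assuming the status strings above appear literally on the envelope; the per-status fields (missing, candidates, narrative, sources) are assumptions for illustration.

// Sketch: one governed resolution path per status (per-status fields are assumptions)
function routeDecision(decision: { status: string; [key: string]: unknown }) {
  switch (decision.status) {
    case 'AUTHORIZED':
      return { commit: true };                        // output may commit
    case 'REQUIRES_SPECIFICATION':
      return { askFor: decision.missing };            // return missing fields to the caller
    case 'AMBIGUOUS_MAPPING':
      return { chooseFrom: decision.candidates };     // present candidate options
    case 'NARRATIVE_ONLY':
      return { explanationOnly: decision.narrative }; // surface explanation, block measurements
    case 'DISPUTE_SUMMARY':
      return { conflict: decision.sources };          // surface the conflict, do not collapse it
    case 'REFUSAL':
    default:
      return { blocked: true };                       // block output entirely
  }
}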

Deployment Model

Enterprise Offering

Ontic is licensed as infrastructure software with optional implementation services.

You pay once to define reality. You avoid paying forever for silent errors.

Deploy Reality Fidelity

Talk to Ontic Labs about deploying authority enforcement in your AI systems.

Contact Enterprise
View Architecture →