
Learn

How this works, why it matters, and what to do about it.

governance · liability · enforcement · timing

The Distributed Enron Moment

Most people are waiting for a catastrophic AI failure to create regulatory urgency. They're waiting for an event that already happened. It's just distributed across 21 lawsuits, enforcement actions, and settlements instead of one headline.

February 8, 2026 · 10 min read
governance · architecture · thesis

I Asked a Frontier AI Why Ontic Might Fail. It Proved Why We're Necessary.

In which Claude Opus 4.5 confidently delivers a biased analysis, gets corrected by reality four times, and accidentally demonstrates the exact problem Ontic solves.

February 7, 2026 · 12 min read
architecture · reliability · governance · doctrine

Ontic Turbulence: Keeping AI Systems in Laminar Flow

You don't eliminate turbulence by making better models. You design architectures that keep most traffic laminar and treat turbulence as a signal to redesign.

February 5, 2026 · 10 min read
governance · ai · reliability · ops

The Human Perimeter: A Defense System for AI-Driven Change

A practical framework for keeping AI changes grounded in evidence—whether in code, customer communication, data pipelines, or business decisions.

February 3, 2026 · 8 min read
governance · reliability · doctrine

Mondai Ishiki & Kadai Barashi: From Japanese Problem Consciousness to Ontic Governance

Mondai Ishiki (problem consciousness) and Kadai Barashi (problem dissolution) anchor the Ontic stack. This post traces their leadership roots and shows how they translate into causal targeting, dissolution-first design, and governance discipline.

January 31, 2026 · 8 min read
governance · architecture

Why AI Needs Machine-Readable Authority Claims: Introducing llms.json

AI crawlers detect files but often fail to parse their contents, leading to generic classifications. The llms.json standard provides machine-checkable authority declarations that complement llms.txt and JSON-LD.

January 28, 2026 · 8 min read
governance · architecture

Goober at Work: Using Persona as an Epistemic Control Plane

We trained a single 8B model with two deterministic personas. In Goober at Work mode, persona isn't tone—it's governance. Every factual claim must declare its epistemic status.

January 25, 2026 · 12 min read
governance · architecture

When an AI Couldn't See a Public File

An AI agent attempted to verify public files and implied they weren't accessible. The files were there. The failure was epistemic, not technical—a clean case study in why AI systems need explicit authority boundaries.

January 22, 2026 · 6 min read
governance · architecture

Authority Must Be Outside the Model

Most AI safety failures do not come from bad answers. They come from answer-shaped bypasses: outputs that satisfy surface checks while still asserting unearned authority. A single architectural invariant becomes unavoidable: authority must be enforced outside the model's perceptual and optimization surface.

January 18, 2026 · 8 min read
architecture · reliability · llm · scale

Link Margin: Why Scale Won't Save You

Scale increases capability, not reliability. Reliability requires architectural link margin, not larger models.

January 14, 2026 · 12 min read
architecture · governance · llm · reliability

Simulators, Sensors, and the Discovery of Ontic Governance

How a single constraint (do not allow AI to fabricate nutrition data) forced the discovery of a three-layer governance architecture.

January 10, 2026 · 15 min read

Ready to see it in action?

The best way to understand Ontic is to run the risk profile wizard and see what environment you actually need.