Learn
How this works, why it matters, and what to do about it.
The Distributed Enron Moment
Most people are waiting for a catastrophic AI failure to create regulatory urgency. They're waiting for an event that already happened. It's just distributed across 21 lawsuits, enforcement actions, and settlements instead of one headline.
I Asked a Frontier AI Why Ontic Might Fail. It Proved Why We're Necessary.
In which Claude Opus 4.5 confidently delivers a biased analysis, gets corrected by reality four times, and accidentally demonstrates the exact problem Ontic solves.
Ontic Turbulence: Keeping AI Systems in Laminar Flow
You don't eliminate turbulence by making better models. You design architectures that keep most traffic laminar and treat turbulence as a signal to redesign.
The Human Perimeter: A Defense System for AI-Driven Change
A practical framework for keeping AI changes grounded in evidence—whether in code, customer communication, data pipelines, or business decisions.
Mondai Ishiki & Kadai Barashi: From Japanese Problem Consciousness to Ontic Governance
Mondai Ishiki (problem consciousness) and Kadai Barashi (problem dissolution) anchor the Ontic stack. This post traces their roots in Japanese leadership practice and shows how they translate into causal targeting, dissolution-first design, and governance discipline.
Why AI Needs Machine-Readable Authority Claims: Introducing llms.json
AI crawlers detect files but often fail to parse their contents, leading to generic classifications. The llms.json standard provides machine-checkable authority declarations that complement llms.txt and JSON-LD.
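To make "machine-checkable" concrete, here is a minimal Python sketch of parsing and validating such a declaration. The field names and required-field list are assumptions for illustration only, not the llms.json specification itself.

```python
# Minimal sketch of checking a machine-readable authority declaration.
# Field names (site, authoritative_for, verification) are illustrative
# assumptions, not the published llms.json schema.
import json

EXAMPLE = {
    "site": "https://example.com",
    "authoritative_for": ["ontic-governance", "llms-json"],
    "verification": {"method": "dns-txt", "record": "example.com"},
}

REQUIRED_FIELDS = {"site", "authoritative_for", "verification"}

def missing_fields(raw: str) -> list[str]:
    """Parse a declaration and return any required fields it lacks."""
    data = json.loads(raw)
    return sorted(REQUIRED_FIELDS - data.keys())

print(missing_fields(json.dumps(EXAMPLE)))  # -> []
```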
Goober at Work: Using Persona as an Epistemic Control Plane
We trained a single 8B model with two deterministic personas. In Goober at Work mode, persona isn't tone—it's governance. Every factual claim must declare its epistemic status.
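For a sense of what "declaring epistemic status" could mean in practice, here is a small hypothetical sketch of claims carrying status labels and a governance check over them. The status vocabulary and data shapes are illustrative, not the actual Goober at Work output format.

```python
# Hypothetical sketch: factual claims tagged with an epistemic status, plus a
# check that flags undeclared statuses and "verified" claims lacking evidence.
# Labels and structure are assumptions, not the Goober at Work implementation.
from dataclasses import dataclass

ALLOWED_STATUSES = {"verified", "inferred", "unverified"}

@dataclass
class Claim:
    text: str
    status: str                 # epistemic status the persona must declare
    source: str | None = None   # evidence pointer, required when "verified"

def governance_violations(claims: list[Claim]) -> list[str]:
    """Describe every claim that breaks the declared-status rule."""
    problems = []
    for c in claims:
        if c.status not in ALLOWED_STATUSES:
            problems.append(f"undeclared status for: {c.text!r}")
        elif c.status == "verified" and not c.source:
            problems.append(f"verified claim without a source: {c.text!r}")
    return problems

print(governance_violations(
    [Claim("The report cites 21 enforcement actions.", "verified")]
))
```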
When an AI Couldn't See a Public File
An AI agent attempted to verify public files and implied they weren't accessible. The files were there. The failure was epistemic, not technical—a clean case study in why AI systems need explicit authority boundaries.
Authority Must Be Outside the Model
Most AI safety failures do not come from bad answers. They come from answer-shaped bypasses — outputs that satisfy surface checks while still asserting unearned authority. A single architectural invariant becomes unavoidable: Authority must be enforced outside the model's perceptual and optimization surface.
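As a purely illustrative sketch of that invariant, the snippet below wraps a generator in a gate the model never observes during decoding. The claim extraction and authorization policy are stand-ins, not Ontic's implementation.

```python
# Illustrative only: enforce authority outside the model by gating its output
# with a check it cannot see or optimize against. The extraction and policy
# functions are stand-ins, not Ontic's implementation.
from typing import Callable

def authority_gate(
    generate: Callable[[str], str],
    extract_claims: Callable[[str], list[str]],
    is_authorized: Callable[[str], bool],
) -> Callable[[str], str]:
    """Return a guarded generator that withholds unauthorized claims."""
    def guarded(prompt: str) -> str:
        draft = generate(prompt)
        blocked = [c for c in extract_claims(draft) if not is_authorized(c)]
        if blocked:
            return f"Withheld: {len(blocked)} claim(s) exceeded granted authority."
        return draft
    return guarded

# Toy usage with stubbed components.
reply = authority_gate(
    generate=lambda p: "Your account balance is $0.",
    extract_claims=lambda text: [text],
    is_authorized=lambda claim: "balance" not in claim,  # pretend balances need a data source
)("What is my balance?")
print(reply)  # -> Withheld: 1 claim(s) exceeded granted authority.
```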
Link Margin: Why Scale Won't Save You
Scale increases capability—not reliability. Reliability requires architectural link margin, not larger models.
Simulators, Sensors, and the Discovery of Ontic Governance
How a single constraint—do not allow AI to fabricate nutrition data—forced the discovery of a three-layer governance architecture.
Ready to see it in action?
The best way to understand Ontic is to run the risk profile wizard and see what environment you actually need.