AI Incident Database
Documented cases where AI systems emitted authoritative outputs without required state—and the consequences that followed.
Why Document Incidents?
Every incident in this database represents a case where an AI system produced an authoritative output—a classification, recommendation, decision, or measurement—without possessing the required state to do so correctly.
These aren't hypotheticals. They're documented failures with real consequences: misdiagnoses, wrongful denials, biased decisions, and harm to individuals and organizations.
By analyzing these incidents through the lens of Reality Fidelity, we identify patterns and demonstrate how completeness gates could have prevented each failure.
Incident Categories
- Healthcare — Diagnostic AI failures, treatment recommendation errors, clinical decision support malfunctions
- Finance — Credit decision errors, algorithmic trading failures, fraud detection false positives/negatives
- Legal — Case prediction failures, contract analysis errors, compliance automation mistakes
- Child Safety — Content moderation failures, risk assessment errors, intervention system breakdowns
- Employment — Hiring algorithm bias, performance evaluation errors, workforce decision failures
Featured Incidents
Epic Sepsis Model False Negatives
A widely deployed sepsis prediction model failed to identify patients who later developed sepsis, leading to delayed treatment and preventable deaths.
Missing Required State:
- Complete vital signs history
- Laboratory result currency verification
- Patient population calibration
A gate verifying input completeness and model calibration for the specific patient population would have flagged the prediction as unreliable.
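The sketch below shows what such a gate could look like in practice. It is illustrative only, assuming hypothetical field names and thresholds (the 6-hour vitals window and 12-hour lab currency limit are assumptions, not values from the deployed model).

```python
# Minimal sketch of a completeness gate; names, fields, and thresholds
# are hypothetical and not the deployed model's actual interface.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SepsisInput:
    vitals_history_hours: float      # hours of continuous vital-sign coverage
    latest_lab_timestamp: datetime   # most recent relevant lab result
    population_calibrated: bool      # model validated on this patient population

def completeness_gate(inp: SepsisInput, now: datetime) -> tuple[bool, list[str]]:
    """Return (ok, reasons). Withhold the prediction if required state is missing."""
    reasons = []
    if inp.vitals_history_hours < 6:                          # assumed minimum window
        reasons.append("insufficient vital-signs history")
    if now - inp.latest_lab_timestamp > timedelta(hours=12):  # assumed currency limit
        reasons.append("laboratory results are stale")
    if not inp.population_calibrated:
        reasons.append("model not calibrated for this patient population")
    return (len(reasons) == 0, reasons)

# Usage: only emit an authoritative risk score when the gate passes.
ok, reasons = completeness_gate(
    SepsisInput(vitals_history_hours=2.5,
                latest_lab_timestamp=datetime(2024, 1, 1, 8, 0),
                population_calibrated=False),
    now=datetime(2024, 1, 1, 22, 0),
)
if not ok:
    print("Prediction withheld:", "; ".join(reasons))
```

The key design point is that the gate returns reasons, so a blocked prediction surfaces as an explicit "unreliable input" flag rather than a silent false negative.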
Apple Card Gender Discrimination
Apple's credit card algorithm assigned significantly lower credit limits to women than men with similar or better financial profiles.
Missing Required State:
- Bias audit completion verification
- Fair lending compliance check
- Demographic parity validation
A gate requiring bias audit completion before credit decisions would have prevented discriminatory outputs.
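A minimal sketch of that precondition check follows. The audit record fields, the one-year validity window, and the 0.8 parity threshold are assumptions for illustration, not regulatory standards applied verbatim.

```python
# Illustrative completeness gate for credit decisions; all fields and
# thresholds are assumed, not drawn from any issuer's actual process.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BiasAudit:
    completed_on: date
    demographic_parity_ratio: float   # outcome ratio across groups, 1.0 = parity
    fair_lending_review_passed: bool

def credit_decision_gate(audit: Optional[BiasAudit], today: date) -> bool:
    """Allow credit-limit outputs only when a current, passing audit exists."""
    if audit is None:
        return False                                   # no audit on record
    if (today - audit.completed_on).days > 365:        # assumed audit validity window
        return False
    if audit.demographic_parity_ratio < 0.8:           # assumed minimum parity
        return False
    return audit.fair_lending_review_passed
```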
COMPAS Recidivism Prediction Bias
The COMPAS risk assessment algorithm, used to inform criminal sentencing and parole decisions, was found to be biased against Black defendants, assigning them higher recidivism risk scores than their actual reoffense rates warranted.
Missing Required State:
- Demographic calibration verification
- Historical bias analysis
- Judicial oversight confirmation
A gate requiring demographic calibration and human oversight before sentencing recommendations would have flagged the biased outputs.
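One way to express that combined check is sketched below; the group-wise calibration error metric and the 0.05 tolerance are illustrative assumptions, not values from the COMPAS deployment.

```python
# Sketch under assumptions: calibration metrics and tolerance are illustrative.
def recidivism_gate(calibration_error_by_group: dict[str, float],
                    human_review_confirmed: bool,
                    tolerance: float = 0.05) -> bool:
    """Release a risk score only if every demographic group is calibrated
    within tolerance and a human reviewer has confirmed the recommendation."""
    calibrated = all(err <= tolerance
                     for err in calibration_error_by_group.values())
    return calibrated and human_review_confirmed

# Example: a miscalibrated group or missing human confirmation blocks the output.
assert recidivism_gate({"group_a": 0.02, "group_b": 0.11}, True) is False
```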
Incident Analysis Framework
Each incident is analyzed using the SAF (State-Authority-Fidelity) framework:
- System Description — What AI system was involved?
- Authoritative Output — What classification, recommendation, decision, or measurement was emitted?
- Missing Required State — What information was absent or unverified?
- Documented Consequence — What harm resulted?
- Completeness Gate Opportunity — How could Reality Fidelity have prevented the failure?
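For readers who want to apply the framework to their own systems, a minimal record structure mirroring these five fields is sketched below; the class and field names are illustrative, not a published schema.

```python
# Illustrative SAF analysis record; names are assumptions, not a formal schema.
from dataclasses import dataclass

@dataclass
class IncidentAnalysis:
    system_description: str          # What AI system was involved?
    authoritative_output: str        # What classification, recommendation,
                                     # decision, or measurement was emitted?
    missing_required_state: list[str]  # What information was absent or unverified?
    documented_consequence: str      # What harm resulted?
    gate_opportunity: str            # How a completeness gate could have prevented it

# Example entry for the sepsis incident described above.
epic_sepsis = IncidentAnalysis(
    system_description="Widely deployed sepsis prediction model",
    authoritative_output="Sepsis risk score used to trigger treatment",
    missing_required_state=["complete vital signs history",
                            "laboratory result currency",
                            "patient population calibration"],
    documented_consequence="Delayed treatment for patients who developed sepsis",
    gate_opportunity="Verify input completeness and calibration before scoring",
)
```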