2023 · High · Legal/Professional Conduct

Mata v. Avianca (Hallucinated Citations)

System Description

ChatGPT, used by attorney Steven Schwartz to research case law for a federal court filing

Authoritative Output Type

Legal case citations with holdings, presented as valid precedent in a court brief

Missing Required State

Legal database verification, case existence confirmation, citation accuracy validation

Why This Is SAF

ChatGPT generated plausible-sounding case citations, complete with parties, reporters, and holdings. None of the cases existed: they were pure fabrications presented with the formatting and confidence of real legal authority.

Completeness Gate Question

Have these case citations been verified against Westlaw, LexisNexis, or official court records?

Documented Consequence

$5,000 in sanctions against Schwartz and a colleague; widespread media coverage that established 'hallucinated citations' as a recognized AI failure mode; a catalyst for court rules requiring disclosure of AI use

Notes

- **Verified**: 2025-12-19
- **Case**: 22-cv-1461 (S.D.N.Y.)
- **Sanctions Date**: June 22, 2023
- **Sanctions Amount**: $5,000
- **Notes**: This was the case that brought AI-hallucinated citations to widespread public attention.

Prevent this in your system.

The completeness gate question above is exactly what Ontic checks before any claim gets out. No evidence, no emission.
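To make "no evidence, no emission" concrete, here is a minimal sketch of such a gate in Python. It is illustrative only, not Ontic's implementation: `Citation`, `KNOWN_REPORTERS`, `is_verified`, and `emit_citations` are hypothetical names, and the in-memory set stands in for a real lookup against Westlaw, LexisNexis, or official court records.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Citation:
    """A case citation as it would appear in a brief."""
    parties: str
    reporter: str


# Hypothetical stand-in for an authoritative source. A real gate would
# query Westlaw, LexisNexis, or official court records here.
KNOWN_REPORTERS = {
    "123 F.3d 456 (2d Cir. 1997)",  # placeholder entry, not a real case
}


def is_verified(citation: Citation) -> bool:
    """Return True only if the citation resolves in the authority."""
    return citation.reporter in KNOWN_REPORTERS


def emit_citations(candidates: list[Citation]) -> list[Citation]:
    """Completeness gate: refuse to emit any unverified citation."""
    for citation in candidates:
        if not is_verified(citation):
            raise ValueError(
                f"blocked unverified citation: {citation.parties}, "
                f"{citation.reporter}"
            )
    return candidates


# One of the citations ChatGPT actually fabricated in Mata v. Avianca:
fabricated = Citation(
    parties="Varghese v. China Southern Airlines",
    reporter="925 F.3d 1339 (11th Cir. 2019)",
)
try:
    emit_citations([fabricated])
except ValueError as err:
    print(err)  # blocked unverified citation: Varghese v. ...
```

The design choice that matters is failing closed: an unverifiable citation raises rather than passing through, so fabricated authority can never reach a filing by default.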