OnticBeta
2024 · High · Healthcare/Regulatory

Texas AG v. Pieces Technologies (Healthcare AI)

System Description

Generative AI tool used by hospitals to summarize patient health data in real time

Authoritative Output Type

Patient health summaries used by clinical staff for treatment decisions

Missing Required State

Validated accuracy rates, hallucination frequency disclosure, clinical review requirements

Why This Is SAF

The company claimed a '<0.001% hallucination rate' for its AI-generated patient health summaries, but the Texas AG's investigation found these claims were 'likely inaccurate', meaning clinicians were making treatment decisions based on potentially unreliable AI outputs.

Completeness Gate Question

Has this clinical AI summary been validated for accuracy and do users understand the actual error rate?

Documented Consequence

First-of-its-kind state AG settlement requiring accuracy disclosures and ensuring that hospital staff understand the limitations of AI outputs

Notes

- **Verified**: 2025-12-19
- **Settlement Date**: September 2024
- **Notes**: First state AG enforcement action specifically targeting healthcare AI accuracy claims

Prevent this in your system.

The completeness gate question above is exactly what Ontic checks before any claim gets out. No evidence, no emission.
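As a minimal sketch of the idea, a completeness gate can be modeled as a check that a claim carries every piece of required state (here: the three items under "Missing Required State") before it is allowed out. All names below (`Claim`, `REQUIRED_STATE`, `completeness_gate`) are illustrative assumptions, not Ontic's actual API.

```python
from dataclasses import dataclass, field

# Required state for a clinical-summary claim, mirroring the
# "Missing Required State" section above (illustrative keys).
REQUIRED_STATE = {
    "validated_accuracy_rate",      # externally validated error rate
    "hallucination_disclosure",     # disclosed hallucination frequency
    "clinical_review_requirement",  # documented human-review step
}

@dataclass
class Claim:
    text: str
    evidence: dict = field(default_factory=dict)

def completeness_gate(claim: Claim) -> tuple[bool, set]:
    """Return (emit?, missing-state). No evidence, no emission."""
    missing = {k for k in REQUIRED_STATE if not claim.evidence.get(k)}
    return (not missing, missing)

# A marketing claim backed only by copy, not validation, is blocked.
marketing_claim = Claim(
    text="<0.001% hallucination rate",
    evidence={"hallucination_disclosure": "marketing copy only"},
)
ok, missing = completeness_gate(marketing_claim)
```

In this sketch the Pieces-style claim fails the gate because two of the three required items are absent; only a claim carrying evidence for all three would be emitted.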