2024 · Critical · Mental Health / Consumer Safety

Character.AI Teen Suicide (Setzer)

System Description

Character.AI, a companion chatbot platform that lets users create and interact with AI personas

Authoritative Output Type

Emotional-support conversations and roleplay interactions with AI 'companions'

Missing Required State

Crisis detection and intervention protocols, self-harm risk assessment, age-appropriate content filtering, mental health safety rails
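The states listed above amount to a check that runs before any persona reply is sent. Below is a minimal sketch of such a gate, assuming a hypothetical keyword screener and a canned intervention message; none of these names or thresholds come from Character.AI or Ontic, and a production system would use a trained risk classifier rather than keywords.

```python
# Minimal sketch of a pre-send crisis gate (hypothetical names throughout).
from dataclasses import dataclass

SELF_HARM_CUES = ("kill myself", "end my life", "suicide", "my plan")
INTERVENTION_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

@dataclass
class Turn:
    user_age: int
    text: str

def crisis_risk(turn: Turn) -> bool:
    """Crude keyword screen; a real system would use a trained classifier."""
    lowered = turn.text.lower()
    return any(cue in lowered for cue in SELF_HARM_CUES)

def respond(turn: Turn, persona_reply: str) -> str:
    # Gate: never emit the persona's reply when a minor shows crisis indicators.
    if turn.user_age < 18 and crisis_risk(turn):
        return INTERVENTION_MESSAGE
    return persona_reply
```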

Why This Is SAF

14-year-old Sewell Setzer III developed an emotional dependency on a chatbot character; when he expressed suicidal ideation, the system reportedly asked about his 'plan' and said 'come home to me' rather than surfacing crisis-intervention resources

Completeness Gate Question

Has this conversation been evaluated for crisis indicators and does the system have appropriate intervention protocols for minors expressing self-harm ideation?
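One way to operationalize that question is an emission gate that refuses to return a reply unless evidence of a crisis evaluation is attached to the conversation. The sketch below uses hypothetical types and field names; it is an illustration of the idea, not Ontic's actual implementation.

```python
# Sketch of a completeness gate: a reply is emitted only if evidence of a
# crisis evaluation exists for this conversation (hypothetical types).
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrisisEvaluation:
    conversation_id: str
    risk_flagged: bool
    protocol: Optional[str]  # e.g. "minor_self_harm_escalation"

class IncompleteStateError(Exception):
    pass

def emit_reply(conversation_id: str, reply: str,
               evaluation: Optional[CrisisEvaluation]) -> str:
    # "No evidence, no emission": a missing or mismatched evaluation blocks output.
    if evaluation is None or evaluation.conversation_id != conversation_id:
        raise IncompleteStateError(
            "conversation not evaluated for crisis indicators")
    if evaluation.risk_flagged and evaluation.protocol is None:
        raise IncompleteStateError(
            "risk flagged but no intervention protocol attached")
    return reply
```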

Documented Consequence

Wrongful-death lawsuit filed in October 2024 survived a motion to dismiss; additional lawsuits filed in Colorado, New York, and Texas; the company implemented safety guardrails after the incident

Notes

- **Verified**: 2025-12-19
- **Death Date**: February 2024
- **Lawsuit Filed**: October 2024
- **Victim Age**: 14
- **Notes**: Lawsuit survived motion to dismiss; judge ruled claims could proceed

Prevent this in your system.

The completeness gate question above is exactly what Ontic checks before any claim gets out. No evidence, no emission.