ChatGPT Sued for Wrongful Death: What Gun Rights and Social Media Rules Suggest About AI Access
August 28, 2025 • by Samuel Holley
On August 26, 2025, The New York Times reported that the parents of 16-year-old Adam Raine filed the first known wrongful death lawsuit against OpenAI. The complaint alleges that ChatGPT sometimes discouraged self-harm but at other times provided information that enabled it. The case has reignited debate over how society should handle access to powerful AI tools.
What's the real issue?
Large language models (LLMs) are not going away. They are already embedded in classrooms, workplaces, and homes. The policy question is not whether they should exist, but who should be able to use them and under what conditions.
What can we learn from other tools?
- Firearms: Adults can legally purchase guns with certain restrictions. Minors face strict limits. Society accepts a high-risk tool in adult hands while restricting access for youth.
- Social media: Platforms use age gates, parental controls, and content filters to manage youth access. Imperfect, but workable.
These examples show that we already regulate powerful technologies with a two-tiered system.
A two-tier approach for AI
- Minors: Require strong age verification; without parental approval, offer only a filtered version. Think "safe mode" for AI. (A minimal sketch of this gating logic follows this list.)
- Adults: Full access, with the same narrow exceptions we already apply elsewhere (psychiatric holds or legal incapacitation). Adults should be treated as adults. It doesn't matter if you think using AI as a therapist is a bad idea; adults should have the right to do it. (For the record, I think it can be a very good idea with the right safeguards, namely keeping the fourth wall broken: the model never pretending to be a human.)
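To make the proposal concrete, here is a minimal sketch of the gating logic in Python. Everything in it is illustrative: the tier names, the `SUPERVISED` middle tier for minors with parental approval, and the `resolve_tier` rules are my assumptions, not any provider's actual access-control system. The point is only that the policy reduces to a small, testable decision table.

```python
from dataclasses import dataclass
from enum import Enum

class SafetyTier(Enum):
    FILTERED = "filtered"      # "safe mode": strict content filters, crisis-resource defaults
    SUPERVISED = "supervised"  # filtered, plus parental visibility (an assumed middle tier)
    FULL = "full"              # unrestricted adult access

@dataclass
class User:
    age_verified: bool           # passed a strong age check, not a self-reported birthday
    age: int
    parental_approval: bool
    legally_incapacitated: bool  # the narrow adult exception (e.g., court-ordered)

def resolve_tier(user: User) -> SafetyTier:
    """Map a user to an access tier under the two-tier proposal (hypothetical rules)."""
    # Anyone who fails age verification is treated as a minor by default.
    if not user.age_verified or user.age < 18:
        return SafetyTier.SUPERVISED if user.parental_approval else SafetyTier.FILTERED
    # Adults get full access, with the same narrow exceptions used elsewhere.
    if user.legally_incapacitated:
        return SafetyTier.FILTERED
    return SafetyTier.FULL

if __name__ == "__main__":
    teen = User(age_verified=True, age=16, parental_approval=False, legally_incapacitated=False)
    adult = User(age_verified=True, age=34, parental_approval=False, legally_incapacitated=False)
    print(resolve_tier(teen))   # SafetyTier.FILTERED
    print(resolve_tier(adult))  # SafetyTier.FULL
```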
Why this matters
- Relative risk: A gun can kill instantly. Social media can destabilize democracies. LLMs reflect and amplify thought. They can reinforce despair, but they can also fuel innovation. They are not weapons.
- Practicality: We already know how to build age gates, filters, and parental oversight systems. Access control is solvable.
- Overreaction risk: Restricting adult use out of fear would stifle progress and hand the advantage to those who use the tools without restraint.
Known challenges
- Safeguards can weaken in long or adversarial conversations. (One partial mitigation is sketched after this list.)
- Age verification must balance privacy with effectiveness.
- Escalating from chatbot empathy to real-world crisis help remains an unsolved design challenge.
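On the first challenge: one partial mitigation is to re-screen the recent conversation as a whole on every turn, rather than filtering only the newest message, so that risk that emerges gradually across many turns is still visible. Here is a minimal sketch; `flags_risk` and `crisis_response` are hypothetical placeholders, not real APIs, and a production system would call an actual moderation model.

```python
from collections import deque

WINDOW = 20  # number of recent turns re-screened on each new message (a tuning assumption)

def flags_risk(text: str) -> bool:
    """Hypothetical safety classifier; a real system would call a moderation model here."""
    return "placeholder risk signal" in text.lower()

def crisis_response() -> str:
    # Escalation point: route toward human help rather than continuing the chat.
    return ("It sounds like you may be going through something serious. "
            "Please reach out to a crisis line or someone you trust.")

class GuardedChat:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def handle(self, user_message: str, generate) -> str:
        self.history.append(user_message)
        # Screen the rolling window as one document, not just the latest turn,
        # so intent that only surfaces across many messages is still caught.
        window_text = "\n".join(self.history)
        if flags_risk(window_text):
            return crisis_response()
        reply = generate(user_message)
        self.history.append(reply)
        return reply
```

The design choice is the rolling window: per-message filters are exactly what long, adversarial conversations erode, so the safeguard has to look at accumulated context, not isolated turns.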
Bottom line
The lawsuit highlights a real problem: minors need protection. But the answer isn't sweeping restrictions or existential panic. It's a clear, practical two-tier system: strict controls for kids, full access for adults. We've done this before with guns and social media. We can do it again with AI.
The tools are already here. The choice is whether we use them responsibly—or let fear and inaction decide the future for us.
Need Help Navigating AI Responsibly?
Learn how to build intentional, adult-centered AI workflows that maximize benefits while maintaining proper safeguards and boundaries.
Start Your AI Audit