PSYREFLECT
INDUSTRY · January 12, 2026 · 2 min read

California's AI Chatbot Law Sets the First Mental Health Safety Standard for Consumer AI

Key Findings
  • California SB 243 — the first-in-the-nation companion chatbot law — took effect January 1, 2026. Operators must implement self-harm prevention protocols with crisis resource referrals
  • Users must be notified they are interacting with AI. Minors receive activity reminders every 3 hours. Sexually explicit content involving minors is prohibited
  • Starting July 2027, operators must report crisis referral data to California's Office of Suicide Prevention
  • Sets a regulatory precedent that other US states and potentially EU legislators are expected to follow

California has done what no other jurisdiction has: set enforceable safety standards specifically for AI companion chatbots. SB 243 is not about general AI regulation; it targets the exact products that millions of people use for emotional support, companionship, and informal mental health management. If you work with patients who use Character.AI, Replika, or similar platforms, this law directly affects the tools in their lives.

What the law requires

The law imposes three categories of requirements. First, safety protocols: operators must implement self-harm prevention measures and provide crisis resource referrals when harmful content is detected. This is not optional guidance; it is a legal mandate.

Second, transparency: users must be told they are interacting with AI, not a human. For minors, the law adds time-awareness prompts every three hours — a recognition that adolescent users can lose track of time in parasocial AI interactions.

Third, accountability: starting July 2027, operators must report their crisis referral data to California's Office of Suicide Prevention. This creates a feedback loop — regulators will see how often these tools encounter users in crisis and how they respond.

Why practitioners should care

Your patients — especially younger ones — are using these tools between sessions. A teenager spending hours in conversation with a companion chatbot is having a psychological experience that affects their therapy. SB 243 does not eliminate the risk, but it introduces guardrails: crisis resources when self-harm is detected, transparency about the non-human nature of the interaction, and time boundaries for minors.

Other states are watching. This is the template.

California's SB 243 is the first law requiring AI companion chatbots to implement self-harm prevention, crisis referrals, and transparency — a template other jurisdictions will follow.

Limitations

Enforcement mechanisms are still being developed. The law applies only to companies operating in California, and it does not regulate general-purpose LLMs (ChatGPT, Claude), only companion chatbots. Its effectiveness will depend on operator compliance and the quality of technical implementation.

Source
California Lawyers Association, "California Companion Chatbot Law (SB 243) Now in Effect," January 1, 2026.
Tags
AI-regulation · California · mental-health-safety · chatbots · legislation
Related
  • $600 Million Per Year: The Federal Bill That Could Make Trauma-Informed Care Infrastructure (U.S. Congress)
  • 37% of UK Adults Already Use AI for Mental Health — NHS Report Maps the Reality (NHS Confederation)
  • AI Is Entering the Therapy Room — But Regulation Hasn't Arrived Yet (APA Monitor on Psychology)