AI Is Entering the Therapy Room — But Regulation Hasn't Arrived Yet
- APA's 2025 Practitioner Pulse Survey shows rising adoption of AI tools among psychologists for note-taking, treatment planning, and practice management
- Therabot (Dartmouth), a fully generative AI chatbot, produced significant symptom improvements in a clinical trial spanning major depressive disorder (MDD), generalised anxiety disorder (GAD), and eating disorder risk
- The FDA's Digital Health Advisory Committee convened in late 2025 to discuss regulatory frameworks for patient-facing mental health AI, but no binding guidance has emerged
- Practitioners report persistent concerns about patient privacy, clinical accuracy, and ethical boundaries of AI in therapeutic contexts
AI is no longer a hypothetical in mental health practice. Psychologists are using it for clinical documentation, treatment planning, and administrative tasks. A fully generative chatbot has now shown symptom improvement in a controlled trial. Yet the regulatory framework remains a grey zone — and the gap between adoption speed and oversight speed is widening.
The adoption curve
The APA's 2025 Practitioner Pulse Survey captures a profession in transition. AI adoption for clinical documentation — session notes, treatment plans, progress summaries — has moved from novelty to routine for early adopters. The appeal is straightforward: these tasks consume 30–40% of a clinician's working hours, and AI handles them faster.
But the survey also reveals a sharp split. Clinicians using AI for back-office tasks (scheduling, billing, note formatting) report high satisfaction. Those considering patient-facing applications express deep reservations. The concern is not abstract: what happens when a depressed patient interacts with a chatbot that generates a clinically inappropriate response at 2 AM?
The Therabot signal
The first clinical trial results for Therabot, a fully generative AI chatbot developed at Dartmouth, showed significant symptom improvements for major depressive disorder, generalised anxiety disorder, and eating disorder risk. This is not a rule-based system following a decision tree. It is a large language model generating therapeutic responses in real time.
The results demand attention, but also caution. A symptom improvement signal in a controlled trial is not the same as clinical safety at scale. The chatbot operated under research conditions with oversight. The unregulated market offers no such guardrails.
The regulatory vacuum
The FDA's Digital Health Advisory Committee met in late 2025 to discuss patient-facing mental health AI. No binding guidance emerged. Most AI mental health tools currently fall outside existing FDA and FTC oversight frameworks — they are not medical devices (no diagnosis, no treatment), not drugs, not therapy. They exist in a regulatory gap.
For practitioners, the practical question is not whether AI will enter their practice; it is already entering. The question is: which tools, under what conditions, with what liability? The answers do not exist yet.
AI has moved from hypothetical to routine in clinical practice, but binding regulation has yet to arrive, leaving practitioners to navigate adoption without guardrails.
The APA survey reflects US-centric practice patterns; AI adoption and regulation vary significantly across jurisdictions. Therabot trial details (sample size, control conditions) were not fully disclosed in the APA Monitor article.