APA Draws the Line: Ethical Guidance for AI in Clinical Practice
- APA released its first formal ethical guidance for AI in clinical psychology practice (June 2025) — not a ban, but a framework of obligations for psychologists using AI tools
- Core principle: AI must augment, never replace, human clinical decision-making — psychologists remain responsible for all final decisions regardless of AI recommendations
- Informed consent now explicitly includes AI: if using AI scribes, note generators, or treatment planning tools, patients must be told before the session, not after
- Data sovereignty warning: cloud-based AI platforms present unresolved privacy risks even when HIPAA-compliant, and psychologists do not control data once it leaves their systems; APA recommends local processing where possible
The APA does not move fast. When it publishes formal ethical guidance, the signal is clear: this is no longer theoretical. AI has entered enough therapy offices that the profession's governing body felt compelled to define what ethical use looks like — and what it does not.
The augmentation principle
The document's anchor is simple: AI augments, it does not replace. The psychologist remains the decision-maker. An AI that drafts a treatment plan produces a draft, not a plan. An AI that transcribes a session produces a transcript, not clinical notes. What separates the two is professional responsibility, and professional responsibility is not transferable to software.
This principle has teeth. If an AI-generated note contains an error that affects treatment, the psychologist is accountable, not the software vendor. Using AI does not dilute professional liability. It may increase it, because the clinician now bears responsibility for verifying AI output in addition to exercising their own clinical judgment.
The informed consent expansion
The guidance makes explicit what was previously ambiguous: patients have a right to know when AI is present in their care. Not in general terms ("we use technology") but specifically: "An AI system will transcribe this session." "An AI tool will generate a preliminary progress note that I will review." The consent must be obtained in advance and documented.
This requirement will change workflows. Clinicians who quietly adopted AI scribes, introducing them without modifying consent forms, now face a clear obligation to disclose. The 24-hour advance notice requirement in Florida's already-enacted law may become the practical standard.
The data problem
APA's strongest warning: cloud-based AI platforms present unresolved privacy risks. Once session data enters a cloud platform, the psychologist loses control of it. Even HIPAA-compliant vendors may retain, aggregate, or use data in ways that are technically legal but ethically problematic. APA's recommendation: run AI tools locally where possible, use organizational servers rather than consumer cloud platforms, and assume that any data sent to a third party is no longer confidential in the fullest sense.
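What local processing can look like in practice: the sketch below transcribes a session recording with an open-source speech-to-text model running entirely on the clinician's own machine, so neither the audio nor the transcript touches a third-party platform. It is a minimal illustration under stated assumptions, not an APA-endorsed tool; the whisper package, model size, and file names are assumptions for the example.

```python
# Minimal sketch: transcribing a session recording entirely on the
# clinician's own machine, so audio and text never leave local storage.
# Assumes the open-source "openai-whisper" package (pip install openai-whisper);
# the file paths and model size are illustrative, not APA recommendations.
import whisper

# Load a speech-to-text model; after the one-time weight download,
# transcription itself makes no network calls.
model = whisper.load_model("base")

# Transcribe a locally stored recording. The output stays on this machine
# until the clinician reviews it and decides what enters the record.
result = model.transcribe("session_2025-06-12.wav")

# Write the draft transcript to local storage only. Under the augmentation
# principle, this is a draft for clinician review, not a clinical note.
with open("session_2025-06-12_draft.txt", "w") as f:
    f.write(result["text"])
```

The design point is the data path, not the model: whatever tool a practice chooses, session data is read from and written to storage the psychologist controls, which is exactly the control that is lost once audio is uploaded to a consumer cloud service.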
For your practice
- If you use any AI tool in clinical work: update your informed consent forms now. Specifically name the tools, what data they access, and where it goes.
- If you use cloud-based AI scribes: review the vendor's data retention and use policies. "HIPAA-compliant" is not the same as "your data stays yours."
- If you are considering adopting AI tools: the APA guidance is your ethical baseline. Read the full document before signing a vendor contract.
"HIPAA-compliant" is not the same as "your data stays yours" — and APA wants psychologists to know the difference.
The guidance is advisory and not enforceable through licensure action, though ethics committees may reference it. It applies specifically to health service psychology; other mental health professions, including counselors and social workers, have separate ethics codes. Implementation details are left to individual practitioners.