How exactly should AI chatbots handle your deepest, darkest thoughts? Utah has an answer. The state’s newly established Office of Artificial Intelligence Policy (OAIP) just dropped its first guidance on AI mental health chatbots. And honestly? It’s about time someone figured this out.
Utah didn’t mess around when it created the OAIP in 2024. They immediately got to work with the Division of Professional Licensing to tackle the wild west of AI therapy. House Bill 452, passed in March 2025, now provides the legal backbone for regulating these digital therapists. Spoiler alert: your robot therapist needs to tell you it’s a robot. Shocking concept.
The guidelines are pretty straightforward. AI chatbots must announce themselves as non-human at the start of conversations and again after a week of silence. No ghosting allowed—even for algorithms. Human therapists using AI tools must get explicit consent from patients. Because nothing says “therapeutic relationship” like signing forms about the robot listening in.
Privacy is a big deal in the guidance. No selling your mental health data without permission. HIPAA still matters. And forget seeing ads while pouring your heart out to a chatbot—that’s banned. Violators face fines of up to $2,500 per violation. Expensive mistake.
The rules emphasize tailoring AI use to individual needs and digital literacy levels. Not everyone’s comfortable telling their problems to a machine. Providers must regularly assess whether AI is actually helping each patient. Novel concept. These check-ins reflect the guidance’s broader best practices, which stress critically evaluating AI tools throughout their use in therapeutic settings.
OAIP didn’t create these guidelines in a vacuum. They brought together tech companies, mental health associations, academics, and other stakeholders in their “learning lab.” Even the Mormon Mental Health Association weighed in.
Utah’s approach balances innovation with protection. Instead of a blanket “no” to everything new, the state uses regulatory mitigation agreements, with the goal of ensuring AI products pose no greater risk than a human provider offering mental health support. Smart move. The guidelines also tell therapists to keep contingency plans ready so care continues if an AI system fails. Other states are taking notes. Turns out Utah might actually know what it’s doing with this whole AI therapy thing.
References
- https://www.kuer.org/health/2025-07-08/in-its-first-year-of-work-utahs-ai-office-laid-out-mental-health-best-practices
- https://www.clearhq.org/news/utah-issues-guidelines-on-ai-in-mental-health-care-5-14-25
- https://blog.commerce.utah.gov/2025/04/29/news-release-utah-office-of-artificial-intelligence-policy-announces-best-practices-from-groundbreaking-study-on-ai-use-in-mental-health-therapy/
- https://nquiringminds.com/ai-legal-news/utah-enacts-comprehensive-ai-regulations-for-mental-health-services/
- https://ai.utah.gov/2025-2/