Utah AI Mental Health Guidelines

How exactly should AI chatbots handle your deepest, darkest thoughts? Utah has an answer. The state’s newly established Office of Artificial Intelligence Policy (OAIP) just dropped its first guidance on AI mental health chatbots. And honestly? It’s about time someone figured this out.

Utah didn’t mess around when it created the OAIP in 2024. The office immediately got to work with the Division of Professional Licensing to tame the Wild West of AI therapy. House Bill 452, passed in March 2025, now provides the legal backbone for regulating these digital therapists. Spoiler alert: your robot therapist needs to tell you it’s a robot. Shocking concept.

The guidelines are pretty straightforward. AI chatbots must announce themselves as non-human at the start of conversations and again after a week of silence. No ghosting allowed—even for algorithms. Human therapists using AI tools must get explicit consent from patients. Because nothing says “therapeutic relationship” like signing forms about the robot listening in.

Privacy is a big deal in the guidance. No selling your mental health data without permission. HIPAA still matters. And forget seeing ads while pouring your heart out to a chatbot—that’s banned. Violators face fines up to $2,500 per incident. Expensive mistake.

The rules emphasize tailoring AI use to each patient’s individual needs and digital literacy level. Not everyone’s comfortable telling their problems to a machine. Providers must regularly assess whether AI is actually helping each patient. Novel concept. These check-ins mirror broader best practices: critically evaluate an AI tool before and during its use in therapeutic settings, rather than assuming it works.

OAIP didn’t create these guidelines in a vacuum. They brought together tech companies, mental health associations, academics, and other stakeholders in their “learning lab.” Even the Mormon Mental Health Association weighed in.

Utah’s approach balances innovation with protection. Instead of just saying “no” to everything new, the state uses regulatory mitigation agreements, which aim to ensure AI products pose no greater risk than human providers when delivering mental health support. The guidelines also recommend that therapists develop contingency plans for technology failures, so care continues if an AI system malfunctions. Smart move. Other states are taking notes. Turns out Utah might actually know what it’s doing with this whole AI therapy thing.
