Utah AI Mental Health Guidelines

How exactly should AI chatbots handle your deepest, darkest thoughts? Utah has an answer. The state’s newly established Office of Artificial Intelligence Policy (OAIP) just dropped its first guidance on AI mental health chatbots. And honestly? It’s about time someone figured this out.

Utah didn’t mess around when it created the OAIP in 2024. They immediately got to work with the Division of Professional Licensing to tackle the wild west of AI therapy. House Bill 452, passed in March 2025, now provides the legal backbone for regulating these digital therapists. Spoiler alert: your robot therapist needs to tell you it’s a robot. Shocking concept.

The guidelines are pretty straightforward. AI chatbots must announce themselves as non-human at the start of a conversation and again after seven days of inactivity. No ghosting allowed—even for algorithms. Human therapists using AI tools must get explicit consent from patients. Because nothing says “therapeutic relationship” like signing forms about the robot listening in.

Privacy is a big deal in the guidance. No selling your mental health data without permission. HIPAA still matters. And forget seeing ads while pouring your heart out to a chatbot—that’s banned. Violators face fines up to $2,500 per incident. Expensive mistake.

The rules emphasize tailoring AI use to individual needs and digital literacy levels. Not everyone’s comfortable telling their problems to a machine. Providers must regularly assess whether AI is actually helping each patient. Novel concept. These assessments align with best practices that emphasize critical evaluation skills when implementing AI in therapeutic settings.

OAIP didn’t create these guidelines in a vacuum. They brought together tech companies, mental health associations, academics, and other stakeholders in their “learning lab.” Even the Mormon Mental Health Association weighed in.

Utah’s approach balances innovation with protection. They’re using regulatory mitigation agreements instead of just saying “no” to everything new. This approach aims to ensure AI products don’t pose greater risk than humans when providing mental health support. Smart move. Other states are taking notes. The guidelines also recommend therapists develop contingency plans for potential technology failures to maintain care continuity if AI systems malfunction. Turns out Utah might actually know what it’s doing with this whole AI therapy thing.
