Seven out of ten teenagers have already tried AI companions, marking a dramatic shift in how young people seek friendship and support online. More than half of those teens are regular users, turning to companion apps like Character.AI and Replika, or to general-purpose AI tools like ChatGPT and Claude, for companionship.
The statistics reveal something striking about modern teen life. One-third of teens use AI companions for social interaction, role-playing, and even romantic conversations. About 31% say these AI conversations feel as satisfying as talking to real friends, and another third discuss serious issues with AI instead of with real people.
Parents and policymakers who’ve spent years warning about social media’s dangers seem surprisingly quiet about AI companions. While social media faces constant criticism and proposed bans, AI technology slips into teens’ lives with minimal oversight. There’s no age verification. There’s little regulation. Half of teens using AI companions don’t trust the advice they receive, yet they keep coming back.
The risks aren’t minor. About 34% of AI-using teens felt uncomfortable about something their bot said or did. Popular AI companion apps expose young users to sexual, dangerous, or otherwise harmful content. Experts worry these tools might stunt social skills by shielding teens from real-world social challenges. In some cases, emotional attachment to an AI companion has been linked to severe consequences, including suicide. These systems can also perpetuate harmful bias and discrimination against certain groups, reflecting the prejudices in their training data.
Yet society treats these two technologies differently. Social media gets blamed for mental health problems and misinformation. Politicians propose age restrictions and bans. Meanwhile, AI companions operate freely, despite presenting similar or greater risks. They’re interactive agents that teens rely on for private, personalized conversation. They’re always available and never judge. Twelve percent of teens share secrets with AI companions that they wouldn’t tell any real person in their lives.
The double standard becomes clear in parents’ responses. Most parents of teens over 13 have tried AI themselves, but they’re less confident using it than their children are. Advocacy groups recommend that minors not use AI companions at all, citing insufficient protections.
Educational initiatives and policies haven’t caught up with AI’s rapid adoption. While social media regulation has years of development behind it, AI companion oversight barely exists. If society believes teens need protection from online risks, that concern shouldn’t stop at social media platforms. The same scrutiny applied to one technology should extend to the other.
References
- https://www.benton.org/blog/how-are-teens-using-ai-companions
- https://www.ap.org/news-highlights/spotlights/2025/teens-say-they-are-turning-to-ai-for-friendship/
- https://www.axios.com/2025/07/16/ai-bot-companions-teens-common-sense-media
- https://www.k12dive.com/news/teens-embracing-ai-largely-not-for-cheating-survey/744797/
- https://menlovc.com/perspective/2025-the-state-of-consumer-ai/