Guidance Over Bans Needed

Seven out of ten teenagers have already tried AI companions, marking a dramatic shift in how young people seek friendship and support online. More than half of these teens are regular users. They’re turning to apps like Character.AI and Replika, or general AI tools like ChatGPT and Claude, for companionship.

The statistics reveal something striking about modern teen life. One-third of teens use AI companions for social interaction, role-playing, and even romantic conversations. About 31% say these AI conversations feel as satisfying as talking to real friends. Another third discuss serious issues with AI instead of with real people.

One-third of teens now turn to AI for friendship, romance, and serious life discussions.

Parents and policymakers who’ve spent years warning about social media’s dangers seem surprisingly quiet about AI companions. While social media faces constant criticism and proposed bans, AI technology slips into teens’ lives with minimal oversight. There’s no age verification. There’s little regulation. Half of teens using AI companions don’t trust the advice they receive, yet they keep coming back.

The risks aren’t minor. About 34% of AI-using teens have felt uncomfortable about something their bot said or did. Popular AI companion apps expose young users to sexual, dangerous, or harmful content. Experts worry these tools might stunt social skills by shielding teens from real-world social challenges. Some cases have even ended in tragedy, including suicides linked to teens’ emotional attachments to AI companions. These systems can also perpetuate harmful bias and discrimination against certain groups, reflecting the prejudices in their training data.

Yet society treats these two technologies differently. Social media gets blamed for mental health problems and misinformation. Politicians propose age restrictions and bans. Meanwhile, AI companions operate freely, despite presenting similar or greater risks. They’re interactive agents that teens rely on for private, personalized interaction. They’re always available and never judge. Twelve percent of teens share secrets with AI companions that they wouldn’t tell any real person in their lives.

The double standard becomes clear when looking at parental responses. Most parents with teens over 13 have tried AI themselves, but they’re less confident using it than their children are. Advocacy groups recommend that minors not use AI companions at all, citing insufficient protections.

Educational initiatives and policies haven’t caught up with AI’s rapid adoption. While social media regulation has years of development behind it, AI companion oversight barely exists. If society believes teens need protection from online risks, that concern shouldn’t stop at social media platforms. The same scrutiny applied to one technology should extend to the other.
