How real is real anymore? AI-generated deepfakes have exploded from 500,000 files in 2023 to 8 million in 2025. That’s a staggering jump. And it’s happening faster than most people realize.
Deepfakes use artificial intelligence to replace or alter someone’s face, voice, or body in videos, images, or audio. Early versions were easy to spot. Today’s versions aren’t. Newer architectures like diffusion and transformer models make fakes look frighteningly real.
The numbers tell a troubling story. Fraud attempts tied to deepfakes spiked 3,000% in 2023. North America alone saw 1,740% growth. In just the first four months of 2025, over 179 deepfakes of public figures appeared online. That’s already more than all of 2024.
Here’s the problem. Most people think they can spot a fake. About 60% believe they could identify a deepfake video or image. But the actual accuracy rate for high-quality deepfake videos is just 24.5%. For images, it’s 62%. People are far less skilled at this than they think. A University of Amsterdam study found that overconfidence persists even when people are given incentives to get it right.
Detection tools are trying to keep up. Deep learning can catch the resolution inconsistencies that face-swapping leaves behind. Neural networks can spot frame irregularities in videos. But detection technology still lags behind the tools creating the fakes.
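To make the "frame irregularities" idea concrete, here is a minimal sketch of one crude signal such detectors build on: measuring how much each video frame differs from the previous one and flagging statistical outliers. This is an illustrative toy using NumPy, not any production detector's actual method; the function names, the synthetic clip, and the median-absolute-deviation threshold are all assumptions for the example.

```python
import numpy as np

def frame_irregularity_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute difference between consecutive frames.

    frames: grayscale video as an array of shape (T, H, W).
    Returns T-1 scores; sudden spikes can hint at a spliced
    or tampered frame, one crude cue real detectors refine.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.mean(axis=(1, 2))

def flag_outliers(scores: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Flag scores more than k median-absolute-deviations above the median."""
    med = np.median(scores)
    mad = max(float(np.median(np.abs(scores - med))), 1e-9)
    return scores > med + k * mad

# Toy demo: a smoothly drifting synthetic clip with one abruptly
# altered frame standing in for a tampered region.
rng = np.random.default_rng(0)
clip = np.cumsum(rng.normal(0, 0.1, size=(20, 8, 8)), axis=0)
clip[10] += 5.0  # simulate the tampered frame
scores = frame_irregularity_scores(clip)
flags = flag_outliers(scores)
```

A single altered frame produces spikes in the two differences that touch it, which is why even this naive statistic catches crude splices; real detectors learn far subtler per-pixel and temporal cues that survive compression and re-encoding.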
The damage goes beyond fooling people. Consumer trust in online content dropped 71% in just six months. Ninety-one percent of people believe AI will make the deepfake problem worse. Deepfakes are now the biggest online concern for 35% of users.
They’re also being weaponized. Deepfakes have placed real people in fake political speeches and pornographic videos without their knowledge. They’re spreading misinformation on social media. Research across eight countries found that prior exposure to deepfakes makes people more likely to believe future false content. Colorado’s Senate Bill 288 seeks to address this threat by criminalizing deepfake pornography, including its creation, possession, and distribution under state law.
Mental health’s taking a hit too. Deepfakes distort reality, fuel anxiety, and erode trust in what’s real. Adolescents and vulnerable groups are especially at risk. Social isolation can further heighten a person’s susceptibility to accepting deepfakes as genuine. The gap between what exists and what people can detect keeps growing. Experts stress that public awareness education is a critical component of any broader strategy to combat the spread and impact of deepfake misinformation.
References
- https://research.gatech.edu/when-ai-blurs-reality-rise-hyperreal-digital-culture
- https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/
- https://www.mimecast.com/blog/deepfake-news-recent-data-reveals-gaps-between-perception-and-reality/
- https://deepstrike.io/blog/deepfake-statistics-2025
- https://www.nationwide.com/lc/resources/cyber-resource-center/articles/falling-for-deepfakes
- https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing
- https://www.gov.uk/government/publications/deepfake-detection-technology/deepfake-detection-technology
- https://cyber.harvard.edu/story/2025-10/chatbots-and-deepfakes-are-eroding-our-shared-reality