Text Scams Drain Accounts

Text scam attacks have surged dramatically, with Americans receiving 78 billion spam texts in early 2023. AI technology now helps criminals craft personalized messages that bypass security filters and appear legitimate. These “wrong number” texts often lead to elaborate scams that can empty bank accounts. Scammers increasingly use deepfakes and natural language processing to make their schemes more convincing. The techniques behind these evolving threats reveal why they’re becoming harder to detect.

While text message scams aren’t new, artificial intelligence has dramatically transformed them into more sophisticated threats. In the first half of 2023 alone, Americans received a staggering 78 billion automated spam texts, with January and May seeing the highest volumes at 14 billion messages each month. That is an increase of 4 billion messages over the same period in 2022.

The financial impact is severe. Text scammers stole an estimated $13 billion between January and June 2023, and over 77% of people targeted by AI-driven phone scams lost money. Deepfake-related identity fraud cases in the U.S. jumped from just 0.2% to 2.6% of cases between 2022 and early 2025.

Delivery service scams lead the pack, with over 1.1 billion texts sent in early 2023. Bank-related scams ranked second with 365 million texts, followed by travel-related scams at 179 million. Even COVID-19 themed scams generated over 151 million texts during this period. These scams often perpetuate existing biases from the AI training data, disproportionately targeting vulnerable populations.

AI technology makes these scams harder to detect. Scammers now create highly personalized messages that easily bypass traditional security filters. They use deepfake technology to produce convincing fake videos and audio that make verification of sender identity increasingly difficult. These smishing messages appear to come from trusted sources, making them particularly effective at evading detection systems.
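To see why personalized messages slip past traditional defenses, consider a minimal sketch of a naive keyword-based spam filter. This is a hypothetical illustration (the keyword list and `naive_filter` function are invented for this example; real carrier filters are far more sophisticated), but it shows the basic weakness: a generic scam template trips the filter, while a conversational AI-written "wrong number" opener contains nothing for it to flag.

```python
# Hypothetical keyword-based smishing filter, for illustration only.
SPAM_KEYWORDS = {"winner", "free", "prize", "click here", "urgent"}

def naive_filter(message: str) -> bool:
    """Return True if the message matches a known spam keyword."""
    text = message.lower()
    return any(kw in text for kw in SPAM_KEYWORDS)

# A boilerplate scam message is caught...
print(naive_filter("URGENT: click here to claim your FREE prize!"))   # True

# ...but a personalized "wrong number" opener sails straight through.
print(naive_filter("Hi, is this still the number for the kayak rental?"))  # False
```

Because AI-generated openers read like ordinary conversation, filters built on keyword lists or message templates have no signal to act on, which is why detection increasingly relies on sender reputation and behavioral analysis instead.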

Consumer behavior has inadvertently made the problem worse. By 2025, 84% of consumers had opted in to receive legitimate texts from businesses, a 35% increase over prior years. This higher engagement with text messaging gives scammers more opportunities to impersonate trusted companies, and modern scammers employ natural language processing to fine-tune the tone of their messages and boost their credibility.

The most common AI-powered scam types include romance scams with AI-created personas, deepfake scams using fake videos and audio, AI social media bots, and sophisticated phishing attacks. These scams often exploit urgency and emotional triggers to bypass critical thinking.

As AI tools become more accessible, the threat continues to grow. The technology enables scammers to create convincing fake interactions at scale, turning simple text messages into potentially devastating financial traps.
