AI-Powered Text Scams Drain Accounts

Text scam attacks have surged dramatically, with Americans receiving 78 billion spam texts in early 2023. AI technology now helps criminals craft personalized messages that bypass security filters and appear legitimate. These “wrong number” texts often lead to elaborate scams that can empty bank accounts. Scammers increasingly use deepfakes and natural language processing to make their schemes more convincing. The techniques behind these evolving threats reveal why they’re becoming harder to detect.

While text message scams aren’t new, artificial intelligence has dramatically transformed them into more sophisticated threats. In the first half of 2023 alone, Americans received a staggering 78 billion automated spam texts, with January and May seeing the highest volumes at 14 billion messages each month. This represents an increase of 4 billion messages compared to the same period in 2022.

The financial impact is severe. Text scammers stole an estimated $13 billion between January and June 2023, and over 77% of people targeted by AI phone scams lost money to the criminals behind them. Deepfake-related identity fraud cases in the U.S. jumped from just 0.2% to 2.6% of cases between 2022 and early 2025.

Delivery service scams lead the pack, with over 1.1 billion texts sent in early 2023. Bank-related scams ranked second with 365 million texts, followed by travel-related scams at 179 million. Even COVID-19 themed scams generated over 151 million texts during this period. These scams often perpetuate existing biases from the AI training data, disproportionately targeting vulnerable populations.

AI technology makes these scams harder to detect. Scammers now create highly personalized messages that easily bypass traditional security filters. They use deepfake technology to produce convincing fake videos and audio that make verification of sender identity increasingly difficult. These smishing messages appear to come from trusted sources, making them particularly effective at evading detection systems.
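To see why personalization defeats traditional filtering, consider a minimal sketch of a keyword-based spam filter. The keyword list and both messages below are hypothetical examples invented for illustration, not any real carrier's filter: a generic blast trips the keywords, while an AI-personalized "wrong number" opener contains nothing to match.

```python
# Illustrative sketch: why keyword filters miss personalized scam texts.
# The keyword list and sample messages are hypothetical, not a real filter.
SPAM_KEYWORDS = {"winner", "free", "click here", "prize", "urgent"}

def keyword_filter_flags(message: str) -> bool:
    """Return True if the message contains any known spam keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

generic_scam = "URGENT: You are a WINNER! Click here to claim your free prize."
personalized_scam = "Hi Sam, it's Alex from book club. Is this still your number?"

print(keyword_filter_flags(generic_scam))       # True: several keywords hit
print(keyword_filter_flags(personalized_scam))  # False: nothing to match
```

Because AI-generated openers read like ordinary conversation, static blocklists like this pass them through, which is why detection increasingly depends on sender reputation and behavioral signals rather than message content alone.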

Consumer behavior has inadvertently made the problem worse. By 2025, 84% of consumers had opted in to receive legitimate texts from businesses, a 35% increase from previous years. This higher engagement with text messaging creates more opportunities for scammers to impersonate trusted companies. Modern scammers employ natural language processing to fine-tune the tone of their messages and increase their credibility.

The most common AI-powered scam types include romance scams with AI-created personas, deepfake scams using fake videos and audio, AI social media bots, and sophisticated phishing attacks. These scams often exploit urgency and emotional triggers to bypass critical thinking.

As AI tools become more accessible, the threat continues to grow. The technology enables scammers to create convincing fake interactions at scale, turning simple text messages into potentially devastating financial traps.
