While AI tools are helping developers write code faster, new research suggests they are also introducing more security risks than human developers do. One study found that AI-generated pull requests average 10.83 issues each, while human-written ones average just 6.45, a significant gap.
Overall, AI code has 1.7 times more issues than human code: critical issues are 1.4 times higher and major issues 1.7 times higher. Logic and correctness errors are 1.75 times more common in AI-generated code, and security errors 1.57 times more common, including problems like improper password handling and cross-site scripting (XSS) vulnerabilities.
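To make the "improper password handling" category concrete, here is a minimal sketch of the safer pattern using only Python's standard library. The function names are illustrative, not taken from the studies; the point is the contrast with the flagged anti-pattern of storing plaintext or unsalted credentials.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Derive a key with PBKDF2-HMAC-SHA256 and a random per-user salt,
    # instead of storing the plaintext or a bare unsalted hash.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the derived key and compare in constant time,
    # so the comparison itself does not leak timing information.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

A snippet that instead writes the raw password to a database column or a config file is exactly the kind of issue these scans flag.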
AI code also tends to be simpler and more repetitive. It contains unused constructs and hardcoded debugging output more often than human code. Code quality and maintainability issues are 1.64 times higher, and performance errors 1.42 times higher. Human code shows greater structural complexity overall.
At the same time, AI is getting very good at finding and exploiting those vulnerabilities. Claude Sonnet 4.5 can exploit a publicly known security flaw instantly, without looking anything up or needing multiple attempts. The same model recreated the Equifax data breach scenario using only basic tools on Kali Linux; that breach originally happened because of an unpatched software vulnerability.
Current AI models can already succeed at complex, multistage attacks on networks with dozens of computers. Attacks that once took days now take minutes or seconds. Researchers say AI agents could become the primary force in cyberattacks within two years.
This shift creates a major problem for defenders. Human response times can’t match the speed of AI-driven attacks. Experts say defenses will need to operate without humans in the loop. AI systems responding at the speed of computers may be the only way to keep up.
Security fundamentals are becoming more urgent because of these developments. Patching known vulnerabilities quickly is now critical: AI tools are creating more entry points for attackers while also making attacks faster and easier to launch. Microsoft patched 1,139 CVEs in 2025, a volume that hints at how AI-driven code creation may already be shaping vulnerability statistics at scale. The response gap is widening too: Claude Sonnet 4.5 can operate autonomously for over 30 hours, up from roughly 7 hours for its predecessor, a sign of how quickly AI offensive endurance is advancing.
It’s a challenge hitting both sides of the cybersecurity world at once.
References
- https://www.techradar.com/pro/security/ai-generated-code-contains-more-bugs-and-errors-than-human-output
- https://www.schneier.com/blog/archives/2026/01/ais-are-getting-better-at-finding-and-exploiting-security-vulnerabilities.html
- https://www.youtube.com/watch?v=RWE375fSVbY
- https://arxiv.org/abs/2508.21634