When a heart attack strikes or a building catches fire, every second counts, which makes the idea of handing 911 calls over to AI both brilliant and terrifying. The technology’s already here, transcribing calls in real time and hunting for keywords that scream “send a counselor, not a cop.” Denver’s STAR program does exactly that, routing mental health calls away from police. Smart? Sure. AI is already credited with cutting call volumes by 30% and boosting efficiency by up to 10%. That’s real money saved, and real humans freed up for actual emergencies.
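To make the keyword-routing idea concrete, here’s a minimal Python sketch of how a transcribed call might be matched against keyword lists and handed a suggested responder type. The keyword sets, labels, and the `triage()` function are invented for illustration; they are not how Denver’s STAR program or any real 911 vendor actually works.

```python
# Minimal sketch of keyword-based call triage, assuming we already have a
# transcribed call as plain text. Keyword lists and categories are illustrative
# assumptions, not the rules of any real dispatch system.

MEDICAL_KEYWORDS = {"chest pain", "not breathing", "choking", "unconscious"}
FIRE_KEYWORDS = {"fire", "smoke", "burning"}
MENTAL_HEALTH_KEYWORDS = {"suicidal", "panic attack", "hearing voices", "overdose"}

def triage(transcript: str) -> str:
    """Return a suggested responder type for a transcribed call."""
    text = transcript.lower()
    if any(kw in text for kw in MEDICAL_KEYWORDS):
        return "EMS"
    if any(kw in text for kw in FIRE_KEYWORDS):
        return "FIRE"
    if any(kw in text for kw in MENTAL_HEALTH_KEYWORDS):
        return "CRISIS_COUNSELOR"
    return "DISPATCHER_REVIEW"  # default: leave the judgment call to a human

if __name__ == "__main__":
    print(triage("My neighbor says he's feeling suicidal and won't open the door"))
    # -> CRISIS_COUNSELOR
```

Note the default branch: when nothing matches, the call falls back to a human dispatcher rather than being auto-closed.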
But here’s where it gets dicey. What happens when the AI gets it wrong? Imagine this: someone’s choking, can barely speak, and the algorithm decides it’s a prank call. Or worse, some hacker feeds the system bad data, teaching it to ignore certain neighborhoods or types of emergencies. Yeah, that’s a thing—it’s called data poisoning, and it’s not science fiction.
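To see why poisoning matters, here’s a deliberately tiny Python sketch: a keyword-counting “classifier” trained on a handful of labeled calls, then retrained with a few mislabeled records slipped in by an attacker. Every detail here (the data, the counting model, the labels) is made up for illustration; real dispatch models and real poisoning attacks are far more sophisticated.

```python
# Toy illustration of label-flipping data poisoning against a call classifier.
# The data and the counting "model" are invented for illustration only.
from collections import Counter

def train(examples):
    """Count how often each keyword appears under each label."""
    counts = {"emergency": Counter(), "non_emergency": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose keyword counts best match the call text."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

clean_data = [
    ("cant breathe choking help", "emergency"),
    ("house on fire smoke everywhere", "emergency"),
    ("pharmacy hours question", "non_emergency"),
    ("noise complaint loud party", "non_emergency"),
]

# An attacker slips in mislabeled records: genuine emergencies tagged as routine.
poisoned_data = clean_data + [
    ("cant breathe choking help", "non_emergency"),
    ("cant breathe choking help", "non_emergency"),
    ("cant breathe choking help", "non_emergency"),
]

call = "my son is choking and cant breathe"
print(classify(train(clean_data), call))     # emergency
print(classify(train(poisoned_data), call))  # non_emergency: poisoning flipped the outcome
```

Three bad records are enough to flip the toy model’s decision on a genuine choking call, which is the whole point: you don’t need to hack the servers if you can quietly skew the training data.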
The threats keep coming. Swatting attacks could get supercharged with AI-generated calls so convincing they’d fool anyone. Adversarial actors—fancy term for bad guys with keyboards—can manipulate the system, creating chaos by misdirecting emergency resources. Send all the ambulances to the wrong side of town while the real crisis unfolds elsewhere. Nice.
That’s why human dispatchers aren’t going anywhere. They’re the safety net, the BS detectors when something feels off. These systems work best with what tech folks call “human-in-the-loop”: keeping real people in the process to catch what the machines miss. Because let’s face it, no algorithm understands the panic in a caller’s voice quite like someone who’s been answering these calls for years. The lesson echoes research on 9/11, which found that perceived social support reduced psychological distress; technology alone can’t meet the human need for connection during a crisis.
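Here’s what that can look like in practice, as a minimal sketch assuming the AI triage step returns a label plus a confidence score. The threshold, labels, and `route()` function are illustrative assumptions, not any agency’s or vendor’s actual design.

```python
# Sketch of a "human-in-the-loop" gate: only low-stakes, high-confidence calls
# are auto-handled; everything else lands on a human dispatcher's screen.
# The threshold and labels are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human makes the call

@dataclass
class TriageResult:
    label: str         # e.g. "NON_EMERGENCY", "EMS", "CRISIS_COUNSELOR"
    confidence: float  # model's self-reported confidence, 0.0-1.0

def route(result: TriageResult, transcript: str) -> str:
    """Auto-route only high-confidence non-emergency calls; escalate the rest."""
    if result.confidence >= CONFIDENCE_THRESHOLD and result.label == "NON_EMERGENCY":
        return f"auto-handled: {result.label}"
    # Ambiguous, low-confidence, or potentially life-threatening: human decides.
    return (f"escalated to dispatcher (AI suggested {result.label}, "
            f"confidence {result.confidence:.2f}): {transcript}")

print(route(TriageResult("NON_EMERGENCY", 0.97), "what time does the pharmacy close"))
print(route(TriageResult("NON_EMERGENCY", 0.62), "i think someone is outside my house"))
```

The design choice is deliberately conservative: the machine is only trusted with the boring calls it is very sure about, and uncertainty defaults to a person.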
The bottom line? AI in 911 centers is happening whether we’re ready or not. Geofencing pinpoints emergency hotspots, automated systems handle the “what time does the pharmacy close?” calls, and response times are getting faster. These tools promise to reduce the burden on police by identifying calls better suited for alternative responders like social workers or mental health counselors.
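Geofencing, at its simplest, is just asking “is this caller inside a zone we care about?” Below is a hedged Python sketch using a great-circle distance check against a hard-coded hotspot; the coordinates, radius, and `in_hotspot()` helper are invented for illustration, and production systems use proper polygons and live incident data.

```python
# Minimal geofencing sketch: flag whether a caller's coordinates fall inside a
# predefined hotspot radius. Hotspot data here is invented for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

HOTSPOTS = [
    {"name": "downtown_cluster", "lat": 39.7392, "lon": -104.9903, "radius_km": 1.5},
]

def in_hotspot(lat, lon):
    """Return the first hotspot containing the point, or None."""
    for spot in HOTSPOTS:
        if haversine_km(lat, lon, spot["lat"], spot["lon"]) <= spot["radius_km"]:
            return spot["name"]
    return None

print(in_hotspot(39.7400, -104.9900))  # inside the illustrative downtown zone
print(in_hotspot(39.9000, -105.2000))  # well outside -> None
```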
But every efficiency gain comes with a new vulnerability. Regular security audits, constant updates, rigorous training: that’s the price of progress. Emergency services also have to stay vigilant, because research shows AI systems can hallucinate information in 3-27% of outputs, a failure rate that could turn life-threatening in the middle of a crisis response.
References
- https://www.policingproject.org/rethinking-response-articles/2025/5/8/part-two-body-worn-camera-analytics-e3zg9
- https://www.ntia.gov/category/next-generation-911/improving-911-operations-with-artificial-intelligence
- https://journals.sagepub.com/doi/10.1163/157361211X575736
- https://domesticpreparedness.com/articles/ai-and-911-call-systems-a-new-ally-or-a-hidden-risk
- https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/MCU-Journal/JAMS-vol-14-no-1/The-Singleton-Paradox/