Silenced Warning, Belated Regret

A mass shooting in Tumbler Ridge, B.C., on February 10, 2026, left eight people dead. The shooter was identified as Jesse Van Rootselaar. After the attack, OpenAI linked a ChatGPT account to the shooter and alerted the RCMP, providing the account details; a second ChatGPT account under his name was also discovered.

OpenAI had in fact banned Van Rootselaar’s first ChatGPT account in June 2025, eight months before the shooting, after both automated tools and human investigators flagged it for violent activity that violated the company’s usage policies. Despite systems designed to flag repeat offenders, he managed to open a second account.


OpenAI CEO Sam Altman issued an apology letter to the Tumbler Ridge community. The letter expressed deep regret that the company didn’t alert police before the shooting. Altman acknowledged the irreversible loss and stated that while words weren’t enough, an apology was still necessary. OpenAI said its thoughts were with those affected. The apology letter was shared on social media by British Columbia Premier David Eby.

The key question is why OpenAI didn’t contact police in June 2025. The company said it weighed that option but determined there was no imminent, credible risk of serious physical harm. The account’s activity didn’t meet the threshold required to make a referral to law enforcement. Human reviewers assess flagged cases for imminent threats, and ChatGPT is trained to refuse requests that promote real-world harm.

After the February 10 shooting, OpenAI proactively shared the shooter’s ChatGPT information with the RCMP and has continued to support the investigation. The RCMP confirmed it received the company’s outreach only after the deadly event.

Public safety analysts noted the significance of the pre-shooting ban. The shooter was reportedly open about his violent intentions on the platform, yet OpenAI’s review concluded there was no immediate threat that required notifying police.

This case isn’t isolated. A Florida shooting also prompted OpenAI to share information after the fact, and officials are now issuing subpoenas for OpenAI’s threat-reporting protocols. One official stated that ChatGPT had provided significant advice to an alleged shooter, underscoring how difficult it is to set threat-detection thresholds. The family of Maya Gebala announced a civil lawsuit against OpenAI, citing the company’s failure to notify law enforcement about the shooter’s prior violent activity.
