A mass shooting in Tumbler Ridge, B.C., on February 10, 2026, left eight people dead. The shooter was identified as Jesse Van Rootselaar. After the attack, OpenAI linked a ChatGPT account to the shooter and alerted the RCMP, sharing the account details. A second ChatGPT account under his name was also discovered after the shooting.
OpenAI had banned Van Rootselaar's first ChatGPT account in June 2025, eight months before the shooting, after both automated tools and human investigators flagged it for violent activity that violated the company's usage policies. Despite systems designed to flag repeat offenders, he managed to open a second account.
OpenAI CEO Sam Altman issued an apology letter to the Tumbler Ridge community, expressing deep regret that the company did not alert police before the shooting. Altman acknowledged the irreversible loss and wrote that while words were not enough, an apology was still necessary, adding that the company's thoughts were with those affected. The letter was shared on social media by British Columbia Premier David Eby.
The central question is why OpenAI did not contact police in June 2025. The company said it weighed that option but determined there was no imminent, credible risk of serious physical harm, so the account's activity did not meet the threshold for a referral to law enforcement. Human reviewers assess flagged cases for imminent threats, and ChatGPT is trained to refuse requests that promote real-world harm.
After the February 10 shooting, OpenAI proactively shared the shooter's ChatGPT information with the RCMP and continued to support the investigation. The RCMP confirmed that the company reached out after the attack.
Public safety analysts noted the significance of the pre-shooting ban: the shooter was reportedly open about his violent intentions on the platform, yet OpenAI's review concluded there was no immediate threat requiring police notification.
This case is not isolated. A shooting in Florida also prompted OpenAI to share information after the fact, and officials are now issuing subpoenas for OpenAI's protocols on reporting threats. One official said ChatGPT had provided significant advice to an alleged shooter, underscoring the difficulty of setting threat-detection thresholds. The family of Maya Gebala has announced a civil lawsuit against OpenAI, citing the company's failure to notify law enforcement of the shooter's prior violent activity.