Inadequate Response to Warning

After a mass shooting in Tumbler Ridge, B.C., on February 10, 2026, OpenAI admitted it hadn’t warned police about a banned ChatGPT account linked to the shooter. The shooter was identified as Jesse VanRootselaar. One of the victims, Maya Gebala, suffered critical injuries during the attack.

OpenAI had banned VanRootselaar's ChatGPT account back in June 2025, eight months before the shooting. Automated tools and human investigators had flagged the account for violating the company's policies against violent activity. Despite the ban, OpenAI did not alert law enforcement at the time; the company said it did not believe there was an imminent, credible risk of serious physical harm.


After the shooting, OpenAI reached out to the Royal Canadian Mounted Police. The company shared information about VanRootselaar’s ChatGPT activity and said it would support the ongoing investigation.

After the tragedy, investigators also found a second ChatGPT account linked to the shooter. OpenAI's systems are supposed to flag repeat policy offenders, but that second account wasn't caught beforehand.

A court investigator said ChatGPT had provided the alleged shooter with significant advice, a detail that intensified scrutiny of how AI tools can be misused and raised questions about what companies like OpenAI should do when they detect dangerous behavior.

OpenAI CEO Sam Altman sent an apology letter to the Tumbler Ridge community. British Columbia Premier David Eby shared the letter on social media. Altman said he was deeply sorry OpenAI hadn’t alerted law enforcement when it banned the account. He called the community’s pain unimaginable and acknowledged the irreversible harm caused. In his letter, Altman also stated a commitment to preventative efforts to avoid future tragedies.

In March 2026, Maya Gebala’s family announced a lawsuit against OpenAI. The lawsuit cited OpenAI’s failure to notify law enforcement regarding the shooter’s activity. The company said it had agreed to new safeguards following the incident. OpenAI also pointed to a separate Florida shooting where it had shared a suspect’s account information with police after the fact.

The Tumbler Ridge shooting prompted public and government scrutiny of how AI companies handle abuse detection. Critics questioned whether OpenAI’s internal threshold for reporting threats to police was too high.
