Feb 25, 2026 6:48 PM - Connect Newsroom - Ramandeep Kaur with files from The Canadian Press

Federal ministers say Ottawa is prepared to consider new safeguards around artificial intelligence tools following questions about how OpenAI handled warning signs linked to a deadly shooting in Tumbler Ridge, British Columbia.
The issue has drawn national attention after reports revealed that the accused shooter, Jesse Van Rootselaar, had been banned from OpenAI’s ChatGPT platform months before the February 10 killings. According to reporting first published by The Wall Street Journal, the account was shut down over troubling messages that referenced violent scenarios, including gun use, but police were not notified before the attack.
After meeting with company representatives on Tuesday, Artificial Intelligence Minister Evan Solomon said federal officials conveyed their disappointment that law enforcement was not alerted earlier. He told reporters that all regulatory options remain under consideration as the federal government examines how online platforms manage potentially dangerous content generated through AI systems.
Justice Minister Sean Fraser said the government’s priority is protecting public safety while balancing privacy and innovation. He described the meeting as an effort to identify what responsibilities technology companies and governments should share when credible threats appear online.
The case has also renewed debate in British Columbia and across Canada about whether existing laws adequately address rapidly evolving AI technologies. While Ottawa has previously introduced legislation aimed at regulating artificial intelligence systems, no specific law currently requires companies to report threatening user behaviour to police.
