
Eby credits OpenAI for coming clean after mass killings in Tumbler Ridge, B.C.

Feb 27, 2026 | 12:17 PM

VANCOUVER — British Columbia Premier David Eby is crediting artificial intelligence firm OpenAI for not trying to hide problematic interactions between the Tumbler Ridge shooter and its chatbot.

But while he says the firm “did come forward” about Jesse Van Rootselaar’s ChatGPT activity and “didn’t try to cover it up after the fact,” the firm still made a “colossal, horrific mistake” by not telling police about it before the killings.

Eby has been scathing about OpenAI’s possible role in the killing of eight people by Van Rootselaar on Feb. 10 before she shot herself dead.

He says there’s no date yet for his planned meeting with OpenAI CEO Sam Altman, but he wants the executive to realize the scale of the error in not telling police about the actions that saw Van Rootselaar banned from ChatGPT last year.

Eby, who was speaking on a tour of construction at the new St. Paul’s Hospital site in Vancouver, reiterated his call for a national reporting standard for the AI industry, saying Canada needs to find ways to prevent harm while not losing the technology’s benefits.

Van Rootselaar, 18, used a ChatGPT account that was shut down last June after being flagged internally, but the matter was not reported to authorities.

OpenAI revealed Thursday that Van Rootselaar got around the ban by using a second ChatGPT account.

Eby had promised Thursday that there would be “a public process” after the police investigation, to answer questions surrounding AI’s role in the shooting and to “make sure this never happens again.”

“I want to recognize that OpenAI did come forward,” Eby says. “They did bring the information forward to police. They didn’t try to cover it up after the fact, but this was a colossal, horrific mistake, I guess, is the most generous interpretation I can offer, to fail to bring that information forward to authorities.

“It’s important that Mr. Altman realizes that, and I will be looking for his support for a national standard across Canada, a national threshold where all AI companies must report — and clear consequences for if they fail to report — incidents where people are planning violence, planning to hurt other people, and using these tools to develop those plans.”

This report by The Canadian Press was first published Feb. 27, 2026.

Chuck Chiang, The Canadian Press