The White House TikTok account posted an AI-altered video of a U.S. Olympic hockey player and Ottawa Senators captain, fabricating anti-Canadian slurs. The video received over 11 million views.
A 19-year-old used an AI face-swap tool to superimpose Ghislaine Maxwell's face on a Quebec City pedestrian; the video went viral with seven million views, fueling widespread conspiracy theories.
OpenAI's safety systems flagged and banned a ChatGPT account for violent content in June 2025. The account holder carried out a mass shooting in Tumbler Ridge, B.C., in February 2026. OpenAI had not reported the flagged account to law enforcement, and the incident prompted federal calls for mandatory AI safety reporting requirements.
Toronto Police said AI-enabled scams 'took off like a rocket,' and the Competition Bureau warned of AI-powered government impersonators; Toronto fraud losses hit $433 million in 2025, while national losses reached $704 million.
Edmonton Police launched the world's first facial recognition body-camera pilot in December 2025, scanning faces against a watch list of 6,341 people in silent mode, with no real-time alerts to officers in the field. EPS stated that regulation requires submitting a privacy assessment but not obtaining prior approval; Alberta's Privacy Commissioner rejected that interpretation.
A joint advisory by the RCMP, Public Safety Canada, Global Affairs Canada, FINTRAC, and the CCCS warned that North Korean operatives pose as freelancers and use AI-enabled deepfake technology to obtain remote IT positions, with the income funding DPRK weapons programs.
York Regional Police and Peel Regional Police jointly deployed IDEMIA facial recognition in May 2024, followed by Halton Regional Police in December 2025. The three services share a database of 1.6 million mugshots and state that matches are treated only as investigative leads, reviewed by trained analysts. Civil liberties organizations have called for a moratorium on police use of facial recognition in Canada.
Russia's Doppelganger network published more than a dozen articles targeting Canadian politics through the "Reliable Recent News" site (2023–2024). OpenAI confirmed the broader operation used ChatGPT for translation and social media comment generation.
Canada's Rapid Response Mechanism (RRM) detected multiple PRC-attributed Spamouflage campaigns (2023–2025) that used AI-generated deepfake videos and, in a first for Canada, non-consensual intimate imagery to target Canadian MPs and Chinese-Canadian critics.
British Columbia's Civil Resolution Tribunal ruled Air Canada liable after its chatbot provided inaccurate information about the airline's bereavement fare policy, setting a precedent that companies are responsible for what their chatbots tell customers.
A rent-pricing algorithm allegedly pooled rival landlords' confidential data to generate coordinated rent recommendations, triggering a Competition Bureau investigation.