AI-Generated Wildfire Images Spread Emergency Misinformation During British Columbia's 2025 Fire Season
First documented case in Canada of AI-generated images spreading misinformation during an active natural disaster. The BC Wildfire Service warned that fabricated imagery could affect emergency decision-making in both directions: exaggerating fire intensity to cause unnecessary panic, or underrepresenting danger so that people underestimate real risk. No injuries or deaths have been attributed to the AI-generated imagery.
Narrative
During British Columbia’s 2025 wildfire season, AI-generated images depicting wildfire scenes circulated widely on social media platforms, prompting an official warning from the BC Wildfire Service on August 5, 2025.
The service identified multiple fabricated images being shared on social media that inaccurately portrayed fire conditions around British Columbia. One image was posted by a self-described “digital creator” on Facebook on July 31 with a caption referencing the Drought Hill fire near Peachland. The following day, the caption was edited to add a disclaimer that the image was AI-generated and intended for “illustrative purposes only” — but by then it had already been shared as authentic documentation of the fire.
The BC Wildfire Service noted that many of the AI-generated images exaggerated the size and intensity of blazes burning around the province, stoking fear. The service also warned of the inverse risk: an image could depict an aggressive wildfire as less intense than it actually is, leading someone in danger to pay less attention. Both directions of misinformation, exaggeration and minimization, carry safety consequences in an emergency where people make evacuation decisions based on perceived fire behavior.
The service emphasized that people routinely turn to social media for wildfire updates, and that the proliferation of AI-generated imagery “is a new wrinkle that could change someone’s decision-making in an emergency if they don’t know any better.” Whether well-intentioned or deliberately misleading, false or outdated information can make wildfire season worse, even causing people to take unnecessary risks.
The incident occurred during an active fire season with significant fire activity across BC, when accurate real-time information was critical for public safety. No specific injuries or deaths have been attributed to AI-generated wildfire misinformation, but the documented potential for AI imagery to affect emergency decision-making represents a novel harm pathway that had not previously been observed in the Canadian context.
Harms
AI-generated images exaggerating the size and intensity of BC wildfires circulated widely on social media, stoking public fear during an active wildfire emergency. The BC Wildfire Service warned that false imagery could alter evacuation decisions — either by causing unnecessary panic or by underrepresenting real danger, leading people in affected areas to underestimate risk.
Emergency communication integrity was undermined as AI-generated wildfire imagery mixed with authentic reporting, making it harder for the public to distinguish real fire conditions from fabricated ones during a period when accurate information was critical for personal safety decisions.
Affected Populations
- British Columbia residents in wildfire-affected areas
- Social media users following BC wildfire updates
- Emergency responders managing public communication
AI System Context
Generative AI image tools (unspecified) were used to create realistic but fabricated wildfire images. At least one image was posted by a self-described "digital creator" on Facebook on July 31 with a caption referencing the Drought Hill fire near Peachland, BC. The caption was edited the following day to add a disclaimer that the image was AI-generated and intended for "illustrative purposes only." The BC Wildfire Service identified multiple additional AI-generated images circulating on social media that exaggerated fire size and intensity.
Preventive Measures
- Require social media platforms to implement AI-generated content detection and labeling for images related to active emergencies, with heightened enforcement during declared disaster periods
- Develop official emergency imagery verification channels so the public can distinguish authentic emergency images from AI-generated ones
- Include AI-generated misinformation in provincial emergency communication protocols as a recognized threat to public safety during natural disasters
- Establish platform obligations to suppress or label AI-generated content that depicts active emergency situations, particularly wildfire, flood, and evacuation scenarios
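The detection-and-labeling measures above could lean in part on content-provenance metadata that several major image generators already embed. Below is a minimal sketch, not a production approach: it byte-scans an upload for two real published marker strings (the C2PA/JUMBF manifest-store label "c2pa" and the IPTC digital-source-type value "trainedAlgorithmicMedia"). The function name and the scan-the-raw-bytes shortcut are illustrative assumptions; a real platform would parse the container format and cryptographically verify the C2PA manifest rather than string-match.

```python
# Hypothetical sketch of platform-side labeling support: flag an uploaded
# image for an "AI-generated" label if its bytes contain a known
# provenance marker. Marker strings are real published identifiers;
# the byte-scan itself is a deliberate simplification.

KNOWN_AI_PROVENANCE_MARKERS = [
    b"c2pa",                     # C2PA manifest-store label in a JUMBF box
    b"trainedAlgorithmicMedia",  # IPTC digital source type for AI output
]


def suggest_ai_label(image_bytes: bytes) -> bool:
    """Return True if the raw image bytes carry a known AI-provenance
    marker, i.e. the platform should surface an 'AI-generated' label."""
    return any(marker in image_bytes for marker in KNOWN_AI_PROVENANCE_MARKERS)


if __name__ == "__main__":
    tagged = b"\xff\xd8 jumb c2pa manifest"   # stand-in for a tagged JPEG
    untagged = b"\xff\xd8 plain jpeg data"
    print(suggest_ai_label(tagged))    # True
    print(suggest_ai_label(untagged))  # False
```

Note the asymmetry: a positive hit is strong evidence the image is AI-generated, but a miss proves nothing, since metadata is easily stripped on re-upload. That is why the measures above pair detection with official verification channels rather than relying on labeling alone.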
Related Records
Taxonomy
Sources
- AI-generated wildfire images spreading misinformation in B.C., fire officials warn
- AI images of B.C. wildfires circulating online, officials warn
- BC Wildfire Service warns of sharing AI-generated images of fires
- BC Wildfire Service warns AI photos spread misinformation and uncertainty
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |