AI-generated wildfire images spread misinformation during emergencies in British Columbia's 2025 fire season
First documented case in Canada of AI-generated images creating misinformation during a natural-disaster emergency. The BC Wildfire Service warned that fabricated images could affect emergency decision-making in both directions, either exaggerating fire intensity or underrepresenting the danger. No deaths or injuries have been attributed to the AI-generated images.
Narrative
During British Columbia’s 2025 wildfire season, AI-generated images depicting wildfire scenes circulated widely on social media platforms, prompting an official warning from the BC Wildfire Service on August 5, 2025.
The service identified multiple fabricated images being shared on social media that inaccurately portrayed fire conditions around British Columbia. One image was posted by a self-described “digital creator” on Facebook on July 31 with a caption referencing the Drought Hill fire near Peachland. The following day, the caption was edited to add a disclaimer that the image was AI-generated and intended for “illustrative purposes only” — but by then it had already been shared as authentic documentation of the fire.
The BC Wildfire Service noted that many of the AI-generated images exaggerated the size and intensity of blazes burning around the province, stoking fear. The service also warned of the inverse risk: a fabricated image could depict an aggressive wildfire as less intense than it actually is, leading someone in danger to pay less attention. Both directions of misinformation, exaggeration and minimization, carry safety consequences in an emergency where people make evacuation decisions based on perceived fire behavior.
The service emphasized that people routinely turn to social media for wildfire updates, and that the proliferation of AI-generated imagery “is a new wrinkle that could change someone’s decision-making in an emergency if they don’t know any better.” Whether well-intentioned or deliberately misleading, false or outdated information can make wildfire season worse, even causing people to take unnecessary risks.
The incident occurred during an active fire season with significant fire activity across BC, when accurate real-time information was critical for public safety. No specific injuries or deaths have been attributed to AI-generated wildfire misinformation, but the documented potential for AI imagery to affect emergency decision-making represents a novel harm pathway that had not previously been observed in the Canadian context.
Harms
AI-generated images exaggerating the size and intensity of BC wildfires circulated widely on social media, stoking public fear during an active wildfire emergency. The BC Wildfire Service warned that false imagery could alter evacuation decisions — either by causing unnecessary panic or by underrepresenting real danger, leading people in affected areas to underestimate risk.
Emergency communication integrity was undermined as AI-generated wildfire imagery mixed with authentic reporting, making it harder for the public to distinguish real fire conditions from fabricated ones during a period when accurate information was critical for personal safety decisions.
Affected populations
- British Columbia residents in wildfire-affected areas
- Social media users following BC wildfire updates
- Emergency responders managing public communication
AI system context
Generative AI image tools (unspecified) used to create realistic but fabricated wildfire images. At least one image was posted by a self-described "digital creator" on Facebook on July 31 with a caption referencing the Drought Hill fire near Peachland, BC. The caption was edited the following day to add a disclaimer that the image was AI-generated and intended for "illustrative purposes only." The BC Wildfire Service identified multiple additional AI-generated images circulating on social media that exaggerated fire size and intensity.
Preventive measures
- Require social media platforms to implement AI-generated content detection and labeling for images related to active emergencies, with heightened enforcement during declared disaster periods
- Develop official emergency imagery verification channels so the public can distinguish authentic emergency images from AI-generated ones
- Include AI-generated misinformation in provincial emergency communication protocols as a recognized threat to public safety during natural disasters
- Establish platform obligations to suppress or label AI-generated content that depicts active emergency situations, particularly wildfire, flood, and evacuation scenarios
Related entries
Taxonomy
Sources
- AI-generated wildfire images spreading misinformation in B.C., fire officials warn
- AI images of B.C. wildfires circulating online, officials warn
- BC Wildfire Service warns of sharing AI-generated images of fires
- BC Wildfire Service warns AI photos spread misinformation and uncertainty
Revision history
| Version | Date | Change |
|---|---|---|
| v1 | March 8, 2026 | Initial publication |