Risk of AI-facilitated biological and chemical weapons development
Frontier AI models demonstrate capabilities relevant to biological and chemical weapons development. Canada hosts BSL-4 infrastructure, chairs the international assessment that identifies this risk, and has signed commitments acknowledging it, yet it has no dedicated assessment or evaluation mandate for AI-related biosecurity.
Frontier AI systems are demonstrating capabilities relevant to biological and chemical weapon development, and multiple AI developers have been unable to confidently rule out that their models provide meaningful uplift to non-expert actors. In May 2025, Anthropic activated ASL-3 protections, its second-highest safety tier, for Claude Opus 4 because it could not "confidently rule out the ability of their most advanced model to uplift people with basic STEM backgrounds" for bio/chem weapons development. This was the first time any AI developer deployed a model under its highest activated safety level specifically due to biosecurity concerns.
In June 2025, RAND Corporation tested three frontier models (Llama 3.1 405B, ChatGPT-4o, and Claude 3.5 Sonnet) and found that all three "successfully provide accurate instructions and guidance for recovering a live poliovirus from a construct built from commercially obtained synthetic DNA." RAND argued that existing safety assessments "underestimate this risk" due to flawed assumptions about the tacit knowledge barriers that remain. The IASR 2026, chaired by Canadian researcher Yoshua Bengio, reports that "in one study a recent model outperformed 94% of domain experts at troubleshooting virology laboratory protocols" — referring to OpenAI's o3 achieving 43.8% accuracy on SecureBio's Virology Capabilities Test versus 22.1% average for human experts in their sub-specialties.
A separate line of evidence concerns AI protein design tools. In October 2025, research published in Science found that AI protein design tools generated over 70,000 DNA sequences for variant forms of controlled toxic proteins, and one screening tool missed more than 75% of potential toxins. After a 10-month remediation effort, screening was improved to catch 97% of high-risk sequences — but the episode demonstrated that biosecurity controls have not kept pace with capability.
Canada's exposure to this risk is direct and multi-layered. The country's sole BSL-4 facility, the Canadian Science Centre for Human and Animal Health in Winnipeg, works with Ebola, Marburg, and other high-consequence pathogens. The Qiu/Cheng incident (scientists terminated 2019-2021; CSIS investigation confirmed 2024) demonstrated that insider threats at this facility are real: CSIS found intentional transfer of scientific knowledge and materials related to the Ebola and Marburg viruses. Canada also has 17+ academic BSL-3 facilities through the CCABL3 consortium, and VIDO at the University of Saskatchewan is constructing a second BSL-4 facility, which will be the only non-government CL4 in Canada.
Canada's Sensitive Technology List (published February 2025) explicitly identifies the convergence: "advancements in nanotechnology, synthetic biology, artificial intelligence and sensing technologies could provide enhancements to existing weapons, such as biological/chemical weapons." Canada signed the Seoul Declaration (May 2024) recognizing that frontier AI could "meaningfully assist non-state actors in advancing the development, production, acquisition or use of chemical or biological weapons."
Yet Canada has published no dedicated assessment of AI-enabled bio/chem weapon risk. The Canadian AI Safety Institute (CAISI, launched November 2024, $50M over five years) does not yet explicitly include biosecurity evaluations in its public mandate. The gap between Canada's international commitments acknowledging this risk and its domestic institutional capacity to evaluate it is the core governance concern.
Anthropic's activation of ASL-3 protections — the first such deployment by any AI developer — represents a case where voluntary safety frameworks functioned as designed: the company identified a risk during pre-deployment evaluation and applied its highest activated safety tier in response. Several other frontier AI developers have also implemented pre-deployment biosecurity evaluations. The debate centers on whether voluntary measures are sufficient or whether mandatory requirements are needed to ensure consistent evaluation across all developers.
Harms
Frontier AI models provide actionable knowledge for biological weapons development. RAND found that three frontier models "successfully provide accurate instructions and guidance for recovering a live poliovirus from a construct built from commercially obtained synthetic DNA." Anthropic activated ASL-3 protections for the first time because it could not rule out meaningful biosecurity uplift.
AI protein design tools create dual-use risks. De novo protein design enables the creation of novel molecular structures with potential applications that include toxin engineering; the IASR 2026 warns of the "dual-use risks" of such tools.
Canada lacks AI-specific biosecurity governance: no mandatory pre-deployment biosecurity evaluation for frontier AI models, no national assessment of AI-related biosecurity risk, and no regulatory framework linking AI safety evaluation to biosecurity oversight, despite Canada's BSL-4 infrastructure and its role chairing the IASR.
Editorial note: This is a governance gap, not a materialized harm. Its severity is assessed on the basis of the potential consequences if the gap persists as AI biosecurity-relevant capabilities advance.
Evidence
9 reports
- Activating ASL-3 Protections (primary source)
First model deployed with ASL-3 protections due to biosecurity concerns
- Contemporary Foundation AI Models Increase Biological Weapons Risk (primary source)
Three frontier models provided accurate poliovirus recovery instructions from synthetic DNA
- AI-designed toxins slip through safety checks used by companies that make custom DNA (primary source)
AI protein design tools generated 70K+ toxic protein sequences; screening missed 75%+
- International AI Safety Report 2026 (primary source)
AI model outperformed 94% of domain experts on virology lab protocols
- LLMs showed 80% improvement in instructions for releasing lethal substances in 2024
- AI access brought student bio/chem performance to expert baseline on magnification/formulation
- Claude exceeded expert baselines on molecular biology and cloning workflow benchmarks
- Canada lacks a dedicated biosecurity strategy
- Canada's Sensitive Technology List identifies AI-biosecurity convergence
Card details
Policy recommendations (evaluated)
Include biosecurity evaluation explicitly in CAISI's mandate and research priorities (Centre for International Governance Innovation)
Develop a comprehensive Canadian biosecurity strategy addressing AI-enabled threats (Centre for International Governance Innovation)
Require mandatory pre-deployment biosecurity assessment for frontier models deployed in Canada (International AI Safety Report 2026)
Editorial assessment (evaluated)
Several AI developers have activated their highest safety protocols because they cannot rule out that their models provide meaningful assistance to bio/chem weapons development. Canada hosts the country's only BSL-4 facility and has signed international commitments acknowledging the risk, yet it has no dedicated assessment.
Entities involved
Taxonomy (evaluated)
Revision history
| Version | Date | Change |
|---|---|---|
| v1 | March 10, 2026 | Initial publication |