Pilot phase: CAIM is under construction. The records are provisional, based on public sources, and have not yet been peer-reviewed. Comments welcome.
Critical Asset · Confidence: medium

Frontier AI models demonstrate capabilities relevant to biological and chemical weapons development. Canada hosts BSL-4 infrastructure, chairs the international assessment identifying this risk, and has signed commitments acknowledging it — but has no dedicated assessment or evaluation mandate for AI-related biosecurity.

Identified: January 25, 2024 · Last assessed: March 10, 2026

Frontier AI systems are demonstrating capabilities relevant to biological and chemical weapon development that multiple AI developers have been unable to confidently rule out as providing meaningful uplift to non-expert actors. In May 2025, Anthropic activated ASL-3 protections — its second-highest safety tier — for Claude Opus 4 because it could not "confidently rule out the ability of their most advanced model to uplift people with basic STEM backgrounds" for bio/chem weapons development. This was the first time any AI developer deployed a model under its highest activated safety level specifically due to biosecurity concerns.

In June 2025, RAND Corporation tested three frontier models (Llama 3.1 405B, ChatGPT-4o, and Claude 3.5 Sonnet) and found that all three "successfully provide accurate instructions and guidance for recovering a live poliovirus from a construct built from commercially obtained synthetic DNA." RAND argued that existing safety assessments "underestimate this risk" due to flawed assumptions about the tacit knowledge barriers that remain. The IASR 2026, chaired by Canadian researcher Yoshua Bengio, reports that "in one study a recent model outperformed 94% of domain experts at troubleshooting virology laboratory protocols" — referring to OpenAI's o3 achieving 43.8% accuracy on SecureBio's Virology Capabilities Test versus 22.1% average for human experts in their sub-specialties.

A separate line of evidence concerns AI protein design tools. In October 2025, research published in Science found that AI protein design tools generated over 70,000 DNA sequences for variant forms of controlled toxic proteins, and one screening tool missed more than 75% of potential toxins. After a 10-month remediation effort, screening was improved to catch 97% of high-risk sequences — but the episode demonstrated that biosecurity controls have not kept pace with capability.

Canada's exposure to this risk is direct and multi-layered. The country's sole operational BSL-4 facility — the Canadian Science Centre for Human and Animal Health in Winnipeg — works with Ebola, Marburg, and other high-consequence pathogens. The Qiu/Cheng incident (scientists terminated 2019–2021; CSIS investigation confirmed 2024) demonstrated that insider threats at this facility are real: CSIS found intentional transfer of scientific knowledge and materials related to the Ebola and Marburg viruses. Canada also has 17+ academic BSL-3 facilities through the CCABL3 consortium, and VIDO at the University of Saskatchewan is constructing a second BSL-4, which will be the only non-government CL4 facility in Canada.

Canada's Sensitive Technology List (published February 2025) explicitly identifies the convergence: "advancements in nanotechnology, synthetic biology, artificial intelligence and sensing technologies could provide enhancements to existing weapons, such as biological/chemical weapons." Canada signed the Seoul Declaration (May 2024) recognizing that frontier AI could "meaningfully assist non-state actors in advancing the development, production, acquisition or use of chemical or biological weapons."

Yet Canada has published no dedicated assessment of AI-enabled bio/chem weapon risk. The Canadian AI Safety Institute (CAISI, launched November 2024, $50M over five years) does not yet explicitly include biosecurity evaluations in its public mandate. The gap between Canada's international commitments acknowledging this risk and its domestic institutional capacity to evaluate it is the core governance concern.

Anthropic's activation of ASL-3 protections — the first such deployment by any AI developer — represents a case where voluntary safety frameworks functioned as designed: the company identified a risk during pre-deployment evaluation and applied its highest activated safety tier in response. Several other frontier AI developers have also implemented pre-deployment biosecurity evaluations. The debate centers on whether voluntary measures are sufficient or whether mandatory requirements are needed to ensure consistent evaluation across all developers.

Harms

Frontier AI models provide actionable knowledge for biological weapons development. RAND found that three frontier models "successfully provide accurate instructions and guidance for recovering a live poliovirus from a construct built from commercially obtained synthetic DNA." Anthropic activated ASL-3 protections for the first time because it could not rule out meaningful biosecurity uplift.

CBRN facilitation · Critical · Population

AI protein design tools create dual-use risks. De novo protein design enables the creation of novel molecular structures with potential applications that include toxin engineering; the IASR 2026 warns of the "dual-use risks" of such tools.

CBRN facilitation · Critical · Population

Canada lacks AI-specific biosecurity governance: no mandatory pre-deployment biosecurity evaluation for frontier AI models, no national assessment of AI-related biosecurity risk, and no regulatory framework linking AI safety evaluation to biosecurity oversight — despite Canada's BSL-4 infrastructure and its role chairing the IASR.

Editorial note: This is a governance gap, not a materialized harm. Its severity is assessed on the basis of the potential consequences if the gap persists as AI biosecurity capabilities advance.

CBRN facilitation · Significant · Sector

Evidence

9 reports

  1. Activating ASL-3 Protections (primary source)
    Official — Anthropic (May 1, 2025)

    First model deployed with ASL-3 protections due to biosecurity concerns

  2. Academic — RAND Corporation (June 1, 2025)

    Three frontier models provided accurate poliovirus recovery instructions from synthetic DNA

  3. Academic — Science (October 1, 2025)

    AI protein design tools generated 70K+ toxic protein sequences; screening missed 75%+

  4. Academic — International AI Safety Report (February 3, 2026)

    AI model outperformed 94% of domain experts on virology lab protocols

  5. Academic — Centre for International Governance Innovation (January 1, 2024)

    LLMs showed 80% improvement in instructions for releasing lethal substances in 2024

  6. Official — OpenAI (August 1, 2024)

    AI access brought student bio/chem performance to expert baseline on magnification/formulation

  7. Official — Anthropic (January 1, 2025)

    Claude exceeded expert baselines on molecular biology and cloning workflow benchmarks

  8. Academic — Centre for International Governance Innovation (January 1, 2025)

    Canada lacks a dedicated biosecurity strategy

  9. Official — Government of Canada (February 6, 2025)

    Canada's Sensitive Technology List identifies AI-biosecurity convergence

Record details

Policy recommendations (assessed)

Include biosecurity evaluation explicitly in CAISI's mandate and research priorities

Centre for International Governance Innovation

Develop a comprehensive Canadian biosecurity strategy addressing AI-enabled threats

Centre for International Governance Innovation

Require mandatory pre-deployment biosecurity assessment for frontier models deployed in Canada

International AI Safety Report 2026

Editorial assessment (assessed)

Several AI developers have activated their highest safety protocols because they cannot rule out that their models provide meaningful assistance to bio/chem weapons development. Canada hosts the country's only BSL-4 facility and has signed international commitments recognizing the risk, but has no dedicated assessment.

Entities involved

Taxonomy (assessed)

Domain
Health · Defence and security
Harm type
Security incident · CBRN facilitation
AI contribution pathway
Use beyond intended scope · Capability beyond specifications · Ineffective safety mechanism
Lifecycle phase
Deployment · Evaluation

Change history

Version | Date | Change
v1 | March 10, 2026 | Initial publication
