Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Status: Escalating · Severity: Critical · Confidence: high

Canada's policy commits to appropriate human involvement in lethal force, but allied militaries are deploying AI targeting systems (Maven, Lavender) that compress decision cycles from weeks to minutes. No framework exists for CAF to manage this gap in coalition operations.

Identified: March 1, 2024 · Last assessed: February 28, 2026

Canada has stated that fully autonomous weapons systems would be unacceptable and that the Canadian Armed Forces is committed to maintaining appropriate human involvement in the use of military capabilities that can exert lethal force. At the 78th UN General Assembly in December 2023, Canada voted in favour of Resolution 78/241 on autonomous weapons, and at the 79th session in December 2024, voted in favour of Resolution 79/239 on lethal autonomous weapons systems.

Allied militaries with which Canada must maintain interoperability are deploying AI targeting systems that operate under different assumptions about the speed and nature of human oversight.

In February 2026, the United States used the Maven Smart System — integrating Anthropic's Claude via a Palantir contract and running on AWS — in Operation Epic Fury against Iran. The system helped generate approximately 1,000 strike targets in the first 24 hours. The U.S. conducted approximately 900 strikes in the first 12 hours. Reporting described how AI-driven systems compressed decision cycles from weeks into minutes.

In Gaza, the Israel Defense Forces used the AI targeting systems Lavender and The Gospel. An investigation by +972 Magazine documented Lavender's approximately 10% error rate and analyst review times of approximately 20 seconds per target. Human Rights Watch published a separate analysis of the legal and methodological concerns raised by the use of machine learning for targeting. The AI Incident Database catalogued this as Incident 672.

Concurrently, DND/CAF's own AI strategy, drafted in 2022 and not approved until March 2024, acknowledged that neither DND nor the CAF is "positioned to adopt and take advantage of AI" and described AI initiatives as "fragmented, with each command and environment addressing AI independently." In October 2025, Canada's National Security and Intelligence Review Agency (NSIRA) initiated a formal review of AI use in national security and intelligence activities, indicating that AI deployment in the security apparatus has outpaced existing oversight frameworks.

The interoperability hazard is structural: in a coalition operation, CAF personnel may need to act on intelligence, targeting data, or operational plans generated by allied AI systems that operate at speeds and error tolerances that are incompatible with Canada's stated policy on human oversight. No framework currently exists to manage this gap.

Harms

Allied militaries with which Canada operates in coalition deploy AI systems for targeting, intelligence analysis, and autonomous weapons platforms with error tolerances and decision speeds that may conflict with Canada's stated commitment to meaningful human control over lethal force decisions.

Harm types: Safety Incident, Autonomy Undermined · Severity: Critical · Scope: Population

Canada has no national doctrine specifying which allied AI outputs Canadian forces may rely on, what validation is required, or how to maintain meaningful human control when receiving AI-generated intelligence and targeting data from coalition partners with different operational standards.

Harm type: Autonomy Undermined · Severity: Significant · Scope: Sector

Evidence

6 reports

  1. Official — Department of National Defence (Mar 1, 2024)

    DND/CAF acknowledged AI approach is 'fragmented'; not positioned to adopt AI

  2. Official — NSIRA (Jan 6, 2026)

    NSIRA initiated formal review of AI use in national security activities

  3. Academic — Nature (Mar 1, 2026)

    Maven Smart System processed millions of objects; AI compressed decision cycles from weeks to minutes

  4. Media — +972 Magazine (Apr 3, 2024)

    Lavender ~10% error rate; 20-second analyst review; targeting methodology

  5. Media — CBC News (Jul 1, 2024)

    DND AI strategy drafted 2022, approved March 2024; 'fragmented' finding

  6. Official — United Nations Office for Disarmament Affairs (Oct 1, 2024)

    Canada's position requiring 'context-appropriate human involvement'; voted for Resolution 78/241

Record details

Responses & Outcomes

National Security and Intelligence Review Agency · investigation · Active

NSIRA initiated a formal review of the use and governance of artificial intelligence in national security and intelligence activities, issuing a notification letter on January 6, 2026.

Review is ongoing. Scope and findings not yet published.

Policy Recommendations · assessed

DND/CAF should develop a framework governing Canadian forces' engagement with AI-generated targeting data from allied systems in coalition operations

DND/CAF Artificial Intelligence Strategy (Mar 1, 2024)

Canada should work with Five Eyes and NATO partners to establish transparency and accountability standards for AI targeting systems used in coalition operations

SIPRI / DND CCW GGE side event (Mar 3, 2026)

NSIRA's review of AI in national security should assess the interoperability gap between Canada's LAWS policy commitments and allied AI system capabilities

NSIRA (Jan 6, 2026)

Editorial Assessment · assessed

This record documents the structural gap between Canada's stated commitment to appropriate human involvement in autonomous weapons and the operational reality of allied AI systems. Operation Epic Fury demonstrated AI targeting at a scale and speed incompatible with Canada's policy position. DND's own AI strategy acknowledges the institutional capability gap, and NSIRA's October 2025 review indicates the oversight body recognizes that AI deployment has outpaced governance frameworks.

Status History

Escalating · Confidence: high · Severity: Critical

Operation Epic Fury demonstrated AI targeting at unprecedented operational scale; NSIRA initiated AI review; DND strategy admits fragmented approach

Active · Confidence: medium · Severity: Severe

DND/CAF AI strategy acknowledged institutional AI capability gap; Lavender/Gospel documented in Gaza operations

Entities Involved

Related Records

Taxonomy · assessed

Domain
Defence & Security
Harm type
Safety Incident
AI pathway
Deployment Context, Oversight Absent
Lifecycle phase
Deployment, Procurement

Changelog

Version · Date · Change
v1 · Mar 11, 2026 · Initial publication
v2 · Mar 11, 2026 · Verification upgraded from corroborated to confirmed: DND/CAF acknowledged AI approach is fragmented; NSIRA initiated formal review.
