Pilot phase: CAIM is under construction. Entries are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Escalating · Major · Confidence: high

AI agents are being deployed at scale in Canada: TD (25,000+ Copilot users), Scotiabank, CGI, Telus, and the federal government (Coveo protocol). The 2025 AI Agent Index found that 25 of 30 agents disclose no safety results. KPMG Canada reports that 27% of companies have deployed agentic AI. The first AI-orchestrated cyberattack occurred in November 2025. Canada has no governance framework.

Identified: October 1, 2024 · Last assessed: March 10, 2026

AI systems are increasingly deployed as autonomous agents — executing multi-step tasks, browsing the web, writing and running code, making purchases, interacting with APIs, and operating computer interfaces — with minimal human oversight between steps. This represents a qualitative shift from AI as a tool that responds to individual prompts to AI as an actor that pursues goals across extended action sequences, where errors compound and unintended consequences accumulate.
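
As a structural sketch of this pattern, consider the loop below: the model chooses each action and only the final result reaches the human. All names and interfaces here are illustrative assumptions, not any vendor's actual API.

    # Schematic agent loop: the model chooses the next action until it
    # declares the task done, and no human review happens between
    # iterations. All names are illustrative assumptions, not a real API.
    from typing import Callable

    def run_agent(choose_action: Callable, tools: dict[str, Callable],
                  task: str, max_steps: int = 100) -> str:
        history = [f"task: {task}"]
        for _ in range(max_steps):
            action, args = choose_action(history)   # model decides the next step
            if action == "finish":
                return args["summary"]               # only this reaches the human
            result = tools[action](**args)           # search, click, purchase, ...
            history.append(f"{action}{args} -> {result}")
        return "step budget exhausted"

    # Stub policy: search once, then finish, just to show the control flow.
    def stub_policy(history):
        if len(history) == 1:
            return "search", {"query": "flight YOW to YYZ"}
        return "finish", {"summary": "found 3 options, booked nothing"}

    print(run_agent(stub_policy,
                    {"search": lambda query: f"results for {query}"},
                    "find a flight"))

The contrast with single-turn use is the absence of a review point inside the loop: every intermediate action executes before a human sees anything.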

The deployment of agentic AI accelerated rapidly in 2025-2026. Anthropic released Claude computer use capabilities (October 2024), enabling AI to operate computer interfaces autonomously. OpenAI launched Operator (January 2025), an agent that performs web-based tasks on behalf of users. Google DeepMind deployed Mariner for web browsing and Jules for coding. Coding agents achieved dramatic capability gains: on the SWE-bench benchmark (resolving real GitHub issues), performance rose from under 5% in early 2024 to over 50% by mid-2025, with the best systems now resolving issues that would take experienced developers hours. Companies like Cognition (Devin), Factory, and others raised hundreds of millions of dollars for autonomous coding products.

The International AI Safety Report 2026 explicitly identifies agentic AI as an emerging risk category, noting that "the deployment of AI systems as autonomous agents introduces novel risk vectors including compounding errors across action sequences, difficulty of attributing responsibility for agent actions, and potential for agents to take actions with irreversible real-world consequences." The report notes that safety evaluation methodologies developed for single-turn interactions are inadequate for agentic systems that operate over extended time horizons.

The risk structure is distinct from other AI hazards. In a standard AI deployment, the human reviews each output before acting on it. In agentic deployment, the AI takes a sequence of actions — searching, reading, clicking, typing, submitting — with the human seeing only the final result. Each step has some probability of error or misinterpretation; across a sequence of dozens or hundreds of steps, errors compound. An agent that misunderstands a task specification may take confident, well-executed actions in the wrong direction — purchasing the wrong items, sending incorrect communications, modifying the wrong files, or interacting with the wrong systems — before the human can intervene.
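
To make the compounding concrete: if each step succeeds independently with probability p, an n-step sequence succeeds end to end with probability p^n. A minimal sketch, using assumed per-step reliability figures rather than measured ones:

    # How per-step errors compound across an agent's action sequence.
    # Per-step success rates below are illustrative assumptions.

    def sequence_success(per_step_success: float, steps: int) -> float:
        """Probability that every step in an independent sequence succeeds."""
        return per_step_success ** steps

    for p in (0.99, 0.999):
        for n in (10, 100, 500):
            print(f"per-step {p}, {n} steps -> "
                  f"{sequence_success(p, n):.1%} end-to-end success")

Even at 99% per-step reliability, a 100-step task completes cleanly only about 37% of the time, and a 500-step task less than 1% of the time.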

Accountability gaps are structural. When an AI agent sends an email, makes a purchase, modifies a database, or files a form on behalf of a user or organization, who bears responsibility if the action is incorrect, harmful, or unauthorized? Existing legal frameworks assume a human decision-maker at each point of action. Agentic AI disrupts this assumption without providing an alternative accountability structure.

Multi-agent dynamics add complexity. As organizations deploy multiple AI agents that interact with each other — one agent's output becoming another's input — emergent behaviours can arise that no individual agent was designed to produce. Market dynamics, information cascades, and coordination failures become possible at machine speed without human intervention points.
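
A toy model (rules and numbers assumed purely for illustration, echoing documented algorithmic-pricing spirals) shows how two locally reasonable agents can jointly produce behaviour neither was designed for:

    # Toy model of two pricing agents in a feedback loop. Each rule looks
    # locally sensible, yet together they drive an unbounded price spiral
    # that neither agent was designed to produce. Numbers are illustrative.

    def pricing_spiral(price_a: float, price_b: float, rounds: int) -> None:
        for i in range(rounds):
            price_a = price_b * 1.10   # agent A always prices 10% above B
            price_b = price_a * 1.05   # agent B always prices 5% above A
            print(f"round {i + 1}: A={price_a:.2f}  B={price_b:.2f}")

    pricing_spiral(10.0, 10.0, 5)

After five rounds both prices have roughly doubled, with no human checkpoint between rounds to interrupt the loop.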

Harms

Agentic AI systems execute multi-step tasks (browsing, coding, purchasing, API interactions) with minimal human oversight between steps. Errors compound across action sequences without human review.

Compromised autonomy · Service disruption · Moderate · Population

Existing legal liability frameworks presuppose a human decision-maker at every consequential step. Agentic AI that acts autonomously creates an accountability gap in which no entity bears clear responsibility for the agent's autonomous actions.

Compromised autonomy · Major · Population

Evidence

10 reports

  1. Introducing Computer Use · Primary source
    Official — Anthropic (Oct 22, 2024)

    Anthropic released Claude computer use capabilities enabling AI to operate computer interfaces

  2. Introducing Operator · Primary source
    Official — OpenAI (Jan 23, 2025)

    OpenAI launched Operator for autonomous web-based tasks

  3. The 2025 AI Agent Index · Primary source
    Academic — MIT / Cambridge / Harvard / Stanford (Feb 1, 2025)

    25/30 deployed agents disclose no internal safety results; 23/30 have no third-party testing

  4. Official — Anthropic (Nov 1, 2025)

    First documented large-scale AI-orchestrated cyberattack: Claude Code used to perform 80-90% of attack work autonomously against ~30 targets

  5. Academic — International AI Safety Report (Feb 3, 2026)

    IASR 2026 identifies agentic AI as an emerging risk category with novel risk vectors

  6. Official — Treasury Board of Canada Secretariat (Jan 1, 2023)

    Canada's Directive on Automated Decision-Making does not cover broader agentic AI deployment

  7. Academic — Princeton NLP / SWE-bench (Jan 1, 2024)

    SWE-bench performance rose from under 5% to over 50% between early 2024 and mid-2025

  8. Academic — DeepMind / Anthropic / CMU / Harvard (Feb 1, 2025)

    Taxonomy of multi-agent failure modes: miscoordination, conflict, collusion; 50+ researchers

  9. Media — Microsoft Source Canada (Jan 1, 2026)

    TD Bank deployed Copilot to 25,000+ colleagues; Scotiabank pioneering agentic AI with EY and Microsoft

  10. Media — MSP Corp / KPMG Canada (Jan 1, 2026)

    KPMG Canada: 27% deployed agentic AI, 64% experimenting, 57% planning investment within 6 months

Entry details

Policy recommendations (assessed)

Develop a legal liability framework for actions taken by AI agents on behalf of persons or organizations

International AI Safety Report 2026

Require mandatory disclosure when AI agents interact with third parties on behalf of users

IASR 2026 / EU AI Act

Establish human oversight checkpoint requirements for AI agent actions with financial, legal, or safety consequences

IASR 2026

Editorial assessment (assessed)

Agentic AI is the defining capability shift in AI deployment. AI agents take concrete actions with minimal supervision. The IASR 2026 explicitly identifies agentic AI as an emerging risk. Canada has no liability framework and no oversight standards.

Entities involved

Anthropic (developer)
OpenAI (developer)

Related entries

Taxonomy (assessed)

Domain
Public services · Commerce · Finance and banking
Harm type
Security incident · Economic harm · Service disruption
AI contribution pathway
Deployment context · Absent oversight · Use beyond intended scope · Multi-agent dynamics
Lifecycle phase
Deployment · Monitoring

Change history

Version  Date            Change
v1       March 10, 2026  Initial publication
