About CAIM
The Canadian AI Incident Monitor (CAIM) is a public-interest project that documents AI incidents and hazards affecting Canada. It is operated by Horizon Omega, a Canadian not-for-profit working to reduce risks from artificial intelligence.
Mission
CAIM exists to make Canada's experience with AI systems (failures, near-misses, and emerging hazards) visible and structured. By documenting what goes wrong and what prevents harm, it aims to support better decisions about AI deployment and oversight, from procurement to regulation.
What CAIM does
- Documents AI incidents and hazards with a Canada nexus, using structured records with transparent sourcing and verification status
- Publishes a searchable, citable database with structured API access
- Maintains editorial standards, privacy safeguards, and a published correction process
- Aligns with international frameworks (OECD, AIID) for cross-border comparability
- Operates bilingually in English and French
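To give a concrete sense of what a "structured record with transparent sourcing and verification status" could look like, here is a minimal sketch. Every field name here is an illustrative assumption, not CAIM's actual schema; the real schema is documented on the Methodology page.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    # Hypothetical fields for illustration only; not CAIM's published schema.
    record_id: str
    title_en: str                  # bilingual operation: English title
    title_fr: str                  # bilingual operation: French title
    event_type: str                # e.g. "incident" or "hazard"
    canada_nexus: str              # why the event falls within CAIM's scope
    verification_status: str       # e.g. "unverified" or "corroborated"
    sources: list[str] = field(default_factory=list)  # transparent sourcing

# Example record using the hypothetical fields above.
record = IncidentRecord(
    record_id="CAIM-0001",
    title_en="Example incident",
    title_fr="Incident exemple",
    event_type="incident",
    canada_nexus="System deployed by a Canadian institution",
    verification_status="corroborated",
    sources=["https://example.org/news-article"],
)
```

A record shaped like this is what makes a database searchable, citable, and machine-readable over an API: each claim is tied to listed sources and an explicit verification status rather than embedded in prose.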
What CAIM does not do
CAIM does not regulate, adjudicate, or enforce. It does not determine liability or compliance. It does not publish personal data about victims. It does not optimize for volume at the expense of quality. It is not a substitute for reporting to law enforcement, regulators, or privacy commissioners.
Why a monitor
Canada deploys AI systems across healthcare, public services, law enforcement, and critical infrastructure. When these systems fail or are misused, information about those failures is fragmented across news coverage, regulatory filings, court records, and institutional memory.
Incident monitors exist in aviation (ASRS), medical devices (FDA MAUDE), and chemical safety (CSB) because individual failures are anecdotes; aggregated and structured, they become evidence.
Aviation's confidential near-miss reporting revealed that most accidents stemmed from communication failures, not mechanical ones, reshaping crew training worldwide. The FDA's device database has triggered recalls that no single hospital's report would have justified. By making failures visible, these systems produce the institutional learning that scattered reporting cannot.
The International AI Safety Report 2026, produced by an independent expert group convened through the AI Safety Summit process, identifies incident reporting as a critical but underdeveloped component of AI risk governance, noting that policymakers have "limited visibility into how risks are identified, evaluated, and managed in practice" and that "evidence on the real-world effectiveness of AI risk management practices remains limited."
CAIM applies this model to AI in Canada: consolidating fragmented information into a structured, well-sourced, publicly accessible evidence base.
International context
AI risks are international but governance is national. Effective monitoring requires both layers: cross-country surveillance to detect global patterns, and national-depth monitoring to connect incidents to the specific laws, institutions, and authorities that can act on them. This is the model that works in aviation safety, where national investigation boards (Canada's TSB, France's BEA, the UK's AAIB) conduct deep investigations under national jurisdiction while feeding findings into ICAO's global safety system.
Three international databases provide the cross-country layer for AI: the OECD AI Incidents Monitor, which scans international media and classifies incidents across countries; the AI Incident Database (AIID), which catalogues incidents through crowdsourced submissions and researcher curation; and AIAAIC, which independently curates incidents and emerging issues.
CAIM adds the national depth that cross-country surveillance cannot provide: francophone sources, provincial and municipal events, government automated decision systems, jurisdictional mapping that identifies which level of government has regulatory authority, and tracking of governance responses (whether a regulator investigated, what they found, and what changed). As a bilingual monitor, CAIM also bridges anglophone and francophone AI safety communities.
Current status
CAIM is in its pilot phase. The editorial framework, record schema, and initial records are established. A continuous monitoring pipeline scans Canadian sources in English and French, using LLM-assisted triage and extraction with mandatory human review before publication. All editorial functions are currently held by a single editor. The project is growing its record corpus and building toward a multi-person editorial team.
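The pipeline described above can be sketched in a few lines. This is a simplified assumption of how such a flow might be structured, not CAIM's actual implementation: the triage step stands in for an LLM call with trivial keyword matching, and the function names are invented for illustration.

```python
def triage(item: dict) -> bool:
    """Stand-in for LLM-assisted triage: flag items that look AI- and Canada-related.
    Keyword matching is used here purely for illustration."""
    text = item["text"].lower()
    mentions_ai = "ai" in text.split() or "artificial intelligence" in text
    return mentions_ai and "canada" in text

def extract(item: dict) -> dict:
    """Stand-in for LLM-assisted extraction: build a draft record from a flagged item."""
    return {"title": item["title"], "source": item["url"], "status": "draft"}

def pipeline(items: list[dict], human_review) -> list[dict]:
    """Drafts pass through mandatory human review; nothing publishes without approval."""
    drafts = [extract(item) for item in items if triage(item)]
    return [draft for draft in drafts if human_review(draft)]

# Usage with toy scanned items and an approve-everything reviewer.
items = [
    {"title": "AI system fails in Canada", "text": "An AI system failed in Canada.", "url": "u1"},
    {"title": "Weather report", "text": "Sunny today.", "url": "u2"},
]
published = pipeline(items, human_review=lambda draft: True)
```

The design point is the review gate: however the automated triage and extraction stages are built, publication happens only after a human editor approves each draft.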
Records are based on public sources and have not yet been peer-reviewed. They should be treated as provisional. Not all schema fields are populated on all records; coverage of response tracking, trajectory analyses, and harms data is still growing. We welcome corrections and feedback at caim@horizonomega.org.
Toward public infrastructure
Canada will eventually need a statutory AI incident reporting function, with compulsory disclosure, investigation authority, and whistleblower protection. CAIM advocates for its creation and is designed so its methodology and data structures can inform it.
But effective safety systems have both layers. In aviation, government investigation boards coexist with independent safety foundations and confidential reporting programs operated outside the regulator. The civil society layer didn't disappear when the government layer matured; it became more valuable, because someone needs to monitor government AI systems, surface cross-cutting patterns that no single regulator sees, and hold the statutory body itself accountable.
CAIM is built to be that independent layer: permanent civil society infrastructure that complements a government function, not a prototype that dissolves into one.
Governance and independence
Editorial decisions follow published standards. No funder, organization, or government body has editorial control, and governance policies are designed to maintain that independence as CAIM grows beyond its founding team. Funding sources will be disclosed publicly as they are secured.
For details on editorial roles, conflicts of interest, corrections and appeals, and transparency reporting, see the Governance page. For schema, taxonomy, and editorial methodology, see the Methodology page.
Contact
For general inquiries, collaboration proposals, or questions about CAIM's methodology and governance, contact caim@horizonomega.org.
To submit a report, see the Submit page.