About CAIM
The Canadian AI Incident Monitor (CAIM) is a public-interest project that documents AI incidents and hazards affecting Canada. It is operated by Horizon Omega, a Canadian not-for-profit organization working to reduce risks from artificial intelligence.
Mission
CAIM exists to make Canada's experience with AI systems — failures, near-misses, and emerging hazards — visible, structured, and useful. By documenting what goes wrong and what prevents harm, CAIM aims to support better decisions about AI deployment and oversight, from government procurement to regulation.
Why a monitor
Canada deploys AI systems across healthcare, public services, law enforcement, and critical infrastructure. When these systems fail or are misused, the resulting information is fragmented: scattered across news coverage, regulatory filings, court records, and institutional memory.
Incident monitors exist in aviation (ASRS), medical devices (FDA MAUDE), and chemical safety (CSB) because an individual failure is an anecdote; aggregated and structured, failures become evidence. Aviation's confidential near-miss reporting revealed that most accidents stemmed from communication failures, not mechanical ones — reshaping crew training worldwide. The FDA's device database has triggered recalls that no single hospital's report would have justified. By making failures visible, these systems enable prevention, evidence-based regulation, and collective learning that scattered reporting can't produce.
CAIM applies this model to AI in Canada: consolidating fragmented information into a structured, well-sourced, publicly accessible evidence base.
International monitoring network
AI risks are international but governance is national. Effective monitoring requires both layers: cross-country surveillance to detect global patterns, and national-depth monitoring to connect incidents to the specific laws, institutions, and regulatory authorities that can act on them. This is the model that works in aviation safety, where national investigation boards (Canada's TSB, France's BEA, the UK's AAIB) conduct deep investigations under national jurisdiction while feeding structured findings into ICAO's global safety system.
Three international databases provide the cross-country layer for AI. The OECD AI Incidents Monitor scans international news media and classifies incidents and hazards across countries. The AI Incident Database catalogues incidents through crowdsourced submissions and researcher curation. AIAAIC independently curates incidents and emerging issues.
CAIM is Canada's node in this emerging network. It adds the national depth that cross-country surveillance cannot provide: francophone sources, provincial and municipal events, government automated decision systems, jurisdictional mapping that identifies which level of government has regulatory authority, and tracking of governance responses — whether a regulator investigated, what they found, and what changed.
Every CAIM record carries both CAIM's native taxonomy and the OECD framework, with AIID cross-references where matches exist. Data is structured for bidirectional exchange: CAIM ingests from international databases and exports to them. As a bilingual monitor, CAIM also bridges anglophone and francophone AI safety communities — connecting Canada's monitoring to parallel efforts across la Francophonie and the EU. The API exposes OECD-compatible exports.
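The dual classification described above can be pictured with a purely hypothetical sketch. Every field name, value, and the export function below are illustrative assumptions, not CAIM's published schema; the point is only that one record carries its native taxonomy, an OECD-aligned mapping, and optional AIID cross-references side by side:

```python
# Hypothetical illustration of a dual-classified record.
# Field names are invented for this sketch, not CAIM's actual schema.
record = {
    "caim_id": "CAIM-0001",            # invented native identifier
    "title": "Example: automated decision error in a benefits system",
    "jurisdiction": {"level": "provincial"},
    "caim_taxonomy": {"type": "incident", "severity": "moderate"},
    "oecd_mapping": {"event_type": "AI incident", "harm_type": ["economic"]},
    "aiid_cross_refs": [],             # filled only where a match exists
    "sources": [{"url": "https://example.org/report", "verified": True}],
}

def to_oecd_export(rec: dict) -> dict:
    """Project a native record onto its OECD-aligned fields for exchange."""
    return {
        "id": rec["caim_id"],
        "title": rec["title"],
        **rec["oecd_mapping"],
        "cross_refs": rec["aiid_cross_refs"],
    }
```

The projection function illustrates the "bidirectional exchange" idea: the native record is the richer superset, and an export view selects the internationally comparable subset.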
What CAIM does
- Documents AI incidents and hazards with a Canada nexus, using a structured record format with transparent sourcing and verification status
- Publishes a searchable, citable database with structured API access
- Maintains editorial standards, privacy safeguards, and a published correction process
- Aligns with international frameworks (OECD, AIID) for cross-border comparability
- Operates bilingually in English and French
What CAIM does not do
CAIM does not regulate, adjudicate, or enforce. It does not determine liability or compliance. It does not publish personal data about victims. It does not optimize for volume at the expense of quality. It is not a substitute for reporting to law enforcement, regulators, or privacy commissioners.
Current status
CAIM is in its founding phase, operated by the Horizon Omega team. The editorial framework, record schema, and initial records are established. Publication cadence and team size will scale with dedicated funding.
Pathway to government integration
CAIM is designed for eventual integration into Canada's public safety infrastructure. The model follows a well-established pattern: in aviation, medicine, and nuclear safety, systematic incident reporting typically began as independent or civil society efforts before becoming institutional functions with statutory authority.
CAIM operates as an independent civil society prototype, but the mature form of its function — compulsory incident disclosure, access to non-public information, and authoritative investigation — requires a mandate that only government can provide. An independent agency reporting to Parliament, analogous to the Transportation Safety Board and structurally separate from departments that promote AI adoption, is one plausible home, though the right institutional form will depend on how Canada's AI governance evolves.
Open data and methodology
CAIM's schema, taxonomy, and methodology are published and versioned. The record format, classification framework, and editorial standards are designed for transparency and reproducibility.
Governance and independence
No single funder, organization, or government body has editorial control. Editorial standards and governance policies are published and versioned. Funding sources will be disclosed publicly as they are secured.
For full details on editorial roles, conflicts of interest, corrections and appeals, and transparency reporting, see the Governance page.
Contact
For general inquiries, collaboration proposals, or questions about CAIM's methodology and governance, contact caim@horizonomega.org.
To submit a report, see the Submit page.