Canada is pouring money into military AI, and serious concerns are emerging about safety, oversight, and who is really in control. Defence spending is surging, AI is being built into weapons and command systems, and a new Cabinet portfolio for AI and Digital Innovation shows how seriously the government is taking the shift. But at what cost?
Canada’s military is using AI to process huge amounts of data, especially as its presence grows in the Arctic and Indo-Pacific. AI systems track social media, legacy media, and support coordination between agencies like Global Affairs Canada and Border Services. Tools like CastleGuard AI turn raw defence data into actionable insights.
But critics are raising red flags. In comparable AI systems, human oversight has dropped to as little as 20 seconds per decision, barely enough time to catch a mistake. AI has also been found to misidentify targets in roughly 10 per cent of cases, which means human overseers risk becoming little more than rubber stamps. Canada has been urged to prevent AI weapons from selecting human targets without meaningful human oversight.
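To see why those two figures matter together, here is a minimal back-of-the-envelope sketch using only the numbers cited above (20 seconds of review per decision, a 10 per cent misidentification rate); the shift length and function name are illustrative assumptions, not anything from the reporting:

```python
# Illustrative arithmetic only, based on the figures cited in the article:
# ~20 seconds of human review per decision, ~10% target misidentification.

REVIEW_SECONDS_PER_DECISION = 20
MISIDENTIFICATION_RATE = 0.10  # fraction of targets the AI gets wrong


def oversight_load(hours: float) -> tuple[int, float]:
    """Return (decisions reviewed, expected misidentifications) over a shift."""
    decisions = int(hours * 3600 / REVIEW_SECONDS_PER_DECISION)
    expected_errors = decisions * MISIDENTIFICATION_RATE
    return decisions, expected_errors


decisions, errors = oversight_load(hours=1)
print(decisions, errors)  # 180 decisions an hour, 18.0 expected misidentifications
```

At that pace a reviewer would need to catch roughly 18 machine errors every hour while spending 20 seconds on each call, which is why critics argue the human role becomes nominal.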
There’s also concern about where Canada’s AI research is going. Canadian researchers may be unknowingly helping China’s military through joint academic programs. Canada ranked third in collaborations with China’s People’s Liberation Army, with 84 joint publications in 2017 alone. The University of Waterloo, the University of Toronto, and McGill University were among the top ten schools involved over an 11-year period. Experts advise avoiding the 160 Chinese military-focused labs flagged by the Australian Strategic Policy Institute.
At home, the military’s AI infrastructure is built to serve two purposes. In peacetime, it supports civilian use. In a crisis, it shifts to the Canadian Armed Forces. It can support drones, battlefield management, cybersecurity, and even medical applications.
Dalhousie University is simulating human-AI decision-making for Arctic surveillance, helping shape Canada-specific systems that soldiers can actually trust. Thales’s Cognitive Shadow platform further supports this effort by learning human decision-making patterns to provide real-time assistance and reduce cognitive overload among operators.
But researchers and advocates say clear rules are still missing: Canada needs transparent directives covering the testing, control, and accountability of military AI. The UN General Assembly adopted Resolution 79/62 in December 2024, signalling a global push toward regulating lethal autonomous weapons. Without matching domestic rules, the line between national security and unchecked surveillance gets harder to see.