Military AI Surveillance Crisis

As Canada pours more money into military AI, serious concerns are emerging about safety, oversight, and who’s really in control. Defence spending is surging, and AI is being built into weapons and command systems. A new Cabinet portfolio for AI and Digital Innovation shows how seriously the government is taking this shift.

Canada is pouring money into military AI — but who’s really in control, and at what cost?

Canada’s military is using AI to process huge amounts of data, especially as its presence grows in the Arctic and Indo-Pacific. AI systems monitor social media and legacy media, and support coordination between agencies like Global Affairs Canada and the Canada Border Services Agency. Tools like CastleGuard AI turn raw defence data into actionable insights.

But critics are raising red flags. In similar AI systems, human oversight has dropped to just 20 seconds per decision, barely enough time to catch a mistake. AI has also been found to misidentify targets in 10% of cases, which means human overseers risk becoming little more than rubber stamps. Canada has been urged to prevent AI weapons from selecting human targets without meaningful human oversight.

There’s also concern about where Canada’s AI research is going. Canadian researchers may be unknowingly helping China’s military through joint academic programs. Canada ranked third in collaborations with China’s People’s Liberation Army, with 84 joint publications in 2017 alone. The University of Waterloo, the University of Toronto, and McGill University were among the top ten schools involved over an 11-year period. Experts advise avoiding 160 Chinese military-focused labs flagged by the Australian Strategic Policy Institute.

At home, the military’s AI infrastructure is built to serve two purposes. In peacetime, it supports civilian use. In a crisis, it shifts to the Canadian Armed Forces. It can support drones, battlefield management, cybersecurity, and even medical applications.

Dalhousie University is simulating human-AI decision-making for Arctic surveillance, helping shape Canada-specific systems that soldiers can actually trust. Thales’s Cognitive Shadow platform further supports this effort by learning human decision-making patterns to provide real-time assistance and reduce cognitive overload among operators.

But researchers and advocates say clear rules are still missing. Canada needs transparent directives covering how military AI is tested, controlled, and held accountable. The UN General Assembly signalled a global push toward regulating lethal autonomous weapons when it adopted Resolution 79/62 in December 2024. Without clear domestic rules, the line between national security and unchecked surveillance gets harder to see.
