Prosecutors' Concealed Evidence Revealed

Prosecutors kept using facial recognition software to build cases against suspects. They just forgot to mention it to judges. Or defense attorneys. Or anyone, really.

A Cleveland homicide investigation shows exactly how this game works. Police ran Clearview AI facial recognition, which spit out multiple photos of multiple people, including their suspect. When prosecutors went for a search warrant, they somehow left out that tiny detail about using AI. The trial court in Cuyahoga County wasn't amused. It suppressed all the evidence obtained under that warrant, calling the warrant invalid because nobody mentioned the facial recognition part.

Now prosecutors are scrambling, appealing the suppression in Ohio’s 8th District Court of Appeals in State v. Tolbert. An amicus brief basically spelled it out: hiding AI facial recognition means judges can’t properly assess whether there’s reliable probable cause. The pathway is painfully obvious. AI generates a candidate list, cops focus on someone, they get a warrant while conveniently forgetting to mention the AI part, then everything blows up when the defense finds out.


Other states are catching on to this nonsense. New Jersey v. Arteaga now requires that defendants be told when facial recognition was used. They get details about the algorithm, system settings, image quality, and any alterations that might jack up error rates. That transparency lets the defense actually probe how a match was made instead of taking an algorithm's word for it. Colorado and Virginia have gone further, establishing testing standards for facial recognition accuracy to prevent unreliable systems from being used in criminal cases.

Detroit had to settle a case and now mandates informing defendants about facial recognition use.

The technology itself is sketchy enough. These systems pump out candidate lists, not definitive matches. At least six people, all Black, have been falsely accused after facial recognition matches. Low-quality surveillance images make things worse. Cops sometimes alter probe images, which one report called a “frequent problem.” Clearview AI’s own documentation includes disclaimers about the reliability of its results, yet law enforcement conveniently omits this when seeking warrants.

By the end of 2024, fifteen states had enacted laws limiting police use of facial recognition. Detroit now requires officer training on the risks and prohibits arrests based solely on facial recognition hits. Documentation requirements for system parameters and candidate lists are becoming standard.

The message is getting clearer. Hide the AI, lose your evidence. Prosecutors who think they can sneak facial recognition past everyone are learning the hard way that courts aren’t playing along anymore.
