AI-Driven Judicial Decisions

While California’s judges haven’t been replaced by ChatGPT just yet, the state’s court system is racing to figure out what to do with AI before things get weird. The Judicial Council is pushing to finalize a statewide AI policy by July 2025. Chief Justice Patricia Guerrero set up a task force last year, and it has cooked up a model policy that every court using generative AI will have to adopt by September 2025.

If it passes, California becomes the biggest court system in the country with actual AI rules. About time.

The proposed policy tackles how courts can use generative AI for their daily grind. Think drafting opinions, legal research, summarizing the mountain of paperwork that floods in. Some California courts are already running pilot programs, letting AI handle document analysis and sort through simple cases.

But here’s the catch – humans still need to review everything the robots spit out. Nobody’s letting AI make the final call on anything important. Yet.

Meanwhile, the California Civil Rights Council just dropped new regulations on AI in hiring. Starting July 2025, employers using AI to screen resumes or analyze interview videos need to test for bias and keep detailed records. Those third-party AI vendors? They’re now on the hook as “employer’s agents” under state law.


The rules cover everything from resume scanners to predictive algorithms. Basically, if it makes decisions about people’s jobs, it’s regulated.

The courts aren’t being reckless here. The model policy demands transparency – if AI helped write a filing, you have to say so. Judges can’t use AI to decide disputed facts or interpret the law. Every new AI tool needs a risk assessment before it goes live. Similar to how DHS requires human judgment in immigration decisions despite using AI to screen millions of social media accounts, California courts recognize technology’s limitations.

The task force knows these language models can be biased as hell and might not understand local legal quirks. The Mobley v. Workday case shows why this matters – a federal judge just allowed a lawsuit to proceed against an AI vendor whose recruitment tools allegedly rejected older, Black, and disabled applicants at disproportionate rates.

Recent federal cases in Northern California threw AI companies a bone, ruling that training models on copyrighted material counts as fair use. Judges there found the training transformative so long as the model learns from copyrighted works without reproducing significant portions of the original text in its output – meaning the companies are probably fine.

But the legal environment keeps shifting. California’s courts are trying to stay ahead of the curve. Or at least not fall too far behind.
