While California’s judges haven’t been replaced by ChatGPT just yet, the state’s court system is racing to figure out what to do with AI before things get weird. The Judicial Council aims to finalize a statewide AI rule by July 2025. Chief Justice Patricia Guerrero set up a task force last year, and it has drafted a model policy that every court using generative AI will have to adopt by September 2025.
If it passes, California becomes the biggest court system in the country with actual AI rules. About time.
The proposed policy tackles how courts can use generative AI for their daily grind. Think drafting opinions, legal research, summarizing the mountain of paperwork that floods in. Some California courts are already running pilot programs, letting AI handle document analysis and sort through simple cases.
But here’s the catch – humans still need to review everything the robots spit out. Nobody’s letting AI make the final call on anything important. Yet.
Meanwhile, the California Civil Rights Council just dropped new regulations on AI in hiring. Starting July 2025, employers using AI to screen resumes or analyze interview videos need to test for bias and keep detailed records. Those third-party AI vendors? They’re now on the hook as “employer’s agents” under state law.
The rules cover everything from resume scanners to predictive algorithms. Basically, if it makes decisions about people’s jobs, it’s regulated.
The courts aren’t being reckless here. The model policy demands transparency: if AI helped write a filing, you have to say so. Judges can’t use AI to decide disputed facts or interpret the law. Every new AI tool needs a risk assessment before it goes live. Similar to how DHS requires human judgment in immigration decisions despite using AI to screen millions of social media accounts, California courts recognize technology’s limitations.
The task force knows these language models can be biased as hell and might not understand local legal quirks. The Mobley v. Workday case shows why this matters – a federal judge just allowed a lawsuit to proceed against an AI vendor whose recruitment tools allegedly rejected older, Black, and disabled applicants at disproportionate rates.
Recent federal cases in Northern California threw AI companies a bone, ruling that training models on copyrighted material counts as fair use. Judges there found the training transformative, so as long as a model doesn’t reproduce significant chunks of the original text in its output, the companies are probably fine.
But the legal environment keeps shifting. California’s courts are trying to stay ahead of the curve. Or at least not fall too far behind.
References
- https://newsroom.courts.ca.gov/news/california-court-system-decide-ai-rule-0
- https://www.klgates.com/2025-Review-of-AI-and-Employment-Law-in-California-5-29-2025
- https://courts.ca.gov/system/files/file/2025-court-statistics-report.pdf
- https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
- https://calawyers.org/business-law/summary-of-developments-related-to-artificial-intelligence-taken-from-chapter-3a-of-the-july-2025-update-to-internet-law-and-practice-in-california-courtesy-of-ceb/