Several Australian lawyers have been penalized for submitting fake legal cases generated by artificial intelligence (AI) to court. The incidents involved lawyers using AI tools such as ChatGPT, Anthropic's Claude, and Microsoft Copilot to produce legal citations that turned out to be completely false.
In Melbourne, the law firm Massar Briggs Law was ordered to pay court costs after filing documents with made-up citations in a Federal Court native title case. The junior solicitor responsible had been working from home without access to the firm's usual legal research resources and relied on Google Scholar to source the problematic citations. Another lawyer, in Western Australia, was ordered to pay more than $8,000 in costs and now faces an investigation by the state's Legal Practice Board. The documents these lawyers submitted cited non-existent court cases and attributed entirely fabricated quotes to judges.
The fake citations were discovered when court staff could not find the cases in legal databases. Some submissions contained fabricated quotes from legislative speeches and references to Supreme Court judgments that never existed. One judge called the reliance on unchecked AI output a “dangerous mirage” that undermines justice and the court's reliability, and Justice James Elliott of Victoria's Supreme Court expressed disappointment with how legal counsel handled their submissions during a high-profile murder trial.
The lawyers admitted they had been overconfident in what the AI produced and had not checked the citations against official legal databases as they normally would. Some defense teams assumed the technology would deliver reliable research results, reasoning that if the first few citations looked right, the rest must be accurate too. That lack of verification caused major problems.
Court proceedings were delayed by up to 24 hours while staff fact-checked and corrected the false submissions. Judges issued strong warnings to all lawyers about the need to independently verify any AI-assisted work, emphasizing that lawyers remain personally responsible for everything they submit to court, even if AI helped create it. Much like the Coomer v. Lindell case in the US, these incidents demonstrate what experts call AI “hallucinations”: outputs in which models generate convincing but entirely fictional information.
At least three Australian states have now issued guidelines restricting AI use to simple, easily checked legal tasks. The official guidance states that AI shouldn't be used unless its output is “independently and thoroughly verified.” Legal regulators stress that while more lawyers are experimenting with AI, they must maintain professional standards.
These cases show what happens when lawyers place too much trust in AI. Courts have made it clear that submissions must be accurate and that AI-generated content is not exempt from verification requirements.
References
- https://ia.acs.org.au/article/2025/melbourne-law-firm-caught-using-fake-ai-citations.html
- https://completeaitraining.com/news/australian-lawyer-issues-apology-after-ai-generated-fake/
- https://ia.acs.org.au/article/2025/inherent-dangers-australian-lawyers-busted-using-ai.html
- https://www.the-independent.com/news/world/australasia/ai-quotes-australia-lawyer-murder-b2808235.html