AI Copyright Legal Challenges

While Getty Images initially launched a high-profile legal battle against Stability AI, alleging multiple copyright and trademark violations, its case has largely fallen apart in the UK courts. The November 2025 ruling by Mrs Justice Joanna Smith marked a turning point in how courts may view AI and copyright law.

Getty abandoned its main copyright claims before the trial ended. The court then ruled that Stability AI didn’t commit secondary copyright infringement, meaning AI developers won’t be held liable in the UK simply for training their models on copyrighted images.

The judge made it clear that an AI model isn’t an infringing copy unless someone can prove the model actually stores or contains protected work. Simply releasing an AI model to the public doesn’t count as copyright infringement.

The trademark side of the case didn’t go much better for Getty. While the court found some “extremely limited” trademark violations in early versions of Stable Diffusion, these related only to watermark-like features in some AI-generated images. Claims of trademark dilution were dismissed entirely for lack of evidence.

This ruling aligns with several US court decisions in similar AI cases. It suggests content owners will face tough challenges when trying to sue AI companies based just on how they train their models.

The case has created what some experts call “legal quicksand” around AI and copyright. Without clearer laws, future claims against AI companies remain uncertain. Courts will likely need specific proof that AI models store or copy protected works to find infringement.

The Getty v. Stability AI case attracted wide attention as one of the first major IP disputes involving an AI developer to reach trial. The case underwent intensive case management, with ten interim hearings that ultimately led to a significant narrowing of Getty’s initial claims. It highlights the growing tension between traditional content owners and AI companies over intellectual property rights in the era of generative AI.

Evidence showed that Stability AI implemented effective guardrails to control output content, including filters that prevented inappropriate or infringing material from reaching users.

As AI continues to evolve, this ruling may influence how courts worldwide handle similar cases in the future.
