AI Copyright Legal Challenges

While Getty Images initially launched a high-profile legal battle against Stability AI with multiple claims of copyright and trademark violations, its case has largely fallen apart in the UK courts. The November 2025 ruling by Mrs Justice Joanna Smith marked a turning point in how courts may view AI and copyright law.

Getty abandoned its main copyright claims before the trial ended. The court then ruled that Stability AI did not commit secondary copyright infringement by making its model available in the UK. Under this ruling, AI developers cannot be held liable in the UK simply for having trained their models on copyrighted images.

The judge made it clear that an AI model isn’t an infringing copy unless someone can prove the model actually stores or contains protected work. Simply releasing an AI model to the public doesn’t count as copyright infringement.

The trademark side of the case didn’t go much better for Getty. While the court found some “extremely limited” trademark violations in early versions of Stable Diffusion, these only related to watermark-like features in some AI-generated images. Claims about trademark dilution were dismissed completely due to lack of evidence.

This ruling aligns with several US court decisions in similar AI cases. It suggests content owners will face tough challenges when trying to sue AI companies based just on how they train their models.

The case has created what some experts call “legal quicksand” around AI and copyright. Without clearer laws, future claims against AI companies remain uncertain. Courts will likely need specific proof that AI models store or copy protected works to find infringement.

The Getty v. Stability AI case attracted wide attention as one of the first major IP disputes involving an AI developer to reach trial. The case underwent intensive case management, with ten interim hearings that ultimately led to a significant narrowing of Getty’s initial claims. It highlights the growing tension between traditional content owners and AI companies over intellectual property rights in the era of generative AI.

Evidence showed that Stability AI implemented effective guardrails to control output content, including filters that prevented inappropriate or infringing material from reaching users.

As AI continues to evolve, this ruling may influence how courts worldwide handle similar cases in the future.
