Meta Wins Lawsuit Over AI Training on Copyrighted Books, but Questions Remain

In a significant ruling for the tech industry, a federal judge in San Francisco has sided with Meta Platforms in a high-profile copyright lawsuit brought by a group of authors, including Sarah Silverman and Ta-Nehisi Coates. The case, Kadrey v. Meta, centered on allegations that Meta illegally used copyrighted books to train its AI model, Llama, without permission or compensation. The decision, handed down by U.S. District Judge Vince Chhabria on June 25, 2025, marks a pivotal moment in the ongoing debate over AI training and intellectual property rights. However, the ruling is far from a blanket endorsement of Meta’s practices, leaving the door open for future legal challenges.

The Case: Authors vs. Meta

In 2023, thirteen prominent authors, including Pulitzer Prize winners and bestselling novelists, filed a class-action lawsuit against Meta. They claimed the company used pirated versions of their books, sourced from shadow libraries like Library Genesis (LibGen), to train Llama, Meta’s large language model. The authors argued that this constituted a “massive copyright infringement” and threatened their livelihoods by potentially flooding the market with AI-generated content that could compete with their works.

Meta countered that its use of the books fell under the “fair use” doctrine of U.S. copyright law, which permits limited use of copyrighted material without permission under certain conditions. The company argued that training Llama was a transformative process, enabling the AI to generate original content rather than replicate the books themselves. Meta’s legal team emphasized that Llama does not output the authors’ works verbatim and serves purposes like assisting with creative ideation, writing code, or generating business reports—none of which directly substitute for reading the original books.

The Ruling: A Narrow Victory for Meta

Judge Chhabria issued a summary judgment in Meta’s favor, ruling that the authors failed to provide sufficient evidence that Meta’s AI training harmed the market for their books, a key factor in the fair use analysis. “The plaintiffs presented no meaningful evidence on market dilution at all,” Chhabria stated, effectively dismissing the case without sending it to a jury.

However, the judge was quick to clarify that this ruling does not give Meta or other AI companies carte blanche to use copyrighted materials. “This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Chhabria wrote. Instead, he noted that the authors “made the wrong arguments” and failed to build a strong enough case. He suggested that future plaintiffs could succeed with better evidence, particularly on how AI-generated content might undermine the market for original works.

A Broader Context: AI and Copyright Law

This decision comes on the heels of a similar ruling in favor of Anthropic, another AI company, whose training of its Claude model on copyrighted books was likewise found to qualify as fair use. Both cases highlight the complexities of applying decades-old copyright laws to modern AI technologies. While Judge Chhabria emphasized the lack of evidence for market harm in the Meta case, he expressed concern about the broader implications of AI training. “Generative AI has the potential to flood the market with endless images, songs, articles, and books,” he noted, which could “dramatically undermine the incentive for human beings to create things the old-fashioned way.”

The Meta ruling contrasts with Anthropic’s case, where Judge William Alsup deemed the AI training process “quintessentially transformative” but still ordered a trial to address Anthropic’s use of pirated books in a “central library.” This distinction underscores the nuanced nature of fair use defenses, which depend heavily on the specifics of each case.

What’s Next for Authors and AI Companies?

While Meta’s victory is a win for the tech industry, it’s not the sweeping precedent some might have hoped for. The ruling leaves room for future lawsuits, particularly as evidence emerges about AI models’ potential to memorize and reproduce copyrighted content. For instance, recent research showed that Meta’s Llama 3.1 70B model memorized significant portions of popular books like Harry Potter and the Philosopher’s Stone and The Great Gatsby. Such findings could bolster future claims if plaintiffs can demonstrate market harm.

Authors and creative professionals continue to fight back. The Authors Guild, a plaintiff in a separate lawsuit against OpenAI, is urging writers to protect their works by adding “NO AI TRAINING” notices and pursuing legal action. Meanwhile, other high-profile cases, such as The New York Times’ suit against OpenAI and Microsoft and Disney and Universal’s suit against Midjourney, are still working their way through the courts, signaling that the battle over AI and copyright is far from over.

The Bigger Picture

The Kadrey v. Meta ruling raises critical questions about the balance between technological innovation and the rights of creators. While Meta argues that fair use is “vital” for developing transformative AI, authors and publishers warn that unchecked use of their works could devalue human creativity and erode their livelihoods. As AI technologies advance, the legal system will need to grapple with how to fairly compensate creators while fostering innovation.

For now, Meta has dodged a bullet, but Judge Chhabria’s warnings suggest that AI companies may face tougher scrutiny ahead. As he put it, if using copyrighted works is as essential as tech companies claim, “they will figure out a way to compensate copyright holders for it.” With billions of dollars at stake, the next chapter in this legal saga is sure to be contentious.

What do you think about the ruling? Should AI companies be required to pay for using copyrighted works, or does fair use adequately protect innovation? Share your thoughts in the comments below!
