Just two days after Judge William Alsup issued a mixed ruling in the Anthropic AI copyright case, a second California federal judge handed down a clear win for another big player in the AI space. In Wednesday's decision, Judge Vince Chhabria granted summary judgment to Meta Platforms, finding that its use of copyrighted works to train its Llama LLMs qualifies as fair use.
Recent federal court rulings are beginning to shape the legal boundaries for how companies may train AI models on copyrighted content. While the decisions vary in outcome and focus, together they signal that fair use remains a viable defense but not an automatic shield. Copyright doctrine is starting to adapt to these groundbreaking AI technologies, but the boundaries remain in flux.
This week, Meta Platforms won a significant victory in San Francisco when Judge Vince Chhabria granted summary judgment against copyright claims brought by a group of 13 authors. The plaintiffs, including Sarah Silverman and Ta-Nehisi Coates, alleged that Meta used pirated copies of their books to train its Llama models. While the court agreed that the use was transformative, Judge Chhabria emphasized that transformation alone does not resolve the fair use inquiry. He found that the plaintiffs had failed to offer evidence of market harm or substitution, which he deemed essential to their claim. Importantly, he cautioned that the ruling does not validate Meta's conduct and that in many cases, training on copyrighted works without permission will be unlawful. He suggested that Meta may well be a serial infringer but concluded that these particular plaintiffs had not presented a case that could survive summary judgment.
This week's Anthropic decision turned on a similar fair use question, but there the court allowed claims to proceed to trial based on the company's alleged use of pirated content. Judge Alsup found that training on copyrighted books was transformative and fair under the law, but he criticized the company's sourcing methods and left open the possibility of liability depending on how the materials were obtained. Judge Chhabria, in his own opinion, pushed back on Alsup's approach, arguing that courts must give more weight to the potential harm to the market for the original works.
These decisions, along with the outcome earlier this year in Thomson Reuters v. Ross Intelligence, illustrate that courts can be, but are not always, sympathetic to fair use defenses in the AI training context. In Ross, the defendant used Westlaw's proprietary headnotes to train a legal research tool and argued that the resulting product was transformative. The court disagreed, finding that the Ross product competed directly with Westlaw, served the same market purpose, and therefore failed to meet the standard for fair use. The court granted partial summary judgment to Thomson Reuters, rejecting Ross's fair use defense as a matter of law.
Taken together, these three decisions reveal both limits and opportunities. Courts are not dismissing AI fair use arguments out of hand, but they are scrutinizing the details. Key factors include how the material was obtained, whether the AI output substitutes for the original, and whether the plaintiffs can show harm to the value of their work. While Meta, and to a lesser degree Anthropic, secured wins for now, those victories came with narrow reasoning and pointed warnings. And Ross illustrates how market competition and commercial substitution can defeat a fair use defense even when the AI system transforms its inputs.
The case law is young but growing. Developers who rely on unlicensed data are on notice that courts may ask not just what the model produces, but on what it was trained and whom it affects. Rights holders may likewise see a path forward by focusing on tangible economic harm and market overlap. From a practical standpoint, though, smaller companies that are banking on fair use to protect training practices should be cautious. The fact-specific nature of the defense, and the resulting likelihood that infringement claims will survive early motions to dismiss, means the cost of litigation alone may become a deciding factor.