A U.S. federal court has delivered a major victory to Meta in the ongoing legal disputes surrounding the use of copyrighted content for AI training. On Wednesday, Judge Vince Chhabria ruled that Meta did not violate copyright law when it used books by 13 authors to train its large language models (LLMs), determining that the usage falls under “fair use.”

“The Court has no choice but to grant summary judgment to Meta,” wrote Judge Chhabria, noting that the plaintiffs failed to provide sufficient evidence that the company’s use of their work caused them financial harm, as reported by Wired.

In 2023, a group of well-known authors, including Sarah Silverman and Ta-Nehisi Coates, filed a lawsuit against Meta, alleging the company had used their copyrighted books without permission to train its AI systems. The case was one of the earliest of its kind, although multiple similar lawsuits are now under review in courts across the United States.

Under copyright law, fair use is assessed by factors including whether the new work is “transformative” and whether it negatively impacts the market for the original. In his ruling, Judge Chhabria placed greater emphasis on the issue of market harm. This contrasts with the approach Judge William Alsup took earlier in the week, when he ruled that Anthropic’s AI training qualified as fair use but focused more on how transformative the usage was.

“The key question in virtually any case where a defendant has copied someone’s original work without permission is whether allowing people to engage in that sort of conduct would substantially diminish the market for the original,” Chhabria stated. He ultimately determined that the authors failed to demonstrate that Meta’s actions had any negative effect on their book sales or income. This decision could influence the direction of upcoming legal arguments in similar disputes.

The judge also appeared sceptical of the concept of “market dilution” as a sole justification against fair use.
“We haven’t seen the last of this novel market dilution theory,” remarked Cardozo Law professor Jacob Noti-Victor.

Still, the judge acknowledged that similar actions may not be legal in all cases. “In many circumstances, it will be illegal to copy copyright-protected works to train generative AI models without permission,” Chhabria wrote. “Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.”

Meta welcomed the ruling, but the plaintiffs’ legal team voiced its disappointment. “The court ruled that AI companies that ‘feed copyright-protected works into their models without getting permission’ are generally violating the law,” stated Boies Schiller Flexner. “Yet, despite the undisputed record of Meta’s historically unprecedented pirating of copyrighted works, the court ruled in Meta’s favour. We respectfully disagree.”

Meta spokesperson Thomas Richards responded positively to the ruling: “Open-source AI models are powering transformative innovations, productivity, and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”

This case is one of several unfolding across the globe. Earlier this week, Judge Alsup ruled in favour of Anthropic’s AI training methods under fair use, though he called for a separate trial to address the storage of pirated books. Microsoft is also under legal scrutiny, facing allegations that it used 200,000 pirated books to train its Megatron AI model. In the UK, Getty Images recently dropped key copyright claims against Stability AI.

Together, these legal battles highlight the growing tension between content creators and AI firms over the use of creative material in training datasets. While recent rulings have leaned toward recognising transformative AI use as fair use, the broader legal debate is still evolving.