TL;DR
- Authors Grady Hendrix and Jennifer Roberson sue Apple for allegedly using pirated books in AI training datasets.
- Lawsuit adds Apple to growing list of tech giants facing copyright claims over artificial intelligence training.
- Similar suits led Anthropic to a US$1.5 billion settlement requiring dataset deletion and compensation to authors.
- Global lawsuits, including cases in Japan, highlight rising copyright disputes as AI companies rely on protected works.
Apple is facing a proposed class action lawsuit in the U.S. District Court for the Northern District of California. The case, filed by authors Grady Hendrix and Jennifer Roberson, accuses the tech giant of unlawfully using their copyrighted works to train its artificial intelligence system, “OpenELM.”
The plaintiffs claim their books were copied without consent, credit, or compensation, and included in a dataset that allegedly contained pirated material.
The lawsuit highlights growing unease among writers and publishers as generative AI tools become more advanced.
Apple Joins Growing List of Defendants
The case against Apple comes amid a surge of similar lawsuits targeting major technology firms. Microsoft, Meta, OpenAI, and Anthropic have all been taken to court in recent years over the alleged use of copyrighted material in AI training.
Anthropic, for instance, agreed last month to a settlement worth at least US$1.5 billion with a group of authors, making it one of the largest copyright settlements ever reported. That deal not only provided compensation of roughly US$3,000 per book plus interest but also required the company to delete datasets containing the disputed works.
Apple, by contrast, has not yet responded publicly to the lawsuit. Legal experts note that its silence may signal a careful strategy as the company assesses the potential impact on its AI development plans.
Authors Push Back Against AI Training Practices
Authors and publishers argue that their work is being misused in ways that threaten both their livelihoods and creative rights. Hendrix and Roberson allege that Apple’s use of their books in AI training datasets amounts to large-scale copyright infringement, stripping authors of the ability to control or profit from their intellectual property.
This case underscores a broader debate: whether scraping copyrighted material for AI training can be defended as “fair use,” or whether it constitutes outright infringement. In an earlier case involving Anthropic, a judge ruled that using books for training could qualify as fair use. However, the court left open the question of whether sourcing material from piracy-focused websites such as Library Genesis and Pirate Library Mirror was lawful.
For authors, the lawsuits represent a stand against what they view as unchecked exploitation. For technology companies, the outcomes could set precedents that shape the economics of AI development for years to come.
Global Legal Battles Over AI Intensify
Apple’s lawsuit is part of a wider global trend in which AI companies are increasingly challenged by content creators. Just weeks earlier, Japanese media groups Nikkei and the Asahi Shimbun filed a lawsuit against AI search engine Perplexity, accusing it of storing and republishing articles without permission. The media outlets are seeking damages of 2.2 billion yen (US$14.7 million) each.
Taken together, these lawsuits highlight the mounting legal pressure on the AI sector. As more creators step forward, courts will be forced to decide whether innovation justifies the use of copyrighted works without direct approval or compensation.
For Apple, a company that has long positioned itself as a defender of privacy and user rights, the lawsuit could prove a significant reputational test. The case may also determine how it competes in the fast-evolving race to dominate AI, where access to training data remains one of the most contested issues.