Apple Accused Of Using Pirated Books To Train AI
Two authors have filed a class-action lawsuit against Apple, accusing the tech giant of copyright infringement for allegedly using their books, without permission, to train its artificial intelligence models, Azernews reports.
The plaintiffs, Grady Hendrix and Jennifer Roberson, claim that Apple used datasets containing pirated copies of copyrighted works, including their own, to train its AI models. According to the complaint, Applebot, the company's web crawler, accessed so-called "shadow libraries," online repositories known for hosting illegally distributed books.
The lawsuit, which seeks class-action status because of the vast number of authors and books involved, alleges that Apple's conduct amounted to large-scale copyright theft.
"This conduct stripped the plaintiffs and the proposed class of control over their intellectual property, devalued their creative works, and enabled Apple to reap enormous commercial benefits through unlawful means," the lawsuit states.
This is the latest in a growing wave of legal challenges targeting companies developing generative AI. OpenAI, the maker of ChatGPT, is currently facing multiple lawsuits, including high-profile cases brought by The New York Times and The Authors Guild, one of the oldest nonprofit writers' organizations in the United States.
Notably, Anthropic, the company behind the Claude chatbot, recently agreed to a $1.5 billion settlement in a similar class-action lawsuit filed by authors. That case, like the one against Apple, centered on allegations that the company used pirated literary works from online sources to train its models. Reports indicate the settlement covers roughly 500,000 works, with authors receiving approximately $3,000 per work.
The Apple case marks a significant escalation in the debate over data sourcing in AI training. While tech companies argue that large-scale data scraping is essential for building powerful models, authors and publishers maintain that such practices amount to digital looting, violating copyright law and undermining the livelihoods of creators.
Legal experts say these lawsuits could reshape how AI companies collect and use data, potentially leading to stricter licensing requirements, new compensation models for authors, or even major changes in the way AI is developed.