One billion dollars doesn’t carry the weight it once did, but it still gets attention. That was my reaction on learning that the AI company Anthropic has agreed to pay at least $1.5 billion to settle with authors and publishers whose books were used to train an early version of its large language model, Claude. The settlement followed a judge’s summary ruling that the books had been used unlawfully. The proposed deal, now under the judge’s review, would reportedly pay authors a minimum of $3,000 per book. I have written eight books, and my wife has written five. We’re talking bathroom-renovation money here!
Because the settlement covers only unlawfully used books, it doesn’t address the larger question of whether AI companies may use copyrighted material to train their models at all. But the arrival of serious money marks a shift. Until now, discussions of AI and copyright revolved around legal, ethical, and political hypotheticals. Now that the matter has become concrete, it’s time to confront the essential question: Given that top-tier AI depends on the content of books, is it fair for companies to build trillion-dollar enterprises without compensating the authors?
Legal questions aside, I’ve long been wrestling with this dilemma. But as the focus shifts from courtrooms to compensation, my view has evolved. I deserve to be paid! Paying authors seems only fair, despite resistance from powerful parties, including US president Donald Trump.
Disclaimer
Before I go further, I should disclose that I’m an author myself and have a stake in the outcome of this debate. I also serve on the council of the Authors Guild, which advocates for authors and is suing OpenAI and Microsoft for using authors’ works for training. (Because I cover tech companies, I recuse myself from votes on litigation against them.) Here I am expressing my personal views.
In the past, I have found myself in the minority on the council, conflicted over whether firms should have the right to train models on legally acquired books. The notion that humanity is building a vast repository of knowledge resonates with me. When I spoke with the artist Grimes in 2023, she was excited about being part of that project: “Oh, wow, I might get to live forever!” That sentiment struck a chord with me too. Spreading my consciousness is a big part of why I cherish what I do.
Nevertheless, embedding a book in a large language model built by a giant corporation is a different matter altogether. Books may be the most valuable content an AI model can ingest. Their length and coherence make them uniquely instructive. Their subjects are expansive and all-encompassing. They are more trustworthy than social media and offer deeper insight than news articles. Without books, large language models would be drastically less effective.
One might contend that OpenAI, Google, Meta, Anthropic, and others should provide substantial payments for access to books. Just last month, at a heated White House tech dinner, CEOs bragged to Donald Trump about the enormous investments they were reportedly making in US-based data centers to fulfill AI’s computational demands. Apple committed $600 billion, and Meta matched that figure. OpenAI is part of a $500 billion collaborative project named Stargate. In comparison, the $1.5 billion Anthropic has agreed to allocate to authors and publishers in the infringement matter seems unimpressive.
Fair Use
Nonetheless, the legal framework may favor these companies. Copyright law includes a “fair use” provision that permits uncompensated use of books and articles based on several criteria, including whether the use is “transformative,” meaning it builds on the work in an innovative way without competing with the original. The judge in the Anthropic infringement case ruled that training on legally acquired books qualifies as fair use. The ruling is complicated by the fact that it rests on legal standards established before the internet, let alone AI.
A solution suited to the current landscape is needed. The AI Action Plan the White House released in July did not offer one. Still, in his remarks about the plan, Trump weighed in on the topic. He believes authors should not be paid because building a workable system of fair compensation is too complicated. “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” Trump said. “We understand that, but it’s just not feasible—because it’s not doable.” (An administration source told me this week that the statement “sets the tone” for official policy.)