Anthropic has reached a settlement with several music publishers and agreed to stop showing users the lyrics to their copyrighted songs. Back in 2023, the AI company was sued by Universal Music Group, Concord Music Group, and others after it was found that its Claude chatbot would return lyrics to songs like Beyoncé’s “Halo” when prompted.
The entertainment industry is one of the most litigious out there and fights vigorously to defend its copyrights—just look back at historic cases, from the destruction of Napster to the multi-year legal battle Viacom fought against YouTube. More recently, the popular lyric annotation website Rap Genius (now just called Genius) was sued by the National Music Publishers Association for reproducing lyrics of copyrighted songs.
The music publishers suing Anthropic acknowledged that other websites, Genius among them, distribute lyrics online, but noted that Genius eventually began paying a license fee to publish them.
In this latest suit, the music publishers claimed that Anthropic scraped lyrics from the web and intentionally stripped out the watermarks that lyric websites embed to identify where the copyrighted material was published. After Genius began licensing song lyrics from music publishers, it cleverly watermarked them with a distinctive pattern of apostrophes. If the material turned up copied elsewhere, Genius could prove that the lyrics it had explicitly paid for were stolen and demand their removal.
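The mechanism the publishers describe can be sketched in a few lines. The snippet below is an illustrative assumption, not Genius’s actual implementation: it hides a bit string in text by choosing between a straight apostrophe and a typographic (curly) one, then reads the bits back out to detect copying.

```python
# Toy text watermark in the spirit of the apostrophe scheme described above.
# The bit encoding and character choices are illustrative assumptions only.

STRAIGHT = "'"       # encodes bit 0
CURLY = "\u2019"     # right single quotation mark, encodes bit 1

def embed(lyrics: str, bits: str) -> str:
    """Rewrite each apostrophe as straight or curly according to `bits`."""
    out, i = [], 0
    for ch in lyrics:
        if ch in (STRAIGHT, CURLY) and i < len(bits):
            out.append(CURLY if bits[i] == "1" else STRAIGHT)
            i += 1
        else:
            out.append(ch)
    return "".join(out)

def extract(text: str) -> str:
    """Recover the hidden bit pattern from the apostrophe styles."""
    return "".join(
        "1" if ch == CURLY else "0"
        for ch in text
        if ch in (STRAIGHT, CURLY)
    )

watermarked = embed("don't stop, can't stop, won't stop", "101")
print(extract(watermarked))  # → 101
```

Because the watermark lives in characters that render almost identically, a scraper copying the text verbatim carries the fingerprint along with it; detecting it only requires re-reading the apostrophe styles.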
Anthropic did not concede the claims, but as part of the settlement agreed to better maintain guardrails that prevent its AI models from infringing on copyrighted material. It will also work in good faith with music publishers when it is found that the guardrails are not working.
Anthropic defended the act of using song lyrics and other copyrighted material for training AI models, telling The Hollywood Reporter, “Our decision to enter into this stipulation is consistent with those priorities. We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use.” This argument has been central to AI companies’ defense of copyrighted material showing up in their models. Advocates claim that training on copyrighted content from outlets like the New York Times constitutes fair use so long as what the models produce is transformative rather than a verbatim copy.
News and music publishers disagree, and the lawsuit against Anthropic is not entirely over yet. The music publishers are still seeking a court injunction preventing Anthropic from training future models on any copyrighted music lyrics whatsoever.
The concern about abuse stems from the potential for Anthropic’s models to be used in music generation that causes a musician to lose control of their artistry. It is not an unfounded concern, as it has been widely speculated that OpenAI imitated the voice of Scarlett Johansson after she declined to provide her voice for its AI voice model.
Tech companies like OpenAI and Google make their money on platforms and network effects, not by selling copyrighted material, which has always led to this tension between Hollywood and Silicon Valley. Art is merely “content” meant to serve the greater purpose of generating engagement and selling ads. The AI slop that’s filling Facebook today is representative of how tech companies see it all as interchangeable.
Publishers like the Times have been fighting high-profile battles against the likes of OpenAI in court to stop them from hoovering up copyrighted material. OpenAI has tried to respond by licensing material from some companies, and another AI player, Perplexity, has begun testing a revenue-sharing model. But publishers want more control and not to be forced into these shaky deals that could end at any time and still drive people away from their websites. Which is all to say, this is far from the end of the story when it comes to disputes over copyrighted material in large language models.