Faced with the virality of deepfake Drake, Universal Music made a bid to shut down AI-generated songs—ushering in a new era in the intellectual property debate

What was the first AI-generated song you heard? It might have been Kanye’s counterfeit cover of “Hey There Delilah” by Plain White T’s, or maybe that ChatGPT-written track by DJ David Guetta, featuring Eminem’s simulated vocals. But more than likely, it was “Heart on My Sleeve,” an AI-powered collaboration between Drake and The Weeknd that took over social media feeds this week.

In response to the track’s viral popularity, Universal Music Group—an industry titan that controls one-third of the global music market—has asked streaming services to obstruct AI companies from scraping melodies and lyrics from copyrighted songs to create new, AI-generated bops, citing a “moral and commercial responsibility” to prevent unauthorized use of artists’ voices. “[This] begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deepfakes, fraud and denying artists their due compensation,” a spokesperson from the company stated. “These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

In the past 24 hours, “Heart on My Sleeve” has been yanked from platforms like Spotify, YouTube, Apple Music, SoundCloud, and Bandcamp—and while this may be the music industry’s first public reckoning with AI, these issues aren’t new for Holly Herndon. The experimental artist has been working with AI for years, even releasing her own “deepfake twin,” Holly+, in an effort to raise awareness of the challenges artists face as machine learning models advance. “I’ve been experimenting on my own voice and image, because it seems like the most ethical thing to do,” she told Document in a 2021 interview. “But as we move into this new reality, ideas around intellectual property are going to shift.”

“You cannot copyright a voice, but an artist retains exclusive commercial rights to their name and you cannot pass off a song as coming from them without their consent.”

According to Herndon, much of vocal mimicry comes down to personality rights. “You cannot copyright a voice, but an artist retains exclusive commercial rights to their name and you cannot pass off a song as coming from them without their consent,” she wrote in a recent Twitter thread, citing previous legal cases related to vocal impersonation. In Midler v. Ford Motor Co., for instance, Ford hired one of Bette Midler’s backup singers to imitate her delivery of her own song after she refused the gig; Midler sued for misappropriation of her voice and, in 1988, won. Then, there was Waits v. Frito-Lay, a 1992 case in which the musical legend Tom Waits sued the snack company for its impersonation of him in a Doritos commercial. Unlike in Midler’s case, the song in question had no association with the artist—but, in winning, he established that “some stylistic aspects of the voice are definable… and defensible.”

Amid the current AI boom, there’s a need not only to establish new criteria for intellectual property, but also to institute legal protections against exploitative contracts, through which artists could unknowingly forfeit the right to their own voice and image. “A lot of things are legal on a technicality,” says artist and technology researcher Mat Dryhurst, Herndon’s partner and longtime collaborator. “For instance, you might find that, at some point in 1999, you signed a piece of paper that said, ‘You are free to use this media for whatever purpose’—but of course, nobody anticipated that 20 years later, we’d have AI voice models you can train in an hour.”

According to Herndon and Dryhurst, the right to revoke the use of one’s visual and vocal likeness is foundational to this new era—a claim that echoes those of visual artists, many of whom have raised concerns and even filed lawsuits against AI image generators for the non-consensual use of their data to train machine learning models. It’s why the pair founded Spawning, a project dedicated to developing tools for artists to better manage their virtual identities. These tools include Holly+ and Have I Been Trained?, a website that detects whether an artwork has been used to train AI, providing creatives with the option to remove their work from future datasets. Thus far, they’ve opted out over 80 million visual artworks—a number they expect to double within the year.

“This is not the end of art; rather, it’s the birth of a new musical genre, with accompanying growing pains as the industry adjusts.”

Many of the pair’s projects address the issue of personality rights: defined both as the right not to have one’s likeness represented publicly without consent, and the right to keep one’s image from being commercially exploited without compensation. Opt-outs address the former, while Holly+ provides a potential solution for the latter—because, while Herndon allows people to experiment with her vocal model for free, she also licenses it out for commercial use, using a decentralized protocol to ensure she gets a cut of the proceeds.

In an era of viral, AI-generated tracks, it’s only a matter of time before musicians begin to monetize their own vocal deepfakes. But this is not the end of art, Herndon says; rather, it’s the birth of a new musical genre, with accompanying growing pains as the industry adjusts. The problem is that, as with the sample wars, the infrastructure to protect creators only arrives after a new medium is popularized, leaving many in the lurch during periods of transition. For instance, the drummer behind the “Amen break”—a now ubiquitous six-second clip, used in millions of songs—died penniless, never having received royalties for the most-sampled piece of music of all time.

Despite the potential challenges it poses to artists, Herndon and Dryhurst remain optimistic about what this technology could unlock—including both new modes of self-expression, and compensation models that could be better for artists than what came before. “So long as there are protections and education, the future of sharing your identity with others is bright and super interesting and weird,” Herndon writes; in the meantime, however, she cautions artists against signing any contracts that incorporate AI clauses, lest they accidentally sign away the rights to their virtual selves. For instance, she herself recently refused a record contract that required label approval for the use of her voice—an industry standard she hopes will change, so that people can take control of their personal data, and thus, their own image. “This is not science fiction,” Herndon says, emphasizing that, as the economy becomes more entangled with AI, personality rights will only become more important—because, after all, “our digital selves are us.”