At the core of the intellectual property debate is a question about who profits off the fruits of creative labor—and what it means to make something beautiful when everyone else can, too.
When you type a prompt into an AI image generator, the results are manifold: a slew of artistic renderings, spit out in a matter of seconds based on whatever words come to mind. But whether hyper-realistic or high fantasy, these images all have something in common: They’re based on the work of human artists, much of it appropriated without permission.
By this point, you’ve probably heard about the boom in AI technology. Perhaps you’ve even heard about the subsequent lawsuit filed against the tech behemoths behind it, alleging that, because their programs are trained on millions of existing artworks available online, companies like Stability AI are stealing work from living artists. This is why computer scientist Ben Zhao, together with a group of researchers at the University of Chicago, developed Glaze: a tool that protects artists’ work from use by AI by introducing minute changes at the pixel level—which, while all but undetectable to the human eye, are enough to skew the accuracy of machine learning algorithms designed to replicate style. “Attacks on AI systems are possible because the mathematical representation deviates significantly from what humans perceive,” Zhao says. “And we’re leveraging that gap.”
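The gap Zhao describes can be illustrated with a toy sketch. This is not Glaze’s actual algorithm—the random linear “feature extractor” below is a hypothetical stand-in for a real network—but it shows the underlying principle: a perturbation capped at a tiny per-pixel budget can still produce a large shift in a model’s internal representation.

```python
import numpy as np

# Conceptual sketch only (not Glaze's method): bound each pixel change by a
# tiny budget epsilon, then push every pixel in whichever direction most
# moves a model "feature." Humans see almost nothing; the model sees a lot.
rng = np.random.default_rng(0)

image = rng.uniform(0.0, 1.0, size=(32, 32))   # grayscale image, values in [0, 1]
epsilon = 4 / 255                               # ~1.6% per-pixel change budget

# Hypothetical feature extractor: a fixed random linear projection of pixels,
# standing in for the style representation of a trained network.
W = rng.standard_normal((8, image.size))

def features(img):
    return W @ img.ravel()

# Signed perturbation: each pixel moves by the full budget, in the direction
# that increases the first feature.
delta = epsilon * np.sign(W[0]).reshape(image.shape)
cloaked = np.clip(image + delta, 0.0, 1.0)

pixel_change = np.abs(cloaked - image).max()                    # at most epsilon
feature_shift = abs(features(cloaked)[0] - features(image)[0])  # large by comparison

print(f"max pixel change: {pixel_change:.4f}")
print(f"first-feature shift: {feature_shift:.2f}")
```

Because each of the 1,024 pixels contributes its full (signed) budget to the same feature, the shifts accumulate: the per-pixel change stays below two percent, while the feature value moves by orders of magnitude more. Real systems like Glaze solve a harder optimization problem against actual networks, but exploit the same asymmetry.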
This is one of several approaches to an ever-evolving issue: that of the non-consensual use of images to train algorithms, whether that’s large swathes of data fed to machine learning models like DALL-E, or one’s personal likeness in a pornographic deepfake. It’s the same reason experimental musician Holly Herndon and fellow artist Mat Dryhurst teamed up with a group of collaborators to launch Spawning, a project dedicated to putting tools in the hands of artists—including one that informs them whether their work has been used to train machine learning algorithms, and offers the chance to remove it from datasets in the future. So far, they’ve opted out 80 million images, delivering the wishes of artists to the organizations behind popular image generator tools, which have agreed to honor their requests.
The debate about the ethics of training data, Dryhurst says, “feels like a profound thing that the whole world is encountering right now—but this has been on our minds from the very beginning.” And it’s true: In a 2021 interview, Herndon warned of just that. “I’ve been experimenting on my own voice and image, because it seems like the most ethical thing to do. But there is currently no intellectual property law around training sets—and as we move into this new reality of machine learning, ideas around intellectual property are going to shift.”
“Some people overstate what AI can do, and others overblow ethical concerns. There are, of course, very real dangers. But to me, the most interesting thing is what it means to be an artist in these times.”
“Good artists borrow, great artists steal,” goes the old adage credited to Pablo Picasso—the origin of which is, ironically, hotly disputed. But does this principle still apply when outside sources are being adapted by machine learning algorithms, rather than human hands?
If you ask Zhao, the answer is no. “Machine learning models can mimic, interpolate, and extrapolate, but they cannot create. They can paint in the style of a particular artist, but they cannot judge whether the modification of that style remains meaningful. These are questions only human artists can answer,” he says. “If we abdicate these creative roles to AI models, we risk losing human artists, and future generations may be limited to regurgitated versions of the same thing.” According to Zhao, the impact is already being felt: “Artists are abandoning their crafts after years of practice, and teachers are losing their livelihoods as students drop out of class.”
On the other hand, this kind of “super-powered paintbrush,” as Zhao puts it, has the potential to bring about a new culture of art—and some believe the potential future benefits outweigh the current negatives. “A lot of people have legitimate grievances to bring to the table, but at the same time, there’s no choice but to be pragmatic and have imagination about this—because anywhere you can imagine this stuff being integrated in the future, those meetings have already happened,” says Dryhurst, explaining that he takes a “glass half full” approach by necessity. In his view, artists should be focusing not on the existential threat of AI, but on how to make these tools work for them and create a sense of agency in how they’re used. “Right now, we’re facing an information gap, where you have a whole group of people who are seeing this stuff for the first time, and it’s something they never asked for. Their fears are valid, but people shouldn’t want to burn down server rooms. Instead, we should all take a deep breath and ask, What does this mean for our future, and what are our best options in the short-term?”
Since the public debut of Glaze, Zhao has received messages from creatives around the globe, who beg him to create new tools to protect their craft: from voice actors scared they’ll be replaced, to choreographers seeking protection from artificially generated mimicry. This collective panic is made worse by the fact that, according to Dryhurst, there are charlatans on both sides. “Some people overstate what AI can do, and others overblow ethical concerns,” he says. “There are, of course, very real dangers. But to me, the most interesting thing is what it means to be an artist in these times.”
“The injustice has become so monstrous that we now have to reassess the foundational logic by which we do things. We cannot live in a world where big companies get to accrue all the value from everyone’s intellectual property for the rest of time.”
“Our eyes are fleshy things, and for most of human history our visual culture has also been made of fleshy things,” wrote the artist Trevor Paglen in 2016. The history of art, he says, is filled with “pigments and dyes, oils, acrylics, silver nitrate and gelatin—materials that one could use to paint a cave, a church, or a canvas.”
In his essay “Invisible Images,” Paglen posits that we now possess a robust set of theoretical concepts with which to analyze art—but that the majority of visual culture, unbeknownst to us, has undergone a fundamental change in the last decade, transforming from the tactile and fleshy to the invisible and mechanized. Our built environments, he says, are filled with examples of machine-to-machine seeing apparatuses: automatic license plate readers and security cameras in mall department stores that track what someone is looking at, and for how long.
These systems are made possible because digital images are fundamentally machine-readable—though, of course, they’re still visible to humans when we want them to be. While an iPhone photo might exist as an immaterial collection of data points in our absence, it instantly transforms into a legible image the moment we conjure it up for display. Then, when we’re done with it, it reverts back to its immaterial form: one that can be “seen” by machines, but not by human eyes. Unlike a roll of film, Paglen says, “the image doesn’t need to be turned into human-readable form in order for a machine to do something with it”—and the fact that a visual culture can be formed by relations between machines represents a fundamental change in the politics of the image. “Images have begun to intervene in everyday life,” he says, their functions spanning from pictorial representation to mass surveillance: Everywhere we turn, “invisible images are actively watching us, poking and prodding, guiding our movements, inflicting pain and inducing pleasure.”
All of this, Paglen says, is hiding in plain sight. But luckily, the same qualities that make machine vision so powerful also make it fallible: The difference between what humans see, and what an AI thinks it sees, is what provides an opportunity for interventions like Glaze, in which the qualities legible to a machine as ‘style’ are altered just enough to skew its interpretation of what a replica might look like. The problem, of course, is that this doesn’t work for long. In fact, in a test case last week, Spawning reported that it was able to bypass Glaze’s highly praised ‘cloaking’ technology in under an hour. “The Glaze approach is a valiant attempt to modulate the art files themselves, with the aspiration of making them untrainable,” a statement from Spawning reads. “We applaud the motivations and efforts behind the project.”
“People shouldn’t want to burn down server rooms. Instead, we should all take a deep breath and ask, What does this mean for our future, and what are our best options in the short-term?”
So, what now?
According to experts like Dryhurst, there are a few options for how to proceed—and figures in the fight for control over artists’ data largely fall into three distinct camps. There are the abolitionists, he says—where someone like Karla Ortiz, a vocal advocate against the use of AI and a plaintiff in the lawsuit against Stable Diffusion, might find herself. Then there’s the camp that believes in a universal solution for artist compensation: something not unlike Spotify, through which artists are afforded some set rate for the use of their work by machine learning models. Dryhurst is skeptical of both, which is why he and Herndon fall into what he describes as the “self-determination camp.” “We want to give individuals the tools and opportunities to exert their will over their own work, because I don’t personally like the idea of somebody else determining how much I get paid for my data,” he says. “I don’t think we should be designing Spotify for AI. I think we should be asking, Why should emerging artists be using the same compensation model that works for Taylor Swift?”
The way Dryhurst sees it, things just aren’t going to look how they used to—but maybe that’s a good thing. “What if we get the chance to actually rewire how intellectual property works, making it better for artists than what came before?” he asks, citing the way that current structures for creative compensation only serve to exacerbate the divide between the few creatives who get rich off their work, and those who are barely scraping by—not to mention the major tech companies that profit most from this system.
“The AI boom has only accelerated the inherent contradictions of our current creative economy,” Dryhurst says. “If you were to follow precedent—even the sample wars—there’s a collective entitlement to free content on the internet. The advent of new technologies is really just calling the bluff on whatever weaknesses existed in the first place—and with the use of artists’ data, the injustice has become so monstrous that we now have to reassess the foundational logic by which we do things. We cannot live in a world where big companies get to accrue all the value from everyone’s intellectual property for the rest of time.”
“I don’t think we should be designing Spotify for AI. What if we get the chance to actually rewire how intellectual property works, making it better for artists than what came before?”
“In principle, a work of art has always been reproducible,” wrote the cultural critic Walter Benjamin in his 1935 essay “The Work of Art in the Age of Mechanical Reproduction.” “Man-made artifacts could always be imitated by men. Replicas were made by pupils in practice of their craft, by masters for diffusing their works, and, finally, by third parties in the pursuit of gain. Mechanical reproduction of a work of art, however, represents something new.”
Benjamin cites the work of French poet Paul Valéry, who predicted in 1928 that, because the types and uses of fine art were developed “in times very different from the present, by men whose power of action upon things was insignificant in comparison with ours,” we should expect profound changes in “the ancient craft of the Beautiful.” He goes on to describe, with prescient accuracy, the world toward which we are headed: one where works of art will be ubiquitous, existing wherever someone with “a certain apparatus”—say, a phone—happens to go. “We shall only have to summon them and there they will be. Just as water, gas, and electricity are brought into our houses to satisfy our needs, so we shall be supplied with visual or auditory images, which will appear and disappear at a simple movement of the hand.”
It’s an accurate description of image generators, with which the entire history of art is at one’s fingertips, able to be remixed and reused, conjured up from oblivion based on any series of words that comes to mind. But, as Benjamin writes, even the most perfect reproduction of a work of art is lacking in one element: “its presence in time and space, its unique existence at the place where it happens to be.” This auratic quality of the work of art, Benjamin says, is also bolstered by its history: the wear and tear that takes place over the years, alongside various changes in ownership—something that is now being recorded using blockchain, a decentralized digital ledger by which the provenance of NFTs and, increasingly, other works of art, can be tracked.
The NFT boom, like the rapid development in generative technology, brought about much debate on the definition of art and its relationship to commerce. It’s an old story, but one bound to repeat itself again and again: From Duchamp’s Fountain to Marina Abramović’s ephemeral performances, severing the value of art from the physical—and the physically beautiful—has been a decades-long process, led by countless revolutionary thinkers and makers. And each time art takes new form, there is a cultural panic around its future. Two years ago, it was surrounding NFTs—which, according to Dryhurst, pushed some artists toward conformity in order to accrue value on platforms that promote what is most widely liked, resulting in a public revolt against their lack of artistic merit. “Many of the biggest NFTs weren’t necessarily great accomplishments of style or technique. Rather than the particular beauty of an object, they relied on conjuring moments of network power,” Dryhurst says, explaining that this is why the work of someone like Beeple—which embraces the inherently digital qualities of the NFT—might succeed.
Some believe that, rather than threatening to stamp artists out, AI art tools will democratize creativity at a scale we have never before seen. But to get beyond the initial panic, we need to find a way to broaden our definition of art. “Art is largely a social sport,” Dryhurst notes, predicting that the novelty of automation alone will wear off fast. There’s no doubt that whatever is coming is going to shift the landscape of cultural production for everyone—including, but not limited to, artists. “We’re looking at the next hundred years of media. This is bigger than the advent of Facebook, Twitter, or Instagram; it might even be bigger than the internet,” he says. “At the same time, being an artist is a really complex social need—and we do a disservice to their essential value by pretending that they’re just producing pretty pictures.”