The company behind popular art tool Stable Diffusion is facing accusations of copyright infringement from all sides, raising questions about the future of AI art

If you’ve logged into social media in the past week, you’ve probably seen an AI-generated artwork—from Lensa’s eye-catching (and ego-stroking) “magic avatars” to the myriad of convincingly rendered images being churned out by text-to-image tools like Midjourney, Stable Diffusion, and DreamUp.

What you may not have seen is that a group of artists—Sarah Andersen, Kelly McKernan, and Karla Ortiz—has filed a class-action lawsuit against the AI giants behind these tools, claiming that they have violated the copyrights of millions of artists by using their work to train image generators. Shortly after, stock photo licensing giant Getty Images followed suit, with its own, well, suit—stating that while it “believes artificial intelligence has the potential to stimulate creative endeavors,” Stability AI, the parent company behind Stable Diffusion, did not pursue the official licensing Getty Images provides to technology innovators seeking to train their artificial intelligence systems. Rather, the company “unlawfully copied and processed millions of images protected by copyright” to the detriment of the content creators.

These allegations echo the claims of the aforementioned three artists, who earlier this week argued that, because Stable Diffusion—the model underpinning all three of these popular AI generator tools—is fed the work of real artists as training data in order to reconstruct images in their likeness, these companies have effectively been stealing and reproducing artists’ work without offering any form of compensation or recognition. “The harm to artists is not hypothetical,” their complaint reads, going on to note that while “the rapid success of Stable Diffusion has been partly reliant on a great leap forward in computer science, it has been even more reliant on a great leap forward in appropriating copyrighted images.”

AI art is inherently derivative, scraping the work of human artists for training data—but the question of how much influence constitutes copyright violation, and how to appropriately compensate creators, is still up for debate. Meanwhile, suppliers of traditional media are taking different approaches to grapple with this monumental shift in the availability of AI-generated imagery. On the other side of the spectrum, Shutterstock—a Getty Images competitor—recently announced that it was linking arms with Meta, enabling the company to use Shutterstock’s expansive content library to develop, evaluate, and train its machine learning models. This followed an announcement in October that Shutterstock would be expanding its partnership with OpenAI—the company behind viral AI chatbot ChatGPT—revealing that OpenAI had licensed imagery from Shutterstock to train its popular image generator DALL-E beginning in 2021.

While all but identical in their stock image offerings, the two companies have diverged notably in their approaches to coexisting with the rapidly expanding landscape of text-to-image generation and its accompanying legal concerns. Shutterstock, one of the first companies to pay artists for their contributions to training machine learning models, announced an expansion of AI content and simultaneously launched a fund to compensate artists, while Getty Images appears to have joined the resistance, alongside the group of artists pushing back against the technology’s increasing popularity.

Let’s just hope the suit isn’t argued by an AI legal assistant.
