Countless artists have been inspired by ‘The Starry Night’ since Vincent van Gogh painted the swirling scene in 1889.
Now artificial intelligence systems are doing the same, training themselves on a vast collection of digitized artwork to produce new images that you can call up from a smartphone app in seconds.
The images generated by tools like DALL-E, Midjourney, and Stable Diffusion can be weird and otherworldly, but also increasingly realistic and customizable. Ask for a ‘Van Gogh-style peacock owl’ and they can produce something close to what you imagined.
But while Van Gogh and other long-dead master painters aren’t complaining, some living artists and photographers are starting to fight back against the AI software companies creating images derived from their works.
Two new lawsuits — one this week from Seattle-based photography giant Getty Images — target popular image-generating services for allegedly copying and processing millions of unlicensed copyrighted images.
Getty said it has commenced legal proceedings in the High Court of Justice in London against Stability AI, the maker of Stable Diffusion, accusing the London-based startup of infringing intellectual property rights to the benefit of its commercial interests.
Another lawsuit in a U.S. federal court in San Francisco describes AI image generators as “21st-century collage tools that violate the rights of millions of artists.” The lawsuit, filed Jan. 13 by three working artists on behalf of others like them, also names Stability AI as a defendant, along with San Francisco-based image generator start-up Midjourney and online gallery DeviantArt.
The lawsuit alleges that AI-generated images “compete in the marketplace with the original images. Until now, a buyer seeking a new image ‘in the style’ of a particular artist must pay to commission or license an original image by that artist.”
Companies that provide image generation services typically charge users a fee. For example, after a free trial of Midjourney through the Discord chat app, users must purchase a plan that starts at $10 per month or runs up to $600 per year for corporate memberships. The startup OpenAI also charges for use of its DALL-E image generator, and Stability AI offers a paid service called DreamStudio.
Stability AI said in a statement: “Anyone who believes this is not fair use misunderstands the technology and misunderstands the law.”
In a December interview with The Associated Press, before the lawsuits were filed, David Holz, CEO of Midjourney, described his image-creation service as “a kind of search engine” that pulls in a wide variety of images from around the internet. He compared copyright concerns about the technology to the way such laws have adapted to human creativity.

“Can a person look at someone else’s photo, learn from it, and take a similar photo?” said Mr. Holz. “Obviously it’s allowed for people, and if it weren’t, it would destroy the entire professional art industry, and probably the nonprofessional industry as well. To the extent that AIs learn like humans, it’s much the same thing, and if the images come out differently, then it seems like that’s fine.”
The copyright disputes mark the beginning of a backlash against a new generation of impressive tools – some of which were only introduced last year – that can generate new visual media, readable text and computer code on command.
They also raise wider concerns about the tendency of AI tools to amplify misinformation or cause other harm. For AI image generators, that includes creating non-consensual sexual images.
Some systems produce photo-realistic images that are impossible to trace, making it difficult to tell the difference between what is real and what is AI. And while some have taken precautions to block offensive or harmful content, experts fear it’s only a matter of time before people use these tools to spread disinformation and further undermine public trust.
“Once we lose the ability to tell what’s real and what’s fake, suddenly everything becomes fake because you lose confidence in anything and everything,” said Wael Abd-Almageed, a professor of electrical and computer engineering at the University of Southern California.
As a test, the AP submitted a text prompt to Stable Diffusion with the keywords “Ukraine war” and “Getty Images.” The tool created photo-like images of soldiers in combat, with warped faces and hands pointing guns. Some images also contained the Getty watermark, but with garbled text.
AI can also get details wrong, such as feet, fingers, or ears, in ways that can sometimes give away that an image isn’t real, but there’s no set pattern to look out for. Those visual cues can also be edited. On Midjourney, users often post in the Discord chat asking for advice on fixing distorted faces and hands.
As some generated images travel around social networks and potentially go viral, debunking them can be challenging because they can’t be traced back to a specific tool or data source, said Chirag Shah, a professor at the Information School at the University of Washington, who uses these tools for research.
“You could make some guesses if you have enough experience working with these tools,” said Mr. Shah. “But other than that, there’s no easy or scientific way to really do this.”
Despite all the backlash, there are many people who are embracing the new AI tools and the creativity they unleash. Some use them as a hobby to create intricate landscapes, portraits, and art; others to brainstorm marketing materials, video game sets, or other ideas related to their profession.
There’s plenty of room for fear, but “what else can we do with it?” artist Refik Anadol asked this week at the World Economic Forum in Davos, Switzerland, showing an exhibition of climate-themed work created by training AI models on a wealth of publicly available images of coral.
At New York’s Museum of Modern Art, Mr. Anadol designed ‘Unsupervised’, which draws from artworks from the museum’s prestigious collection – including ‘The Starry Night’ – and feeds them into a digital installation that generates animations of mesmerizing colors and shapes in the museum lobby.
The installation is “constantly changing, evolving and dreaming of 138,000 old works of art in the MoMA archive,” said Mr. Anadol. “From Van Gogh to Picasso to Kandinsky, incredible, inspiring artists who defined and pioneered different techniques exist in this artwork, in this AI dream world.”
Mr. Anadol, who builds his own AI models, said in an interview that he prefers to look on the bright side of the technology. But he hopes future commercial applications can be refined to make it easier for artists to opt out.
“I fully hear and agree that certain artists or creators feel very uncomfortable about their work being used,” he said.
For painter Erin Hanson, whose Impressionist landscapes are so popular and easy to find online that she has seen their influence in AI-produced imagery, the concern is not her own prolific output, which earns her $3 million a year.

She worries, instead, about the art community as a whole.
“The original artist should be recognized or compensated in some way,” said Ms. Hanson. “That’s what copyright laws are all about. And if artists aren’t recognized, it will be hard for artists to make a living in the future.”
This story was reported by The Associated Press. Matt O’Brien reported from Providence, Rhode Island.