One Thursday last month, 19-year-old Robbie Barrat woke to a fusillade of messages on his phone. “I was half asleep but saw they all contained the same number,” he says. “Then I fell back asleep for a few hours. I didn’t really want to believe.”
The number in those messages was $432,500—the winning bid at Christie’s New York on a ghostly portrait created using artificial intelligence, following a recipe Barrat posted online not long after graduating high school. Barrat was shocked, because Christie’s had previously estimated the portrait would sell for $7,000 to $10,000. He already felt ripped off by the sale, because he wasn’t credited. He probably won’t receive a cent.
Edmond de Belamy, from La Famille de Belamy, as the portrait is called, was created by a Parisian art collective that goes by the name Obvious. The collective appears to have made only minor tweaks to Barrat’s methodology to produce the portrait. The incident has triggered a debate about authorship and ethics in the nascent field of AI art.
Obvious and Christie’s did not respond to requests for comment. Barrat says he posted his code to help and inspire others but that Obvious went too far by profiting from re-creating his work. “It’s a very awful situation,” he says.
Barrat and some sympathizers in the small world of AI art are also disappointed that their rapidly evolving movement’s first big flash of public attention revolved around what they consider a derivative work, far from the field’s cutting edge. “People have been doing nearly identical stuff since 2016,” Barrat says. Adds Marian Mazzone, an art historian who studies AI art at the College of Charleston: “It doesn’t look like they did anything very new or interesting with what they took.”
People have made art with computers for more than 50 years. Barrat and Obvious are part of a recent movement of creative coding piggybacking on the hottest technology in Silicon Valley.
Google, Facebook, and other tech companies have turned an area of AI research known as machine learning into an intensely competitive arena. The technology lets computers figure out tasks like recognizing objects in images for themselves by digesting example data. A rejuvenated technique called neural networks has given the approach impressive new power. While corporate labs direct that power to uses such as helping autonomous cars navigate traffic, some artists direct it to generate images.
Barrat got into that world via an unconventional route. He’s part of a blooming scene of self-taught AI experts enabled by open source tools from corporate AI labs. Barrat taught himself to code, and work with neural networks, in his bedroom in rural West Virginia, where his first machine learning project involved training software to generate rap lyrics in the style of Kanye West.
Barrat’s adventures in visual AI art are built on a technique known as Generative Adversarial Networks, invented by Ian Goodfellow, a researcher now at Google. It involves setting up a duel between two neural networks looking at the same collection of images. One network tries to generate fake images that could blend in with the originals, while the other tries to spot any fakes. Over many rounds of competition, the fake-generating network can get good enough to make fakes that can fool a human.
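That duel can be sketched in miniature. The toy below is illustrative only, not Barrat’s or Obvious’ code: instead of images, the “real” data are numbers drawn from a Gaussian, the generator is a one-layer linear map, and the discriminator is a logistic regression. The hyperparameters and the target distribution are arbitrary choices for the sketch; the point is the alternating updates, with each network nudged against the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centered at 4 (stands in for real images)
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = a*z + b turns noise z into fakes;
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch, steps = 0.02, 64, 3000

for _ in range(steps):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b                      # fakes (held fixed for this step)
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    grad_w = np.mean(-(1 - d_real) * x + d_fake * g)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, the fakes should cluster near the real data's mean of 4.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print("fake mean:", round(fake_mean, 2))
```

In the real systems discussed here, both players are deep convolutional networks and the data are images, but the competitive structure is the same: the generator only ever improves by exploiting whatever the discriminator currently gets wrong.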
The network that created Edmond de Belamy originated in a 2016 research paper from researchers at Facebook and Boston startup Indico. They described a new implementation of the technique called DCGAN and showed that after processing millions of photos it could generate imperfect but recognizable images of bedrooms and faces that never existed.
Barrat adapted DCGAN to artistic ends—ultimately enabling Obvious’ big win—by training it on centuries of art history. He wrote a script to scrape images of different styles or genres of art from WikiArt, an online encyclopedia with more than 250,000 images. Using those images, he then trained networks to generate landscapes, portraits, and surreal nudes. He posted a GitHub project that provides everything you need to replicate his workflow and even included some of the networks he’d trained.
The three members of Obvious dove in. LinkedIn profiles indicate that only one has formal training in machine learning. In a message thread on GitHub last year, that member, Hugo Caselles-Dupré, repeatedly prodded Barrat to update his code and upload new pretrained networks.
On the day of the auction, Obvious tweeted that it didn’t use one of those pretrained networks to create the work sold at Christie’s. Instead, the members claim, Edmond de Belamy was made by a version of DCGAN they trained themselves, using data gathered with Barrat’s WikiArt scraper.
However they did it, their portraits are strikingly similar to those generated by Barrat. The controversy over the Obvious sale prompted New Zealand artist and academic Tom White to try the teenager’s pretrained networks for himself. The images he produced wouldn’t have looked out of place next to Edmond de Belamy on the wall of Christie’s viewing room in New York.
Something Obvious didn’t talk about was where it got the recipe, and some of the code, that produced its artwork. A blog post from the collective in February on its project didn’t mention Barrat at all, according to a version saved by the Internet Archive in April. By September, Barrat had been added.
Mazzone, of the College of Charleston, says borrowing ideas and images in art is no problem (think of Warhol’s soup cans) as long as you don’t try to hide it. “They could have solved this problem very easily by saying here’s what we started with,” she says. In a tweet posted the day before the auction last month, Obvious apologized and told Barrat, “You deserve a lot of credit, it’s true. We cannot control how big it has become.”
Barrat continues to work on AI art around his day job applying machine learning to biological sciences in a Stanford research lab. He’s currently experimenting with using images from fashion shows to generate glitchy new garments, and he’s working with a clothes designer to get the weird creations made for real. He says he expects to keep publishing code and ideas openly, but more cautiously.
“Open source is important to me, because this is how I learned to do this stuff growing up in the middle of nowhere in West Virginia,” Barrat says. “I’m going to keep doing open source but be more careful about it.”