Artnome

Blog

Exploring art through data using the Artnome database. 

 

Giving Generative Art Its Due

April 17, 2019 Jason Bailey
Mantel Blue, Manolo Gamboa Naon (Personal collection of Kate Vass), 2018

I have long dreamed of attending an art exhibition that presented the full range of generative art, starting with the early analog works of the late 1950s and ranging all the way up to the new AI work we have seen in just the last few years. To my knowledge, no such show has ever existed.

So when the Kate Vass Galerie proposed that I co-curate a show on the history of generative art, I thought I had died and gone to heaven. While I love early generative art, especially artists like Vera Molnar and Frieder Nake, my passion is really centered on contemporary generative art. So pairing up with my good friend Georg Bak, an expert in early generative photography, was the perfect match. Georg brings an unmatched passion for and detailed understanding of early generative art that firmly plants this show in a deep and rich tradition that many have yet to learn about.

As my wife can attest, I have regularly been waking up at four in the morning and going to bed past midnight as we race to put together this historically significant show, unprecedented in its scope.

I couldn’t be more enthusiastic and proud of the show we are putting together and I am excited to share the official press release with you below:


Invitation for Automat und Mensch (Machine and Man)

“This may sound paradoxical, but the machine, which is thought to be cold and inhuman, can help to realize what is most subjective, unattainable, and profound in a human being.” - Vera Molnar

In the last twelve months we have seen a tremendous spike in interest in “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself - a history that has largely been overlooked in the recent mania for “AI art,” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (Machine and Man) at the Kate Vass Galerie in Zurich, Switzerland.

Generative art, once perceived as the domain of a small number of “computer nerds,” is now the art form best poised to capture what sets our generation apart from those that came before us: ubiquitous computing. As children of the digital revolution, we have made computing our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.

Though slow to gain traction in the traditional art world, generative art produces elegant and compelling works that extend the very same principles and goals that analog artists have pursued from the inception of modern art. Geometry, abstraction, and chance are important themes not just for generative art, but for much of the important art of the 20th century.

Every generation claims art is dead, asking, “Where are our Michelangelos? Where are our Picassos?” only to have their grandchildren point out generations later that the geniuses were among us the whole time. With generative art we have the unique opportunity to celebrate the early masters while they are still here to experience it.

 
9 Analogue Graphics, Herbert W. Franke, 1956/’57

The Automat und Mensch (Machine and Man) exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as it has never been shown before.

 
Correction of Rubens: Saturn Devouring His Son, Robbie Barrat, 2019

Emphasizing the deep historical roots of AI and generative art, the show takes its title from the 1961 book of the same name by German computer scientist and media theorist Karl Steinbuch. The book contains important early writings on machine learning and was inspirational for early generative artists like Gottfried Jäger.

We will be including in the exhibition a set of 10 pinhole structures created by Jäger with a self-made pinhole camera obscura. Jäger, generally considered the father and founder of “generative photography,” was also the first to use the term “generative aesthetics” within the context of art history.

10 Pinhole Structures, Gottfried Jäger, 1967/’94

We will also be presenting some early machine-made drawings by the British artist Desmond Paul Henry, considered to be the first artist to have an exhibition of computer-generated art. In 1961 Henry won first place in a contest sponsored in part by the well-known British artist L.S. Lowry. The prize was a one-man show at The Reid Gallery in August 1962, which Henry titled Ideographs. In the show, Henry included drawings produced by his first drawing machine, built in 1961 from an adapted wartime bombsight computer.

Untitled, Desmond Paul Henry, early 1960s

The show features other important works from the 1960s through the 1980s by pioneering artists like Vera Molnar, Nicolas Schoeffer, Frieder Nake, and Manfred Mohr.

We have several generative works from the early 1990s by John Maeda, former president of the prestigious Rhode Island School of Design (2008-2014) and associate director of research at the MIT Media Lab. Though Maeda is an accomplished generative artist with works in major museums, his greatest contribution to generative art was perhaps his invention of “Design By Numbers,” a platform for artists and designers to explore programming.

Casey Reas, one of Maeda’s star pupils at the MIT Media Lab, will share several generative sketches dating back to the early days of Processing. Reas is the co-creator of the Processing programming language (inspired by Maeda’s “Design By Numbers”), which has done more to increase the awareness and proliferation of generative art than any other single contribution. Processing made generative art accessible to anyone in the world with a computer. You no longer needed expensive hardware, and more importantly, you did not need to be a computer scientist to program sketches and create generative art.
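
To give a sense of how little code a generative sketch requires, here is a minimal example in the spirit of a Processing sketch, written in Python with Pillow rather than Processing itself; the seed, canvas size, and line count are arbitrary illustrative choices:

```python
# A minimal generative sketch in the spirit of Processing, written in
# Python with Pillow. Geometry constrains the marks; chance places them.
import random
from PIL import Image, ImageDraw

SEED = 42    # the same seed always reproduces the same image
SIZE = 600   # canvas is SIZE x SIZE pixels
LINES = 200  # number of random line segments to draw

random.seed(SEED)
img = Image.new("RGB", (SIZE, SIZE), "white")
draw = ImageDraw.Draw(img)

for _ in range(LINES):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    length = random.randrange(10, 60)
    slope = random.choice([-1, 1])  # 45-degree diagonals only
    draw.line((x, y, x + length, y + slope * length), fill="black", width=2)

img.save("sketch.png")
```

Change the seed and the same program yields a new composition; the core loop of generative practice is tweak, run, look, repeat.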

This ten-minute presentation introduces the Process works created by Casey Reas from 2004-2010.

Among the most accomplished artists to ever use Processing are Jared Tarbell and Manolo Gamboa Naon, who will both be represented in the exhibition. Tarbell mastered the earliest releases of Processing, producing works of unprecedented beauty. Tarbell’s work appears to have grown from the soil rather than from a computer and looks as fresh and cutting edge today as it did in 2003.

Substrate, Jared Tarbell, 2003

Argentinian artist Manolo Gamboa Naon - better known as “Manolo” - is a master of color, composition, and complexity. Highly prolific and exploratory, Manolo creates work that takes visual cues from a dizzying array of aesthetic material from 20th century art to modern-day pop culture. Though varied, his work is distinct and immediately recognizable as consistently breaking the limits of what is possible in Processing.

aaaaa, Manolo Gamboa Naon, 2018

With the invention of new machine learning tools like DeepDream and GANs (generative adversarial networks), “AI art,” as it is commonly referred to, has become particularly popular in the last five years. One artist, Harold Cohen, explored AI and art for nearly 50 years before we saw the rising popularity of these new machine learning tools. In those five decades, Cohen worked on a single program called Aaron that involved teaching a robot to create drawings. Aaron’s education took a similar path to that of humans, evolving from simple pictographic shapes and symbols to more figurative imagery, and finally into full-color images. We will be including important drawings by Cohen and Aaron in the exhibition.

AI and machine learning have also added complexity to copyright, and in many ways, the laws are still catching up. We saw this when Christie’s sold an AI work in 2018 by the French collective Obvious for $432k that was based heavily on work by artist Robbie Barrat. Pioneering cyberfeminist Cornelia Sollfrank explored issues around generative art and copyright back in 2004 when a gallery refused to show her Warhol Flowers. The flowers were created using Sollfrank’s “Net Art Generator,” but the gallery claimed the images were too close to Warhol’s “original” works to show. Sollfrank, who believes “a smart artist makes the machine do the work,” felt she had a case that the images created by her program were sufficiently differentiated. Sollfrank responded to the gallery by recording conversations with four separate copyright attorneys and playing the videos simultaneously. In this act, Sollfrank raised legal and moral issues regarding the implications of machine authorship and copyright that we are still exploring today. We are excited to be including several of Sollfrank’s Warhol Flowers in the show.

 
Anonymous_warhol-flowers, Cornelia Sollfrank

While we have gone to great lengths to focus on historical works, one of the show’s greatest strengths is the range of important works by contemporary AI artists. We start with one of the very first works by Google DeepDream inventor Alexander Mordvintsev. Released in May 2015, DeepDream took the world by storm with its surreal, acid-trip-like imagery of cats and dogs growing out of people’s heads and bodies. Virtually all contemporary AI artists credit Mordvintsev’s DeepDream as a primary source of inspiration for their interest in machine learning and art. We are thrilled to be including one of the very first images produced by DeepDream in the exhibition.

 
Cats, Alexander Mordvintsev, 2015

The show also includes work by Tom White, Helena Sarin, David Young, Sofia Crespo, Memo Akten, Anna Ridler, Robbie Barrat, and Mario Klingemann.

Klingemann will show his 2018 Lumen Prize-winning work The Butcher’s Son. The artwork is an arresting image that was created by training a chain of GANs to evolve a stick figure (provided as initial input) into a detailed and textured output. We are also excited to be showing Klingemann’s work 79543 self-portraits, which explores a feedback loop of chained GANs and is reminiscent of his Memories of Passersby, which recently sold at Sotheby’s.

 
The Butcher’s Son, Mario Klingemann, 2018

Automat und Mensch takes place at the Kate Vass Galerie in Zürich, Switzerland, and will be accompanied by an educational program including lectures and panels from participating artists and thought leaders on AI art and generative art history. The show runs from May 29th to October 15th, 2019.

Participating Artists:

  • Herbert W. Franke
  • Gottfried Jäger
  • Desmond Paul Henry
  • Nicolas Schoeffer
  • Manfred Mohr
  • Vera Molnar
  • Frieder Nake
  • Harold Cohen
  • Gottfried Honegger
  • Cornelia Sollfrank
  • John Maeda
  • Casey Reas
  • Jared Tarbell
  • Memo Akten
  • Mario Klingemann
  • Manolo Gamboa Naon
  • Helena Sarin
  • David Young
  • Anna Ridler
  • Tom White
  • Sofia Crespo
  • Matt Hall & John Watkinson
  • Primavera de Filippi
  • Robbie Barrat
  • Harm van den Dorpel
  • Roman Verostko
  • Kevin Abosch
  • Georg Nees
  • Alexander Mordvintsev
  • Benjamin Heidersberger

For further info and images, please do not hesitate to contact us at: info@katevassgalerie.com  


Autoglyphs, Generative Art Born On The Blockchain

April 8, 2019 Jason Bailey
Collection of four Autoglyphs

If you are a regular Artnome reader, you know we are big on blockchain and generative art. So of course I was super excited when my good friends Matt Hall and John Watkinson of CryptoPunks fame gave me a sneak peek of Autoglyphs, their new project which creates old-school generative art that literally lives on the blockchain.

In this post I nerd out with Matt and John about Autoglyphs, grilling them with all kinds of questions including:

  • What are Autoglyphs and how do they work?

  • How do Matt and John manage to actually put art on the blockchain?

  • Did early generative art serve as inspiration for Autoglyphs?

  • Why did they create just 512 out of billions of possible Autoglyphs?

  • What are the differences between Autoglyphs and CryptoPunks?

  • Do Matt and John think of themselves as artists?

  • What makes a good Autoglyph?

Autoglyphs are unusual because, traditionally, the actual image files associated with blockchain art like CryptoPunks, CryptoKitties, or Rare Pepe are stored in a database somewhere “off chain,” meaning off of the blockchain. Artists typically address this “off chain” storage by including a reference on the blockchain - a “hash” or link - so that you can locate the image file for your artwork from its record. For example, even though the image of my CryptoPunk is composed of relatively few pixels, it actually lives “off chain” on the Larva Labs server at:

https://www.larvalabs.com/public/images/cryptopunks/punk2050.png


This means the actual artwork does not technically benefit from any of the tamper-proof advantages like “decentralization” or “immutability” typically associated with the blockchain (unless you think of the token itself as the artwork instead of the image). Put another way, there is nothing stopping someone from altering, moving, or removing the image from the location the hash is pointing to. If that were to happen, all you would be left with is an immutable record stating that you own an artwork, with no way of actually seeing it.
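
The integrity half of that trade-off is easy to demonstrate: a hash recorded on chain lets anyone prove an off-chain file is unchanged, but it cannot stop the file from disappearing. A minimal sketch in Python, with a hypothetical hash value and filename:

```python
# Verifying an off-chain image against an on-chain hash. The record can
# prove the file is unmodified, but it cannot restore a deleted file.
import hashlib

ON_CHAIN_HASH = "9f2b..."  # hypothetical hash stored in the token record

def file_sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

local = file_sha256("punk2050.png")
print("intact" if local == ON_CHAIN_HASH else "altered, moved, or removed")
```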

Perhaps you are thinking, “Why not just store the image on the blockchain? It is, after all, a database, right?” Well, blockchain is great for a lot of things, but storing large image files is not one of them. Unless you can make art with a super tiny footprint, it is impractical to store traditional image files like JPEG or PNG on the blockchain.

This is what makes Autoglyphs so damn cool. Matt and John decided to accept the storage limitations of the blockchain as a challenge to see what they could create that could actually be stored “on chain.”

Michael A. Noll, Computer Composition with Lines, 1964, digital computer and microfilm plotter

Piet Mondrian, Composition With Lines, second state, 1916-17, oil on canvas, © Kröller-Müller Museum

I love this idea because it is a throwback to the compute and storage challenges that the earliest generative artists like Michael Noll and Ken Knowlton faced when trying to create art on computers in the early 1960s. As you will see, this is not lost on Matt and John, who are huge fans of early generative art and decided to embrace the aesthetic and run with it. With that, let’s jump into the interview.

Autoglyphs - An Interview with Matt Hall and John Watkinson


Jason Bailey:  Thanks for chatting, guys. I have a bunch of questions, but I’m happy to start with the easy one. What was the impetus or inspiration behind Autoglyphs?

John Watkinson: There is a lot of talk of art on the blockchain. With the CryptoPunks, all of the ownership and the provenance is permanently and publicly available, and those rules are set and fixed. And yet there's still a bit of an imperfection there in that the art comes from outside of the blockchain and stays out there, and it's just referenced by a smart contract. We don't have any complaints about the CryptoPunks, but it felt like there was an opportunity to go further. With Autoglyphs, we asked ourselves, “Can we make the entire thing completely self-contained and completely open and operating on the blockchain?”

JB: So the decision to literally store the artwork on the blockchain comes with some pretty hardcore restrictions, right? What sort of parameters are you now boxing yourself into once you make that decision?

JW: You have to have very small and efficient code generating the work. The actual output of the work has to be a very small amount of data or text because you can't have a large amount of data on the blockchain. So a small amount of efficiently running code, and fairly small, efficient output.

Those were the constraints, and they were pretty extreme. For a while I thought we couldn’t do it, or couldn’t do it in a way that was satisfying for us. I was sort of exploring various generators and trying to make them more efficient, just binary image generators. I got to one that I thought was pretty good, and I then experimented with it, trying to turn it into a smart contract, and I just couldn’t get it to work. It was just hitting limits and wasn’t working at all.

Then I tried it a few months later and just pushed it a little further and just got there. Still, the transaction fee of making an Autoglyph is going to be about half of an Ethereum block. So an Ethereum block is about eight million gas, so that's how much computation can happen in one mined block of Ethereum, and this is going to be three million gas, so it's almost half a block.

That means that the transaction fees will be relatively expensive - between one and two dollars - depending on the price of gas. So it's a pretty hefty transaction. If we went much more than that, we would already be outside of feasibility. If we went over eight million, it would be completely impossible, you wouldn't be able to do it.
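
John’s fee estimate is simple arithmetic. Here is a back-of-the-envelope version, with illustrative 2019-era numbers for gas price and ETH price (my assumptions, not figures from the interview):

```python
# Back-of-the-envelope Autoglyph minting fee, using John's gas figures.
gas_used = 3_000_000      # ~3M gas, almost half of an 8M-gas block
gas_price_gwei = 3        # illustrative gas price, not from the interview
eth_usd = 170             # illustrative 2019-era ETH price

fee_eth = gas_used * gas_price_gwei * 1e-9   # convert gwei to ETH
print(f"{fee_eth:.4f} ETH = about ${fee_eth * eth_usd:.2f}")
# 0.0090 ETH = about $1.53, in the one-to-two-dollar range John describes
```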

JB: Got it. Dumb question: Does the code for generating the image live on the blockchain? Or is there actually an image on the blockchain?

JW: The code lives on the blockchain, and in fact, when you ask the blockchain for the image, it will just generate it again for you. That part happens on an end node, so that doesn’t cost any actual money or gas. But whenever you say, “Give me the image for Autoglyph five,” it will just generate it again for you based on the seed information that was created in the transaction.

Matt Hall: It’s probably also worth making the distinction between the image and the instructions to generate different representations of it. The actual image you see on the website is not generated on the blockchain. The art - the instructions for how to draw it - is on the blockchain, but we make an SVG or PNG file on the web server. So if that was your question, then no, the actual image data doesn’t come off the blockchain, but there’s an ASCII representation of exactly what it is on there. It’s an ASCII art representation of the glyph.
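
Regenerating a glyph is a read-only contract call, which any Ethereum node will answer without a transaction fee. Here is a rough sketch of such a call using web3.py; the contract address is left as a placeholder, and the draw(uint256) ABI fragment is my assumption about the generator’s public interface:

```python
# Asking an Ethereum node to regenerate a glyph. This is a read-only
# call, so it costs no gas. Address and ABI fragment are assumptions.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/YOUR_KEY"))

AUTOGLYPHS = "0x..."  # placeholder: the deployed Autoglyphs contract address
ABI = [{
    "name": "draw",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "id", "type": "uint256"}],
    "outputs": [{"name": "", "type": "string"}],
}]

contract = w3.eth.contract(address=AUTOGLYPHS, abi=ABI)
print(contract.functions.draw(5).call())  # ASCII art for Autoglyph #5
```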


JB: Nice. That was going to be my next question. I love ASCII art, and I assumed that it was generating some sort of ASCII format. So the ASCII art version of the image is an image made out of text and is actually on the blockchain. But in addition to that, you're generating PNGs or JPEGs for end user convenience that you've got hosted at Larva Labs? Is that a fair way to put it?


JW: Yes. We're generating the image, and we basically created instructions on how to do that. So in the source code for the actual smart contract, if you scroll down a little bit below that big ASCII art “Autoglyphs,” you'll see that there are these little instructions. For every ASCII art character, it tells you how to draw it. We generate image files that way. But the idea is that anyone can generate it - kind of like a Sol LeWitt instruction set for creating a drawing. If you own a glyph, then you can make it at any scale, with any materials you want. You can make your Autoglyph using these instructions.
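
To make the instruction-set idea concrete, here is a sketch of what “following the instructions” might look like: walk the ASCII grid and draw one stroke per character. The symbol table below is a simplified stand-in, not the contract’s actual mapping:

```python
# Rendering an ASCII glyph by following per-character drawing
# instructions, Sol LeWitt style. The symbol table is a simplified
# stand-in for the one documented in the contract source.
from PIL import Image, ImageDraw

CELL = 20  # pixels per character cell

def render(glyph: str, path: str) -> None:
    rows = glyph.strip().splitlines()
    size = len(rows) * CELL
    img = Image.new("RGB", (size, size), "white")
    d = ImageDraw.Draw(img)
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            x, y = c * CELL, r * CELL
            if ch == "/":     # rising diagonal
                d.line((x, y + CELL, x + CELL, y), fill="black")
            elif ch == "\\":  # falling diagonal
                d.line((x, y, x + CELL, y + CELL), fill="black")
            elif ch == "|":   # vertical stroke
                d.line((x + CELL // 2, y, x + CELL // 2, y + CELL), fill="black")
            elif ch == "-":   # horizontal stroke
                d.line((x, y + CELL // 2, x + CELL, y + CELL // 2), fill="black")
            # "." draws nothing: an empty cell
    img.save(path)

render("./|\\\n|/-.\n.\\|/\n-/.|", "glyph.png")  # a made-up 4x4 glyph
```

Because the instructions, not the pixels, are canonical, the same glyph can be rendered at 80 pixels or 8 feet.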


JB: Great. That was going to be my next question. Is it a bit like a Sol LeWitt, where essentially, if Larva Labs, God forbid, goes out of business and you decide that you no longer want to support the interface, people will have everything they need built into this little blockchain code to infinitely generate these Autoglyphs? Will Autoglyphs outlive us all?

Sol LeWitt, Wall Drawing 87, June 1971

JW: Yes, that’s the idea. They’ll be able to make their Autoglyphs and follow these instructions to render them. We have a little pen plotter, and we’re going to make some of our Autoglyphs physically rendered with that, which is kind of just for fun. It’s well set up for plotting that way.

MH: We were kicking around different versions of this, and then we saw this show at the Whitney. It was a retrospective of a bunch of digital art. They had early generative art and all sorts of different stuff. There was this big Sol LeWitt piece, and they were explicit about how this piece had been executed by an assistant at the gallery, but that’s in keeping with the intention of the artwork and the instructions of the artist. We thought that was perfect, because we can’t do a lot of things we want to do directly on the blockchain, but we can have the spirit of it be completely self-contained.

Sol LeWitt (1928-2007), Wall Drawing #289, 1976

By providing the instructions on the blockchain, the art can now be rendered very large and detailed. For example, we could have stored these as tiny pixel graphics, graphs, something like that, but then you’re limited to that. This way they can operate at any scale and in any material.

JB: It does feel like a throwback to some of the early generative art. I'm thinking like Ken Knowlton and Michael Noll. Other than Sol LeWitt, were there other artists who inspired the Autoglyphs? Or do they just look like old-school generative art due to the storage limitations of the blockchain?

Ken Knowlton, from the pages of Design Quarterly 66/67, Bell Telephone Labs computer graphics research.

JW: A little bit of both. We definitely needed to clamp down the parameters pretty hard because of the technical requirements, but we'd been getting into the early pioneering digital art of the '60s and early '70s stuff. It's definitely an homage to Michael Noll and Ken Knowlton and that kind of stuff, which we really love. Only once we got to this digital art world via the CryptoPunks did we really realize how much of all this stuff had been explored in the '60s. It’s almost humbling how much ground was covered so quickly in digital art in the '60s and early '70s.

JB: Yeah, I love early generative art. It looks like from the Autoglyphs site that the algorithm, while it had to be simple by definition, is capable of generating billions of unique artworks, but then there are 512 that ultimately will be produced before it stops, right? So how do those 512 get selected among the billions of possible works? And second part of the question, why 512?

MH: Good question. They’re going to be randomly seeded. There’s a random seed that goes into the algorithm to generate them, and if you operate the contract manually, you can specify the seed manually - but you can’t reuse an existing seed that’s already been used to make a glyph. We debated whether to limit it or not, whether to make it so that everyone and anyone can come and get their glyph. There are a few arguments in each direction, but ultimately, when you make generative art like this, the generator kind of is the artwork a little bit, and there’s only so much it can express.

It's basically a very tiny generator. If you scroll down in that source code, the core of the generator is the draw function, which is only about forty lines. So we said, “At what point does a generator kind of play itself out, where you've seen everything?” You could make more, but it's just going to be like, “Oh, that's similar to that one, that's similar to that one,” so how much surprise and variety can it really deliver? So we found that threshold.

We made it a power of two just to keep it nerdy. But that was around the threshold where we said, “This is about the right amount of these things in order to fully explore the generator but not make them all worthless because there’s a myriad of similar ones. This should be enough to discover cool surprises and get a sense of what it can generate and have a good collection out there, but not hit it too hard and destroy all the mystery of it.”

Drawing code for Autoglyphs

JB: Sweet. And then you mentioned on the site that 128 of the Autoglyphs are already claimed, so who claimed them?

JW: We're going to claim those. We want to have a decent chunk that we can explore and mess around with, and we want to display them in large groups together. That's how many we're taking for ourselves and the rest are going to be up for grabs.

MH: It's a similar model to the CryptoPunks, where we wanted to convert ourselves into the same kind of relationship to the artwork as everyone else. So we just become owners after the thing is launched, and we like how that sort of played out on CryptoPunks. People ask, “Why don't you take a cut of all the sales?” Well, we didn’t take a cut of the CryptoPunks, so we want to just be the same as everybody else. We felt that that still was the right way to go with this.

JB: Right. It's experimental and you're along for the ride with the same level of risk as everybody else, right?

JW: Yes, exactly. That informs the sale price for the rest of them and where that money goes, and then we don't feel like we need to claim the sale price of those things. We can donate that because we have a portion of the artwork.

JB: Got it. No, that totally makes sense, and I’ll come back to the charity stuff, too. For me, at least, CryptoPunks was sort of stealth generative art, meaning that most people don’t know what generative art is, and they didn’t need to in order to love CryptoPunks, right? I think part of the appeal of CryptoPunks was that anybody could look at them and get it and fall in love with them, like, “Oh, cool, look at all these different cool characters.”

You also received interest from art nerds like me, and you were in that awesome show with the Kate Vass Galerie. Are you worried at all that the Autoglyphs may not have the same broad appeal? Or maybe you didn’t even assume that there was going to be a broad appeal for CryptoPunks, either, kind of going back to your assumption of these things sort of being experiments?

A collection of CryptoPunks

JW: Yes, I think that's what it is. We didn't expect it with the CryptoPunks; we don't really expect that here. We know people like you and the other people we've met who are into this stuff, and we know that there will be at least a narrow appreciation of this for the same reasons why we dig it. But no, we don't necessarily expect it to have as broad appeal as the CryptoPunks, just because they were a little more consumer-friendly, just easier to engage with, easier to understand. You didn't necessarily need to know that they were generative, you just liked them, like, “I want one that looks like me.” You're not going to find an Autoglyph that looks like you, so…

MH: If you do, that'd be cool!

JB: I like that challenge — that's the first thing I'm going to do when I get off the call.

Autoglyph #130 - the Autoglyph I believe most looks like my inner self

JW: Yeah, it's more of like a Rorschach image.

MH: You see your true self in the Autoglyphs.

JW: Exactly. Yeah, you see your emotional self. We took the attitude, “Let’s not worry about that; let’s just kind of do experiments that we like and we think are cool and resonate with us.” But there’s no doubt that we were like, “Let’s keep the size small here, because the audience might just be smaller, and that’s fine.” It doesn’t need to be as big or as wide a variety of people owning it or as high a transaction volume as the CryptoPunks.

JB: Got it.

MH: I think it’s fair to say that we’re starting to think a little longer term about these things, too, now that we’re coming up on two years since the CryptoPunks launch. We thought CryptoPunks might be just a blog post, a couple weeks of interest and the end of it — and it’s still going strong. And then seeing this generative art from the '60s, which has some similarities with the very limited computing ability we have to work with, it just felt like, “There’s cool stuff to explore here that could have appeal long term.” It’s okay even if it doesn’t have the broad appeal at the moment; it’s fine.

JB: What are you guys? Do you think of yourselves as artists, and had you in the past, or has that changed in the last few years?

JW: It's funny you ask the “what are you guys” question, because we've been looking at each other the last couple weeks asking the same question. What are we, what are we doing here? We're quite a wide variety of things, and this is one of them.

And obviously it’s almost a loaded term: We're artists now, I guess. And especially looking at generative art from the ‘60s with fresh eyes. There were a lot of people working at Bell Labs and just experimenting and trying things out. Then in hindsight we can look back at that and be like, “Man, that's really cool art that really predates this whole digital art thing.” And they were just engineers, they were nerds just expressing themselves. I think we put ourselves in that camp happily, so not claiming that we're career artists or that's what we’re trying to promote ourselves into, but claiming the ability to express ourselves and make things just like anyone else.

I don't know, Matt. Is that how you feel about it?

MH: Yeah. I felt more comfortable with that term when I found out the history of technicians becoming recognized as artists because they have the skills necessary to operate something new.

JW: And they were thinking about it more than anyone else.

MH: Yeah, just familiar with it, and would see the limitations and the strengths in how they're utilized. So I feel pretty comfortable in that category.

JB: So the CryptoPunks were initially free. Autoglyphs are coming in at like $27.69, with proceeds going to the charity 350.org. Could you maybe share a little of the thinking behind that? Why 350.org?


MH: Even with the CryptoPunks, where we gave away 9,000 of them, a large number of them went to a few early people who just got on it and automated that process, so we wanted to avoid that. We wanted to have a better distribution of people, so we felt like the best way to get that was some price associated with generating them.

JW: So then the solution there was, “Let's donate that money to charity,” and then if the whole set sells out, then it will be a pretty good total.

MH: So if we can sell out of these things, it’ll be about $10k to 350.org, which is a good organization for trying to move power generation over to renewables. It felt like the right fit in all of those dimensions.

JB: Great, yeah. A softball question, then, for each of you: what makes a good Autoglyph?

JW: I think with a generator you kind of get a sense of what it makes, and then you get surprised by a few things. So I always like the ones that are just like, “Whoa, that’s not what I expected.” Once you look through 40 or 50 of them, you can always tell which ones are crazy or weird looking, and it’s always fun when one breaks out of expectation. Those are the ones I like. I like ones with diagonal lines. For some reason, those are the most appealing - ones that are just made out of diagonal lines.

MH: I think we both like the ones where, because the symbol sets are simple, it’s cool when you get the sense that there is a pattern there that’s not actually there. There are ones that look like there are curves in them, but there aren’t. I like that a lot. I also like ones that look different at different scales. So when they’re zoomed out, they look like one thing, and then as you zoom in, it dissolves. It’s something we’re trying to figure out now as we work on physical representations of them: how thick should the lines be, what’s the ideal viewing distance, where do these patterns resolve? I think that’s my answer.

JB: Cool. And then is there anything you want to share on the launch process? I think you mentioned the date in the email, but are there plans to show the physical works anywhere specific?

JW: Yeah. We're just going to launch them first just on the web and on blockchain, and then we'll figure that out next. I think we do want to show a bunch of the glyphs that we claimed for ourselves, maybe one of the art shows in New York in May. We're going to figure out which one's the best one to do that for. We haven't totally figured that out yet. We first just want to put it up, we still want it to be an experiment that pops up on the internet and not have it be a gallery-type launch or anything like that.

JB: Thanks for your time, guys! I think Autoglyphs are awesome and can’t wait to add some to the Artnome collection!



Why Is AI Art Copyright So Complicated?

March 27, 2019 Jason Bailey
Left, GANbreeder image by Danielle Baskin. Right, GANbreeder image painted on canvas commissioned by Alexander Reben

Despite claims that machines and robots are now making art on their own, we are actually seeing the number of humans involved with creating a singular artwork go up, not down, with the introduction of machine learning-based tools.

Claims that AI is creating art on its own and that machines are somehow entitled to copyright for this art are simply naive or overblown, and they cloud real concerns about authorship disputes between humans. The introduction of machine learning as an art tool is ironically increasing human involvement, not decreasing it. Specifically, the number of people who can potentially be credited as coauthors of an artwork has skyrocketed. This is because machine learning tools are typically built on a stack of software solutions, each layer having been designed by individual persons or groups of people, all of whom are potential candidates for authorial credit.

This concept of group authorship that machine learning tools introduce is relatively incompatible with the traditional art market, which prefers singular authorship because that model streamlines sales and supports the concept of the individual artistic genius. Add to that the fact that AI art - and more broadly speaking, generative art - is algorithmic in nature (highly repeatable) and frequently open source (highly shareable), and you have a powder keg of potential authorial and copyright disputes.

The most broadly publicized case of this was the Edmond Belamy work that was sold by the French collective Obvious through Christie’s last summer for $432k. I have already explored that case ad nauseam (including an in-depth interview with the collective). I cite it here only to point out that a large number of humans were involved in creating a work that was initially publicized as having been created by a machine.

In this article we look in detail at the recent GANbreeder incident (which we outline below) that has received some attention in the mainstream press. This is another case where the complexity of machine learning has driven up, not down, the number of humans involved with the creation of art and led to a great deal of misunderstanding and hurt feelings.

For this article I spoke with several people involved in the incident:

  • Danielle Baskin, the artist who alleges that Alexander Reben used her and other people’s images from GANbreeder

  • Alexander Reben, the artist accused of using other people’s GANbreeder images

  • Joel Simon, the creator of GANbreeder

I was also lucky enough to speak with Jessica Fjeld, an attorney with the Harvard Cyberlaw Clinic, who has written about and researched issues involving AI-generated art relative to copyright and licensing. She is the first lawyer I have spoken with who truly understands the nuances of law, machine learning, and artistic practice.

The GANbreeder Incident

Danielle Baskin’s GANbreeder feed, including a time stamp for the image in question

GANbreeder is the brainchild of developer Joel Simon. Simon created a custom interface to Google’s BigGAN so that non-programmers can collaborate on generating surreal images that combine pictorial elements of the user’s choosing to “breed” child images. If you are not sure what GANs (generative adversarial networks) are, you can check out this earlier article we wrote covering the topic.

Let’s look at a super simple GANbreeder example. I clicked a few buttons in the GANbreeder interface and chose to cross an agaric mushroom with a pug. GANbreeder then outputs six images with varying degrees of influence from both the mushroom and the pug. Results below:

Six GANbreeder crosses of an agaric mushroom and a pug

You can get more sophisticated and breed many things against each other in combinations, but the tool is dead simple (thanks to Joel Simon’s great design) and literally anyone can use it in seconds without training.
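
Under the hood, “breeding” in a BigGAN-style system amounts to blending the vectors that condition the generator, so a child can lean toward either parent. Here is a schematic numpy sketch; the real GANbreeder pipeline is more involved, and `generator` is a stand-in for a trained model:

```python
# Schematic of GAN "breeding": each child image comes from blending the
# latent vectors of its parents before running the generator.
# `generator` is a stand-in for a trained BigGAN-style model.
import numpy as np

rng = np.random.default_rng(0)
z_dim = 128

mushroom = rng.standard_normal(z_dim)  # vector conditioning parent A
pug = rng.standard_normal(z_dim)       # vector conditioning parent B

def breed(parent_a, parent_b, t):
    """Linear blend; t=0 is all parent A, t=1 is all parent B."""
    return (1 - t) * parent_a + t * parent_b

# Six children with varying degrees of influence, as GANbreeder shows
children = [breed(mushroom, pug, t) for t in np.linspace(0.2, 0.8, 6)]
# images = [generator(z) for z in children]  # run each through the model
```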

It was Simon’s vision that people would collaborate using GANbreeder and expand the tool through other creative uses. Along those lines, and with Simon’s support, conceptual artist Alexander Reben wrote a scraper for GANbreeder that automatically grabbed images and stored them locally on his PC. Once they were local, Reben applied a custom selection algorithm that would choose images that he liked or disliked based on his body signals.

Reben believed the images he scraped from GANbreeder were randomly generated (as he states in this early interview with Engadget). He then sent the images selected via his body signals to a painting service in China where anonymous artists created painted versions on canvas. He called the project amalGAN.

amalGAN, Alexander Reben

Reben then shared the painted images widely on social media in support of his upcoming gallery shows. This triggered an avalanche of anger and frustration from other GANbreeder users. They began to complain that Reben had stolen images that they had created using the GANbreeder system.


Reben acknowledged that he did not realize the images were being created by humans. It was his understanding that the images were automatically generated at random by the algorithm.

At the time of my interview, Reben could not confirm that his scraper had not included exact images by other artists, but he believes a small fraction (3 out of 28) of his images were subtle variations of works that other artists had created.

The first person to call Reben out on this on Twitter was artist and serial entrepreneur Danielle Baskin. Baskin is a GANbreeder power user who often stayed up until 5:00 a.m. breeding images. She even started a service called GANvas where people could select images on GANbreeder and she would print them on canvas and ship them to customers around the world.

Volcano Dogs, Danielle Baskin on GANbreeder

When I spoke with Baskin about her experiences with GANbreeder, she was careful to state that she felt she was “discovering” images on GANbreeder vs. “creating” them.

I feel like I am discovering them, not creating them. They all exist; you’re finding them. That is why I view the image as having an intelligent force behind it. It’s like I am discovering someone’s works.

Then why get so upset with Reben for “discovering” a similar image? Baskin explained the source of her frustrations with Reben’s work.

I thought that the whole project was so awful. Like, it was just so bad that it couldn’t have been real, but that it was a statement. Then I learned that it was real and I was like, “F*ck this project.”

Not that it is a competition or something. But he sort of took all the things I had in progress and had been thinking about for a long time and was immediately able to get a gallery show and sell work and stuff. And he didn’t present a clear story as to what he was doing. So that upset me. All these things were on my mind because I was so obsessed with GANbreeder.

It’s like you are writing a history book and you have been researching your subject matter for a year, and someone publishes a history book on the same subject matter, but they barely researched it and were able to sell tons of their books on Amazon. Someone took your content and got all this credit for it, but it wasn’t even good.

The gene sequence Danielle Baskin used to create the disputed image

It was clear to me that Baskin was not a fan of Reben’s work. But I wondered if she thought he had done anything malicious or with bad intentions. I also wondered if she felt he had resolved the issue. Project aside, what did she think of him as a person?

When I met him in person, I realized Alex has built an incredible community of artists that use technology and he is a great person. It’s funny because I hate his art, but I like him - but I don’t like him as an artist.

In giving him the benefit of the doubt and in talking with him, I think he genuinely didn’t know how it [GANbreeder] worked. He thought when he refreshed the home page it was totally random images from latent space; he had no idea that other people created the images. He knew the creator of GANbreeder, so maybe he thought that Joel would have explained that to him if it were the case that it was created by other people.

I told Reben that it looked to Baskin and others like he was trying to take shortcuts, or was at least trying to remove himself from the work in some aspects. He partially objected and explained:

There was still a lot of work with me training the data sets on the art that I like and I didn’t like. The real idea was that all of the work was done before the art was made. And the actual art making process was just two simple steps back and forth. Everything involved with that is complicated, involving servers and building computers and learning algorithms and all that sort of stuff.

The interesting thing is that a lot of effort and knowledge went into making the code. A lot of the creativity was compressed into that code, whereas now that the code is made, it is a tool for me to… Like I said in one of the reports, I can now lie in a hammock and look at a screen and be able to just use this system to produce output.

I asked him specifically what the amalGAN project was about.

The project was, to me, about human/machine collaboration and how many steps and layers of abstraction I could add. On my website I have like seven steps of human to machine, human to machine, back and forth. I had the idea that the final step is basically the machine giving a human somewhere else the activity of using their brain to upscale the image, using their brain to interpret how to turn pixels into paint. It is basically like the machine using human knowledge to execute rather than being printed out on a printer. To me, that is conceptually interesting because it has to do with that human/machine collaboration.

I then asked Reben how he felt about the issue with Baskin and others in the GANbreeder community and what, if anything, he had done to reconcile it.

I’m sorry this happened out of mostly my ignorance of the system. I probably should have done a bit more research. When I learned there was an issue, I changed my system so it would never happen again. I’m sorry people feel this way. I think I did as much as I could at the time to get permission from Joel and to address as many concerns as I could by inviting people over to discuss. I do have the disclaimer on my website and again in my talks that some images may have come from the GANbreeder community. I have no way to verify that because there are no records of who made what.

I think most reasonable people at this point, including Baskin, acknowledge that it was done unknowingly. However, it could have become more serious - Baskin shared with me that she had considered sending Reben a cease and desist letter.

This exchange of course opens up all kinds of legal questions, and it is here that I believe things actually become interesting. For example:

  • Does Reben have the legal right to use an image that is either similar to or the same as the one that Baskin created in GANbreeder?

  • Would Reben’s work meet the legal definition of a “derivative work”?

  • How much would Reben need to change the image for it to be considered fair use? Is turning it into a painting enough?

  • What if it was the same image, but he used it as a conceptual component instead of as an aesthetic component?

  • Does it matter that Joel Simon’s intention for GANbreeder was for artists to build on each other’s works?

  • As the developer of the interface/tool, does Simon deserve some ownership over the works?

  • What about the folks who created BigGAN or the folks who designed the graphics cards? Do they deserve credit?

To help navigate all of this, I spoke with Jessica Fjeld, the assistant director of the Cyberlaw Clinic at Harvard Law School. I share the majority of our interview because I believe Fjeld does an excellent job of shedding light on an incredibly murky topic. It is my hope that sharing her explanations might keep other AI artists from entering into sticky situations around copyright and authorship moving forward.

The Legal Implications - Jessica Fjeld


Fjeld patiently walked me through several concepts that helped me to better understand how law interacts with the new AI-generated works. Like me, Fjeld believes that all the talk about whether machines deserve copyright is overblown and distracts from real issues surrounding increased complexity of human attribution. Unlike me, she can explain the reasons why and the implications within our legal system. Fjeld explains:

Mostly the question that gets asked is, “Will AIs get smart enough that they can own their own copyright?” To me that is not that interesting because I think AGI (artificial general intelligence) is a ways out. If we get AGIs and we decide to give them legal personhood the way we give to humans and corporations, then yeah, they can have copyright, and if we decide not to do that, then no, they can’t, end of question.

In the meantime, what we really have are sophisticated, interesting tools that raise a bunch of questions because of the humans involved in collaboration making stuff with them. So we get these complicated little knots. But they are not complicated on a grand philosophical level, like, “Can this piece of software own copyright?” They are just complicated on the level of which of these people involved do [own copyright], and what parts of it.

I asked Jessica what the legal implications were in the GANbreeder incident. Disclaimer: Alexander is a past client of Jessica’s, but she is not currently representing him in relation to the GANbreeder incident.

It is a fascinating question. I have tooled around a little bit with GANbreeder myself, so I can understand it. One thing that is important to note is that copyright protects original expressions that are fixed. So “original,” “fixed,” and “expression” are the key terms here.

Something has to be new, and obviously, much of what is on GANbreeder is. Part of what makes it an exciting website is you get some of these really unfamiliar feelings - sometimes eerie, sometimes funny.

Then the next word we learned about is “expression.” Copyright does not protect ideas; it only protects particular expressions of those ideas. So if someone said, “I had the idea to put ‘dog, mountains, and shell’ into GANbreeder,” and I got an image that was similar to the one that someone else is now using, that is not protectable. The exact image, maybe; but a very similar one, no. And something that is very interesting about GANbreeder, as I was tinkering with it: if you have it create a child on the scale from similar to different, and you say to make it very similar, a lot of the children images that come out are very, very similar. There may be individual pixels or a slight shift in the orientation, but at a casual glimpse, you wouldn’t even necessarily see [the difference].

It’s interesting especially because of the timing of when Alex took these images off, when all the works on GANbreeder were unsigned because there were no accounts. It’s a little hard to say. If you were thinking about pursuing an infringement case, you would really have to prove the exact image had been copied rather than a similar idea where, say, one is orange and one is red.

I asked Fjeld how different a work had to be to be considered original.

In GANbreeder, if you keep making tiny changes, eventually you are going to get something that does have what we would call “originality” in copyright. But it is really hard to say when that happens. And in a lawsuit, it will just be a fact-specific inquiry: Is this the same or is it not? And we have this concept of derivative works for works that are very similar. It can be an infringement to make something that is extraordinarily similar, but not just a mere reproduction.

I asked Fjeld if it mattered that Joel Simon’s intention for GANbreeder users was to build upon each other’s existing works. Wasn’t Reben simply using the tool as intended? It turns out there is a thing called an “implied license.” Fjeld explains:

The other piece around how GANbreeder encourages folks to draw on other people’s work I think brings up another interesting question, which is that it’s largely settled law, particularly in the Ninth Circuit in the U.S., that you can grant a non-exclusive license to use your work in an implied way, so it doesn’t have to be explicit.

U.S. copyright law does require that if you are going to dispose of your right to the work - so either going to give an exclusive license to someone else, or if you are going to sell your copyright - you have to have a writing. But for an implied, non-exclusive license, you don’t have to have a writing. And at least some courts have upheld that it can just be implied - you don’t even have to have a conversation about it.

And when I look at GANbreeder, because of the way it’s set up, because of the way the whole system is architected, it gives you an image created by someone else and encourages you to iterate on it. It certainly looks to me like there is an implied license to do that within the context of the site. Anyone who is creating work there understands that other people are going to use it as a basis to make their own work.

Now, when courts look for implied licenses, it is again a fact-specific inquiry. I think with regard to what Alex did, the question is, did people understand that the implied license they were given was not just that you can monkey around with it in the context of the GANbreeder app, but that you can also integrate it into this other system, have it painted by anonymous painters in China, and show it in a gallery. They might not have anticipated that, and that’s probably where the issue comes in.

There was an implied license to do something, but the scope of that implied license wasn’t totally clear. Then that is complicated because it is a site that is architected with a thousand models and images in it, so you are essentially navigating the points in a multi-dimensional space created by that number of models and can have any combination of those thousand images. But it creates a lot of very similar images.

So the combination of the fact that the scope of the implied license wasn’t very clear and the fact that people may have an attachment to their ideas or individual expressions and then may see a very similar one… it is my understanding that Alex’s project shouldn’t have directly just reproduced anyone else’s; it would have started with someone else’s, and then he tweaked it based on his body signals.

I wondered why Reben’s work would not be considered derivative and asked Fjeld if she thought it could legally be considered so.

I would say that yes, there is an argument that Alex’s works could be considered derivative of existing works on the GANbreeder website. There remains the question of the implied license because the derivative work is a copyright infringement, but if the use is licensed, then there is no infringement.

There is also a question of what the damages would actually be, because in copyright, you can get statutory damages if you register your work in a narrow window around its creation or before the infringement happens. If you don’t do that - and to my knowledge, none of the GANbreeder images have been registered - then what you get is actual damages. And it’s not totally clear what the damages would be for folks that anonymously created images on a website and then later found that someone had them painted and displayed them in a gallery.

*I also don’t know if there have been any sales. There is the image that Alex used and whether there is a derivative work in that process, and then he takes this further step and has them painted into oil paintings, which, again, I think is another tweak. So there is a series of manipulations of the underlying content.

*Note: There have not been any sales.

I asked Jessica if she thought these “manipulations” by Reben pointed towards “fair use” (a term I had heard in the past but did not fully understand).

Yes, they do steer me more to think about fair use. As I have heard Alex presenting on this work, he really emphasized that for him, it really isn’t about the outputs; they are not the artwork at all. For him, the artwork is the process by which he trained this series of systems to produce the artwork, test them against his own preferences, title them, etc. For him, the interesting thing is the process by which he tried to design a bunch of algorithms to take himself as far as possible out of the creation process. The expression of them is that he ends up putting his name on painted images in a gallery. But even putting his name on them is a little complicated, given what he was thinking about in regard to the artwork.

When we think about fair use, one of the main factors that courts consider is how transformative the use is. And I do think there is a strong argument here that, because of the underlying theme of the work, we could think about it as fair use, since we want to incentivize this kind of exploration of the space. The way that Alex talks about it, there is an argument that, ethically speaking, it should be clearer that despite the paintings being up at a show with his name on them, he doesn’t really think of himself as the author of them in a certain way. The use is transformative because it is making the point of how far we can push toward algorithmic authorship.

You could think about the Richard Prince vs. Patrick Cariou case. They are both fine art photographers, but Prince is a conceptual artist, an “appropriation artist,” he calls himself, and Cariou is a more traditional fine art photographer.

At left, one of Patrick Cariou’s photographs of Rastafarians; at right, a painting from Prince’s ‘Canal Zone’ series

Cariou had gone to Jamaica, taken this book of photographs, and done a gallery show, all of Rastas. He spent all of this time investing in these relationships to produce these images. Prince used them in a gallery show in which he manipulated them a little bit. One of the classic images that gets shown a lot is a full image of a guy hanging out in a jungle setting, and Prince very roughly cut out an electric guitar and pasted it in on top of him. A lot of the original image was still there with this crude-looking addition on top. And Prince won that case as fair use because the argument he made was that he had transformed the content. Yes, Cariou was also a photographer who had a gallery show, but Prince was using it in this conceptual, imaginary space. I think you could think of Alex’s work in a similar way.

Fjeld shared my belief that the driving force behind some of the confusion in AI-generated art was that more people, not fewer, are typically involved. We talked a bit about the developer behind GANbreeder, Joel Simon, and what rights he had, if any, to the works.

In GANbreeder you can click a button and it’s possible to get the coolest thing that GANbreeder ever produced. And how much do we want to think that is in line with the goals of copyright if someone is just clicking a button and the software is producing it… how much do we want the person who clicks the button to be the person who gets the rights? Do we think that Joel, who set up the system, gets some rights? Do we think the people who put in the work to create these models, which took thousands and thousands of hours of computing power, should get some rights?

There used to be a doctrine in copyright called “sweat of the brow” where courts had an instinct that they wanted to protect people’s investment of time, and that has been rejected. So the notion that people who spent time to create the model should earn rights in the outcomes isn’t the state of copyright in the U.S. right now. But there is something in there that ethically feels to us like if you just click a button once, you are involved in that creation, but maybe you shouldn’t be the person who gets all the rights.

I found Fjeld’s explanations both fascinating and much needed in this space. It was a welcome reprieve to hear a lawyer talk about the issues that keep coming up in the AI art space without over-focusing on the red herring of whether the machine deserves copyright.

Conclusion

Regardless of what the law says, we all answer to the court of public opinion, and it hasn’t been particularly kind to Alex Reben over the GANbreeder incident. I think the animosity towards Reben stems from folks not liking that he appears on the surface to be doing less work than other artists, yet getting more attention, a common complaint leveled against conceptual artists. But more importantly, I think people can see with their own eyes that at least one of his works looks exactly the same as an image created by Danielle Baskin, and a few others are similar to images made by other members of the GANbreeder community.

I like Alex and consider him a friend. I also like Danielle and plan on following her work moving forward. So I thought back to what I learned from Jessica Fjeld about it being important that Alex’s work not be the exact same as Danielle’s. This seemed like a pretty easy thing to figure out, so I compared the two images using James Cryer‘s excellent tool called Resemble.js, which can compare two images and highlight the differences.
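For the technically curious, the underlying idea is a simple pixel-by-pixel diff. Resemble.js is a JavaScript library, but here is a minimal sketch of the same check in Python using Pillow; the filenames are hypothetical stand-ins for the two downloaded images, and this is an illustrative equivalent rather than the tool I actually used.

```python
# A pixel-diff sketch of the comparison described above. Resemble.js is
# JavaScript; this Python/Pillow version is an illustrative equivalent,
# and the filenames are hypothetical.
from PIL import Image, ImageChops

a = Image.open("reben_skull.png").convert("RGB")
b = Image.open("baskin_skull.png").convert("RGB").resize(a.size)

diff = ImageChops.difference(a, b)
print("Pixel-identical" if diff.getbbox() is None else "Differences found")

# Percentage of pixels that differ at all (screenshot aliasing shows up here)
changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
print(f"Mismatch: {100 * changed / (a.width * a.height):.2f}%")
```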

GANbreeder image claimed by Alexander Reben

GANbreeder image claimed by Danielle Baskin

Analysis of both images from Reben and Baskin highlighting lack of differences using Resemble.js

Other than a little bit of aliasing (I took a lower-resolution screenshot of Baskin’s image), they look exactly the same to me. I shared my new findings with Alex and asked if he would consider removing the image from his website in light of the new evidence. He went one better and called Baskin to discuss the best way to move forward. Reben then crafted the following statement, which he first ran by Baskin for approval.

I spoke to Danielle by phone to work out what she thought would be fair for me to do to move past this issue, given all the information we have at this time. We landed on giving her a credit under the artwork on my website as "Original GANbreeder image sourced from Danielle Baskin" and on making the credit for GANbreeder more obvious on the page. If any other images happen to arise with a similar issue, I'll have to deal with them on a case-by-case basis. But since the images from the website at that time have no authorship information and may be randomly generated, there may be no other issues apart from the few which were already identified. I'm also only concerned with images which are basically the same, not images which are similar and "bred" from a like set of "seed words," as this use aligns with the spirit of the website. Of primary concern to both of us was to put this issue to rest so that GANbreeder can continue to be used as a creative tool and grow from what was learned.

Score one for the court of public opinion.

As always, if you have questions or ideas you can reach me at jason@artnome.com.


Blockchain Art 3.0 - How to Launch Your Own Blockchain Art Marketplace

February 27, 2019 Jason Bailey
Warhol Thinks Pop, Hackatao - 2018

The most frequent question we are asked at Artnome is, “How can I get my artwork onto the blockchain?” Finally, with the development of what I am calling blockchain art 3.0, we are seeing new tools that enable artists to tokenize their own art and sell it on their own marketplace.

In this article I am going to show you how I set up a blockchain-based marketplace in less than an hour without coding. But before we dive into a tutorial on how to use these new applications and speak with the teams behind them, we first look at the evolution that led up to this point.

If you are eager to just learn about the new applications, you can skip the history and go to the part that describes these new offerings and how you can use them to create your own tokenized artwork on both the Bitcoin and Ethereum blockchains.

How We Got To Blockchain Art 3.0

Joe Looney presenting the Rare Pepe Wallet at RareAF, NYC, 2018

Before we jump into blockchain art 3.0, let’s take a look at the evolution of blockchain art that got us to where we are today. While people have been making art “about” the blockchain since its inception, I consider blockchain art 1.0 the period when folks first started exploring “digital scarcity” and gave birth to the idea of selling art on the blockchain.

A big problem with producing and selling digital art is how easily it can be duplicated and pirated. Popular opinion is that once something is copied and replicated for free, the value drops and the prospect of a market disappears. Most collectors feel that for art to have value, it needs to have measurable and provable scarcity.

Blockchain helps solve this for digital artists by introducing the idea of “digital scarcity”: issuing a limited number of copies of an artwork, each associated with a unique token issued on the blockchain. This provable scarcity is the same concept that enables cryptocurrencies like Bitcoin and Ether to function as currency.
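To make “provable scarcity” concrete: on Ethereum this is typically implemented with a token standard like ERC-721, where each copy of an artwork maps to a unique token ID whose owner anyone can look up on-chain. Here is a minimal sketch using web3.py; the RPC endpoint, contract address, and token ID are placeholders, not a real collection.

```python
# Minimal sketch of verifying "digital scarcity" on-chain with web3.py.
# The RPC URL, contract address, and token ID are placeholders. The ABI
# fragment covers just two ERC-721 calls (totalSupply is part of the
# Enumerable extension, so not every contract exposes it).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/YOUR_KEY"))

ERC721_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "totalSupply", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint256"}]},
]

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=ERC721_ABI,
)

print("Edition size:", contract.functions.totalSupply().call())
print("Owner of #1: ", contract.functions.ownerOf(1).call())
```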

Blockchain art 1.0 was the Wild West, as there was no real blueprint yet for artists and technologists to work from. Though often overlooked by the mainstream media, there is no question for me that blockchain art 1.0 started with Joe Looney’s Rare Pepe Wallet. You can read my in-depth description of Rare Pepe Wallet in this earlier post, but for now all you need to know is that Rare Pepe Wallet pioneered the possibilities of buying, selling, trading, editioning, gifting, and destroying digital artworks on the blockchain. Joe and the Rare Pepe community not only conceived of the first such market, they were the first to prove it could work at scale, selling over $1.2 million worth of digital art.

Smooth Haired Pepe, 1/1000

On the immediate heels of the success of Rare Pepe Wallet, we saw the development of several other experimental projects, each trying new things. CryptoPunks, Dada.nyc, and CurioCards were all very different from each other (and from Rare Pepe Wallet), as no real template for how art on the blockchain should work had fully been established. It is noteworthy that all of these blockchain 1.0 solutions were driven by the “decentralized” ethos, developed more as creative communities than as businesses built to make money. For me, this is the Golden Age of blockchain art - the era that attracted the OGs, the weirdos (said lovingly), and the truly creative mavericks in the space who were motivated more by creative experimentation than any obvious financial benefit.

London Tacos, from left: Matt Hall (CryptoPunks), John Zettler (Rare Art Labs), Judy Mam (Dada.nyc), John Crain (SuperRare), Charlie Crain (SuperRare), Jon Perkins (SuperRare), Bea Ramos (Dada.nyc)

Blockchain art 2.0 started after CryptoKitties exploded and people saw that there was actually an opportunity to make money with digital art on the blockchain. A half dozen or so blockchain art marketplace startups launched with fairly similar functionality to one another. They were almost all based on Ethereum, featured slick professional interfaces, and streamlined the tokenization of art.

These 2.0 marketplaces were run more like businesses than the experimental grassroots community projects of the blockchain 1.0 days. They often have investors, legal advisors, advertising budgets, and corporate titles within their organizations. However, the most successful seem to be the ones capable of building communities and providing a collector base to lesser-known artists through unified marketing. Blockchain 2.0 offerings include SuperRare, KnownOrigin, Portion, RareArtLabs, and DigitalObjects. In blockchain 2.0, the offerings were similar enough on the surface that it was really the artists these startups were able to attract, more so than the tech, that separated them from one another.

Example of a clean Blockchain Art 2.0 interface (SuperRare), which contrasts with the DIY 1.0 UX/UI

There were several approaches to recruiting artists. The most successful seemed to be dropping gallery commissions for primary sales and adding a commission for artists on the secondary market (SuperRare), and consistent grassroots recruiting (KnownOrigin). Recruiting collectors, on the other hand, has proven a bit more difficult, as the launch of these blockchain art 2.0 offerings coincided with the decline of the cryptocurrency markets. Most people have either temporarily jumped out of the cryptocurrency market to stop the bleeding, or they are holding on, waiting for the market to recover before spending their currency.

That said, there are certainly more than a handful of highly dedicated cryptoart collectors, and the artists themselves have formed a tight community and frequently collect each other’s work. This is not unlike the behavior we have seen between artists in movements of the past, where bartering and trading artworks was common.

Launching a Blockchain Art Marketplace

Some early user-created markets from the Pixura platform

Throughout the development of blockchain art 1.0 and 2.0, many artists have wanted to tokenize their own work and offer it on a blockchain market where they can control the look, messaging, and user experience. This makes sense, as artists work hard to brand their own image, and the promise of the blockchain was supposed to be that they could sell their own work without a middleman.

Until now, I have pointed artists to a half dozen marketplaces where the artist had no control over how their work was displayed or which artists their work was displayed next to. While some were fine with this and embraced the experiment, the lack of artistic control was a deal breaker for many other artists who cared greatly about the context in which their work was shown. As an example, an artist producing what they consider to be serious generative artworks may not want their work sandwiched randomly between floral still lifes and a thousand digital images of Bitcoin/Ethereum symbols. For many artists, this type of context matters a great deal.

With blockchain art 3.0, artists can take control over the entire process aided by tools that make it easy to tokenize artwork and do not require coding skills or technical knowledge. I cover two such tools in this article:

  • Pixura Platform (beta) - Allows anyone to issue and sell virtual items on the Ethereum blockchain, including art and rare digital collectibles

  • Freeport.io (pre-alpha) - Allows people to collect, create, and trade cryptogood assets issued on Counterparty using the Bitcoin blockchain

Far from competing with each other, these are two complementary solutions that function on the Ethereum and Bitcoin blockchains, respectively. What makes these two tools stand out for me is that both were developed by people who have already experienced success at scale in building their own active blockchain-based art marketplaces. It also helps that I know the developers behind both solutions personally, think highly of them, and am comfortable recommending their tools.

If you favor Ethereum and you are looking for something you can use starting today, Pixura is for you. They do charge fees, which may not scale as well for some use cases, but with those fees comes the support from a responsive team of experts working on the project full time.

If you prefer Bitcoin, want to pay zero fees, or want or need a completely open-source solution, Freeport.io is for you. There is some functionality already built into Freeport, but it might be another month or so before it is fully functional - so that is another aspect to take into consideration.

We go into more details below. Feel free to jump to the solution you think might best apply to you.

Pixura - Tokenize Art and Launch a Blockchain Art Marketplace With Ethereum

The Pixura Platform comes from the same team and codebase behind the SuperRare marketplace. They have been fast to launch, eager to solicit user feedback, and quick to add meaningful features. As a result, SuperRare is among the fastest-growing platforms from the blockchain art 2.0 era, and artists have earned roughly $100K in the first year of the platform’s existence.

According to a recent interview with Pixura/SuperRare CPO Jon Perkins:

Pixura is a wide open platform – anyone can launch a smart contract and create their own NFTs without writing any code. We’ve already seen a bunch of interesting projects get created in one week, and I expect to see hundreds more by the end of the year. We are also working on some exciting collaborative partnerships, which will be announced later in the year.

I decided to launch my own marketplace on Pixura to see how easy/difficult it would be. I was pleasantly surprised - the entire operation from start to finish took less than an hour (including visual customization) and cost me under $30.

I put together this short tutorial to walk you through the process. The tutorial assumes you already have a MetaMask wallet account and at least $27 in Ethereum in your wallet.

Here are the steps to launch your own blockchain art marketplace on Pixura:

  • First go to the Pixura mainnet link: https://platform.pixura.io/

  • Then click on the “Launch a Collection” button

  • Choose “Ethereum Mainnet” to launch a functioning marketplace

  • Sign in to Pixura via your Gmail account

  • Connect to MetaMask

  • Launch your smart contract

  • Pay the $25 fee (plus gas) to launch your marketplace

  • Confirm smart contract deployment

  • You can check Etherscan for the transaction details

  • Click on your project (on the right side of the screen)

  • Click on “Add New Collectible”

  • Name your collectible and add an image

  • Add as much custom metadata as you like (this is a nice feature)

  • Price and launch your collectible

  • Customize the look of your marketplace

You can see the results of my marketplace here. I have a bunch of ideas for what I actually want to do with my marketplace, but for now it holds just a couple of test images.

Hopefully you found the Pixura interface for creating a marketplace to be as user friendly as I did. I think its simplicity is its strong point. I am also a big fan of the ability to add new properties, and I know that the Pixura team provides great tech support.

While I like that Pixura gives me more branding autonomy than putting my work directly into SuperRare, there is still a sense that my marketplace is one of many marketplaces within Pixura. This is similar to how I might have my own Etsy shop, but it still lives next to all the other Etsy shops on the Etsy parent site. In some ways this is a plus because people coming to see other Pixura marketplaces have a higher likelihood of stumbling onto my marketplace.

But what if I want a marketplace with 100% branding control, where nobody else’s logo shows up and I am not clustered with other marketplaces? Pixura assured me that a feature to run a completely white-labeled version is on the roadmap for the near future. But there are some other options as well.

If you are a little more technical, looking for complete branding autonomy, want to avoid paying any fees, prefer an open-source solution, and can get by without a lot of tech support, then you may want to wait a month or so and explore Freeport as an option.

Freeport.io - Tokenize Art and Launch a Blockchain Art Marketplace With Bitcoin


At the time of this writing, Freeport is pre-alpha and has about a month to go before it will be ready for marketplace creation. I decided to include it anyway because it offers a really nice counterpart to the Pixura platform.

Freeport is the brainchild of Joe Looney, the developer behind Rare Pepe Wallet. Joe is creating Freeport as a completely open-source solution (MIT License), so if you are technical, you can use all the code to do whatever you want with it. But Freeport is specifically designed with less technical people in mind. With just a little Bitcoin, from a single interface you will be able to:

  • Create your asset (CounterParty)

  • Upload your art (Imgur)

  • Attach it to the asset (CounterParty)

  • Search a directory (DigiRare.com)

  • Put orders up to sell through the DEX (decentralized exchange)

Joe has brilliantly structured Freeport to use several existing best-in-class, off-the-shelf solutions, including Imgur, CounterParty, and DigiRare. These decisions were born out of necessity to simplify maintenance and upkeep (Joe is building Freeport for fun in his free time), but this strategy may turn out to be Freeport’s greatest strength. As Joe puts it:

The Bitcoin blockchain is good at creating scarce digital assets (via Counterparty) and then allowing the uncensorable transfer of them. It is not good for storing images. Even with IPFS, any projects utilizing it are generally running their own IPFS nodes for storage. The only way to guarantee that images stored via IPFS are available is to maintain a node and host them yourself, and at that point what are you really even doing? With Freeport, as the developer I don’t need to run any additional software to host images because an image hosting service (Imgur initially) will be hosting them for me. My plan is to also include options to use other hosting services and eventually allow artists to specify custom image locations.

Since I am building Freeport in my free time, I don’t want the responsibility of curating questionable content. One of the problems with something like IPFS or self-hosted storage is that you, the developer, maintain that responsibility. To eliminate that additional work, I’ve leveraged a hosted storage service that has its own code of conduct. It also demonstrates that “decentralized storage” is a fun thing to have, but it’s not absolutely necessary. Immutability is achieved by including a hash of the image, as well as the image location (Imgur URL), as part of the asset information stored on the Bitcoin blockchain via Counterparty. If Imgur were to become unavailable, the artist has the ability to update the image location; however, the hash remains unchanged. This means that if the artist changes the contents of the image, it is obvious from the record that it’s not the original. Imgur is great at providing the means for everyone to see the image initially and for the foreseeable future. However, over time, it becomes the responsibility of the issuer and asset holders to retain the image themselves.
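To make that concrete, here is a minimal sketch of the integrity check Looney describes: fetch the image from its recorded location and compare its hash to the hash stored with the asset. The URL and recorded hash below are hypothetical, and the choice of SHA-256 is my assumption; Looney does not specify the hash function.

```python
# Sketch of the integrity check described above: the asset record on the
# blockchain stores an image location plus a hash of the image, so anyone
# can detect a swapped file. The URL and recorded hash are hypothetical,
# and SHA-256 is an assumption.
import hashlib
import urllib.request

image_url = "https://i.imgur.com/example.png"  # from the asset record
recorded_hash = "9f2b8c..."                    # also from the asset record

with urllib.request.urlopen(image_url) as resp:
    current_hash = hashlib.sha256(resp.read()).hexdigest()

if current_hash == recorded_hash:
    print("Image matches the hash issued with the asset")
else:
    print("Image changed since issuance - not the original")
```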

Looney also takes advantage of CounterParty on the back end for token issuance, along with Bitcorns creator Dan Anderson’s excellent DigiRare site, which is designed to provide a directory for viewing all art and collectibles on the Bitcoin blockchain.

While Freeport is still a few weeks off from launching, you can install the beta as a Chrome browser extension and be among the first to use it when it is ready for prime time.

To install Freeport.io:

  • Download the Chrome extension

  • Go to chrome://extensions/ in your Chrome browser.

  • Make sure “Developer Mode” is selected and click on "Load Unpacked"

  • Select the directory "Chrome Extension"

Be sure to follow Looney on Twitter at @wasthatawolf for updates on the additional functionality in Freeport as it becomes available.

Summary

Hopefully you found this article/tutorial helpful and you are off to the races building your own marketplace and tokenizing your own art and collectibles. I don’t think you can go wrong with either Pixura or Freeport. Hopefully I have outlined the differences between the two well enough that you know which one is right for you. Here is a quick summary:

  • Availability

    • Pixura is live and you can launch a marketplace today

    • Freeport is in alpha and will be ready in roughly a month

  • Blockchain

    • Pixura lives on the Ethereum blockchain

    • Freeport lives on the Bitcoin blockchain

  • Support

    • Pixura provides support to paying customers

    • Freeport: Joe provides support when he can (this is his side project)

  • Fees

    • Pixura charges $25 to launch a market, $1 to launch a collectible, and takes a 3% fee for all transactions on your marketplace

    • Freeport is a community project with zero fees

  • Architecture

    • Pixura utilizes the same proprietary code used on SuperRare

    • Freeport leverages a combination of solutions (Bitcoin, CounterParty, DigiRare, Imgur) and is open source under the MIT license

Conclusion

It is a really exciting time for those of us who have been following the development of art and collectibles on the blockchain. You no longer need to understand the complexities of writing your own smart contracts to launch your own digital art and collectibles marketplace, and that should be huge in driving mainstream adoption for creators.

However, I believe the next big problem is going to be growing the number of collectors. One of the great advantages of participating in a marketplace like SuperRare as an artist is that they do all the marketing for you. I think some artists may realize that putting their art “on the blockchain” does not necessarily translate to more sales. You still need to find someone interested in buying/collecting your work. And the number of people who know how to buy art using cryptocurrency is even smaller than the number of people who know how to buy art with fiat (regular currency). An increase in the number and variety of digitally scarce objects we can collect could bring new collectors to the market, but it could also flood the market and reduce demand.

I’m optimistic that an increase in “scarce digital goods” in the gaming market could help drive adoption and understanding for the blockchain art market as well. At least in the short term, I think we’ll see a spike as people explore these new tools and innovate in ways that nobody has thought of yet. And hopefully we’ll see a bit more of the weird blockchain 1.0 spirit come back to the community.

Thanks for reading. As always, if you have questions or ideas, you can reach out to me directly at jason@artnome.com.


AI Artist Robbie Barrat and Painter Ronan Barrot Collaborate on “Infinite Skulls”

February 6, 2019 Jason Bailey

It is early in the year, but the most compelling show for art and tech in 2019 may already be happening. AI artist and Artnome favorite Robbie Barrat has teamed up with renowned French painter Ronan Barrot for a fascinating show that lives somewhere in the margin between collaboration and confrontation.

The L'Avant Galerie Vossen emailed Robbie last April after seeing his AI nude portraits and asked if he would be willing to fly out to Paris to work with Ronan. Robbie agreed and flew out last July to meet with Ronan, and the two have been working together ever since. The show, titled “BARRAT/BARROT: Infinite Skulls,” opens Thursday, February 7th, and literally features an “infinite” number of skulls.


Why Skulls?

For the last two decades, it has been artist Ronan Barrot’s tradition to use the remaining paint on his palette to paint a skull each time he stops, interrupts, or finishes a painting. As it was explained to me, the skulls are like a side process of the main painting; it’s like cleaning out your motor after driving for miles and miles. Ronan now estimates that he has painted a few thousand of these, and this massive visual data set of painted skulls was perfect for AI artist Robbie Barrat to use in training his GANs (generative adversarial networks).

GANs are composed of two neural networks, which are essentially programs designed to think like a human brain. In our case, we can think of these neural networks as being like two people: first, a "generator," whom we will think of as an art forger, and second, a "discriminator," whom we will think of as the art critic. Now imagine that we gave the art forger a book with 500 skulls painted by Ronan as training material that he could use to create a forgery to fool the critic. If the forger looked at only three or four of Ronan’s paintings, he may not be very good at making a forgery, and the critic would likely figure out the forgery pretty quickly. But after looking at enough of the paintings and trying over and over again, the forger may actually start producing paintings good enough to fool the critic, right? This is precisely what happens with GANs in AI art.
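For readers who want to see the forger-and-critic loop in code, here is a toy sketch in PyTorch. This is not Barrat’s code: the “paintings” are random 64-value vectors so the script runs anywhere, but the alternating training of critic and forger is the real GAN mechanic.

```python
# A toy "forger vs. critic" (generator vs. discriminator) training loop.
# The real paintings are random stand-in vectors; only the mechanics of
# the GAN loop are meant to be illustrative here.
import torch
import torch.nn as nn

real_paintings = torch.randn(500, 64)  # stand-in for 500 skull images

forger = nn.Sequential(nn.Linear(16, 64), nn.Tanh())    # generator
critic = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())  # discriminator
opt_f = torch.optim.Adam(forger.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = real_paintings[torch.randint(0, 500, (32,))]
    fake = forger(torch.randn(32, 16))

    # The critic learns to score real paintings 1 and forgeries 0.
    opt_c.zero_grad()
    c_loss = (bce(critic(real), torch.ones(32, 1)) +
              bce(critic(fake.detach()), torch.zeros(32, 1)))
    c_loss.backward()
    opt_c.step()

    # The forger learns to make the critic score its forgeries as real.
    opt_f.zero_grad()
    f_loss = bce(critic(fake), torch.ones(32, 1))
    f_loss.backward()
    opt_f.step()
```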


In Robbie’s words:

I trained the network on the skulls. They are all the same shape, the same size, the same orientation, and they are all looking the same way. The results were good, but they were very similar to Ronan’s original skulls. We have the show chopped up into different epochs, and that is Epoch One, training directly on his skulls.

For Epoch Two, I thought about how the coolest part about using GANs is that you’re getting a weird machine viewpoint of artwork. But feeding in all the skulls with the same layout is sort of like telling the machine how to look at the paintings. You’re giving it a very fixed perspective and a very normal perspective that we have already seen before.

So for Epoch Two, I basically played around with feeding the machine the skulls completely independent of any rotation or perspective, so the machine sees skulls that are all flipped around and stretched out. I’m using the same model, but the number of skulls in the training set jumped from 500 to 17,000 skulls. And the results are really, really good. It makes these really strange images that you would never expect. You can tell that they are skulls, but they really are not familiar. Ronan really loves those. He really likes to correct some of the skulls. He’ll say something like, ‘I like this one but it’s not right,’ or ‘There is never an image I am completely satisfied with,’ so he corrects it. He also does interpretations of them.

I also think that the Epoch Two skulls raise very interesting questions about authorship, since the network has learned exclusively from Ronan, yet the outputs don’t strongly resemble his work.
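Barrat has not published his pipeline, but the kind of augmentation he describes - random flips, rotations, and stretches that grow a small set of skull images into a much larger, less perspective-locked training set - can be sketched in a few lines of Python with Pillow. The paths and parameter ranges here are illustrative guesses, not his actual settings.

```python
# Hedged sketch of the augmentation described above: grow ~500 source
# images into 17,000 randomly flipped, rotated, and stretched variants.
# Paths, counts, and ranges are illustrative, not Barrat's pipeline.
import random
from pathlib import Path
from PIL import Image

sources = list(Path("skulls").glob("*.jpg"))  # ~500 photographed paintings
out = Path("augmented")
out.mkdir(exist_ok=True)

for i in range(17_000):
    img = Image.open(random.choice(sources)).convert("RGB")
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)         # flip
    img = img.rotate(random.uniform(0, 360), expand=True)  # any rotation
    w, h = img.size
    img = img.resize((int(w * random.uniform(0.6, 1.4)),   # stretch
                      int(h * random.uniform(0.6, 1.4))))
    img.save(out / f"skull_{i:05d}.jpg")
```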

I asked Robbie about Ronan’s initial reaction to his work and how the relationship played out.

We are like opposites. He does not like the fact that my work is digital. He said the pixel is sad. And he really was skeptical about it. And right after I visited Paris, he was a little bit hesitant about whether he wanted to do the show, because French painters have this conception of technology and capitalism being the enemy. But now he is really excited about the show. I think what is important to remember is that this is more like a confrontation than a collaboration. There are collaborative parts of it, but we really are sort of at odds.

Ronan explained to me that at first, he could not see where Robbie was making any decisions in the AI process. Like many, he thought that the “AI” and the “machine” were doing all the work and making all the choices. But quickly after working with Robbie and seeing that there is “choice and desire” in his work, he decided “the pixel is no longer sad.” But adds Ronan:

Of course it is not the same, I am not expecting the same thing from AI as I am from a painting. Both worlds are contiguous, but not the same. They are not the same rules. I hate the very idea of naturalism. As if everything was equivalent to everything else. I love the idea that there are two sets of rules, which allow us to play differently.


Ronan also pointed out that he does not keep all of his skull paintings. He curates them and many times he paints over the ones he does not like. He sees this curation process as not entirely unlike Robbie’s process of choosing which of the AI skulls to keep from the nearly unlimited number he can produce using GANs.

While the two have come to understand and respect each other’s working methods, there is a lot of interesting dialogue between them on what is an actual painting vs. something that is just an image of a painting. According to Ronan:

There is always a difference between a painting and an image of a painting. And now [using GANs] there is an image of a painting that does not exist.

Sometimes I dream about the painting I want to do, and when I have done it, it is completely different. This indicates the direction, but you have to make your own way. And that is why the paintings will be presented as one by Ronan and then one by Robbie. Because then they become a mirror. And the question is, who is mirroring who? Originally they were skulls, but they become real vanities because of this idea of the mirror. With traditional vanities there is always a skull in the mirror which gives you the idea of time passing. Originally when I showed my skulls, each one was a painting on its own. But when paired with the works by Robbie, it creates a kind of double.

Simon Renard de St. André, Vanitas. Unknown.

Interestingly, Robbie agrees with Ronan that the individual images being produced by the GAN are just images of paintings (and in some cases, images of paintings that do not exist). But Robbie adds that he sees the trained GAN itself as the artwork. According to Robbie:

Ronan is right when he says that the AI skulls are "images of artwork" instead of artworks themselves. In my opinion, the actual artwork is the trained GAN itself, and the outputs are really just fragments or little glimpses of that (the trained GAN is almost just a compressed version of all the possible AI skulls).


Robbie often compares his process of working with GANs to that of the artist Sol LeWitt, who is famous for writing out “rule cards,” or algorithms, for humans to execute to create his drawings.

Sol LeWitt Rule Card

Robbie explains:

The Sol LeWitt metaphor applies in multiple ways in GAN art. The data set is like the rule card, with rules created through curation - and the network interprets these to make art. But additionally, the network itself is also like the rule card, and the individual generations are just different interpretations/executions of those rules. This is in line with the idea that the individual works are just "tokens" of something larger - they're shadows of the network, the actual artwork.

At the same time, if the network itself is the piece of art, it's a very strange one, since it cannot be viewed or comprehended entirely (unlike the set of rules responsible for traditional generative artworks). We can only get small glimpses of it at a time. I'm not aware of any other type of art where this is true.


I have a lot of admiration for Ronan and his work - it seems almost unfair to Ronan to compare his work to the "images of artwork" output by the network. There's something present in the process of a traditional painter that I feel I'm missing as an artist - I'm not sure if it's dedication, rigor, the use of simple tools and not some complex machine, or something else entirely. Without being overly dramatic, there is something very honorable about how a very traditional painter operates; especially today when everything else is surrounded by technology. In short, I think that if I had to choose between the two types of skulls regardless of process or context, I would choose Ronan's skulls as my favorite. At the same time the Epoch Two AI skulls raise so many questions that I'm interested in - so including process/context, I'm more interested in them.

I’m an artist, I make work. But I am not the best at art history, I don’t have any traditional training, I don’t know how to paint or sketch or anything like that. I definitely do sympathize with Ronan’s view of digital work. Maybe he has seen a lot of low-quality digital work or he just doesn’t like the medium. It makes me wish that I was better at non-digital art.

I asked Ronan if he sees Robbie’s work as art or as inspiration for art.

Robbie introduced his own decisions and desires and changed the training images and the algorithms to make the work closer or further from the work I have done so far. It’s always interesting to bring something from outside the box into the realm of art. In the beginning, that can be seen as a threat. But in the end, it helps whatever is going on. If there is choice, if you can dream a little, it’s art. The skulls lend themselves well to AI and art because of the idea of the vanity of death. They therefore remain in ambiguity. And it is a disturbing ambiguity, the uncanny. Some will say it is about death and some will say it is about whatever, but I like maintaining this ambiguity in art. In the beginning I was worried that it was not possible to be free with AI. You can never say “that is not art, it is only a tool.” You have to find how to be free every time.


I asked Robbie how he finds this “freedom” in GANs and what makes good GAN art. He shared:

I really don’t like work that relies too heavily on the medium, like a watercolor painting where the whole interesting thing about the painting is that it is a watercolor and it relies on watercolor effects. My mom always called those “medium turds” or “watercolor turds.” I think the same applies to GANs where if it is reliant on the medium and the medium is the cool thing, then that’s not really art - it’s more like a tech demo. I think that the people that are making really cool work with GANs are using it in ways that are not obvious.

For example, in the show we have a box with a peephole in it, and when you look in, it will generate a skull and it will display it for like five seconds and then it will add an input vector to the “do not use list.” So basically you are going to be the only person to ever see that skull… ever. I think that is cool because it’s different and it’s new and it’s not too reliant on the GAN just being a GAN.
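The logic of that peephole box is simple enough to sketch. The version below is a stand-in, not Barrat’s actual code: a stub plays the role of the trained GAN, and the “do not use” list is just a set of retired seeds.

```python
# Stand-in sketch of the peephole box: show each generated skull once,
# for about five seconds, then retire its seed forever. The generator
# here is a stub, not the trained GAN from the show.
import random
import time

retired_seeds = set()  # the "do not use" list

def generate_skull(seed):
    rng = random.Random(seed)  # stub for sampling the GAN at this seed
    return [rng.random() for _ in range(64)]

def peephole_viewing():
    seed = random.randrange(2**32)
    while seed in retired_seeds:  # never reuse a retired vector
        seed = random.randrange(2**32)
    skull = generate_skull(seed)
    print(f"Displaying one-of-a-kind skull {seed} ({len(skull)} values)")
    time.sleep(5)              # visible for about five seconds
    retired_seeds.add(seed)    # no one will ever see this skull again

peephole_viewing()
```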

You Can’t Hand Someone an Apple and Call Yourself a Chef.


Not only is the artwork from Infinite Skulls of higher quality than anything I have seen from AI so far, but the confrontation between the two artists and the resulting work forged through their conflict are the perfect visual symbol for the clash between AI and the traditional art world at large.

I rarely anthropomorphize artificial intelligence and machine learning, and I prefer to think of these new technologies as augmenting human capabilities rather than replacing humans. But others have pushed me, asking, “Who is augmenting whom?” in the relationship between AI and artists. If the relationship between AI and humans is symbiotic, then who is the host and who is the parasite? Though it may sound harsh, I think it is natural that people ask a similar question about the relationship between Ronan and Robbie, even if there is no clear answer.


While the two artists end up getting along and respecting each other’s methods in the end, each has to see the other as fuel, a raw material or ingredient to consume for his own artistic self-preservation. In both cases, the artists are actively consuming the other’s work into their own as an ingredient, which is a different relationship than mere inspiration.

Ronan frames Robbie’s work as “photos of paintings that do not exist yet,” ostensibly because he himself has yet to create them, emphasizing that he is not happy with any of the works Robbie’s GAN produces until he “corrects” them. Note that Ronan also called Robbie and his AI “a guest in the studio” several times during our interview, which suggests a more passive role than that of an equal in artistic collaboration. Ronan elaborates: “It was like having a new guest in a jazz club,” again casting Robbie as a guest, or a “muse,” and not as a member of the band on the stage.

Similarly, Robbie has to treat Ronan’s 500 skull paintings like unrefined wheat, grinding them down and refining them to sufficiently anonymize them. He writes a program to randomize Ronan’s paintings by stretching and flipping them, generating a less recognizable set of 17K training images from the initial 500 works, before he can create art that is sufficiently different from Ronan’s to call his own. Both must make a sacrifice of the other to produce their own work.


Ronan is rightfully proud to have painted two thousand skulls in the last two decades, but Robbie and his GAN can produce billions of skulls seemingly overnight, transforming Ronan into a sympathetic, man vs. machine, John Henry-like character.

It’s tempting to cast the story as two artists who overcome their many differences (age, language, tools) and some initial friction to collaborate on works that are as much by one as by the other. But to ignore the dynamic tension between the two artists is to miss much of what is interesting in the work. It is fitting that they landed on the theme of the skulls as vanities (traditional artworks designed to remind us of our own mortality) as it serves as an excellent thematic umbrella. After all, we all eventually return to the soil, only to become the ingredients in someone else’s narrative.


2019 Art Market Predictions

January 27, 2019 Jason Bailey
President Barack Obama, Kehinde Wiley, 2018

Feel free to continue reading our 2019 predictions but please note that we have also recently published our 2020 art market predictions.

I’m a little late on my art market predictions this year, but I had too much fun with my 2018 art market predictions to keep my crystal ball in the closet. This year, I go deep on two trends that I think will dramatically transform the art market, not only in 2019, but for the next decade: first, increased diversity/inclusion in the art world, and second, the digital transformation of art.

I believe we are on a massive collision course between populations that are becoming increasingly diverse and an art history and art world that is still very white and very male.

Many have told me that nothing changes fast in the conservative art world. However, I am predicting nothing short of a “Moore’s Law” of diversity in art: I believe 2019 will bring double the protests and double the market shift towards equality that we saw in 2018, and this doubling will continue annually until we reach visible signs of parity. I theorize that continued pressure on art museums will drive rapid cultural change, which will then trickle down and transform the art market.

Equally radical, I believe rapidly evolving technology, specifically digitization, is shaping human lives faster and more dramatically than any other series of events in history. I predict that digital transformation of the art world will lead to the beginnings of the dematerialization of art (as is already happening with books and music). And I argue that rather than a rise in the commoditization of art, we are actually seeing the early beginnings of a move away from ownership by traditional definitions.

I predict that museums, galleries, and auction houses will realize improving diversity/inclusion and focusing on the rapidly shifting intersection of art + tech is the key formula for increasing interest, engagement, and participation in the arts.

The rest of this post dives into why I hold these two beliefs. I try to take a first-principles look at art and its function in society, including its use in museums and private collections. I then take a look at what I believe are two important macro trends — a strong push for diversity and inclusion, and the digital revolution — and make predictions around the impact of those trends on art, its value, and its function in society.

Museums Are Serving an Increasingly Diverse Population

Source: William H. Frey analysis of the U.S. Census population projections released March 13, 2018, and revised September 6, 2018

According to the Brookings Institution, the U.S. population is projected to become “minority white” by 2045. Additionally, Europe’s population as a percentage of the global population has been shrinking, moving from 28% in 1913 to 12% in present day, and is predicted to be just 7% by 2050.

As minority groups increasingly move towards forming a collective majority in the U.S. and Europe, it becomes increasingly important for museums to evaluate their collections and hiring policies to make sure they reflect the public they serve. This includes not only diversity in race, but working towards correcting longstanding gender inequalities, as well. There are many signs that there is still a lot of work to be done on both fronts and increasing pressure to get it done faster.

  • 85% of artists in major U.S. museums are white

  • Work by women artists makes up only 3–5% of major permanent collections in the U.S. and Europe

  • Less than 3% of museum acquisitions over the past decade have been of work by African American artists

  • Among museum curators, conservators, educators, and leaders, 84% are white, 6% are Asian, 4% are African American, 3% are Latina/o, and 3% have a mixed-race background

  • 46% … of U.S. museum boards are all white

  • 93% of U.S. museum directors are white

  • The top three museums in the world — the British Museum (est. 1753), the Louvre (est. 1793), and The Metropolitan Museum of Art (est. 1870) — have never had female directors

Sadly, but not surprisingly, the art market reflects these same biases.

  • 80% of the artists in NYC’s top galleries are white (and nearly 20% are Yale grads)

  • 77.6% of artists in the U.S. making a living from their work are white

  • Only five women made the list of the top 100 artists by cumulative auction value from 2011 to 2016

  • The discount for women’s art at auction is 47.6%; even removing the handful of “superstar” artists that skew the data, the discount is still significant at 28%

  • There are no women in the top 0.03% of the auction market, where 41% of the profit is concentrated

  • Overall, 96.1% of artworks sold at auction are by male artists

Despite the art world being disproportionately white, we saw increased engagement across all minority groups in attendance at U.S. museums and galleries between 2012 and 2017.

Source: National Endowment for the Arts, The 2017 Survey of Public Participation in the Arts

Rather than backing away out of frustration, people who feel that museums are not representing the public they serve are increasingly taking the fight into the museums. Here are just a few of the protests that were held in museums in 2018 alone:

  • Protests over the Brooklyn Museum hiring a white woman as chief curator for its African collection

  • Artist Michelle Hartney put up alternate wall labels at the Met highlighting Picasso’s and Gauguin’s poor treatment of women

  • Protests decrying the Met’s change to its admission policy as classist and nativist

  • Demonstrators filling the Whitney to protest its vice chairman’s ties to a tear gas manufacturer

  • Artists in a protest art show asking to have their work removed from the exhibition when the museum rented out its atrium to a defense contractor

  • Protests at the British Museum over an exhibit sponsored by BP

  • Digital artists held a guerrilla AR (augmented reality) exhibit at MoMA making a statement against elitism and exclusivity

  • Photographer Nan Goldin led the charge to shame the Sackler family for its role in getting people hooked on OxyContin by staging protests in the Sackler wings of several museums, leaving pill bottles and staging “die-ins”

Nan Goldin and P.A.I.N. (Prescription Addiction Intervention Now) protesting the Sackler involvement with the Harvard Art Museums

As if it’s not bad enough that the majority of work in art museums is by white males, much of the work that is not by white males was stolen during colonization. A recent report estimates that 90% of African art is outside of the continent.

Between the 1870s and early 1900s, Africa faced European colonization and aggression through military force, which included mass looting of African art and cultural artifacts. This art was brought back for display in museums in European countries, as well as in the U.S. There has been increased pressure to return the stolen art to Africa, and in 2018, we saw several protests on this front. The group Decolonize This Place took the protest to the Brooklyn Museum with signs that read, “How was this acquired? By whom? For whom? At whose cost?” and protestors at RISD demanded a sculpture looted from the Kingdom of Benin be returned.

Decolonize This Place activists protesting in the Brooklyn Museum

French President Emmanuel Macron set a new precedent when he commissioned research on how to handle France’s ~90,000 artworks from Africa. The result was a 109-page report recommending that France give back to Africa all works in their collections that were taken “without consent” from former African colonies.

France, of course, was not alone in colonization. Hundreds of thousands of African artifacts are housed in the U.K., Germany, Belgium, and Austria. The British Museum alone has over 200,000 items in its African collection. I predict pressure to return these artifacts (in the cases where they were ill-gotten) will only increase. I don’t expect people will settle for the “long-term loans” of works back to Africa that many museums are proposing in lieu of complete repatriation.

When Museums Signal Inclusion and Diversity, Good Things Happen

Museums have a lot of work to do to increase diversity and inclusion, but good things happen when they do, even when the gesture is only symbolic.

In 2018, Beyoncé and Jay-Z shot a video for their track Apeshit in the Louvre. Before you write this off as insignificant, you should know that it had an immediate and enormous impact, with Louvre officials crediting the video for increasing attendance by 25% over 2017 to an all-time record of ten million visitors in 2018.

No doubt Beyoncé and Jay-Z resonate strongly with a young and diverse audience (with over 100 million albums sold combined), and their video likely brought some fresh faces to the Louvre.

Similarly, many of the most-heralded art exhibitions of 2018 featured female artists, suggesting a strong appetite for some diversity in our museums and galleries. These include:

  • Hilma af Klint - Guggenheim

  • Tacita Dean - National Gallery, National Portrait Gallery, and Royal Academy

  • Adrian Piper - MoMA

  • Berthe Morisot - Barnes Foundation

  • Anni Albers - Tate Modern

  • Vija Celmins - SFMOMA

  • Tomma Abts - Serpentine Sackler Gallery

Museums that want to see growth in attendance should follow the example set by the Louvre and others by finding public ways to signal that they are open to both artists and visitors of all races and genders, even if they still have work to do in diversifying their collections and staff. Showing some self-awareness can go a long way while the longer-term problems are being solved.

Continued Pressure on Art Museums Will Drive Rapid Cultural Change That Will Transform the Art Market

The relationship between museums (as culture drivers and tastemakers) and galleries and collectors is highly interdependent. We know from studies that artists see major boosts in the market for their work when they are shown in major museum exhibitions.

“Auctions of valuable pieces tend to coincide with successful exhibitions.” Ahmed Hosny, Machine Learning For Art Valuation: An Interview with Ahmed Hosny

Given this dependency, I believe that once museums accelerate the diversification of the work they show (under pressure from an increasing number of protests), we will see the value of the art rise dramatically in the market.

We are already seeing some early signs of the market correcting for its indefensible biases. In 2018, Kerry James Marshall broke the record for the top-selling work by a living African American artist when his piece Past Times sold for $21.1M at Sotheby’s last May.

Past Times, Kerry James Marshall, signed and dated '97

Likewise, Jenny Saville set the record for the most expensive work sold by a living female artist when her 1992 painting Propped sold for $12.4M.

Propped, Jenny Saville, 1992

I believe these two records falling in the same year is just a very small signal of a massive market correction that will happen over the next two decades as we mature as a society and learn to see people as equals, regardless of race or gender. Those who move quickly to increase diversity will flourish, and those who don’t will risk losing their audience and becoming irrelevant.

Digital Transformation and the Dematerialization of Art

Phantom 5, 2018, Jeff Bartell

"Art is an experience, not an object." - Robert Motherwell

The second major force that I believe will shape the art world in 2019 (and for the next decade to come) is a strong trend towards embracing art + technology, specifically the digital transformation of art.

Source: International Telecommunication Union

We are living through arguably the most dramatic technological transformation in human history, and with half the world online now, I believe the future of art is inevitably digital. With music, we saw physical media like cassettes and CDs give way to dedicated hardware like iPods and MP3 players, and then finally to streaming services like Spotify and Pandora.

Source: IFPI Global Music Report 2018

We saw the same trend in publishing, with physical books losing market share to e-books and e-readers. Those devices were just an intermediary step to streaming audiobooks, which is now the fastest-growing sector of publishing by far.

Source: APA (Audio Publishers Association)

Despite rapid shifts towards digitization in other fields, most of us still think of canvas on a wall when we hear the word “art.” This is ironic given that Americans spend an average of 11 hours a day looking at screens and almost no time looking at the walls of their homes.

Source: TEFAF Art Market Report Online Focus 2017

Galleries are struggling or closing down precisely at a time when interest in art is rising on Instagram and at international art fairs. But increased interest does not always mean increased sales. Writer Tim Schneider captured this shift in his review of Art Basel Miami last year when he asked:

…if the fastest, and perhaps only, organically growing audience for art is more interested in being around it for a week, a few days, or even a night at a time rather than in owning it for a high price for much longer, what does that mean for everyone else?

I think we are seeing some early signs that art consumption is shifting away from physical ownership, as we saw with books and music, and toward the experiential, ushered in by the digital.

For centuries, physical ownership of art was required to enjoy it. Art was a sign of wealth and power, and collecting art was about saying “I own this art.”

LUIGI FIAMMINGO – Portrait of patron Lorenzo de’ Medici, called The Magnificent, c. 1550

With the increase in availability of the internet, we have seen a rise in social media consumption of art. Sharing selfies at museums and art fairs on social media signals your taste and sophistication without having to own physical artworks. And while a few dozen people may see the art you purchased at a gallery and hung on the walls of your home, hundreds to thousands of people instantly see the art selfies you share on your social profile. This has enabled art appreciation to be less about saying “I own this art” and more about saying “I like this art.”

Me at the Boston ICA hoping some of Albert Oehlen’s “coolness” will transfer to me in this selfie posted on Instagram

I believe as we become increasingly digital, the new message we send going forward will be “I support this artist.” As with the previous stages of “owning” and “liking,” “supporting” publicly links you back to the art and artists who you enjoy in a highly visible way. And having methods for supporting artists that do not require you to purchase or commission whole works of art greatly expands the pool of potential participants.

Few of us show off our CD collections these days; instead, we consume music through streaming and go to concerts where we take selfies and buy t-shirts that we share on social media as patronage proof points. I expect art to move in that direction, and would argue that it already has.

Physical possession of works that are created digitally provides no real advantage. Again, it is the same dynamic of dematerialization we are seeing in music and books. I gain very little by having a physical CD for every album I have access to on Spotify or a physical book for every story I have access to on Audible. Neither would be practical. With streaming, what I have lost in fetishizing tangible objects, I have gained in access to more albums and books than anybody could have dreamed of 25 years ago.

Does an art streaming service in the mode of Audible or Spotify sound ludicrous? Well, generation one of art streaming has been around for almost a decade and has over one billion users.

Source: Instagram

I’m talking about Instagram, of course. But Instagram is really just the Napster of art streaming, as it falls short of supporting most artists. Nevertheless, it is a solid proof point of our insatiable appetite for the digital consumption of art. I predict we will soon see a combination of the proven distribution and consumption model of Instagram paired with patronage models like Patreon and Kickstarter.

Last year, we saw a lot of experimentation around new models for funding artists from several promising startups exploring blockchain. Dada.nyc, where I am an advisor, has over 160K registered artists in its community. Over 100K drawings have been produced on its social media platform, where artists communicate with each other through drawings.

I started this drawing when I was in bed with Lyme disease in my knee. Artists from around the world responded. The conversation continues months later.

The Dada team is carefully working through how to create a market that does not just duplicate the current physical art market. They want to avoid building a system where only a few can afford to collect and an even smaller number of people are rewarded for their creative work. Dada dislikes the collecting of art as speculation and is constantly evaluating new models of patronage that can enable artists to focus on creating their work. Their goal is for the entire community to benefit each time patrons provide monetary support and to blur the lines between “patrons” and “artists,” as they believe creating art is beneficial for everyone.

Another blockchain art market that experienced significant traction and growth in 2018 is SuperRare. They provide a new revenue stream for artists (which helps fund creative projects) while giving patrons the ability to discover, buy, sell, and collect unique digital creations by artists from around the world.

Screenshot of my digital art collection in SuperRare

SuperRare completed almost 6,000 transactions, generating 602.76 ETH to date (over $70K) in less than a year since their launch.

https://www.dapp.com/dapp/SuperRare

Sure, these are not Instagram numbers just yet, but having a handful of startups (other notables include Portion.io, Known Origin, R.A.R.E. Art Labs, and Digital Objects) prove out the model is an important first step in building out any new market. It is also telling that despite the cryptocurrency crash, which devastated the majority of blockchain companies, all of these blockchain art markets are still in business and experiencing growth.

There are two important things to note about digital art markets like the ones above:

  • You don’t need to own the work to experience it. When I buy a work on SuperRare, I see the same image that everyone else can see for free.

  • Because of this, the joy in collecting digital art does not derive from denying other people access to art, but instead, in increasing access to art and artists you enjoy and want others to appreciate, as well.

Digital art is highly replicable and transmissible, so there is no benefit to keeping it to yourself. In fact, the value of the work (as with all art) only goes up as you share it more broadly. The message with collecting in a digital age is less and less “I want you to know how powerful I am - I own this thing that nobody else can own” and is instead “I want you to know I support this artist because their work is awesome, and I’m excited to share it with as many people as possible.”

So why buy digital art if everyone else can see the same image for free? It’s simple: Because you can’t expect artists to continue creating if you don’t support them. I believe it’s not the art itself that we should revere, but the people making it. Too often we celebrate and fetishize individual works of art long after the geniuses that created them have died penniless. Rather than cater to speculative or extrinsic values, I predict we will see several new digital art streaming services built on the intrinsic pleasure we derive from art. We’ve learned we don’t need to own a physical, re-sellable book in order to enjoy a great novel, or a piece of vinyl to appreciate music. The same will be true for art.

It is important to remember that ownership and speculation on the part of collectors is not a necessary ingredient to producing great art. That is just one way of making sure artists have enough money to survive and continue working, and there is a good chance it is not the most effective (nor the best) model for artists.

Of course, many artists will choose to continue to work in traditional physical media like painting and sculpture. We will always have amazing museums full of physical artwork, and I couldn’t be more thankful for that. There will always be galleries for buying and selling physical art, just as we still have brick-and-mortar bookstores and music stores. But I think the digital transformation of art is inevitable and coming faster than most people expect. I also strongly believe that this shift is healthy and presents an opportunity to reframe (pun intended) how we treat artists and consume art.

Summary and Conclusion

Lots of people I know don’t like making or reading predictions. The primary complaint I hear is that predictions are either “boring and accurate” or “entertaining but outrageous.” I, on the other hand, love making predictions. I feel like the process is similar to making art in that I can use both my imagination and my powers of observation and reasoning to show the world how I see things as they could be.

As a pseudo-futurist and techno-optimist, I relish the idea that we can build a world where an increasing number of people can participate in the joys that art has given me in my life. I believe the macro forces and trends that I am seeing in the world support that idea.

But I don’t want to let myself off the hook too easily here, so what am I actually predicting that can be measured?

  • First, a “Moore’s Law” of diversity in art. Inclusion and diversity in art will double annually until we reach parity, as measured by:

    • An increase in the price of works sold by women and minorities at auction;

    • An increase in the number of women and minorities in positions of power at museums and in the art trade;

  • Second, an increase in the number of people interested in art without a corresponding increase in the number of collectors;

  • Third, the launch of at least one art streaming service in 2019 and a shift towards this model over the next decade;

  • And fourth, a shift from artists, art journalism, and art fairs to diversity and tech as the key topics for all of 2019.

I hope you enjoyed this year’s predictions! Whether you agree or disagree, I am always excited to hear from Artnome readers. Leave your thoughts in the comments below or hit me up on Twitter at @artnome or e-mail me at jason@artnome.com.


How Rembrandt and Van Gogh Mastered The Art of the Selfie

January 13, 2019 Jason Bailey
An average of every self-portrait painted by Rembrandt

I recently read that the average millennial will take an astronomical 25,000 selfies in their lifetime — almost one per day. This got me thinking about the history of selfies. Before the invention of the camera, artists were the only ones capable of making selfies (I know, what a tragedy, right?). So in a weird way, you could argue Rembrandt — known for painting an enormous number of self-portraits — was the Paris Hilton of his day. Sound crazy? It’s not.

Self-portrait in a hat with white feathers, Rembrandt, 1635

Portrait in a hat with pink feathers, Paris Hilton, 2018

Though the number is somewhat contentious, Rembrandt was known to have created close to 100 self-portraits (over 40 of them as paintings). That may not sound like much by today’s selfie standards, but it’s huge when compared to the painters of his day — especially when you consider that it accounts for 10% of his total artistic output. Think about it: he is a painter by trade, and 10% of his time on the job was spent making paintings of himself. If that’s not some Paris Hilton-level selfie action, then I don’t know what is.

The Old Masters Loved Showing Their Bling

Self-Portrait with a Sunflower, Anthony van Dyck, c. 1633 (note the flexing of the bling he received from his patron, the English monarch Charles I)

Maybe you are thinking, “Jason, Rembrandt is the greatest painter of all time. Surely he was motivated by something more noble than the vanity that motivates today’s selfie-snapping celebrities.” Well, actually, not so much. Consider this: old master portrait artists were often given gold necklaces by their wealthy patrons. This became such a big deal that the painter Titian decided to bling out his selfies by including the gold chains he received from his patron, the Emperor Charles V. It kicked off a fad among portrait artists including Van Dyck, Vasari, Rubens, Bandinelli, and others.

Self-Portrait, Titian, 1546 (wearing the golden chain that was given to him by the Emperor Charles V in 1533)

Self-Portrait, Baccio Bandinelli, 1530 (wearing a gold chain with a pendant bearing the symbol of the chivalric Order of St. James)

Like the chains worn by today’s hip hop artists, these were a status symbol — they showed that an artist had “arrived” and was at the top of their game. Unfortunately, Rembrandt had no wealthy patrons when he was first starting out. Undeterred, he decided to “fake it till he made it” and painted imaginary gold chains on his self-portraits to suggest that he had higher status and more power than he actually did. Talk about doctoring a selfie for purposes of vanity and status.

Self-Portrait with Beret, Gold Chain, and Medal, Rembrandt, 1640

Jay-Z: Portrait with Ball Cap, Gold Chains, and Brooch

The Dutch Love Their Selfies

Selfie with Van Gogh’s 1888 Self-Portrait Dedicated to Paul Gauguin (on a research visit to Harvard Art Museum)

“They say—and I am willing to believe it—that it is difficult to know yourself—but it isn’t easy to paint yourself, either.” - Vincent van Gogh

In terms of selfies, Van Gogh was not far behind Rembrandt, having painted 35 self-portraits in just one short decade of activity. That is roughly 4.2% of his total output and more than three self-portraits a year!

Van Gogh believed that painting could be reinvented through portraiture and fantasized about building a colony of artists working together. He also knew that Japanese woodblock printers often exchanged prints with one another and encouraged his besties Gauguin and Bernard to exchange self-portraits with him.

As Van Gogh wrote:

It clearly proves that they [Japanese wood block printers] liked one another and stuck together, and that there was a certain harmony among them [. . .] The more we resemble them in that respect, the better it will be for us.

Van Gogh had essentially come up with an old-school social network where people could share and comment on each other’s selfies, not unlike Instagram or Snapchat. He wrote to his brother Theo sharing his thoughts on the self-portraits he received from Gauguin and Emile Bernard. Here is what Van Gogh’s comments might have looked like on Instagram (using actual quotes from correspondence between the artists).

Sadly, Van Gogh and Gauguin’s friendship famously soured, and Gauguin sold the portrait Van Gogh painted for him for about three hundred francs after making a few restorations.

Van Gogh’s portraits function like a visual diary. While his early works do not feature gold chains, they are painted in a dark, Rembrandt-esque palette and feature conservative clothing and a pipe, suggesting Van Gogh may still have been at least a little preoccupied with keeping up appearances.

Self-Portrait with Dark Felt Hat, Vincent Van Gogh, Paris, 1886

Self-Portrait with Pipe, Vincent Van Gogh, Paris, 1886

Self-Portrait, Vincent Van Gogh, Paris, 1886

By 1887 (just one year later), we see Van Gogh rapidly exploring self-portraits in the styles of other artists, including influences from Impressionism, Pointillism, and Japanese woodblock prints. I believe we also see a shift from portraits focused on external appearance towards portraits capturing his psychological inner life.

Self-Portrait, Vincent Van Gogh, Paris, 1887

Self-Portrait with Bandaged Ear, Vincent van Gogh, Arles, 1889

Self-Portrait, Vincent van Gogh, Saint-Rémy, 1889

Averaging Rembrandt and Van Gogh Self-Portraits

It dawned on us that, with so many selfies, Rembrandt’s and Van Gogh’s self-portraits comprise a pretty cool data set. After brainstorming with Artnome data scientist Kyle Waters, we decided to create “average” self-portraits of each artist by combining their paintings into a single image. Kyle settled on an approach similar to the technique he employed in averaging Van Gogh’s paintings to show why Van Gogh changed his color palette.

We started by importing all the self-portrait images for Rembrandt and Van Gogh from the Artnome database and resized them all to 400 x 400 pixels. I know, kind of a sin to change the aspect ratios of famous paintings, but this made it easier for us to "add" the images together, i.e., taking the red value of the top-left pixel in Self-Portrait with Bandaged Ear, adding it to the red value of the top-left pixel of Self-Portrait Dedicated to Paul Gauguin, and so on.

We then calculated the simple arithmetic average by dividing the summed pixel values by the total number of paintings.
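For readers who want to try this at home, here is a minimal sketch of the approach in Python. The folder layout, function name, and library choices are our own for illustration; Kyle’s actual pipeline may differ.

```python
# A minimal sketch of per-pixel image averaging, assuming the paintings
# have already been downloaded as JPEGs into a single folder.
from pathlib import Path

import numpy as np
from PIL import Image

def average_image(folder: str, size: int = 400) -> Image.Image:
    """Resize every image to size x size and return their per-pixel mean."""
    paths = sorted(Path(folder).glob("*.jpg"))
    total = np.zeros((size, size, 3), dtype=np.float64)
    for path in paths:
        img = Image.open(path).convert("RGB").resize((size, size))
        total += np.asarray(img, dtype=np.float64)
    mean = total / len(paths)
    return Image.fromarray(mean.astype(np.uint8))

average_image("rembrandt_self_portraits").save("average_rembrandt.png")
```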

We were pretty psyched with the results. You can check them out below.

The average Rembrandt self-portrait

The average Van Gogh self-portrait

While there is a lack of detail, I actually love the results. You can definitely make out which is Rembrandt and which is Van Gogh. Rembrandt’s composite features an earthy brown color palette, while Van Gogh’s yellows and blues average a greenish hue, with patches of orangey-red where his hair and beard were most commonly depicted. It is also clear from the composites that Rembrandt preferred to paint himself looking to our right, whereas Van Gogh most often looks to our left.

We thought the effect was pretty cool, so we tried it on a few other thematic subcategories, creating an average-based image for Van Gogh’s portraits of Madame Ginoux and another for his sunflowers.

Visual average of every portrait of Madame Ginoux by Van Gogh

Visual average of the sunflower paintings by Van Gogh

As with the self-portraits, these were visually interesting, as you can still make out some of the features and shapes without one clear painting dominating.

Conclusion

Next time someone gives you a hard time for spending 15 minutes fussing with filters on your selfie, remind them that Rembrandt spent a full 10% of his career perfecting selfies. Who knows? Maybe there is even a market for an “old masters” app that lets people add gold chains to their selfies.

As always, thanks for reading. As we mentioned, this was inspired by Kyle Waters’ excellent work in averaging Van Gogh’s paintings for our post about the shifts in his color palette. We have a third post in the series that will focus on establishing “the average” Van Gogh work using several different techniques. Sign up for our newsletter and we’ll be sure to alert you when it goes live.

If you have questions or suggestions you can always reach me on Twitter at @artnome or email me at jason@artnome.com.


Painted Portraits Inspired By Neural Net Trained on Artist’s Facebook Photos

January 9, 2019 Jason Bailey
Crazy Eyes, Liam Ellul, 2018

I’ve come to learn that if a person can run neural networks and has a deep interest in art, they are probably a pretty creative and interesting person. Australian artist Liam Ellul is no exception.

Ellul recently shared on Twitter a new portrait series he is working on called Just Tell Me Who To Be. His portraits are simultaneously of nobody in particular and yet also everybody in his life. The series explores identity through four 12”x12” acrylic-on-cotton paintings. Each was painted directly onto printouts of images Ellul created by training a GAN (generative adversarial network) on 10k photographs from his Facebook account.

Without going into a full description of how GANs work (you can find that here), the process involves a neural network inventing new images based on a set of training images provided by the artist. So in Ellul’s case, he is essentially asking the GAN, “If I give you photos of all the important people and moments in my life, can you go and invent me some new people and moments?” After training, the GAN can output a large number of potential images by sampling points from its “latent space,” which you can see explored in the animated GIF below.
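To make “latent space” a little more concrete, here is a hedged sketch of how an interpolation animation like the one below is typically produced. The variable names and the 512-dimensional latent size are illustrative assumptions on our part, not details of Ellul’s actual setup, and the trained generator itself is omitted.

```python
# Minimal latent-space interpolation sketch (illustrative, not Ellul's code).
# Each interpolated vector would be fed to the trained GAN generator to
# render one frame of the animation.
import numpy as np

LATENT_DIM = 512  # assumed latent vector size; depends on the GAN used

def interpolate(z_start: np.ndarray, z_end: np.ndarray, steps: int = 30):
    """Yield evenly spaced latent vectors between two sampled points."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * z_start + t * z_end

# Sample two random points in latent space, then walk between them.
z_a = np.random.randn(LATENT_DIM)
z_b = np.random.randn(LATENT_DIM)
frames = list(interpolate(z_a, z_b))  # pass each vector to the generator
```

Smooth walks like this through latent space are what give GAN interpolation videos their characteristic morphing effect: nearby latent vectors decode to similar faces.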

Small snippet from the Interpolation video from Ellul’s GAN trained on his personal Facebook and Google photos.

Ellul then shortlisted a dozen of the faces seen in the GIF above, printed them out, and laid them around his apartment for a few days. According to Ellul:

After extracting all the faces from my archives — the data preparation was somewhat manual — I found myself looking at thumbnails most of the time. Pre-processing in this context reminded me of mixing colors on a palette, but instead of colors, I was mixing forms. It became pretty clear which ones had the strongest hold on me — then I painted them. The common thread was that the selected outputs gave me an impression of something I identified with in a really deep way. Like, out of the latent space, it touched on something that I couldn’t have represented unless I saw it first.

Like Dropping Your Family Photo Album Into a Blender

Self Portrait (now alive), Liam Ellul, 2018 - GIF alternating between the GAN printout and the finished painting

Self Portrait (now alive), Liam Ellul, 2018

Self Portrait (now alive), Liam Ellul, 2018 - (reference image)

Though some AI artists make their own training image sets — notably Anna Ridler, with her painstaking photographic collection of tulips, and Helena Sarin, who trains GANs on her own drawings and paintings — it is rare. For practical reasons (scale and availability), most AI artists select large public data sets to train GANs. However, because these public data sets are widely available, as are the GANs used to process them, there are signs that the results are becoming increasingly homogenous.

Ellul bucks this trend by not only using his own materials, but by using the most personal materials possible: photographs from his own life’s relationships, experiences, and memories, which are no doubt loaded with personal meaning and associations. He owns the material, in the truest sense of the word, as he has quite literally lived it. From Ellul:

It was a surprising realization just how much data I have created over my life and how effectively it can be harnessed in the creative process. Some look like me physically, but the face and expression I would never pull in a photo — it’s this surreal look that captures a feeling and encourages me to express it. Others look like a blend of me and a friend with similar surreal expressions.

Once I was happy with the outputs of the model, I spent a long while just watching the waves of eerily familiar faces that it produced. Often, I’d recognize a face as my own or fused with a close friend – despite never being captured with that expression – certain frames would perfectly resonate with a part of me when I saw them.

Number 3, Liam Ellul, 2018

Number 3, Liam Ellul, 2018 (Source image from GAN)

Fascinated by Ellul’s use of GANs as a departure point or inspiration for creating physical paintings, I asked him about both his artistic and technical background.

Ellul shared that he has been creating portraits as a sort of visual journal since his grandfather first taught him to draw with charcoal when he was 10 years old (though he later switched to painting in acrylic). He initially went to school for law but realized “it wasn’t something I wanted to do professionally,” and he eventually shifted his focus to a rapidly growing interest in analytics. This led to Ellul and a friend launching “a small company focused on agricultural crop analysis and research.” It was there that Ellul learned about neural networks while testing predictive models for plant growth. Again from Ellul:

The first time I saw a GAN was in 2017 in Alec Radford’s GitHub repo, where he showed the generation of bedrooms, faces, and album art. My brain broke. Then mid-last year I saw the incredible high-resolution faces you could get with GANs — something clicked in my brain and I felt compelled to do this portrait series.

Self Portrait (With My Friends), Liam Ellul, 2018

Self Portrait (With My Friends), Liam Ellul, 2018 (Source image from GAN)

Ellul now works in strategy and product development at Microsoft and creates his artwork on the side. I asked Ellul if he had any upcoming projects and, if so, what was next:

Yes! I love the adventurous nature of this area and the experience of running through a personal gauntlet to get these paintings out! In terms of what’s next, I have two ideas bubbling away that are very much still coming together. Network design and exploring ways networks can be linked together is something I will put more time into as I develop my approach. I am also going to see if I can make the switch from acrylics to oils!

Conclusion

While the purist in me loves seeing work created digitally staying digital, I suspect we will increasingly see artworks executed in a variety of media as GANs come into their own as a tool for augmenting creativity (imagine what a GAN-inspired sculpture might look like). I think this is an interesting direction, and I’m encouraged by the exploration and work of artist/technologists like Ellul and his recent portraits.

As always, feel free to reach out to me at jason@artnome.com with any questions or suggestions. You can also hit me up on Twitter, my social media of choice, at @artnome.


DeepDream Creator Unveils Very First Images After Three Years

January 2, 2019 Jason Bailey
Cats, one of the first DeepDream images produced by its inventor, Alex Mordvintsev

In May of 2015, Alex Mordvintsev’s algorithm for Google DeepDream was waaay ahead of its time. In fact, it was/is so radical that its “time” may still never come.

DeepDream produced a range of hallucinogenic imagery that would make Salvador Dali blush. And for a month or so, it infiltrated all of our social media channels, all of the major media outlets, and even became accessible to anyone who wanted to make their own DeepDream imagery via a variety of apps and APIs. With the click of a button, I turned a photo of my wife into a bizarre gremlin with architectural eyes and livestock elbows.

Image I made using a DeepDream app in August of 2015, just three months after DeepDream was invented by Alex Mordvintsev

And then — “poof” — DeepDream just kind of disappeared. It is the nature of art created with algorithms that when the algorithms are shared with the public, the effect quickly hits a saturation point and becomes kitsch.

I personally think DeepDream deserves a longer shelf life, as well as a lot of the credit for our current fascination with machine learning and art. So when Art Me Association, a non-profit organization based in Switzerland, recently asked if I wanted to interview Alex Mordvintsev, developer behind the DeepDream algorithm, I said “yes” without hesitation.

And when Alex shared that he recently found the very first images DeepDream had ever produced and then told me that he had never shared them with anyone, I could hardly contain myself. I immediately asked if I could share them via Artnome. Well, to be honest, I first asked if I could buy them for the Artnome digital art collection (collector’s instincts), but it turns out Google owns them and has let Mordvintsev share them through a Creative Commons (CC) license. Something tells me that Google probably doesn’t need my money.

Father Cat, May 26, 2015, by Alexander Mordvintsev

For me, Mordvintsev’s earliest images from May 2015 are as important as any other image in the history of computer graphics and digital art. I think they belong in a museum alongside Georg Nees’ Schotter and the Newell Teapot.

Custard Apple, May 16, 2015, by Alexander Mordvintsev

Why do I hold so much reverence for the early DeepDream works? DeepDream is a tipping point where machines assisted in creating images that abstracted reality in ways that humans would not have arrived at on their own. A new way of seeing. And what could be more reflective of today’s internet-driven culture than a near-endless supply of snapshots from everyday life with a bunch of cat and dog heads sprouting out of them?

I believe DeepDream and AI art in general are an aesthetic breakthrough in the tradition of Georges Seurat’s Pointillism. And to be fair, describing Mordvintsev’s earliest DeepDream images as “just a bunch of cat and dog heads emerging from photos” is about as reductive as calling A Sunday on La Grande Jatte “a bunch of dots.”

That Mordvintsev did not consider himself an artist at the time and saw these images as a byproduct of his research is not problematic for me. Seurat himself once shared: “Some say they see poetry in my paintings; I see only science.” Indeed, to fully appreciate Mordvintsev’s images, it is also best to understand the science.

I asked Mordvintsev about the origins of DeepDream:

The story behind how I invented DeepDream is true. I remember that night really well. I woke up from a nightmare and decided, at 2:00 AM, to try an experiment I had had in mind for quite a while. That experiment was to try and make a network add details to a real image, to do image super-resolution. It turns out it added some details, but not the ones I expected. I describe the process like this: neural networks are systems designed for classifying images. I’m trying to make it do things it is not designed for, like detect traces of the patterns it is trained to recognize and then amplify them to maximize the signal of the input image. It all started as research for me.
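Mordvintsev’s description maps onto a short gradient-ascent loop: run an image through a pretrained classifier, pick a layer, and nudge the pixels to amplify whatever that layer already detects. The sketch below is our reconstruction of that core idea in PyTorch, not his original code; the network, layer, step size, and iteration count are all assumptions, and real DeepDream adds input normalization, multi-scale “octaves,” and jitter on top.

```python
# A minimal activation-amplification loop in the spirit of DeepDream
# (an illustrative reconstruction, not Mordvintsev's original code).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(pretrained=True).eval()

# Capture the activations of one mid-level layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(out=output)
)

img = Image.open("input.jpg").convert("RGB")
x = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model(x)
    loss = activations["out"].norm()  # "maximize the signal" at this layer
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)  # normalized step
        x.clamp_(0.0, 1.0)  # keep pixel values in a valid range
        x.grad.zero_()

transforms.ToPILImage()(x.detach().squeeze(0)).save("dream.jpg")
```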

I asked Alex what it was like to see his algorithm spread so quickly to so many people. I thought he might have regretted it getting “used up” by so many others, but he was far less shallow than me in this respect and took a broad-minded view of his impact:

I should probably have been involved in talking about it at that moment, but I was more interested in going deeper with my research and wanted to gain a deeper understanding of how things were working. But I can’t say that after three years of research that I understand it. So maybe I was over-excited in research at the moment.

I think it is important that everyone can participate in it. The idea that Iana, my wife, tries to convey is that this process of developing artificial intelligence is quite important for all the people and everyone can participate in it. In science, it isn’t about finding the answer, it is more about asking the right question. And the right question can be brought up by anybody.

The way I impacted society [with DeepDream] is that a lot of people have told me that they got into machine learning and computer vision as a result of seeing DeepDream. Some people even sent me emails saying they decided to do their Ph.D.s based on DeepDream, and I felt very nice about that. Even the well-known artist Mario Klingemann mentioned that he was influenced by DeepDream in an interview.

Indeed, I reached out to artist Mario Klingemann to ask him about the significance of DeepDream for him and other prominent AI artists. He had this to say:

The advent of DeepDream was an important moment for me. I still remember the image of this strange creature that was leaked on reddit before anyone even knew how it was made, and I knew that something very different was coming our way. When the DeepDream notebook and code were finally released by Google a few weeks later, it forced me to learn a lot of new things; most importantly, how to compile and set up Caffe (which was a very painful hurdle to climb over), and also to throw my prejudices against Python overboard.

After I had understood how DeepDream worked, I tried to find ways to break out of the PuppySlug territory. Training my own models was one of them. One model I trained on album covers which, among others, had a "skull" category. That one worked quite nicely with DeepDream since it had the tendency to turn any face into a deadhead. Another technique I found was "neural lobotomy," in which I selectively turned off the activations. This gave me some very interesting textures.

Where I had seen sharing the code to DeepDream as a mistake, as it quickly over-exposed the aesthetic, Mordvintsev saw a broad and positive impact on the world which would not have been possible without it being shared. Mordvintsev also took some issue with my implication that DeepDream was getting “old” or had been “used up.” It turns out that my opinion was more a reflection of my lack of technical abilities (beyond using the prepackaged apps) than a reflection of DeepDream’s limitations as a neural net. He politely corrected me, saying:

Maybe you played with this and assumed it got boring. But lately, I started with the same neural network, and I found a beautiful universe of patterns it can synthesize if you are more selective.

I was curious why so many of the images had dog faces. Alex explained to me that he was using a network pretrained on ImageNet, a standard benchmark dataset for image classification established around 2010. ImageNet includes 120 categories of dog breeds to showcase “fine-grained classification.” Because a network trained on ImageNet dedicates so much of its capacity to dog breeds, it develops a strong bias toward them. Alex points out that others have applied the same algorithm using models trained on MIT’s Places Image Database; those images tend to highlight architecture and landscapes rather than the dogs and birds favored by ImageNet-trained models.

I asked Mordvintsev if he now considers himself an artist.

Yes, yes, yes, I do! Well, actually, we are considering my wife and I as a duo. Recently, she wanted to make a pattern for textiles and wanted a pattern that tiled well, and I sat down and wrote a program that tiled. And most generative art is static images on screen or videos, and we are trying to get a bit beyond that to something physical. We recently got a 2.5D printer that makes images layer by layer. I enjoy that a lot. But our artistic research lays mostly in this direction: moving away from prints into new mediums. Recently, we had our first exhibition with Art Me at Art Fair Zurich and we had sponsorship from Google. We are interested in showing our art to the world and trying to explain it to a wide audience.

Alex and Iana Mordvintsev prepping to show their latest work at Art Fair Zurich

While I appreciated DeepDream from the beginning, I felt it became kitsch too quickly as a result of being shared so broadly. Speaking with Alex made me second-guess that. It’s now clear to me that Alex did the world a service by making his discovery so broadly available, and that he still sees far more potential in the DeepDream neural net (and he would know). There are some critics who just don’t “get” AI art, but as Seurat said: “The inability of some critics to connect the dots doesn't make Pointillism pointless.”

Above: Alex Mordvintsev’s NIPS Creativity Art Submission

As always, thanks for reading! If you have questions, suggestions, or ideas, you can always reach me at jason@artnome.com. And if you haven’t already, I recommend you sign up for the Artnome newsletter to stay up to date with all the latest news.


How Artnome Stumbled Into Writing About The Three Biggest Art Stories of 2018

December 31, 2018 Jason Bailey

Reading all the “2018 art year in review” articles over the last few days has really helped hammer home for me that the three most talked-about stories in art in 2018 were:

  • Blockchain and art

  • The Banksy shredding

  • The AI art sold at Christie’s

While I don’t think of Artnome as a traditional art news site, we had some of the earliest stories on all three topics. We also ended up among the top of Google’s search results for all three stories. How did this happen?

Well, it was mostly dumb luck. But I always enjoy and appreciate it when blog authors and entrepreneurs candidly share the stories and the data from behind the curtain. So this post will blend our 2018 year in review with a backstage look at Artnome, warts and all.

Part One: Blockchain, Art, and “Being There”

Someone really needs to remake the movie Being There because it is a great cultural touchstone, but most people are too young to get the reference these days. In this rags-to-riches movie, a simple-minded, sheltered gardener named Chance, who knows only about gardening and what he’s learned from daytime TV, is forced to leave his home and enter the great big world after his employer passes away. Through a series of hilarious twists, Chance the gardener becomes “Chauncey Gardner” after he is mistaken for an upper-class gentleman. The story culminates with Chance giving the president of the United States basic gardening advice, which the president takes as sage wisdom on the nation’s economy.

In 2018, I felt like the Chauncey Gardner of the art world, only instead of learning all I know from television, I had the internet. It started at the very end of 2017 when I spent half a day researching and writing about blockchain and art. I published a short blog post called The Blockchain Art Market Is Here and went to bed thinking nobody would ever read it. The next day I searched “blockchain art” on Google, only to discover my article had somehow stumbled to the top of the search results (where it stayed for most of the year). Within days I was getting dozens of emails with detailed questions about art and blockchain.

Traffic for my article The Blockchain Art Market Is Here followed a trajectory similar to the crypto markets… down and to the right

Then came interview requests and invitations to speak on panels at conferences all around the world as an expert in blockchain and art. I had somehow become an accidental expert despite knowing very little. To fix this, I went to a lot of conferences and spoke with a lot of folks who were much smarter than I am. I wrote a few more articles, started a podcast, and learned enough to be able to moderate two panels in London at Christie’s Art + Tech event. It helped that my panels were loaded with brilliant, dynamic, cutting-edge thinkers. All I really had to do was get out of their way.

Blockchain had exploded in the art world, and I had just written the right article at the right time. Whether you cared about blockchain or not, you were forced to express your opinion or risk being left out of the conversation.

Then the cryptocurrency market started to tank, and the only people left talking about blockchain and art either truly believed in it and were building really cool stuff or were late to the party and did not realize the hype train had left the station.

With no more requests for speaking engagements, I went back to doing what I enjoy most: writing about the crazy stuff at the intersection of art and tech that I find fascinating.

Part Two: AI Art Gets Awesome

Robbie Barrat, AI Generated Nude Portrait #1, 2018

Early in 2018, I became obsessed with the Twitter feed of @DrBeef_, a hyper-creative teenage artist named Robbie Barrat from West Virginia. Back in April, we became friends after I interviewed him and purchased some of his AI Nudes. I was a huge fan, and Robbie was (and still is) really generous in helping me better understand how artists are using GANs (generative adversarial networks) to make really cool new art.

However, almost nobody read my interview with Robbie (AI Art Just Got Awesome) in the first week, so I didn’t really think much of it. Then two months after I initially published the article, I noticed a spike in the number of people reading the interview.

Traffic for my interview with Robbie Barrat, AI Art Just Got Awesome

Two things had happened. First, a bunch of other media outlets had picked up on Robbie’s work. Second, Christie’s was heavily promoting that it was going to be the first to sell an AI artwork at auction. The work they were selling was by a French art collective called Obvious, whom I’d also been friendly with on Twitter.

Portrait of Edmond Belamy, 2018, Obvious

Unfortunately, Obvious had made some poorly thought-out public claims about the AI being responsible for making the art, implying no real human involvement. The media ate that up and ran like crazy with it, undercutting the brilliant work that many AI artists had been doing for years by further suggesting humans had no role in creating AI art. Additionally, Obvious had borrowed heavily from the work of Robbie Barrat and did not do a great job of crediting him. This made them pariahs in the AI art community.

Still, I sympathized with Obvious. There was no way they could have predicted that they would end up on the world stage having their every word and action scrutinized. So when Hugo Caselles-Dupré, the tech lead from Obvious, confided in me that the media’s version of the story was out of control and he wanted to come clean with the real story to smooth things over with the AI art community, I obliged. The interview, initially published under the title The AI Art At Christie’s Is Not What You Think, went for well over an hour and was the first article where Obvious acknowledged that they borrowed heavily from Robbie Barrat.

Traffic from my interview with Obvious technical lead Hugo Caselles-Dupré, titled The AI Art At Christie’s Is Not What You Think

The interview drew more attention when it was cited by many other outlets, including The Verge, The Art Newspaper, Artsy.net, and Smithsonian.

In less than a year, I had stumbled from becoming an accidental expert in blockchain at the exact right time to becoming an accidental expert in AI art at the exact right time. When I had first started writing about AI art and GANs in April, I had no reason to believe anyone in the mainstream would care. Now I am headed to Bahrain this March to moderate two panels on AI and creativity with a bunch of the artists I really admire, including Robbie Barrat. Life is strange and unpredictable.

Part Three: Myth Busting Banksy


The self-shredding Banksy was the perfect “art” story for the mainstream media. If you only know one or two living artists, chances are Banksy is one of them. And sadly, the most popular stories surrounding art typically focus on two areas: first, works that are either intentionally or accidentally destroyed; and second, works that sell for far more or far less than expected. So when the Banksy painting that was at auction at Sotheby’s went up in value by shredding itself during an auction, we had the perfect storm.

I felt a bit like an ambulance chaser writing about the Banksy shredding, but as a blogger interested in art and tech, it felt natural for me to write about the device and how I thought it worked (or didn’t work). This was a quick article: I polled my father and brothers (who are all engineers) on their thoughts and pumped out an article in an hour or two, and it became the most popular Artnome article of the year with over 43K page views.

Traffic from my article Myth Busting Banksy

You’ll notice this article did not have the SEO staying power of the others. Seemingly everyone weighed in on it for about a week and then forgot about it. In this case, the enormous spike in traffic was largely due to other better-known outlets picking up our story and linking back to it, most notably, Boing Boing and the AV Club (for which we are always grateful).

Part Four: To The Moon! …Maybe

Around early October, I started thinking I was on my way to 100K visitors a month, which felt mind-blowing for a blog that averaged one post a month and mostly focused on data and art history (the above-mentioned stories notwithstanding). I fantasized about the traffic growth I might get if I wrote with more frequency. To find out, I stayed up later on weeknights and wrote on both weekend days instead of just one. Aaaaand… my traffic came crashing back to earth.

Pageviews by month across all of Artnome since the site started in June of 2017

I had made two mistakes: A) I misread three good months of increasing traffic as a solid trend, and B) I assumed more content would automatically mean more traffic. This is a pretty bad mistake for a guy who has spent almost two decades in digital marketing for his day job. It’s much easier to see what happened if you look at it from a weekly or even daily view instead of monthly.

Pageviews by week across all of Artnome since the site started in June of 2017

Pageviews by day across all of Artnome since the site started in June of 2017

As becomes obvious on the daily pageviews chart, the bulk of my record-breaking months for traffic came from one or two days - not a sign of smooth and steady growth, just a few outliers. But I fell victim to seeing what I wanted to see rather than what was there.

Part Five: Onward… The Good News

So buried under the outliers, there is actually some really solid growth for Artnome in 2018. And it is built not on the huge number of people chasing stories about shredded Banksy paintings, but on really intelligent and creative people looking to learn more about art, tech, and data.

In fact, six of the top seven Artnome posts this year really had nothing to do with news at all. In particular, two articles I wrote on generative art, Why Love Generative Art and Generative Art Finds Its Prodigy, performed extremely well and continue to drive traffic.


In many ways this is a relief. If the secret sauce to growing Artnome was to race against thousands of other news outlets to write a high volume of short articles about artworks getting damaged, I couldn’t compete (and wouldn’t want to).

Instead, I think there is a large audience of folks who want articles that go a bit deeper into tech and art, whether it is:

  • Using data to highlight new discoveries like our recent post on Van Gogh’s shift in color palette

  • Providing an in-depth look at the arms race for compute power in AI art

  • Drawing attention to the need for better data and analytics on art

  • Showing how forgery and misattribution flourish in the absence of good data

  • Highlighting innovators who are trying to make the world better for artists

  • Sharing stories about artists and art movements who don’t get nearly as much attention from the art world as they deserve

As for growing Artnome, I think I will listen to the sage wisdom that Chauncey Gardner gave to the president of the United States on growing the economy in the movie Being There.


Thanks for reading. If you have thoughts or questions, you are always welcome to hit me up at jason@artnome.com. Here’s to wishing you and your family a happy and productive 2019!


Artist Cryptograffiti Sets Auction Record For Least Expensive Art

December 28, 2018 Jason Bailey

“I’m excited about a future where micropayments are omnipresent. Artists paid by the view, writers by the poem, musicians by the listen.” - Cryptograffiti

In a recent auction designed to sell to the lowest bidder, artist “Cryptograffiti” sold his elegant work Black Swan, a collage made from a single dollar bill, for $0.000000037, making it the least expensive artwork ever sold at auction. To understand why, we recently spoke with the artist.

Like Banksy and Shepard Fairey, Cryptograffiti’s origin story begins with street art, only his has a uniquely Silicon Valley twist. Around 2011, Cryptograffiti left a job at Apple to launch a startup inspired by a Myspace feature called “Top Eight.” His startup’s product allowed you to share your favorite photos in a tangible piece of hardware:

The “Top Eight” was really fascinating to me because this was back when social networks were really picking up steam. The psychology behind why that was such a popular feature was really interesting. I thought that if I could make a product that encapsulated that, then the product would also be popular… kind of a modern take on lockets. You can wear a photo, and then there was an app so you can also tell people which photo you were wearing and why.

It turned out that the person helping Cryptograffiti develop the app was really into Bitcoin and was interested in being paid in cryptocurrency. This got Cryptograffiti looking much more closely at Bitcoin and blockchain.

It was pretty clear to him that blockchain and cryptocurrency would eventually make the old banking system obsolete, so he began exploring this idea of making art using materials from the dying banking system (old credit cards and paper money) to “help explain this new era that was coming” ushered in by currencies like Bitcoin.

Cryptograffiti then had an “a ha” moment in late 2012 when he learned about micropayments.

I started hearing about micropayments as being the future: essentially being able to pay for things in little bits that wouldn’t be possible otherwise because of the minimum fees that come with credit cards…

…I have artists in my family and I was aware of the trials and tribulations that they had in the traditional art world. So I started to think of different ways that crypto and micropayments could be used specifically for artists as new revenue channels, and that got me to thinking about doing street art with the QR codes attached, and if people liked the work, then they could send over some Bitcoin. There were a number of different things that made me want to go all in, and in 2013, my startup was only doing “okay,” and it just didn’t seem as fascinating as this new world that was laid out before me. So I just decided to really jump in. It was super risky, but I’m really glad that I did.

Seattle, example of Cryptograffiti’s street art made from credit cards and using QR codes to accept tips in Bitcoin

I asked Cryptograffiti to help me understand micropayments a little better because, while I love the idea in theory, I couldn’t understand how micropayments would work if the transaction fees associated with cryptocurrency payments exceeded the amount of money being spent.

A lot of it depends on how overloaded the system is. Back in 2012 and 2013, there was just not as much congestion going on. But if you look like a year ago at December, 2017, the fees were sky high for Bitcoin because there were a lot of transactions happening and miners could pick who they wanted to work with, and so settlement times were slower and the fees were higher. A lot of this was coming down to a scaling issue, but there are solutions in the works. That’s part of why I wanted to do something with the Lightning Network with my art, because there is so much talk about price in the mainstream media and really not much discussion outside the crypto circles about some of the solutions that people are working on.

Which brings us back to Cryptograffiti’s Black Swan setting the auction record for least expensive artwork sold at auction. The auction was designed to reward the lowest bidder (instead of the highest bidder) to draw attention to the increasing viability of micropayments now made possible by the Lightning Network. The Lightning Network speeds up Bitcoin transactions while reducing transaction costs. As Cryptograffiti describes it:

The “Black Swan” was a fun idea I had knowing that it would not be lucrative, to help spread awareness about the Lightning Network. For those that don’t know, the Lightning Network is a payment channel layer on top of Bitcoin to help alleviate some of these payment scaling issues. So essentially, you can open up a channel with someone, make payments with them, and it will get settled up with the blockchain later on when the channels are closed, and so it helps with the congestion. There are no fees and it’s very quick. It’s really groundbreaking stuff. If it works, then it is going to bring about this era of micropayments that I yearned for from the beginning. Doing things like paying for reading an article or paying by the song or by the view for an artwork, these are all interesting ideas that haven’t been able to happen yet because of the payments to middlemen like credit cards.
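To make the channel idea concrete, here is a toy sketch of the bookkeeping involved: two parties commit funds, exchange as many instant, fee-free off-chain payments as they like, and only the final net balances are settled on-chain. This is a conceptual illustration only; the real Lightning Network uses signed commitment transactions, HTLCs, and penalty mechanisms that are omitted here.

```python
# Toy payment channel: many off-chain micropayments, one on-chain settlement.
# (A conceptual sketch, not the actual Lightning Network protocol.)
from dataclasses import dataclass

@dataclass
class PaymentChannel:
    buyer_balance: int   # satoshis committed by the buyer
    seller_balance: int  # satoshis committed by the seller

    def pay(self, amount: int) -> None:
        """Move funds from buyer to seller off-chain: instant, no fee."""
        if amount > self.buyer_balance:
            raise ValueError("insufficient channel balance")
        self.buyer_balance -= amount
        self.seller_balance += amount

    def close(self) -> tuple:
        """Settle the final balances in a single on-chain transaction."""
        return (self.buyer_balance, self.seller_balance)

channel = PaymentChannel(buyer_balance=10_000, seller_balance=0)
for _ in range(100):      # 100 micropayments, none of them touching the chain
    channel.pay(37)
print(channel.close())    # one on-chain settlement: (6300, 3700)
```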

Cryptograffiti’s Black Swan shown next to a tiny potted plant.

The Black Swan itself is clever and aesthetically appealing. It’s a tiny work, measuring in at 1.44 in x 1.75 in (3.66 cm x 4.44 cm), and features Cryptograffiti’s signature style of collage using older forms of physical currency, in this case, a single dollar bill. But I find the larger performance to be the most engaging part of the work.

For example, the video Cryptograffiti created to promote the artwork reminds me of a bootstrapped version of the massive marketing campaigns put out by Sotheby’s and Christie’s to promote and elevate works they bring to auction.

The special protective case, the white glove treatment, and the soundtrack (Mozart’s Eine Kleine Nachtmusik) all create a hilarious parody of the exclusivity and seriousness with which we treat important artworks at auction, which in turn drives home the absurdity of selling Black Swan for as little as possible in an auction designed as a race to the bottom.

Beyond the brilliant marketing campaign, I see the auction itself as an essential part of the artwork. For me, Black Swan is a conceptual or performance art piece with the swan itself serving as just one part of the performance.

Cryptograffiti’s Black Swan selling at auction for $0.000000037

In a world where it is nearly statistically impossible for artists to be “discovered” and self-shredding Banksys and controversial AI art capture headlines, artists seeking a larger audience for their work could learn from Cryptograffiti. In many ways, Cryptograffiti’s savvy Black Swan marketing campaign is indistinguishable and inseparable from the artwork itself.


New Data Shows Why Van Gogh Changed His Color Palette

December 24, 2018 Jason Bailey
Vincent Van Gogh, Wheatfield With a Reaper, September, 1889

When most people think of Van Gogh, the first color that comes to mind is a warm, radiant, golden yellow. Yellow sunflowers, yellow fields of grain, even the yellow moon in Starry Night.

But Van Gogh’s paintings did not start out that way. As late as 1885, roughly halfway through the short decade Van Gogh spent painting, he was still in his Dutch period, painting works like The Potato Eaters, which feature dark, muddled grays, browns, and greens.

Vincent Van Gogh, The Potato Eaters, 1885

Curious about this shift from dark to light, we decided to use data visualization techniques to better isolate the moment of transition to a bright yellow color palette.

As a first step, we sorted every Van Gogh painting by year and then computed the per-pixel arithmetic mean across all of the paintings from each year. In layman’s terms, we created the “average” Van Gogh painting for each year he was active. We were hopeful that this could pick up some of the subtleties in his shifting color palette over time. There were too few works to get a solid average in the first two years, so we shortened the range to 1882-1890.
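For readers who want to reproduce the averaging step, it only takes a few lines. A minimal sketch, where the directory layout and output size are illustrative rather than our actual pipeline:

```python
# Resize every painting from a given year to a common size and take
# the per-pixel mean to produce the "average" painting for that year.
import glob
import numpy as np
from PIL import Image

def average_image(paths, size=(256, 256)):
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in paths
    ])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

average_image(glob.glob("van_gogh/1888/*.jpg")).save("average_1888.png")
```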

The “average” Van Gogh painting for each year, 1882-1890.

Looking at the series of images, there is an unmistakable shift towards a lighter yellow palette starting in 1888.

We are not the first to notice this. There are two popular theories as to why Van Gogh shifted his color palette.

  • Illness/medication leading to “yellow vision”

  • Influence from the French Impressionists while working in Paris

We will briefly look at both theories below and then offer up our own.

Did Van Gogh Suffer From “Yellow Vision”?

One popular theory behind the shift in Van Gogh’s color choices is that he might have suffered from xanthopsia, or “yellow vision.” Xanthopsia is a “color vision deficiency in which there is a predominance of yellow in vision due to a yellowing of the optical media of the eye.” When caused by glaucoma, this can also include halos and flickering, which many think explains why Van Gogh depicts light as radiating outward, as in The Night Cafe (1888) and The Starry Night (1889).

Vincent Van Gogh, The Night Café, 1888

Vincent Van Gogh, The Starry Night, 1889

Others believe that Dr. Gachet, the physician who treated Van Gogh in his final months at Auvers-sur-Oise, may have treated Van Gogh’s seizures with digitalis extracted from the foxglove plant, which is also known to cause yellow-blue vision and halos as a side effect.

Vincent Van Gogh, Portrait of Dr. Gachet, 1890 (note the foxglove plant shown in the portrait)

Another frequently cited reason for the shift in Van Gogh’s color palette was his move to Paris in 1886. It is generally assumed that he was inspired by the bold use of color by the French Impressionists.

We were not convinced by the medical reasoning behind the shift in Van Gogh’s color palette, and we could not think of any French Impressionists who painted with colors nearly as bold as Van Gogh’s, so we decided to take a look at some other possibilities.

Did Van Gogh Use More Yellow Because He Moved to a Sunnier Climate?

Van Gogh was a restless soul and moved around quite a bit. He also spent a lot of time painting outdoors, especially in his later years. Since he famously struggled with mood swings, we thought location and, more importantly, weather patterns may have impacted his use of color.

To test this, we created composite images averaging every painting Van Gogh created in each of the major locations where he worked and compared them to weather patterns from those regions. We think the results are quite remarkable.

Average paintings and regional weather data for The Hague, Arles, Nuenen, Saint-Rémy, Paris, and Auvers-sur-Oise.

Look at the spike in sunshine in Arles as compared to all previous locations! We feel pretty confident that it was the warm weather and bright colors of southern France that influenced Van Gogh’s shift towards bolder colors, not “yellow vision” or exposure to the French Impressionists as previously thought.

Not only did Van Gogh literally see the world bathed in yellow sun while in Arles and Saint-Rémy, he was also able to get outside more often, as there were simply more sunny days. I don’t think it is unreasonable to assume that increased exposure to the sun and outdoors may also have lifted his mood, causing him to brighten his palette in response.

Charting the average painting by location also surfaced some interesting timing details. To make them clearer, we created the chart below. Each bar is colored using the average color of the paintings created in that specific region. The chart shows the order in which Van Gogh lived in each region and the total number of paintings produced there.

Chart: the order of Van Gogh’s residences, the average painting color, and the number of paintings produced in each region.
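A minimal sketch of how a chart like this can be assembled with matplotlib; the painting counts and average colors below are placeholders, not our measured values:

```python
# One bar per region, in the order Van Gogh lived there: bar height is
# the painting count, bar color the average color of paintings made there.
import matplotlib.pyplot as plt

locations = ["The Hague", "Nuenen", "Paris", "Arles",
             "Saint-Remy", "Auvers-sur-Oise"]
counts = [190, 195, 225, 185, 150, 75]                         # placeholders
avg_colors = [(94, 86, 72), (78, 70, 55), (120, 110, 90),
              (160, 140, 80), (150, 135, 85), (130, 125, 95)]  # placeholders

plt.bar(locations, counts,
        color=[tuple(c / 255 for c in rgb) for rgb in avg_colors])
plt.ylabel("Number of paintings")
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.savefig("van_gogh_locations.png")
```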

Note that from this chart we can very clearly see that Van Gogh’s palette shifted after, not during, his time spent with the French Impressionists. Let’s call that myth busted.

It is also clear from the chart that Van Gogh’s palette turned yellow years before he became a patient of Dr. Gachet at Auvers-sur-Oise. Second myth… also busted.

This leaves our theory of increased exposure to sunlight, which the data supports. Of course, correlation doesn’t necessarily imply causation, and we can’t say for sure that the weather caused Van Gogh’s palette to switch. But we feel it is a stronger hypothesis than the ones currently out there, and Van Gogh certainly left us plenty of clues to support our belief that increased sun was his inspiration for using more yellow. Our favorite:

“How wonderful yellow is. It stands for the sun.” - Vincent Van Gogh

Vincent Van Gogh, Vase with Fifteen Sunflowers, 1888

Conclusion

At Artnome, we are big believers that data and new analytical tools can and should be used to provide new context for important art and artists. In this case, we feel relatively confident that by using data visualization, we have ruled out the two most popular explanations for Van Gogh’s shift in color palette: first, illness/medication; and second, Impressionism. We have also used weather data and geo-data to propose a more reasonable theory behind the switch to a bright yellow palette: When Van Gogh moved to the south of France, he experienced a significant increase in sunny days. His world became dramatically brighter and more yellow; he simply painted what he saw (while adding his own artistic license).

This article would not be possible without the hard work and analysis of Artnome data scientist Kyle Waters. He has made an enormous contribution to Artnome in a short amount of time, and we are excited to continue working with him. In an upcoming article, he will explain in more detail how he calculated the average images for Van Gogh.

If you liked this article, you may also enjoy our other nerdy art articles on:

  • Inventing the Future of Art Analytics

  • Quantifying Abstraction

  • Searching All 1800+ Of Munch’s Paintings With Machine Learning

As always, thanks for reading and for your support. Feedback is always welcome, and you can contact us at jason@artnome.com.




Machine Learning Art: An Interview With Memo Akten

December 16, 2018 Renée Zachariou

Memo Akten, Learning to see: We are made of star dust (#2), 2017

“A deep neural network making predictions on live camera input, trying to make sense of what it sees, in context of what it’s seen before. It can see only what it already knows, just like us. Trained on images from the Hubble telescope. (not 'style transfer'!)”


“If we do not use technology to see things differently, we are wasting it.”
- Memo Akten

I met Memo Akten before he grabbed the train to London, where he is currently developing some exciting projects and pursuing a PhD in machine learning.

R: Memo, I first contacted you for an article I was writing on artificial intelligence and the art market (you can read it here). The timing was too tight, though, so I’m glad we’re meeting today to discuss your art practice more broadly! But let’s start with AI anyway: as an artist who has long been active in this field, I am curious to hear your analysis of the field as it stands now. Can you briefly explain who you are and what you do?

M: Broadly speaking, I work with emerging technologies as both a medium and a subject matter, looking at their impact on us as individuals, as extensions of our mind and body, and their impact on society, culture, tradition, ritual, etc.

Simple harmonic motion #12 for 16 percussionists. Live at RNCM

These days I’m mostly thinking about machines that learn, machines that think; perception, cognition, bias, prejudice, social and political polarization, etc. The current rise of big-data-driven, so-called ‘AI’ acts as a rather apt mechanism through which to reflect on all of this.

I generally try to avoid using the term ‘AI’ - unless I’m specifically referring to the academic field - as it’s very open to misinterpretation and unnecessarily egregious disagreement over terminology. Once, after a panel, I had a member of the audience approach me, and rather angrily explain to me that AlphaGo (DeepMind’s software which beat the world champion Go player) could not be considered ‘AI’ because it had no ‘sense of self,’ which is okay, I guess. But it’s also why instead I say these days I work with machine learning, a term that’s easier to define – a system which is able to improve its performance on a particular task as it gains experience. More specifically, I work with deep learning, a form of machine learning which is able to operate on vast amounts of ‘raw,’ high-dimensional data, to learn hierarchies of representations. I also think of it as the process of extracting meaningful information from big data. A more encompassing term which can refer to what we usually mean by ‘AI’ these days is ‘data-driven methods or systems,’ and specifically ‘big-data-driven methods or systems.’ 

R: So what you’re interested in is not the technology itself, but the effect on society? If, let’s say, pigeon catching was the latest tech revolution, would you be working on that instead? 

M: If it impacted our world in such a massive way as the current big-data-driven systems do, I probably would. For example, I’m also very interested in the blockchain, but I do not feel it is as urgent a topic. Maybe it will be in a few years… (especially with the energy consumption!).

R: AI-generated art surely feels like a hot topic right now with the recent market hype around the Obvious sale at Christie’s [an AI-generated painting that fetched $432,500 in October 2018]. What do you make of it?

M: First, I’d like to set the context for this discussion by bringing to attention the fact that the art market is a place where, with the right branding, you can sell a pickled shark for $8 million. The art market is ultimately the purest expression of the free, open market. The price of an object is determined by how much somebody is willing to pay for it, which is not necessarily related to its cultural value.

I decided not to talk about this before the auction because I feel the negative press and pushback from other folks in the field created too much controversy and fueled the hype. Articles came out daily with opinions from experts, and I’m sure all of this hype inflated the price [the painting was initially estimated at $8-10 thousand].  

There’s a spectrum of approaches to the practicalities of making work in this field with generative deep neural networks:

  • Train on your own data with your own (or heavily modified) algorithms

  • Train on your own data with off-the-shelf (or lightly modified) algorithms (e.g. Anna Ridler, Helena Sarin)

  • Curate your own data and use your own (or heavily modified) algorithms (e.g. Mario Klingemann, Georgia Ward Dyer)

  • Curate your own data and use off-the-shelf (or lightly modified) algorithms

  • Use existing datasets and train with heavily modified algorithms

  • Use existing datasets and train with off-the-shelf (or lightly modified) algorithms (this is what Obvious has done)

  • Use pre-trained models and algorithms (e.g., most DeepDream work, the recent BigGAN, etc.)

Personally, I think it is possible to make interesting work at each point on this spectrum (and I have tried every single one!). But as you get towards the end of the spectrum, you’ll need to work harder to give it a unique spin and make it your own. And I think a very valid approach is to conceptually frame the work in a unique way, even if using existing datasets, or even pre-trained models.

Robbie [Barrat], a young artist, was very upset that Obvious stole his code (which was open source with a fully permissive license at the time). It’s true that they used his code, especially to download the data. But it’s important to remember that the code which actually trains and generates the images is from [ML developer/researcher] Soumith Chintala, which Robbie had forked [copied] from. And the data is already online and open (in fact, I had also trained the exact same models on the exact same data, and I know others did, too). What actually shapes the output and defines what the resulting images look like is the data - which is already out there and available to download - and the algorithm - which, in this case, is a Generative Adversarial Network (GAN) implemented by Chintala. Anybody who puts that same data through that same algorithm (whether it’s Chintala’s code, or other implementations, even in other programming languages) will get the exact same (or incredibly similar) results.

I’ve seen some comments suggesting that the Obvious work was intentionally commenting on this issue of authorship, perhaps in a lineage of appropriation art, similar to Richard Prince’s Instagram Art, etc. But I don’t think that is the case, judging by Obvious’ interviews and press release. Instead, Obvious seems to be going down the ‘can a machine make art?’ angle, which is a very interesting question. Lady Ada Lovelace was already writing about this in 1843, and there have been countless debates, writings, musings, and works on this since then. So personally, I would look for a little bit more than just a random sample from a GAN as a contribution to that discussion. Like I mentioned, what somebody is willing to pay for an artifact is not necessarily related to its cultural value. If a student were to make this work, I would try to be very positive and encouraging, and say, “Great work on figuring out how to download the code and to get it to run. Now start exploring and see where you go.”

On a side note, I’m not a huge fan of the label ‘AI art,’ because I’m not a fan of the term ‘AI,’ but beyond that, because the term ‘AI art’ is somehow infused with the idea that only the art being made with these very recent algorithms is ‘AI art,’ whatever that means. I definitely do not consider myself an ‘AI artist.’ If anything, I’m a computational artist, since computation is the common medium in all of my work. People make art by writing software, and have done so for 60 or so years (I’m thinking John Whitney, Vera Molnar, etc.), or even more specifically, Harold Cohen was making ‘AI art’ 50 years ago. In a tiny corner of the computational art world, Generative Adversarial Networks (GANs) are quite popular today, because they’re relatively easy to use, and for very little effort, produce interesting results. Ten to fifteen years ago I remember Delaunay triangulation being very popular, because again, for relatively little effort, you could produce very interesting and aesthetically pleasing results (and I’m guilty of this, too). And in the ‘80s and ‘90s, we saw computational artists using Genetic Algorithms (GA), e.g., William Latham, Stephen Todd, Karl Sims, Scott Draves, etc. (On a side note, GA is a subfield of AI. So technically they are all AI artists, too.) Computational art will continue, it will grow, and the tool palette available to computational artists will expand. And it’s fantastic that new algorithms like GANs attract the attention of new artists and lure them in. But I will just avoid the term ‘AI art’ and call them computational artists or software artists or generative artists or algorithmic artists.

R: That’s it for market sentiment, then. Let’s focus on your practice again. What projects are you currently working on?

M: There’s a few angles that I’m pursuing, all very research-oriented. First is a theme that I’ve been investigating for a while now, which is looking at how emerging technologies – in this case, deep learning – can augment our ability to creatively express ourselves, particularly in a realtime, interactive manner with continuous control - analogous to playing a musical instrument, like a piano. How can I create computational systems, now using deep learning, that give people meaningful control and enable them to feel like they are able to creatively and even emotionally express themselves?

From a more conceptual angle, I’m interested in using machines that learn as a way to reflect on how we make sense of the world. Artificial neural networks [systems of hardware and/or software very loosely inspired by, but really nothing like, the operation of neurons in biological brains] are incredibly biased and problematic. They’re complicated, but can be very predictable, as well. Just like us. I don’t mean artificial neural networks are like our brain. I mean I just like using them as a mirror to ourselves. We can only understand the world through the lens of everything that we’ve seen or heard or read before. We are constantly trying to make sense of everything that we experience based on our past experiences. We see things not as they are, but as we are. And that’s what I’m interested in exploring and exposing. Some of my work tries to combine both of these (and other) themes. E.g., my Learning to See series tries to do this, as both a system for realtime expression, a potential new form of filmmaking and digital puppetry, and, ultimately, a demonstration of this extreme bias. One who has only ever seen thousands of images of the ocean will see the ocean everywhere they look.

As a more distilled version of this perspective, in 2017 I made a Virtual Reality (VR) piece FIGHT!. It doesn’t use neural networks or anything like that, actually. It uses the technology of VR, but is about as opposite to VR as is possible, I think. In the headset, your eyes are presented with monocularly dissimilar (i.e., very different) images. Your brain is unable to integrate the images together to create a single cohesive 3D percept, so instead the two rival images fight for attention in your conscious awareness. In your mind’s eye, you will not see both images blended, but the two rival images flicker back and forth as they alternate in dominance. In your conscious experience, your mind will conjure up animated swipes and swirly transitions – which aren’t really there. And this experience is unique and different for everybody, as it depends on your physiology. Everybody is presented with the exact same images, but everybody “sees” something different in their mind. And it’s impossible for me to know or see or ‘empathize’ with what you see. And of course, this is actually always the case, not just in this VR experience, but in daily life, in everything that we experience. We just forget that and assume that everybody experiences the world in the same way we do.

While I’m interested in these themes from a perceptual point of view, the underlying motivation with these kinds of subjective experiences is to expose and investigate cognitive bias and polarization. I come from Turkey, which is currently torn in two over our current president. In the UK, where I’ve been living for 20 years, the Remain/Brexit campaign has also radically split society. There seems to be a trend where people in one camp attribute the other camp’s political views to them being ‘stupid.’ E.g. I’m very much for remaining in the EU, but it disturbs me when I see other ‘remainers’ believe that the only possible explanation that somebody might have to have voted to leave the EU is because they’re either stupid or racist (or both). I can’t see the world in such simple black-and-white terms. I’m sure many (or at least some) leavers have a line of reasoning which may be more intricate than just being ‘stupid’ or ‘racist,’ even if I don’t agree with it. And if we refuse to acknowledge that, we can’t have a discussion, we’ll never be able to reconcile our differences. We’ll be driven further apart, and ultimately things will only get worse.  

R: Can you tell us a bit more about the PhD you’re currently doing at Goldsmiths? Is it purely technical?

M: My idea going into the PhD was very ambitious. I wanted to weave together art, neuroscience, physics, information theory, control theory, systems theory, perception, philosophy, anthropology, politics, religion, etc., but that turned out to be a bit ambitious, at least for a first PhD. Now it’s narrowed down to being more technical. And like I mentioned before, for the past few decades, I have been trying to create systems that enhance the human experience, particularly of creative expression. What I’m interested in are realtime, interactive, closed feedback loops with continuous control.

This is also how we sense the world. E.g., our eyes are constantly scanning, receiving signals, moving, receiving signals, moving. And the brain integrates all of that information, and that’s how we perceive and understand the world. This is also how we embody our tools and instruments, through action-perception loops. This is how we can embody something like a bicycle or a car, or from a creative self-expression point of view, it’s how we embody something like a piano: we hit a key, hear a note, feel it and respond to it. Eventually, we get to a stage where we don’t think about what we’re playing, we just feel it, it becomes an extension of the body, and the act of playing becomes an emotional act in itself. I don’t feel a tool like Photoshop has that level of immediacy or emotional engagement, once you click on the menu dropdown, etc…

I am looking to use deep learning in that context, to achieve meaningful, expressive continuous control. The way generative deep learning mostly works right now is, for example, you run training code on a big set of images, then you run the generation code, and it generates images. It’s like a black box where you can only press one button: ‘generate something.’ Of course, there are some levels of control you could have. You can control the training data you feed it, you can pick an image and tell the code to create similar images. And in recent years, there have been more ways of controlling the algorithm. But very few of these methods are immediate, realtime closed feedback loops with continuous control. This is both a computational challenge and a system design challenge, as current systems are simply not built with this in mind (though it is a growing field, so that’s very exciting).
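One simple route to the kind of continuous control Akten describes (a sketch of the general idea, not his code) is to map a physical slider onto a smooth path through a generative model’s latent space, so that small movements of the knob make small changes to the output. A sketch, where `generator` stands in for any latent-to-image model:

```python
# Map a knob in [0, 1] onto a spherical interpolation between two latent
# vectors; every small knob movement yields a small change in the output.
import numpy as np

def slerp(z0, z1, t):
    # Spherical interpolation, a common way to move through GAN latent
    # spaces. Assumes z0 and z1 are not (anti)parallel.
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

z0, z1 = np.random.randn(512), np.random.randn(512)
for knob in np.linspace(0, 1, 30):   # a slider sweep, one frame per position
    z = slerp(z0, z1, knob)
    # frame = generator(z)           # hypothetical generator call
```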

R: We’ve talked a lot about machine learning, how about we flip that on its head: can machines teach us something? 

M: Yes, definitely! We can look at today through an anthropological timescale: what’s happening in 2018 is not disconnected from what happened 100 or 10,000 years ago. When Galileo took a lens and made a telescope to look at the stars, he literally allowed us to look at the world in a whole new light. We cannot be the same after that. Well, that would have worked better if the Church hadn’t stepped in. If we do not use technology to see things differently, we are wasting it.

Take word embeddings, for example [a set of techniques that maps words and phrases to vectors of real numbers]. There’s a well-known model trained on three billion words of Google News. The program does not know anything to begin with - it doesn’t know what a verb is, it has no idea of grammar - but it eventually creates semantic associations. So it learned about gender, for example, and you can run mathematical operations on words like king – man + woman => queen. It’s learnt about the prejudices and biases encoded in three billion words of news, a reflection of society. Who knows what else is in that model. I wrote a few Twitter bots to explore that space, actually: @wordofmath and @wordofmathbias.
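This arithmetic is easy to reproduce with the gensim library and the publicly released Google News vectors; a minimal sketch:

```python
# king - man + woman ~= queen, on the 300-dimensional Google News vectors.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

print(vectors.most_similar(positive=["king", "woman"],
                           negative=["man"], topn=1))

# The same arithmetic surfaces biases absorbed from the training text,
# e.g., probing gendered associations around occupations.
print(vectors.most_similar(positive=["doctor", "woman"],
                           negative=["man"], topn=3))
```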

But even Google autocomplete is a really powerful way of looking at what our collective consciousness is thinking or feeling. I wrote a poem about this in 2014. It’s a collaboration with Google (the search engine, not people working at Google), the keeper of our collective consciousness. And actually it’s more a collection of prayers.

A very powerful project in this realm I really like is by Hayden Anyasi. He was disturbed by the way newspapers selected images to accompany news stories, so he created an installation that takes a picture of your face and then creates a news story about you, based on the data it was trained on: a large dataset of newspaper articles. So if you’re an attractive young white woman, the story generated might be about winning some contest or something. If you’re a young black man, the story is more likely to be about crime. Some people might think that this just reflects reality, but unfortunately, that expectation is exactly the problem, as there are situations where images have been selected to accompany stories not because they are related to the story, but simply because that’s what the expectation was. In Hayden’s own words: “A young man's face was used as the lead image in a story about a spate of crimes. Despite being cleared of any involvement, his picture was later used again anyway. Did his face meet the standard expectations of what a criminal should look like?” It’s easy to dismiss these things when you’re not affected, but when you see it like this, this kind of art punches you. 

R: Speaking of scary things, there’s a lot of anxiety around technology these days. Would you say you’re a techno-optimist?

M: I’m definitely not very optimistic. I’m not worried about the singularity or the ‘intelligence explosion’ or robots taking over. To me, that seems more like a marketing trick that’s good for business, to sell books, and to get funding from people who are so rich that the only thing which scares them are so-called ‘existential risks’ which will affect all of humanity, even people as rich and powerful as themselves. On a related note though, autonomous weapons are indeed a major genuine concern, and algorithmic decision-making systems are already in use and proving to be hugely problematic. I do believe algorithms could have the potential to be less prejudiced and fairer than humans on average, but they have to be thoroughly regulated, open source, open data, and preferably developed by non-profit organizations who are doing it only because they believe they can develop fairer systems which will be beneficial to everybody. And by ‘they’ I am referring to not just computer scientists, but a diverse team of experts across many disciplines, backgrounds, and life experiences who collectively have a much greater chance of thinking about and foreseeing the wider impact of these systems once deployed. Closed source, closed data systems developed by for-profit companies which are not well regulated are an absolute recipe for disaster.

But I worry more about “unknown unknowns” that can come out of nowhere and have a huge impact. Here’s a dystopia for you: what if, in the future, the link between genotype and phenotype [how a particular trait is coded in our DNA and expressed through environmental conditions] was mastered (it is something that is being heavily researched right now)? And imagine that combined with CRISPR (or its successor), there was a service which allowed you to boost your baby’s IQ to 300+. And imagine that this service was incredibly expensive, something which only a select few could afford. What kind of world would that be? I don’t necessarily believe that this exact scenario will happen, but I’m sure we will face similar situations.

On the other hand, if we are ever to cure Alzheimer’s or leukemia, it will undoubtedly be with the help of similar data-driven methods. Even the recent discovery of gravitational waves produced by colliding neutron stars is a massive undertaking in data analysis and extracting information (the detection of a tiny blip of signal) in a massive sea of background noise. Machine learning encompasses the act of extracting meaningful information from data, and so any breakthrough in machine learning will impact any field which is data-driven. And in this day and age, everything is data-driven: physics, chemistry, biology, genetics, neuroscience, psychology, economics, and even politics. So it’s impossible to predict the unknown unknowns. Who knows, maybe someday we’ll be able to photosynthesize!

But I do have a streak of optimism. However, what I'm optimistic about is not technology, but us, and a potential shift in values. If we look at the overall evolutionary arc of human morals going back thousands of years, it seems there is a trend towards expanding our circle of compassion to be more inclusive. We used to live in small tribes, and neighboring tribes would be at war. We've now expanded those tribes to the size of countries. This is still far from perfect, especially with the current rise of nationalism, but the overarching long-term trend is a positive one, if it carries on in the same direction (and that is a big open ‘if’). We've now legally recognized that half of the population - women - are the equal of men and deserve the same rights, whether it be for voting, working, healthcare, education, etc. It’s quite shocking that this has only happened so recently, in the last hundred years or so. And so the effects have unfortunately not yet fully permeated into our culture and day-to-day lives, but I think it's inevitable that it will happen. Likewise, we’ve abolished slavery, we legally recognize all humans to be equal. Again, unfortunately, this has happened shockingly recently, so we are absolutely nowhere near being at a level where the day-to-day practice is satisfactory. But again, hopefully, the overall long-term trend is moving in a desirable direction. And this last century has even seen massive efforts to include non-human animals in our circle of compassion, whether it be vegetarianism or veganism or animal rights, in general. 

So while I’m not overly optimistic, the only glimmer of hope I can see for the future is not any particular technology saving us, but a gradual shift in values towards prioritizing the well-being of all living things, as opposed to just a select few at the massive expense of others. The big open question, apart from whether this will happen at all, is how soon it will happen. And how much damage we will have inflicted before we realize what we’ve done.

R: Thanks a lot for our chat! To wrap up, do you have any reading recommendations to dig deeper into machine learning art?

M: A few years ago I collated a list of resources which I had used to get up to speed.

At the time, there weren’t many introductory or beginner-friendly materials. It was more academic books and full-on online university courses. But in the past few years, as deep learning became really popular, loads of new ‘beginner-friendly’ materials came online. So this is probably quite out of date, but for those willing to invest time, I’m sure a lot of this will help build a strong foundation.

But since collating that list, a fantastic resource that is now available is Gene Kogan’s Machine Learning for Artists. It’s full of amazingly useful, beginner-friendly info. And another resource which I have not personally used, but I’ve heard very good things about, is Fast.ai.


AI Artists Expose “Kinks” In Algorithmic Censorship

December 11, 2018 Jason Bailey
Tom White posing with four abstract NSFW artworks presented at the ARTificial visual arts exhibit

Tumblr recently announced that it will no longer tolerate adult content on its sites. This is problematic for artists and art historians because social media and blogging platforms have proven to be lousy at discerning nudity in art from nudity in pornography. As a result, platforms like Facebook are famously flagging important cultural art and artifacts like the Venus of Willendorf as adult content and removing them from their sites (essentially erasing our history).

While it is problematic, I can see how Facebook’s system could accidentally flag the Venus, an object actually intended to represent the nude human form, as potentially being adult content. However, Tumblr’s artificial intelligence is flagging all kinds of bizarre things as adult content. And it fails to catch many things that actually do contain nudity. So if you were worried that machines and AI would eventually outsmart us, steal our jobs, and then steal our boyfriends/girlfriends, fear not. The machines are just not that into us.

Mustard Dream, Tom White, 2018

So what do AI-censoring algorithms find sexy? Well, AI’s definition of nudity is actually pretty hilarious. For example, AI artist Tom White cooked up this sexy number he calls Mustard Dream, which was immediately flagged as “adult content” by Tumblr’s AI censor.

Tom White’s Mustard Dream, flagged by Tumblr as “adult content”

Apparently Tom’s milkshake brings all the droids to the yard, as Mustard Dream also scored a 92.4% confidence rating for “explicit nudity” on AWS (Amazon Web Services).

Mustard Dream scores a 92.4% on the AWS (Amazon Web Services) Explicit Nudity detector
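For the curious, scoring an image this way takes only a few lines against AWS Rekognition’s content-moderation API. A minimal sketch using boto3 (requires AWS credentials; the filename is a stand-in):

```python
import boto3

rekognition = boto3.client("rekognition")

with open("mustard_dream.jpg", "rb") as f:  # stand-in filename
    response = rekognition.detect_moderation_labels(Image={"Bytes": f.read()})

for label in response["ModerationLabels"]:
    # e.g., Name="Explicit Nudity", Confidence=92.4
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```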

It is no coincidence that White’s art is scoring so high on the various AI “adult content” filters. For several years, White has been learning to see things the way machines see them. His work combines the minimal elements required for an AI to read an object in an image. He has famously done this for the objects below, which were recognized as a fan, a cello, and a tick (left to right).

Fan, Tom White, 2018

Cello, Tom White, 2018

Tick, Tom White, 2018

But now White has moved on from mundane, everyday objects and become the Hugh Hefner of AI, cornering the market on saucy AI pinups. White wondered how his robo-porn would hold up against the more sensual works of modern human masters. To satisfy his curiosity, he analyzed the 20,000 screen prints that MoMA (the Museum of Modern Art) has in its collection.

Mustard Dream, Tom White, 2018

Tobacco Rose, Mel Ramos, 1965, published 1966 (image courtesy of MoMA)

It turns out White’s Mustard Dream print would be content blocked by Google and Amazon before any of the 20,000 prints at MoMA. Though, according to White, “Mel Ramos Tobacco Rose had a respectable second place.”

Pitch Dream, Tom White, 2018

White shared with me that much like Mustard Dream, his Pitch Dream “also scores with high confidence values as ‘adult’ and ‘racy’ by Google SafeSearch and as ‘explicit nudity’ by Amazon and Yahoo.” I can see why; just look at those curves ;-)


But how does White create and optimize these images to titillate the AI adult content algorithms? According to White:

Given an image, a neural network can assign it to a category such as fan, baseball, or ski mask. This machine learning task is known as classification. But to teach a neural network to classify images, it must first be trained using many example images. The perception abilities of the classifier are grounded in the data set of example images used to define a particular concept.

In this work, the only source of ground truth for any drawing is this unfiltered collection of training images.

Abstract representational prints are then constructed which are able to elicit strong classifier responses in neural networks. From the point of view of trained neural network classifiers, images of these ink-on-paper prints strongly trigger the abstract concepts within the constraints of a given drawing system. This process developed is called perception engines as it uses the perception ability of trained neural networks to guide its construction process. When successful, the technique is found to generalize broadly across neural network architectures. It is also interesting to consider when these outputs do (or don’t) appear meaningful to humans. Ultimately, the collection of input training images are transformed with no human intervention into an abstract visual representation of the category represented.

In this case, the training images used were adult in nature but have become abstracted out by Tom’s perception engine in a way that makes the resulting image appear “adult” to a neural network but innocuous to a human.
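White’s actual perception engines optimize strokes within a constrained drawing system; as a much cruder illustration of the core loop, the sketch below runs plain gradient ascent in pixel space to make a pretrained ImageNet classifier more confident in a chosen class (the class index is an assumption):

```python
# Nudge an image's pixels so a pretrained classifier grows more
# confident in a target class - the inverse of ordinary classification.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()

target_class = 545                         # assumed ImageNet index ("electric fan")
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = -model(image)[0, target_class]  # ascend the target class logit
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)                # keep pixels in a valid range

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(weights.meta["categories"][target_class], confidence)
```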

Not all of the AI art being created to intentionally trigger “adult content” detectors is so wholesome. If Tom White is the Hugh Hefner of AI generated “adult content,” Mario Klingemann may just be the Larry Flynt. Where White’s images are humorous, Klingemann’s are often dark, disturbing, and unsettling.

Four images from Mario Klingemann’s eroGANous series

For this series of images, which Klingemann calls “eroGANous,” he intentionally evolved a generative adversarial network called “BigGAN” for “maximum NSFW-ness.” Klingemann points out, “Tumblr's filter is not happy about them, but it looks like they may still show for a few days.” The complete series can be seen here, for now.

Klingemann sees the use of AI to broadly censor content as problematic, as it results in “sterile” content. As he shared with me:

When it comes to freedom, my choice will always be "freedom to" and not "freedom from," and as such I strongly oppose any kind of censorship. Unfortunately in these times, the "freedom from" proponents are gaining more and more influence in making this world a sterile, "morally clean" place in which happy consumers will not be offended by anything anymore. What a boring future to look forward to.

Luckily, the current automated censorship engines are more and more employing AI techniques to filter content. It is lucky because the same classifiers that are used to detect certain types of material can also be used to obfuscate that material in an adversarial way so that whilst humans will not see anything different, the image will not trigger those features anymore that the machine is looking for. This will of course start an arms race where the censors will have to retrain their models and harden them against these attacks and the freedom of expression forces will have to improve their obfuscation methods in return.

Klingemann also shared several other projects by artists exploring machine learning and nudity, including artist Jake Elwes’ NSFW machine learning porn, “a 12-minute looped film that records the AI’s pornographic fantasies.” On his website, Elwes describes his project as:

A convolutional neural network, an AI, was trained using Yahoo’s explicit content model for identifying pornography, which learnt by being fed a database of thousands of graphic images. The neural network was then re-engineered to generate pornography from scratch.

Both Klingemann and Elwes cite Gabriel Goh’s Image Synthesis from Yahoo's open_nsfw (heads up, it’s also NSFW) as an early example exploring neural networks and nudity.

And then there are the slightly less pornographic nudes from AI artist Robbie Barrat, which were trained on hundreds of classical nude portraits from art history.

AI Generated Nude Portrait #1, Robbie Barrat, 2018

We have covered Barrat’s Nudes extensively in the past on Artnome and are proud and honored to have several of them in our digital art collection, though I would be curious to see how Barrat’s Nudes rate on the various AI-driven “adult content” scales, as well.

AI Generated Nude Portrait #3, Robbie Barrat, 2018

Of course, censorship is nothing new for artists. Marcel Duchamp famously explored machine-like nudity with his Nude Descending a Staircase in 1912. The hanging committee of the Salon des Indépendants exhibition in Paris, which included Duchamp’s own two brothers, declined the work, stating “A nude never descends the stairs--a nude reclines.” Some of the wispy line work in Duchamp’s nude even resembles the line work of Tom White’s Mustard Dream, perhaps just one more way Duchamp was ahead of his time.

Nude Descending a Staircase, Marcel Duchamp, 1912

Even Michelangelo’s Sistine Chapel drew criticism from the Pope and the Catholic Church for the dozens of nude men it depicted. It was eventually censored, with loincloths painted onto the figures to protect the prude and the modest.

Sistine Chapel, Michelangelo, 1508

One has to wonder how long it will be before an artist like Tom White is asked to add loincloths to works like Mustard Dream to protect the purity of thought and modesty of artificially intelligent machines. I’m not even sure what the equivalent of a loincloth looks like to a neural network, but I’m certain that Tom White and Mario Klingemann will figure it out and find a way around such censorship.


MIT Replicates Paintings With 3D Printing and Deep Learning

November 29, 2018 Jason Bailey
RePaint can reproduce paintings regardless of different lighting conditions (credit: MIT CSAIL)

I grew up in the ‘80s and ‘90s in a small ranch house in a suburb of Boston, with several nicely framed, high-end reproductions of paintings from the Boston Museum of Fine Arts in my living room. Sure, they were just replicas, but I loved them, as they were a great way to bring some of the magic of my favorite museum into our home, where we could live with them in the context of our everyday lives. They even had a texture to suggest brush strokes. But let’s face it - it was clearly not the same as seeing the actual paintings in the museum.

Fast forward 30 years: just down the road from the Boston MFA, in Cambridge, a group of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has just released a paper on a new approach for replicating paintings that they claim is 4x more accurate than existing models at recreating exact color shades.


Traditional printing that most of us are familiar with uses just four inks: cyan, magenta, yellow, and black (also known as CMYK). The team from MIT CSAIL is using ten inks layered in a stack to achieve more accurate results. To do this, the team developed a deep learning model to identify the ideal mix of colors for the stack.

They also take advantage of the older techniques of halftoning (employing spatial modulation - think of the dot patterns in Roy Lichtenstein paintings) and contoning (combining thin layers of inks), which improve accuracy and provide a smooth appearance.
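Halftoning, at least, is easy to play with at home: Pillow’s 1-bit conversion applies Floyd-Steinberg error diffusion by default, approximating continuous tones with spatial dot patterns. This shows the general idea only, not MIT’s pipeline:

```python
from PIL import Image

img = Image.open("painting.jpg").convert("L")  # grayscale
halftone = img.convert("1")                    # dithered 1-bit halftone
halftone.save("painting_halftone.png")
```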


One of the major advantages of the new technique is that the replica reacts to different lighting situations in a similar manner to the original painting. Previous techniques relied on “colorimetric color reproduction,” which would sample colors from a painting under a single lighting condition. But as the paper points out, this can lead to “metamerism, a well-known problem in color reproduction wherein a good reproduction is obtained under one light source, but not under another.”

According to Changil Kim, co-author of the paper and a postdoctoral fellow at MIT CSAIL:

If you just reproduce the color of a painting as it looks in the gallery, it might look different in your home. Our system works under any lighting condition, and shows a far greater color reproduction capability…

The team plans to make a data set publicly available which contains:

20,878 contone ink stack spectra and layouts, spectrally captured oil paintings, together with their optimized layouts using our ink library, and photographs of our printed reproductions under multiple illuminations.

The team at CSAIL believes the system could be used to protect originals from wear-and-tear in museums while also making it possible for people to view replicated versions in their own homes or multiple museums around the globe. According to mechanical engineer Mike Foshey:

The value of fine art has rapidly increased in recent years, so there's an increased tendency for it to be locked up in warehouses away from the public eye. We're building the technology to reverse this trend, and to create inexpensive and accurate reproductions that can be enjoyed by all.

As someone who thinks a lot about art and tech, I think it is good that we are advancing our technology for replicating works. But the romantic in me still wants to believe that there is something magical about the canvas that has been worked by the hand of the artist that will never be replicated.

Left: A photograph of one of the team’s printed sample patches. Each color square is 1 mm × 1 mm. Right: Their spectral acquisition setup.

While the early results are very promising, the new approach is still in early development and has some significant limitations. At the moment, the time-intensive nature of 3D printing means that the reproductions are much smaller than the originals, and the system struggles with certain pigments, like cobalt blue.

In the future, the team plans to expand the ink library, as well as create a painting-specific algorithm for selecting inks. They also hope to achieve better detail for aspects like surface texture and reflection, so that they can reproduce specific effects such as glossy and matte finishes. Given the current limitations, we likely won’t need to worry about the technique being used for forgeries right now, but it is easy to imagine it becoming problematic in the future.


Helena Sarin: Why Bigger Isn’t Always Better With GANs And AI Art

November 26, 2018 Jason Bailey
Am I Dali yet?, Helena Sarin, 2018 (Collection of Jeremy Howard)

AI art using GANs (generative adversarial networks) is new enough that the art world does not understand it well enough to evaluate it. We saw this unfold last month when the French artists’ collective Obvious stumbled into selling their very first AI artwork for $432,500 at Christie’s.

Many in the AI art community took issue with Christie’s selecting Obvious because they felt there are so many other artists who have been working far longer in the medium and who are more technically and artistically accomplished, artists who have given back to the community and helped to expand the genre. Artists like Helena Sarin.

Sarin was born in Moscow and went to college for computer science at Moscow Civil Engineering University. She lived in Israel for several years and then settled in the US. While she has always worked in tech, she has moonlighted in applied arts like fashion and food styling. She has played with marrying her interests in programming and art in the past, even taking a Processing class with Casey Reas, but Processing felt a little too much like her day job as a developer. Then, two years ago, she landed a gig with a transportation company doing deep learning for object recognition. She used CycleGAN to generate synthetic data sets for her client. A light went off, and she decided to train CycleGAN with her own photography and artwork.

This is actually a pretty important distinction in AI art made with GANs. With AI art, we often see artists using similar code (CycleGAN, SNGAN, Pix2Pix etc.) and training with similar data sets scraped from the web. This leads to homogeneity and threatens to make AI art a short-lived genre that quickly becomes repetitive and kitsch. But it doesn’t have to be this way. According to Sarin, there are essentially two ways to protect against this if you are an AI artist exploring GANs.

First, you can race to use the latest technology before others have access to it. This is happening right now with BigGANs. BigGANs produce higher-resolution work, but are too expensive for artists to train using their own images. As a result, much of the BigGAN imagery looks the same regardless of who is creating it. Artists following the path of chasing the latest technology must race to make their mark before the BigGAN aesthetic is “used up” and a “BiggerGAN” comes along.

Chasing new technology as the way to differentiate your art rewards speed, money, and computing power over creativity. While I find new technology exciting for art, I feel that the use of tech in and of itself never makes an artwork “good” or “bad.” Both Sarin and I share the opinion that the tech cannot be the only interesting aspect of an artwork for it to be successful and have staying power.

The second way artists can protect against homogeneity in AI art is to ignore the computational arms race and focus on training models using your own hand-crafted data sets. By training GANs on your own artwork, you can be assured that nobody else will come up with the exact same outputs. This latter approach is the one taken by Sarin.

Sarin approaches GANs more as an experienced artist would approach any new medium: through lots and lots of experimentation and careful observation. Much of Sarin’s work is modeled on food, flowers, vases, bottles, and other “bricolage,” as she calls it. Working from still lifes is a time-honored approach for artists exploring the potential of new tools and ideas.

Trick or Treat, Helena Sarin, 2018

The Pigeon Pea, Pablo Picasso, 1912

Radical Seasonality, Helena Sarin, 2018

Sarin’s still lifes remind me of the early Cubist collage works of Pablo Picasso and Georges Braque. The connection makes sense to me given that GANs function a bit like an early Cubist, fracturing images and recombining elements through “algorithms” to form a completely new perspective. As with Analytic Cubism, Sarin’s work features a limited color palette and a flat, shallow picture plane. We can even see the use of lettering in Sarin’s work that looks and feels like the lettering from the newsprint used in the early Cubist collages.

I was not surprised to learn that Sarin is a student of art history. In addition to Cubism, I see Sarin’s work as pulling from the aesthetic of the German Expressionists. Similar to the woodblock prints of artists like Emil Nolde and Erich Heckel, Sarin’s work has bold, flat patterns and graphic use of black. She also incorporates the textures resulting from the process as a feature rather than hiding them, another signature trait of the Expressionist woodblock printmakers.

And Soon I'll Hear Old Winter's Song, Helena Sarin, 2018

MÜDE, Erich Heckel, 1913

Tingel-Tangel II, Emil Nolde, 1907

Woman, Erich Heckel, 1925

A Little Etching, With Apology to Modigliani, Helena Sarin, 2018

The Snow Queen, Helena Sarin, 2018

I think printmaking is a much better analogy for GANs than the oft-used photography analogy. As with printmaking, the technology of GANs improves over time. Moving from woodblock to etching to lithography, each step in printmaking represents a step towards more detailed and realistic-looking imagery. Similarly, GANs are evolving towards more detailed and photorealistic outputs, only with GANs, this transition is happening so fast that it can feel like tools become irrelevant every few months. This is particularly true with the arrival of BigGANs, which require too much computing power for independent artists to train with their own data. Instead, they work from a pre-trained model. This computational arms race has many in the AI art community wondering what Google research scientist David Ha recently put into words on Twitter:

Tweet from Google research scientist David Ha

Sarin collected her thoughts on this in the paper #neuralBricolage, which she has been kind enough to let us share in full below.

Will AI art be a never-ending computational arms race that favors those with the most resources and computing power? Or is there room for modern-day Emil Noldes and Erich Heckels, who found innovation and creativity in the humble woodblock long after “superior” printmaking technologies had come along?

Helena Sarin is an important artist who is just starting to get the recognition she deserves. Her thoughts here form the basis for some of the key arguments about generative art (especially GAN art) moving forward.

#neuralBricolage: An Independent Artist’s Guide to AI Artwork That Doesn’t Require a Fortune

Candy store, Helena Sarin, 2018

tl;dr With the recent advent of BigGAN and similar generative models trained on millions of images and hundreds of TPUs (tensor processing units), independent artists who have been using neural networks as part of their artistic process might feel disheartened by the limited compute and data resources at their disposal. In this paper I argue that this constraint, inherent in staying independent, might in fact boost artistic creativity and inspire the artist to produce novel and engaging work. The created work is unified by the theme of #neuralBricolage - shaping the interesting and human out of the dump heap of latent space.

Hardly a day passes without the technical community learning about new advances in the domain of generative image modeling. Artists like myself who have been using GANs (generative adversarial networks) for art creation often feel that their work might become irrelevant, since autonomous machine art is looming and generative models trained on all of art history will soon be able to produce imagery in every style and at high resolution. So for those of us who are fascinated by the creative potential of GANs but frustrated by their low-resolution output, what options do we have?

Not that many, it seems; you could join the race, building up your local or cloud compute setup, or start chasing the discounts and promotions of ubiquitous cloud providers utilizing their pre-trained models and data sets - the former prohibitively expensive, the latter good for learning but too limiting for producing unique artwork. The third option would be to use these constraints to your benefit.

Here I share the aesthetics I’m after and the techniques I’ve been developing for generating images directly from GANs, within the constraints of only having small compute and not scraping huge data sets.

Look at it as an inspirational guide rather than a step-by-step manual. 

Setup

In any ML art practice, the artist needs a GPU server, an ML software framework, and data sets. I consider my hardware/software setup to be quite typical - I’m training all my GANs on a local server equipped with a single GTX 1080 Ti GPU. Compute resource constraints mean that you can only use specific models - in my case, CycleGAN and SNGAN_projection, since both can be tuned to train from scratch on a single GPU. With SNGAN I can generate images at resolutions up to 256x256, further upscaling them with CycleGAN.

Data sets

From the very beginning of my work with GANs, I have been committed to using my own data sets, composed of my own drawings, paintings, and photography. As Anna Ridler, an ML artist who also works exclusively with her own imagery, rightly suggested in her recent talk at ECCV: “Everyone is working with the same data sets and this narrows the aesthetics.” I covered my approach to data set collection and organization in my recent blog post, “Playing a Game of GANstruction.”

Process

The implications of BigGAN-type models are widely discussed in the machine art community. Gene Kogan recently suggested that “like painting after the advent of the camera, neural art may move towards abstraction as generative models become photorealistic.” And at least in the short term, the move towards abstraction is in a sense inevitable for those of us working under resource constraints, as training on modestly sized data sets with a single GPU will make the model collapse long before it is able to generate realistic images. You also need to deal with the low resolution of the GAN when training/generating images with constrained resources. Not to despair - GAN chaining and collaging to the rescue! Collage is a time-honored artistic technique - from Picasso to Rauschenberg to Frank Stella, there are many examples to draw from for GAN art.

My workflow for GAN output generation and post-processing usually follows these steps, each of which might yield interesting imagery:

Step 1: Prepare data sets and train SNGAN_projection. The reason I’m using SNGAN is that the projection discriminator allows you to train on and generate several classes of images, for example flower paintings and still lifes. An interesting consequence of working with images that lack the obvious landmarks or homogeneous textures of ImageNet is that they cause glitches in models expecting ImageNet-type pictures. These glitches cause class cross-contamination and might produce interesting, pleasing effects (or might not - debugging the data sets is quickly becoming a required skill for the ML artist). As a result, the data set’s composition is the most important factor in the whole process.

The model is then trained until full collapse. I save and inspect generated samples at a predefined interval, stopping the training and shortening the interval when interesting images start to appear. This can prove quite frustrating: a universal law of GANs is that the model always produces its most striking images in the iterations between checkpoints, whatever value the saving interval is set to - you’ve been warned.
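The scheduling logic here is simple enough to sketch. Everything framework-specific below (`train_step`, `generate_samples`, `save_grid`, `looks_interesting`) is a hypothetical stand-in for whatever your GAN framework provides; only the shrinking checkpoint interval is the point being illustrated:

```python
# Minimal sketch of the sample-monitoring loop described above. The four
# callables are hypothetical stand-ins; the interval-halving schedule is real.
def train_with_monitoring(train_step, generate_samples, save_grid,
                          looks_interesting, out_dir,
                          interval=1000, min_interval=100, max_steps=200_000):
    for step in range(1, max_steps + 1):
        train_step()                         # one generator/discriminator update
        if step % interval == 0:
            samples = generate_samples()     # render a fixed latent batch
            save_grid(samples, f"{out_dir}/step_{step:07d}.png")
            if looks_interesting(samples):   # in practice, a human judgment
                interval = max(min_interval, interval // 2)
```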

Step 2: Generate images and select a couple hundred with some potential. I also generate mosaics from these images using Python scripts. This piece from the Shelfie series and Latent Scarf are some examples.

Shelfie Series, Helena Sarin, 2018
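A mosaic script of this kind can be surprisingly small. Here is a minimal sketch using Pillow - the folder name, grid size, and tile size are invented for illustration:

```python
# Tile a directory of same-sized GAN outputs into one grid image.
from pathlib import Path
from PIL import Image

def make_mosaic(src_dir, out_path, cols=8, tile=256):
    paths = sorted(Path(src_dir).glob("*.png"))
    rows = (len(paths) + cols - 1) // cols          # enough rows for all tiles
    canvas = Image.new("RGB", (cols * tile, rows * tile), "white")
    for i, p in enumerate(paths):
        img = Image.open(p).resize((tile, tile))
        canvas.paste(img, ((i % cols) * tile, (i // cols) * tile))
    canvas.save(out_path)

make_mosaic("sngan_picks", "mosaic.png")            # hypothetical folder name
```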

Step 3: Use CycleGAN to increase the image resolution. This step involves a lot of trial and error, especially around which images go into the target domain data set (a CycleGAN model is trained to do image-to-image translation, i.e., images from the source domain are translated into the target domain). This step can yield images that stand on their own, like Stand Clear of the Closing Doors Please or Harvest Finale.

Harvest Finale, Helena Sarin, 2018
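The exact mechanics depend on which CycleGAN implementation you use, so the sketch below only shows the general shape of the trick: upsample naively first, then let a trained image-to-image generator re-synthesize plausible detail tile by tile. The `generator` function is a hypothetical stand-in for a loaded model:

```python
# Sketch of "CycleGAN as upscaler": bicubic upsample, then translate tiles.
import numpy as np
from PIL import Image

def upscale(img_path, generator, scale=4, tile=256):
    img = Image.open(img_path).convert("RGB")
    big = img.resize((img.width * scale, img.height * scale), Image.BICUBIC)
    arr = np.asarray(big)
    out = np.empty_like(arr)
    # Translate tile by tile so the full-size image fits in GPU memory.
    # `generator` is assumed to map an HxWx3 uint8 array to the same shape.
    for y in range(0, arr.shape[0], tile):
        for x in range(0, arr.shape[1], tile):
            out[y:y + tile, x:x + tile] = generator(arr[y:y + tile, x:x + tile])
    return Image.fromarray(out)
```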

Step 4: Many SNGAN-generated images have a striking pattern or interesting color composition but lack enough content to stand on their own. The final step, then, is to use such images as parts of a collage. I select what I call an anchor image of high resolution (either from step 3 or from one of my cycleGANned drawings). I have also developed a set of OpenCV scripts that generate collages based on image similarity, size, and position of the anchor images, with SNGAN images making up the background. My favorite examples are Egon Envy and Om.

Om, Helena Sarin, 2018
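The actual collage scripts aren’t published, but one plausible building block - ranking candidate background images by how closely their color histograms match a chosen anchor - might look like this in OpenCV:

```python
# Rank background candidates by color-histogram similarity to an anchor image.
# A sketch of one possible component, not the artist's actual scripts.
import cv2
from pathlib import Path

def color_hist(path):
    img = cv2.imread(str(path))
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def rank_backgrounds(anchor_path, candidate_dir, top_k=12):
    anchor = color_hist(anchor_path)
    scored = [(cv2.compareHist(anchor, color_hist(p), cv2.HISTCMP_CORREL), p)
              for p in Path(candidate_dir).glob("*.png")]
    return [p for score, p in sorted(scored, reverse=True)[:top_k]]
```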

This process, as is often the case with conceptual art, carries the risk of becoming a bit too mechanical - the images can lose novelty and become boring - so it should be applied judiciously and curated ruthlessly. The good news is that it opens new possibilities. The most exciting directions I have started exploring recently use GAN outputs:

  • As designs for craft, in particular for glass bas-reliefs. Thanks to their semi-abstraction and somewhat simplified rendering of often exuberant colors and luminance, they can exhibit an organic, folksy quality, and many generated images are reminiscent of the patterns of the Arts & Crafts Movement. It’s still early in the game to share results, but I have shown images such as those in this set to experienced potters and glassmakers and got overwhelmingly enthusiastic responses (Surfaces and Stories).

  • In what I call “computational non-photography” - layering and remixing generated images to create new ones. Indian Summer and Latent Underbrush are examples of this technique.

Latent Underbrush, Helena Sarin, 2018

Conclusion

Even with the limitations imposed by modest compute and the lack of huge data sets, GANs are a great medium to explore precisely because generative models are still imperfect and surprising when used under these constraints. Once their output becomes as predictable as Instagram filters and BigGAN comes pre-built in Photoshop, it will be a good time to switch to a new medium.


AI Artist Gives “Perfect” TED Talk As Cyborg

November 18, 2018 Jason Bailey

There has been a lot of talk about AI (artificial intelligence) in the media over the last few months, but there are still a lot of signs that most people don’t really understand it. For example, 90% of people polled in a recent survey believe that “up to half of jobs would be lost to automation within five years.” Of course, experts in AI find this idea laughable.

In an effort to help people better understand AI, we at Artnome are doing a series of interviews with artists who are exploring AI’s potential through art. In this post, we speak with artist Alexander Reben, whose work focuses on human/machine collaboration using emerging technologies. He sees these technologies as new tools for expression, and most recently trained an AI to give a TED Talk through a robotic mask on stage.

The Perfect TED Talk?

What if you could give the perfect TED Talk? How would you prepare for it? You could try to watch all 2,600 previous TED Talks that have been given on the main stage, but at around 18 minutes each that is roughly 780 hours of footage - about a month of watching day and night. Even if you could watch all the talks, as a human it would be hard to find the underlying structure or algorithm that makes a great TED Talk across such a large data set. However, this type of repetitive behavior and pattern identification is exactly what computers and AI do well.

And what does a TED Talk trained on all the previous TED Talks sound like? Well, it is actually pretty hilarious and entertaining.

For this TED Talk, Reben wrote an algorithm to break down the scripts for every past TED Talk given on the main stage into four roughly equal sections. “I basically hoped that the beginnings, middles, and ends of TED Talks would be somewhat similar,” said Reben of his approach. He then used this data set to train an AI to create multiple outputs for each of the four sections, selected the outputs he liked best, and combined them for his final presentation.

My goal was to make a three-minute talk so it would be Youtube length, so that was the constraint in terms of how many slides I was going to use. I wasn’t going to make a full 20-minute talk because you kind of get the point after several slides.
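As a rough illustration (not Reben’s actual code), the sectioning step he describes might look something like this, with the transcript corpus and the downstream text generator assumed rather than shown:

```python
# Split each talk's transcript into four roughly equal sections so that
# separate models can learn what "beginnings," "middles," and "ends" of
# TED Talks look like.
def quarter(text):
    words = text.split()
    q = len(words) // 4
    return [" ".join(words[i * q:(i + 1) * q if i < 3 else None])
            for i in range(4)]

all_ted_transcripts = []   # assumed: ~2,600 transcript strings, loaded elsewhere
sections = [[], [], [], []]
for transcript in all_ted_transcripts:
    for i, part in enumerate(quarter(transcript)):
        sections[i].append(part)
# sections[0] is now a corpus of openings, sections[3] a corpus of endings.
```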

For the images on his slides, Reben used a separate algorithm that read the script for his talk, ran a search based on the content, and then chose relevant images from the internet. Even the positioning of the images on the slides was handled by a computer, as it was done by PowerPoint’s automated slideshow maker.

I asked Reben how he thought the AI performed as a collaborator in developing the script for the TED Talk.

The AI definitely can learn and perceive patterns at a scale that we don’t. This is something that feels like a TED Talk, but is not really a TED Talk. AI are good at making things that “feel” like things in the generative sense. It picks up on the soul of the data set - if we can call it that - and it can reproduce that soul. But it can’t pick up the creativity or make something coherent. I think that is why a lot of people are interested in AI art, because it is like seeing a pattern that is not really there and our brains try to fill in the gaps. It’s Frankenstein-ish.

If Reben’s AI collaboration is Frankenstein-ish, it is closer to Gene Wilder and Peter Boyle in Young Frankenstein than Victor Frankenstein and his menacing (if not misunderstood) monster originally depicted in Shelley’s novel.


Watching people try to train machines with limited capabilities to act like humans is… well… funny. Reben knows this very well. He entertains and educates us by essentially collapsing the gap between the public’s perception of AI as “job killer” and its current and actual capabilities as a fledgling tool that is heavily dependent on collaboration and direction from humans to perform even the most basic of tasks.

What Hollywood and the mainstream media have failed to teach us is that AI is currently like an infant: growing fast, but not very knowledgeable or skilled. Where most people play up (or down) AI’s capabilities, Reben has put AI out there in all its awkward glory on the most important stage of our time - the TED stage.

That said, Reben’s AI does a more respectable job at identifying patterns in TED Talks than we may pick up on with a single viewing. Let’s look closer at some of the more subtle, but repetitive and formulaic elements of a TED Talk that it was able to surface.

  • Start with a shocking claim: “Five Dollars can Save Planet Earth.”

  • Establish personal expertise and credibility: “I’m an ornithologist.”

  • Strike fear into the heart of the audience: “Humans are the weapons of mass destruction.”

  • Offer a solution based on novel technology: “A computer for calibrating the degree of inequality in society.”

  • Make a grandiose and unfounded proclamation: “Galaxies are formed by the repulsive push of a tennis court.”

  • Use statistics and charts to back it up: “Please observe the chart of a failed chicken coop.”

  • Give a simplified and condescending example for the “everyman”: “It’s just like this, radical ideas may be hard for everyone to register in their pants. Generally. And there’s a dolphin.”

  • Talk about the importance of your idea moving forward: “This is an excellent business because I actually earn one thousand times the amount of life.”

  • Describe potentially discouraging setbacks and roadblocks: “Let’s look at what people eat. That’s not very good food.”

  • Give promising evidence these barriers can be overcome: “We will draw the mechanics and science into the circuitry with patterns!”

  • Provide the audience an important takeaway question: “What brain area is considered when you don’t have access to certain kinds of words?”

  • End with a pithy and folksy conclusion: “Sometimes I think we need to take a seat. Thank you.”

Don’t get me wrong. I’m a TED Talk addict. In fact, I use many of the devices and formulas of TED Talks in my own writing. But following formulas in writing does not make me a great writer any more than learning the Macarena makes me a great dancer. Reben’s talk gives us a cautionary tale: formula is the enemy of great public speaking, not the recipe for it.

Reben’s performance embodies this by delivering a presentation that is literally composed of every TED Talk ever given, but which stylistically breaks from all known TED formulas through its machine-written script and its delivery through a robotic mask. Reben is not giving a TED Talk to give a TED Talk; he is co-opting the TED stage and TED brand as cultural material for creating his own art.

The results are hilarious, but there is also a double-edged realization that there is something fundamental about great public speaking that cannot be reduced to easy-to-learn formulas and thus replicated by AI. You may become a better speaker with practice, but it is human creativity and charisma that make a great presentation (even if many of us wish it were more formulaic and therefore attainable).

Reben’s TED Talk is only the latest in a long line of compelling works where he trains machines to perform human-ish activities that are ultimately void of real meaning or substance. His machines are like small children going through the motions of baby talk, making noises and gestures without quite understanding what they mean yet, and it is both haunting and beautiful to watch.

We see this duality in Reben’s Synthetic Penmanship project, for which he fed thousands of samples of handwriting into a model and then had a robot armed with pens try to produce language. The model knows nothing about language, but you can see it trying to create shapes that look like the characters of written English.


As Reben describes it:

All it knows is shapes. Here it tries to make this pseudo language thing, which is endlessly fascinating to me. It’s a computer trying to be human, but it doesn’t know anything about humanity or why it’s making these shapes. It just becomes so beautiful. The action of the robot doing it with these real pens is just this meditative human/machine spiritual connection.


My favorite work by Reben extends this idea of human-esque communication by training AIs on celebrities’ voices and, in some cases, adding them to video. Reben fed celebrity voices into an algorithm, training it to make sounds like those celebrities, but as with his synthetic penmanship project, the output contained no actual English.

The celebrities include John Cleese, George W. Bush, Stephen Colbert, Barack Obama, and Bob Ross. According to Reben, Ross in particular had a very “familiar deep tone to his voice.” He found it extremely interesting how the machine could mimic the soul of someone’s voice without understanding language whatsoever. “I could have fed it in the sounds of train horns and it would have done the same thing,” he said - but this time it did it with language.

Around the same time, Reben had been investigating the Deep Dream algorithm, which people had been applying to photos and videos. It dawned on him that combining his AI speech patterns from celebrities with Deep Dream video could be pretty interesting. He chose Bob Ross because the act of someone painting was itself a creative process. Not only that, but as you are painting, you are building up an image, and that is something the Deep Dream code can really mess with. “As you add more and more shape to this image, the things it was seeing would get more and more complicated,” said Reben.

When I first saw Reben’s Deeply Artificial Trees go viral last year, it immediately hit me that Reben was collaborating with algorithms to produce an homage to the most prolific but oft-misunderstood algorithm artist of all time. The high-art world shuns paintings by Bob Ross and his students because they are formulaic by design. Ross sought to make it easy enough for anyone to feel like a painter, and did so by breaking painting down into simple steps. But like Reben’s AI TED Talk and Synthetic Penmanship project, Ross’s paintings have the formula of painting down; they lack the depth of thought and exploration we seek in works by master painters. They are in some sense robotic.

Ross is rejected by the traditional art world as irrelevant because if his paintings can be made so simply that anyone can produce them, then they are no longer special. This same line of thinking is of course important when considering generative or algorithmic art made in collaboration with AI: if a computer can produce a work in a certain style, nothing stops it from producing thousands or millions more similar works in short order.

For example, when I first saw a Deep Dream image, I was in love. I thought this algorithm rivaled the work of great surrealists like Salvador Dali. But then I discovered an app where I could easily make my own Deep Dream image from any photograph. At first it was intoxicating, and I made like twelve in the first few hours. But once I realized how easy it was to achieve the Deep Dream effect and how everyone now had access to it (it was all over Instagram), it lost its novelty.

Reben points out that he was exploring Deep Dream pre-appification of the algorithm, but also maintains a healthy attitude towards it.

Back then there weren’t as many tools out there as there are now to use Deep Dream. It really was a tool kit that was extremely interesting to me, like a new type of paintbrush. And yeah, the first time paint was invented, if you did anything with paint it was probably pretty amazing, but after a while, paint just became a tool like any other. Deep Dream is now available as a push-button filter, it is just a tool like any other and you can use it to do boring stuff or interesting things. The first few people who use any tool will get a lot of attention because it is so novel and new, but after that first wave, you have to do something really interesting with that tool set to move forward.

The brilliance of Reben’s Deeply Artificial Trees is that it really is a human/machine collaboration rather than just the machine itself doing its thing. Reben creates the audio and curates the imagery, resulting in something artful and engaging (as evidenced by its popularity). The magic for me here is that Reben combines two formulas for producing mass-produced kitsch imagery, Deep Dream and Bob Ross, and somehow creates a unique and compelling artwork from them. It is Reben’s human contributions to framing, curating, and editing the work, and not the execution of Ross’s or Deep Dream’s algorithms, that make this a relevant and timeless work of art. As with Reben’s TED Talk, Deeply Artificial Trees highlights the idea that human creativity is irreplaceable, essential, and currently undervalued as we slowly march into a world of increased automation. As Reben describes it:

AI is really a tool that a human is using rather than the machine itself having all the creative power to it. My curation of the material Deep Dream is applied to, my curation of the voice, my curation of the edit — all those things were human, that was my side. Then there is a computer side, as well. Everyone is worried that AI is going to replace humans, but I think we are really going to see a human/computer collaborative future.

While I see several clear threads running through all of Reben’s work, I was eager to capture more of his thoughts, so I asked him to help me summarize the ideas, themes, and explorations behind his projects, starting with the TED Talk:

It has a structure of a pattern, the same way that the Stephen Colbert sounds like him but has no meaning, the handwriting makes English that has no meaning. This thing made a TED Talk that has no meaning. It has the structure of what it is. Our brains are pattern-recognizing machines. When there are patterns, that is soothing to our brains, but when there’s no content, it creates a dichotomy. I think that is what pulls people in and then scares them a bit. It’s a little bit grotesque.

I followed up by asking Reben if there are certain things that will always be in the domain of human creativity, like the charisma required to give a great TED Talk.

The real question is can you find a data set of “charisma.” What does that look like? It may not even be a tech problem so much as it is a training problem of a model. With a good enough data set, who knows what you could do, especially given how fast this stuff is moving. I think we’ll probably get good enough at some point to fake a lot of that, is my guess. So then maybe the more philosophical question is, is it better to have a human to do it than a machine if you can’t tell the difference? Is there inherent value of humanity in creativity versus something that’s just algorithmically made? If there is a distinction, that is a very human distinction; it’s not a very practical distinction. Meaning, if you can’t tell the difference, then it doesn’t actually matter.

An interesting thought experiment I came up with when I was talking to a philosopher was, “We can make a GAN to generate, say, comedy. Can we make a GAN to generate a new genre that we’ve never thought about before?” If you zoom out, “Can AI invent, say, a new academic topic that our brains as humans have never thought of before?” Keep zooming out and ponder if AI can create a new way of thinking. So if we don’t make an AI that could be, say, charismatic… maybe an AI will come up with something that we can’t actually understand which is a description of the world that is as complicated as charisma but is something that is completely different and unique to a computer. But yeah - a lot of stuff right now comes down to good data sets too.

What I love about Reben’s response here is that it highlights a quality all of my favorite AI artists share: a sober and realistic appraisal of current AI capabilities that does not dampen their imagination for a near-limitless future for AI.

In addition to his initial curated AI TED Talk, Reben is working on 24-hour autonomous TED Talks that will use text-to-speech. Be sure to keep an eye out for them. Other new works from Reben that I encourage you to check out include his AI Misfortunes (a great example shown below). These fortunes are made by an AI trained on fortune cookies, producing what Reben describes as a type of “artificial philosophy.” He then curates phrases from the AI and chooses typography and colors to amplify and emphasize the message before producing physical posters as the final step of the collaboration.

AI Fortunes - Alexander Reben

Also check out this sneak peek of Reben’s balletic three-hour piece of people training an AI by showing their webcams their shoes - the ultimate in human/machine collaboration. It is really a 3D scan, which is why everything is rotating, but Reben took the color images and reassembled them into videos. “It struck me as a type of performance people were doing to train an AI,” shared Reben. “I did not quite get it at first, but an hour later I found myself hypnotized.”

Reben is represented by the Charlie James Gallery in LA and has several shows coming up in 2019. If you are near any of these venues, I highly encourage you to check out his new work in person and meet Alexander if you have the chance.

  • Vienna Biennale, Vienna, Austria

  • stARTup Art Fair Special Project, Los Angeles, CA

  • V&A Museum of Design, Dundee, Scotland

  • MAK Museum, Vienna, Austria   

  • Museo San Telmo, San Sebastián, Spain

  • MAAT, Lisbon, Portugal

  • MAX Festival, San Francisco, CA

  • Boston Cyberarts Gallery, Boston, MA

You can learn more about Reben’s latest works at areben.com. And as always, if you have questions, feedback, or ideas for articles for Artnome, you can always reach me at jason@artnome.com.


Inventing The Future Of Art Analytics

November 12, 2018 Jason Bailey
Heatmap, Artnome, 2018

This week, Christie’s will be auctioning one of the most important collections of American art, the Barney A. Ebsworth collection. The collection is valued at $300M and is brimming with work by artists like Georgia O’Keeffe and Edward Hopper, whom most of us feel like we know pretty well. But what do most of us really know about these artists from a quantitative perspective? The answer is not very much.

In this article on inventing new art analytics, we:

  • Outline a new approach to descriptive art analytics using Artnome’s database of artists’ complete works.

  • Chart a never-before-seen view of Georgia O’Keeffe’s full body of work.

  • Share Artnome’s data scientist Kyle Waters’ early approach to predictive analytics using a random forest machine learning model.

  • Make predictions on four works from the Ebsworth collection going to auction this week.

  • Share artists’ price history, performance, and comps provided by our good friends at MutualArt.

Descriptive Art Analytics

Traditionally, art analytics are derived exclusively from auction databases, but only a fraction of an artist’s complete works ever make it to auction. What about the rest of the works? Should we pretend that the majority of works by an artist don’t exist when doing art analytics simply because it is convenient? I don’t think so.

In art, scarcity and uniqueness of a work drive much of its value. But without a database covering many artists’ complete works, neither scarcity nor uniqueness can be calculated. Most experts would be hard pressed to tell you how many oil paintings a popular artist like Georgia O’Keeffe made, fewer could tell you if that number is high or low compared to other artists, and none could tell you the average size of her paintings.

[Interactive chart made with Flourish]

Over the last three years, Artnome has spent thousands of hours and tens of thousands of dollars building the world’s largest database of complete works by blue chip artists to help answer these (and other) questions. In this article, we pull from that database to provide descriptive statistics that paint a macro view of the artists’ complete works. This “big picture” in turn allows us to quantify what makes any singular work “unique” or “scarce” relative to the full body of work.

For example, Georgia O’Keeffe has 2,076 works listed in her official catalogue raisonné. The graph below gives a breakdown of O’Keeffe’s complete works by primary media. It then breaks down the 810 oil paintings by substrate. We then look at all the oil paintings by whether they are listed in public or private collections. We can speculate that works by artists with a low percentage of privately held work are less likely to come to auction and are therefore more scarce.

[Interactive chart made with Flourish]
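To make the idea concrete, here is a hypothetical sketch of how such a breakdown could be computed from a catalogue-raisonné-style table with one row per work. The file and column names are invented for illustration; Artnome’s actual schema is not public:

```python
# Descriptive breakdown of a complete-works table (hypothetical schema).
import pandas as pd

works = pd.read_csv("okeeffe_complete_works.csv")   # invented file name

by_medium = works["medium"].value_counts()          # e.g. oil, pastel, ...
oils = works[works["medium"] == "oil"]
by_substrate = oils["substrate"].value_counts()     # canvas, board, ...
pct_private = (oils["collection_type"] == "private").mean() * 100
print(by_medium, by_substrate, f"{pct_private:.0f}% privately held", sep="\n")
```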

Below we show O’Keeffe’s complete oil paintings by width and height. We then further break it down by showing average surface area and total surface area painted for each year she was active. Contextually, an otherwise small work may be large for its year and vice versa. Not only is it interesting to see how the artist’s working habits evolved over time, but our model also shows how dimensions correlate to sale price at auction.

[Interactive chart made with Flourish]

Why collect all this data? We believe the “Holy Grail” of art analytics will stem from a database of complete works enriched with auction data. We see the potential to harvest meaningful data from the images themselves and have already started experimenting toward that end using off-the-shelf solutions for identifying and searching objects depicted within paintings.
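As one illustration of what “off-the-shelf” can mean here (the article does not say which solution Artnome uses), a pretrained COCO detector from torchvision can surface the objects depicted in a painting:

```python
# Detect objects in a painting image with a pretrained torchvision model.
# Illustrative sketch only; file name is invented.
import torch, torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("painting.jpg").convert("RGB"))
with torch.no_grad():
    pred = model([img])[0]                 # dict of boxes, labels, scores
confident = pred["labels"][pred["scores"] > 0.7]
print(confident)                           # COCO class ids found in the painting
```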

Predictive Art Analytics

For our first attempt at predictive art analytics, Artnome data scientist Kyle Waters trained a random forest to predict prices using data across several dozen artists. The random forest is a popular machine learning model that builds many decision trees and makes predictions by averaging the predictions of the component trees; in general, it is more accurate than any single decision tree. The model learns basic relationships from training data and then predicts new outcomes.

Random Forest Simplified
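For readers who want a feel for the approach, here is a minimal sketch of a random-forest price model in scikit-learn. The features are illustrative guesses; the article does not describe Artnome’s actual feature set:

```python
# Minimal random-forest price model (hypothetical data and features).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("auction_records.csv")             # invented file name
X = pd.get_dummies(df[["width_cm", "height_cm", "year", "medium", "artist"]])
y = df["hammer_price_usd"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out lots:", model.score(X_te, y_te))
```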

Machine learning is an exciting new tool that could help improve art analytics. However, what many people fail to realize is that a machine learning model is only as good as the quantity and quality of the data it is trained on. We believe this gives Artnome an advantage. Because our database covers complete works and not just those that have gone to auction, we have a larger data set from which to train the model.

Again, because we have the complete works in our database, we can also create estimates for all of an artist’s work, not just those that happen to be at auction at any given moment. For this reason, we like to think of ourselves as the “Zillow of blue chip art.”

Our pricing model is admittedly in its early stages and has lots of room for improvement. For example, our current model performs poorly on the works that typically sell for the most (often the ones that are also getting the most public attention).

Chop Suey, Edward Hopper, 1929

Works from the Ebsworth auction like Hopper’s Chop Suey and Pollock’s Composition with Red Strokes are masterpieces. This means they carry “masterpiece” price tags. Both works are estimated to sell for roughly 10x their artist’s average sale price at auction (since 2000). Like the largest mansions on Zillow, these masterpieces are the hardest prices to predict using historical data because there simply aren’t that many of them. Additionally, there are a limited number of buyers who can afford them, which makes it that much more difficult to predict a hammer price at auction.

Human experts also struggle with predicting prices for top works. Just this week, Van Gogh’s Coin de jardin avec papillons failed to sell at $30M despite estimates around $40M prior to the auction. As Christie’s CEO Guillaume Cerutti shared with The Wall Street Journal’s Kelly Crow, “The air is just thin at that price.”

Coin de jardin avec papillons, Vincent Van Gogh, 1887

Though we struggled with the most expensive works, as you will see, our model did a very respectable job of predicting prices for works estimated at $3M or less, where we have a high volume of relevant data. Our predictions in this range are so close that they may actually seem boring at first: our model essentially came up with the same estimates as the experts at Christie’s. We are thrilled at these early signs that we could potentially automate pricing estimates at scale for an artist’s complete works using a machine learning model.

Selected Artworks for Analysis

We selected four works from the Ebsworth collection going to auction this week at Christie’s for analysis based on strength and availability of data from our database.

  • Horn and Feather - Georgia O’Keeffe, 1937

  • Cottages at North Truro - Edward Hopper, 1938

  • My-Hell Raising Sea - John Marin, 1941

  • Long Island - Arthur Dove, 1940

We compare Christie’s estimates to our estimates from the Artnome prediction model for each of the above works. You can see the correlation between variables in our model in the heat map below. (You may recognize this heat map as the feature image for this article. I thought it looked like a rather nice modernist painting, so I stripped off the annotations and repurposed it as art.)

[Correlation heat map]

We have also partnered with our good friends at MutualArt who offer access to auction prices and data on over 300,000 artists as part of their services. MutualArt’s insight analyst Kate Todd generously prepared pricing trends for the artists we cover in this article, as well as comps for the individual works we will be analyzing from the Ebsworth auction.

Georgia O’Keeffe - Horn and Feather

Horn and Feather, Georgia O’Keeffe, 1937

Georgia O'Keeffe (1887-1986)
Horn and Feather 
oil on canvas
9 x 14 in. (22.9 x 35.6 cm.)
Painted in 1937

Christie’s Low/High Estimate: $700,000 - $800,000
Artnome Model Estimate: $720,000

[Interactive chart made with Flourish]

Above: The average lot value for works by Georgia O’Keeffe. Note the spike in 2014, when her Jimson Weed/White Flower No. 1 sold for $44.4M at Sotheby’s.

Jimson Weed/White Flower No. 1, Georgia O’Keeffe, 1932

O’Keeffe’s Horn and Feather is a lovely work, but few would confuse it with a masterpiece like Jimson Weed/White Flower No. 1, and the estimates from Christie’s (as well as Artnome’s) reflect this. In fact, as a lifelong O’Keeffe fan, I’m not sure I would be able to identify Horn and Feather as her work out of context. It lacks the magnified, heavily cropped composition that is O’Keeffe’s signature treatment of small objects; instead, the two objects float in a relatively passive sea of negative white space.

Our friends at MutualArt provided us with a great comp for Horn and Feather. Shell (Shell IV, The Shell, Shell I), painted in 1937 (the same year as Horn and Feather), sold at Sotheby’s last year for $1,515,000, 78% above its estimate.

Shell (Shell IV, The Shell, Shell I), Georgia O’Keeffe, 1937

Though it shares a similarly subdued color palette, I think Shell is the superior work, as it exhibits the cropping and use of negative space we expect from O’Keeffe. While this is a subjective observation on my part, Artnome believes these types of observations are also quantifiable, and we are working toward that end.

If you are a regular Artnome reader, then you know that I believe all paintings by female artists are currently undervalued (research suggests by as much as 47%) and worth investing in. As a data point, O’Keeffe (perhaps the best-known female painter of all time) has an average lot value of $2,340,715 for paintings, far below that of Edward Hopper, her male contemporary, whose average lot value is $8,963,652 (2000-present). For this reason, I always root for O’Keeffe and other female artists to outperform their estimates. Needless to say, I will be rooting for Horn and Feather.

Edward Hopper - Cottages at North Truro

Cottages at North Truro, Edward Hopper, 1938

Edward Hopper (1882-1967)
Cottages at North Truro 
signed 'Edward Hopper' (lower right)
watercolor and pencil on paper
20 x 28 in. (50.8 x 71.1 cm.)
Executed in 1938.

Christie’s Low/High Estimate: $2,000,000 - $2,500,000
Artnome Model Estimate: $2,220,834

[Interactive chart made with Flourish]

Above: The average lot value for works by Edward Hopper.

As a painter trained in both watercolor and oils, I see Hopper as every bit as accomplished a watercolorist as he is a master of oils. So while works on paper generally fetch less than oils on canvas, I would not be at all surprised if Hopper’s Cottages at North Truro achieved the $2,220,834 estimate from our machine learning model.

While Hopper’s works on paper average just $318,554 at auction (since 2000), superior works can sell for much higher sums. In 2001, Charleston Slum, a Hopper watercolor on paper, sold for $1,876,000 at Christie’s on an estimate of $500,000 - $700,000.  

Charleston Slum, Edward Hopper, 1929

Our friends at MutualArt provided two additional comps below, both of which brought in significantly less at auction than Charleston Slum and less than our estimate for Cottages at North Truro.

Vermont Sugar House, Edward Hopper, 1938

Shacks at Pamet Head, Edward Hopper, 1937

Hopper’s Vermont Sugar House sold at Christie’s in 2007 for $881,000 and Shacks at Pamet Head sold at Sotheby’s in 2004 for $702,400. Both works exceeded estimates of $500,000 - $700,000. It will be interesting to see if Cottages at North Truro can rally past these prices to meet our estimate.

John Marin - My-Hell Raising Sea

My-Hell Raising Sea, John Marin, 1941

John Marin (1870-1953)
My-Hell Raising Sea
signed and dated 'Marin 41' (lower right)--inscribed with title (on the reverse)
oil on canvas
25 x 30 in. (63.5 x 76.2 cm.)
Painted in 1941.

Christie’s Low/High Estimate: $250,000 - $350,000
Artnome Model Estimate: $803,372.00

[Interactive chart made with Flourish]

Above: The average lot value for works by John Marin.

And finally, a prediction from our model outside the range of Christie’s own estimates. Our model likes this painting: even though trends suggest that the market for Marin may be headed downward, our estimate comes in at $803,372, well over twice Christie’s middle estimate of $300,000.

I don’t have access to a condition report for My-Hell Raising Sea, but it does look like there may be a crease of some sort on the right side of the canvas. Condition is likely the most important variable missing from our model, and we are actively seeking ways to incorporate it moving forward.

Marin was among the first American artists to paint abstractly and is a bridge between the figurative painters and the abstract expressionists. For that reason, works that highlight his tendency toward abstraction, like Sailboat, Brooklyn Bridge, New York Skyline (which sold for $1,248,000 in 2005), have done well.

Sailboat, Brooklyn Bridge, New York Skyline, John Marin, 1934

As a lifetime New Englander who is happiest on the northern shores of Maine, I strongly prefer Marin’s seascapes - he captures that landscape as well as any painter, Winslow Homer included. But our model does not care about my fondness for the Maine seacoast.

The comps from MutualArt suggest that Christie’s experts have it right on this one and that our model may be too high. But we are, of course, standing by the estimate from the model.

Two Sloops on a Squally Sea, John Marin, 1939

Our first comp, Marin’s Two Sloops on a Squally Sea, sold at Sotheby’s in 2016 for $212,500. While it exceeded its own estimate of $120,000 - $180,000, it fell well short of our $803,372 estimate for My-Hell Raising Sea.

Cape Split, Maine, John Marin, 1945

And our second comp, Cape Split, Maine, had an estimate of $400,000 - $600,000 but failed to find a buyer at auction just a year ago at Sotheby’s.

Arthur Dove - Long Island

Arthur G. Dove (1880-1946)
Long Island
signed 'Dove' (lower center)
oil on canvas
20 x 32 in. (50.8 x 81.3 cm.)
Painted in 1940.

Christie’s Low/High Estimate: $1,000,000 - $1,500,000
Artnome Model Estimate: $2,801,572

Like Marin, Arthur Dove is among the earliest American abstract painters. His works are simpler abstractions, and I mean that as a compliment. They have an organic feel that is supported by the use of an earthy palette.

Long Island is not a particularly sexy painting, even for Dove, but it has grown on me. It is unmistakably Dove in its pared-down composition, which features a nice balance of the sun (or moon) dwarfed by two massive monolithic forms resting on wave-like dunes. Our prediction model liked the piece more than I do, pricing it at $2,801,572, well above Christie’s estimate.

I personally much prefer the comp sent to us from MutualArt, Dove’s 1941 Lattice and Awning.

Lattice and Awning, Arthur Dove, 1941

Lattice and Awning last sold for $1,685,000 against an estimate of $1,200,000 - $1,800,000 in 2013 at Sotheby’s. I believe it to be a stronger composition than Long Island, but my data suggests Dove may not have made many paintings compared to the other artists in my database, with just 459 works listed in his catalogue raisonné. If that is the case (I think an expanded catalogue is in the works), scarcity may drive up the desirability of Long Island.

Moving Forward

In the future, we plan to leverage deep learning to harvest data from the images themselves. We believe image data will improve predictive accuracy because it lets us detect things like color, subject matter, artistic style, and composition - variables that people clearly use when valuing an artwork, but that have not previously been quantifiable in a scalable and manageable way. Until now.
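As a small, concrete example of harvesting data from images (a sketch, not Artnome’s actual pipeline), a painting’s dominant palette can be extracted by clustering its pixels:

```python
# Extract a painting's dominant colors by k-means clustering its pixels.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def dominant_colors(path, k=5):
    img = Image.open(path).convert("RGB").resize((128, 128))  # downsample for speed
    pixels = np.asarray(img).reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_))    # most common cluster first
    return km.cluster_centers_[order].astype(int)   # k RGB triples

print(dominant_colors("painting.jpg"))              # invented file name
```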

We also have some early thoughts on how to improve the detection and prediction of masterpieces as outliers in our model. One idea is to include data on exhibitions from the top museums and galleries. If we have data showing that works from a single exhibition, or combination of exhibitions, have seen a dramatic increase in sale price, other works that appeared in those same exhibitions may also receive a bump from the model. Recent research suggests the number of institutions that are influential in establishing the value of art and artists is relatively small, so this may be a fairly manageable undertaking. What I like about this approach is that we are essentially factoring the good judgment of the best curators of the last 100 years into our model in a quantifiable way.

Summary and Conclusion

In this article, we looked at new descriptive analytics driven by Artnome’s complete works database that gave us a unique view of O’Keeffe’s complete works. We then used our data to explore predictive analytics around auction prices for several pieces from the Ebsworth collection that will be going to auction this week. We also provided further context with performance history and comps thanks to our friends at MutualArt.

We think there is a ton of low-hanging fruit when it comes to applying modern analytical tools and practice to art and the art market. In addition to building better prediction models, improving available data on art and artists helps us understand these works in a new light and provides a much-needed barrier against forgery.

At Artnome, we are looking to onboard three to five clients in the next few months who are interested in benefiting from early access to our prediction model and the insights from our unique database. We would ideally like to develop long-term relationships with a few key clients as we grow the strength of both our one-of-a-kind database and our machine-learning-driven models. I can be reached at jason@artnome.com.

For those looking for a more mature solution, MutualArt offers full advisory services, including authentication, as well as self-serve tools for auction data and analysis. While there are dozens of data and analytics providers to choose from, we like Zohar and his team at MutualArt because they share our vision of data and machine learning leading to better analytics and a stronger art market.


Is Art Blockchain’s Killer App?

November 6, 2018 Jason Bailey
Still Life in the Street, Stuart Davis, oil on canvas, 1941

I wrote my first article on blockchain and art almost a year ago. Bitcoin has plummeted by 64% and Ethereum by 86% over that same period. Once thought to be a promising new way to fund ideas and startups, ICOs (initial coin offerings) are now viewed with extreme skepticism and have all but disappeared.

From an outsider’s perspective, one could easily assume in this climate that the blockchain use case for art would be dead. But from the insider’s perspective, nothing could be further from the truth.

As the “get rich quick” ICOs and cryptocurrency speculation of late 2017 have subsided, the potential for blockchain to solve real-world use cases for art has only grown. Even I have been surprised by how resilient and fast-moving the blockchain art space has been during this cryptocurrency free fall.

In this post, we talk directly with the key innovators in the space and look at some of the strongest signs of momentum in the adoption of blockchain for the world of art, including:

  • Christie’s auction house partnering with startup Artory to roll out a blockchain pilot for the auction of the Barney A. Ebsworth Collection, estimated to exceed $300 million.

  • Established artists like Ai Wei Wei and Eve Sussman exploring the blockchain and CryptoArt.

  • Ambitious startups creating innovative tools and services making it easier for artists of all abilities to take advantage of the blockchain.

While we have seen fewer headlines on blockchain and art in the last few months, it has also been the most productive time period for the space. I’m not sure if there is causation here, but I’m certainly not willing to rule it out.

Christie’s to Record Sales on Blockchain

Horn and Feather, Georgia O’Keeffe, oil on canvas, 1937

In my last article, “Art World, Meet Blockchain,” I implied that large organizations like Christie’s would be better served teaming up with smaller, more nimble startups to explore technological innovations like blockchain. Christie’s has since made the right move in my opinion, forming a partnership with the blockchain title registry Artory.

Artory brings the rapid innovation of a startup, but with a known quantity at the helm in founder Nanne Dekking, who also serves as the current chairman of TEFAF (The European Fine Art Foundation).

Though they are pitching it as a pilot, I am pleasantly surprised that Christie’s chose the highly visible Barney A. Ebsworth Collection for their foray into blockchain. The Ebsworth Collection is widely recognized as the most important privately held collection of twentieth century American art. The collection is estimated to exceed $300 million total at auction and will be entirely recorded on the blockchain.

On a personal note, I am particularly fond of many of the artists who feature in this collection, as Stuart Davis, Arthur Dove, Charles Sheeler, Marsden Hartley and others are well represented in the Boston Museum of Fine Arts.

Woman as Landscape, Willem de Kooning, oil and charcoal on canvas, 1954-1955

I asked Christie’s CIO Richard Entrup how the use of blockchain might impact the Ebsworth auction experience and if we should anticipate further adoption of blockchain from Christie's moving forward. Entrup shared:

Christie’s leadership in global sales is reflected and supported by continued investment in digital platforms and initiatives that work for our clients. Our pilot collaboration with Artory is a first among the major global auction houses, and reflects growing interest within our industry to explore the benefits of secure digital registry via blockchain technology.

We are running this as a pilot and there is no risk or we would not be doing it. Christie’s will retain all information about the buyer. The certificate relates to the object, not the owner. We will consider the future use of blockchain in our business at the end of the project. As ever, any changes will be led by our clients’ needs.

I think this crawl, walk, run approach is exactly the right one to mitigate potential concerns from their collector base while exploring the upside that blockchain could bring to their business and the art industry at large. It is clearly a win for Artory and a reflection of the trust Nanne Dekking has established in the art world. I asked Nanne for his thoughts on the partnership:

We are honored that Christie’s is teaming up with our blockchain-secured art registry for its upcoming sale of the Barney A. Ebsworth Collection of American modernism. The house will become the first major auctioneer to apply the innovative technology. For first-time buyers and experienced collectors alike, Artory provides: the reassurance that they are dealing with a vetted seller; that there will be an immutable record of the transaction; and that they will receive a certificate of sale from an independent third party—all of which encourage them to act with confidence.

Artory will create a digital certificate of each transaction for Christie’s, and the latter will provide their clients with a registration card to securely access an encrypted record of information about their purchased artwork on the Artory Registry. Artory not only gives buyers the confidence they crave, it enhances their entire experience. Offering up-to-date information from trusted resources about transactions and the market in general, Artory saves buyers time and money when performing due diligence. Furthermore, unlike current bills of sale, which are easily lost or forged, Artory’s standardized certificates of sale take the headache out of collection management, providing irrefutable proof of ownership without compromising buyers’ anonymity or privacy.

It’s essential for some of the larger institutions like Christie’s to come on board if blockchain registries are going to become mainstream in the art world. I’m rooting for Christie’s blockchain pilot to add real value in the form of better provenance and data transparency and hoping that it serves as a foundation for expansion and not just a one-time experiment.

Blockchain, Artists, and Exhibitions

Kevin Abosch (left) and Ai Wei Wei

While the Christie’s partnership signals interest among collectors for using blockchain to better catalog art, interest in and by artists has also never been greater.

{Perfect & Priceless} Value Systems on the Blockchain, a show at the Kate Vass Galerie in Zürich, Switzerland, curated by generative art expert Georg Bak, will open on November 15th and run through January 11th, 2019.

The show features many of the most innovative artists working with blockchain today, including collaborators Kevin Abosch and Ai Wei Wei, Matt Hall and John Watkinson of CryptoPunks, and blockchain art pioneer Rob Myers.

While all of the works in the show explore “value systems” as the title implies, I find materiality as a secondary theme to be the most interesting lens through which to view the work. Or as CryptoPunks artist John Watkinson puts it, “Bridging the divide between the digital and the physical.”

One of my favorite works in the show is the Chaos Machine by the Distributed Gallery, an art collective from France. (Side note: if you are an astute and regular reader of Artnome, you may remember the Distributed Gallery as the team behind the controversial Richard Prince token, whose anonymity I exposed and whom I then interviewed in great depth.)

Chaos Machine, The Distributed Gallery, 2018

Chaos Machine, The Distributed Gallery, 2018 (close up)

The Distributed Gallery’s Chaos Machine literally burns fiat (paper currency) and turns it into what they call a “chaos coin.” This is not a physical coin, but rather a blockchain-based cryptographic token. For me, the spectacle of watching the currency burn draws attention to the absurdity of wealth being stored and transmitted through such a seemingly arbitrary and fragile object as a paper bill in an age where all things are rapidly moving towards digitization.

Though there is no direct connection between the burning of the paper currency and the production of the chaos coin, it feels as if a transfer of value has been facilitated by the machine. Our mind establishes a cause and effect relationship based on a sequence in which one symbol of value is destroyed and another is born. The sequence also narrows the gap in the reverence held for established government-backed paper currencies versus the initially absurd-seeming chaos coin. Is a chaos coin really all that different from a banknote? The philosopher Bernard Aspe (as translated by Daniel Shavit) writes about Chaos Machine:

…it is because all the others place their trust in this strange object, the banknote (or the cryptocurrency unit), that I too can trust. I trust the trust of others. It is only in this way, only as trust that refers only to itself, that value «exists»…

…Behind the bill that is consumed, there is nothing - literally, and what is made visible here is above all nothingness…

…The new currency necessarily reproduces, for the most part, the aberrations of the previous one. Its main merit, however, is that it is more open about what is nothing to the principle of social interaction…

For me, Chaos Machine highlights how physical currency feels antiquated, clumsy, and vulnerable (not to mention government-dependent) in an age when music, books, and now art are increasingly moving toward the digital. With Chaos Machine, the Distributed Gallery has compressed a transition to digitization that is playing out across decades into a single act: insert the bill, watch it burn, and receive your new digital currency. This not only makes the transition to digital currency more visceral, but also makes for good theater.

CryptoPunks, printed version with paper wallet and wax seal

In contrast to Chaos Machine, the work being shown by Matt Hall and John Watkinson of CryptoPunks runs the process of digitization in reverse. They will present the first-ever printed CryptoPunks, displayed in a grid of nine unique artworks.

The CryptoPunks are a generative art project composed of 10,000 pixelated portraits of punks, with proof of ownership stored on the Ethereum blockchain. Matt and John famously released the Punks into the world for free in 2017 as one-of-a-kind digital collectables that anyone could claim. An economy quickly emerged around the characters, with the rarer Punks selling for thousands of dollars.

CryptoPunks has helped to prove out and popularize the application of digital scarcity to digital art on the blockchain as pioneered by projects like Rare Pepe Wallet. Where Chaos Machine took currency born in the physical world and turned it into a digital currency, Matt and John are taking their Punks, born digital, and giving them a new material existence as printed work for this show. It was important to them to communicate that as digital art, physical representations are only artifacts of the genuine digital asset. As John explains:

For this show, we tried to bridge the divide between physical and digital art, while still reinforcing the point that the digital, cryptographic asset represents the true "ownership" of the work, as opposed to the physical print. We did this by including with each print a "paper wallet," which is a set of 12 words that encode an Ethereum address (using the BIP-39 standard). So that it wasn't seen as subordinate to the print, and to have a little fun, the paper wallet was sealed with a custom CryptoPunk wax seal. The buyer can decide to either open the envelope and claim ownership over the Ethereum address that owns the punk, or they can simply leave it sealed, and include it with the print if they resell the work. So, one of the earliest forms of security is brought together with one of the most modern to make these works both physical and digital.
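For the technically curious, the mechanics Watkinson describes are standard. Here is a sketch of how 12 BIP-39 words map to an Ethereum address, using the Python `mnemonic` and `eth-account` packages (illustrative only, not the artists’ actual tooling):

```python
# 12 BIP-39 words encode the key behind an Ethereum address.
from mnemonic import Mnemonic
from eth_account import Account

words = Mnemonic("english").generate(strength=128)   # 12-word phrase
Account.enable_unaudited_hdwallet_features()
acct = Account.from_mnemonic(words)                  # default derivation path

print(words)          # what gets printed and wax-sealed in the envelope
print(acct.address)   # the address that "owns" the punk on-chain
```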

One way to think of these printed representations of the Punks is as physical proxies for their digital equivalents. Another artist in the show, Kevin Abosch, has long explored the idea of proxies - first through photographs as proxies for the people and objects captured within them, and more recently through tokens and tokenization.

In his work IAMA Coin, included in the exhibit, Abosch created 100 physical artworks and a limited edition of 10 million virtual artworks. The physical works are stamped in Abosch’s own blood with the contract address on the Ethereum blockchain corresponding to the creation of the 10 million virtual works (ERC-20 tokens).

Artist Kevin Abosch harvesting blood as material for his IAMA Coins

In my interview with Abosch last spring, he provided more background and detail on the IAMA Coin project:

My IAMA Coin project really is the culmination of everything I am about: identity, existence, value, human currency. And it's just a function of me being an artist with a bit of success and feeling a bit commodified... as I have said in the press already. And I started to imagine myself as a coin. In fact, I started looking at of all of us as coins, and wondered what that would look like, all of us as coins in the hands of the masses, and wanted to do that in some kind of elegant way. So naturally I started to look at the blockchain.

But still I was thinking, "I'm an artist, I'm not trying to raise money for a company, so I will tokenize myself..." and you saw how I did that - the blood, the physical work, and the virtual work of my IAMA Coin project. It has been a bit of a challenge for some people as to how an ERC20 token itself can be a piece of virtual art, or it is a placeholder for art, whichever you prefer. I think when it comes to blockchain plus art, this would be a rather extreme position.

Abosch’s second work in the exhibition is a collaboration with Ai Wei Wei called PRICELESS. It further explores the idea of proxies and asks the question, “How and why do we value anything at all?” by tokenizing photographs of moments the two artists have shared. Abosch says:

I have been using blockchain addresses as proxies to distill emotional value for some time now, and with Weiwei, we “tokenized” our priceless shared moments together. Some of these moments on the surface might seem banal while others are subtly provocative, but these fleeting moments like Sharing Tea and Walking In A Carefree Manner Down Schönhauser Allee or Talking About The Art Market are the building blocks of human experience. All moments in life are priceless.

Each priceless moment is represented by a unique blockchain address which is “inoculated” by a small amount of a virtual artwork (crypto-token) we created called PRICELESS (symbol: PRCLS). Only two ERC-20 tokens were created for the project, but as they are divisible to 18 decimal places, these works of virtual art could potentially be distributed to billions of people. Furthermore, a very limited series of physical prints was made.

One of the two PRICELESS tokens will be unavailable at any price. The remaining token will be divided into one million fractions of one token and made available to individual collectors and institutions. These artworks may of course be divided into much smaller artworks, as the PRICELESS token is divisible to 18 decimal places. It is not unusual in the art world for large works to be priced higher than similar smaller works, and so a larger fraction of PRICELESS should have a higher price than a smaller one. It is one of those peculiar ways we value things: greater size/quantity = greater value. The question is, if one token is priceless and truly unattainable, then how do we value the other token, which is made available?
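To make that arithmetic concrete, here is a back-of-the-envelope sketch based on the numbers Abosch gives. The 18-decimal convention comes from the ERC-20 standard; the world-population figure is mine, purely for illustration.

    # PRICELESS divisibility, per the description above. ERC-20 balances
    # are integers counted in the token's smallest unit.
    DECIMALS = 18
    ONE_TOKEN = 10 ** DECIMALS            # 1 token = 10**18 base units

    total_supply = 2 * ONE_TOKEN          # only two tokens exist
    available    = 1 * ONE_TOKEN          # the other is unattainable

    collector_fraction = available // 1_000_000   # one of a million fractions
    per_person = available // 8_000_000_000       # a share for everyone alive

    print(collector_fraction)  # 1,000,000,000,000 base units each
    print(per_person)          # 125,000,000 base units each

Even after carving the available token into a million collector fractions, each fraction still contains a trillion indivisible base units, which is what makes a planet-scale distribution possible.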

My understanding is that the photographs are not for sale and that the physical works are printouts of the wallet addresses that contain a “nominal amount of PRCLS token.” In this sense, the wallet addresses are a proxy for the photographs, which are in turn a proxy for the moments shared between the two artists.

Printout of a PRICELESS wallet address

As described by Abosch above, there is also a second PRICELESS token which is being distributed and is divisible enough so that everyone on the planet could receive a share.

According to the smart contract, I happen to be one of the first people to receive a share of the token, and I have enough that I can give a fraction to every person on the planet. If you are interested, email me your MyEtherWallet address at jason@artnome.com and I will send you a fraction. As a collector of Abosch’s art, I will warn you he often scrambles traditional value systems so much that you are left uncertain as to what it is you own (if anything) and what that something is worth - and I think that is exactly the way he likes it.

Ai Weiwei is not the only well-known artist from the more traditional art world exploring the use of blockchain. Renowned artist Eve Sussman, best known for translating masterworks into large-scale re-enactments, has teamed up with Snark.art to create a piece called 89 Seconds Atomized.

The work is based on her piece 89 Seconds at Alcázar, a live re-enactment that meticulously recreates the moments directly before and after the scene portrayed by Diego Velázquez in Las Meninas (1656). The piece debuted at the 2004 Whitney Biennial to great acclaim and can be seen in its original form below.

According to Snark.art:

89 Seconds Atomized shatters the final artist's proof of Eve Sussman's acclaimed video 89 Seconds at Alcázar into 2,304 unique blocks to create a new artwork on the blockchain. An experiment in ownership and collective interaction, the piece can be reassembled and screened at will by the community of collectors.
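As a thought experiment on how such an "atomization" might work, here is a sketch that slices a frame into 2,304 blocks and stitches them back together. The square 48 × 48 grid is my assumption (48 × 48 = 2,304); Snark.art has not specified the actual geometry here.

    import numpy as np

    # Atomize a frame into 48 x 48 = 2,304 blocks; the grid shape is an
    # assumption for illustration, not Snark.art's published geometry.
    ROWS = COLS = 48

    def atomize(frame):
        """Split one H x W x 3 frame into ROWS * COLS equal blocks."""
        h, w = frame.shape[0] // ROWS, frame.shape[1] // COLS
        return [frame[r*h:(r+1)*h, c*w:(c+1)*w]
                for r in range(ROWS) for c in range(COLS)]

    def reassemble(blocks):
        """The collectors' screening: stitch the blocks back into a frame."""
        rows = [np.concatenate(blocks[r*COLS:(r+1)*COLS], axis=1)
                for r in range(ROWS)]
        return np.concatenate(rows, axis=0)

    frame = np.zeros((960, 960, 3), dtype=np.uint8)   # placeholder frame
    assert np.array_equal(reassemble(atomize(frame)), frame)

Each block can then be owned independently, and the full piece only exists again when the community pools its blocks for a screening.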

I received a sneak preview of 89 Seconds Atomized at a well-attended blockchain art event hosted by Elena Zavelev and the New Art Academy at NYU this past October, where Sussman gave an artist’s talk and presented the work to the audience.

Sussman provides more detail on her motivation to explore the blockchain for her recent piece in the video below.

This project is Snark.art’s first collaboration as part of their mission to “…create a distributed system of art ownership and creation while also offering an entirely new crypto-investment opportunity and point of entry to the blockchain world.” I’m looking forward to following their future artist collaborations.

Blockchain Startups, Tools, and Innovations

Perhaps you are not Ai Weiwei or Eve Sussman, but you too would like to put your work on the blockchain for better provenance and the opportunity to sell your digital art. What about the rest of us?

There are now a dozen or so blockchain-based art markets that cater to artists of all ability and experience levels. Markets like SuperRare are evolving quickly and have even become a place for traditional auction houses like Christie’s to source talent. The French artists’ collective Obvious, which recently sold its Portrait of Edmond Belamy for $432,500, was contacted by Christie’s after being discovered on the SuperRare blockchain art market.

I asked the SuperRare folks what it was like to hear that Obvious had been scouted from their marketplace:

It was exciting and somewhat surreal to see Obvious get picked up by Christie's and have such a successful auction after starting to sell their digital works on SuperRare earlier in the year. It's fantastic to see this much attention on creators who are experimenting with the intersection of art and new technologies, whether AI or blockchain. I think this is just the beginning of a big shift in the way we think about art, value, and the potential of computers to augment human creativity.

In addition to pioneering the lively and active SuperRare market, Pixura, SuperRare’s parent company, has announced they are making it possible for anyone to launch their own market for art and collectibles on the Ethereum blockchain.

With Pixura, the goal is to lower the barrier to entry for building crypto collectible applications. When we launched SuperRare, we got contacted by a lot of creatives, entrepreneurs, and even web developers who wanted a digital asset marketplace for their idea, but didn't have the means to do the blockchain engineering themselves. So we're launching a platform that lets anyone deploy a digital asset marketplace in minutes, without writing any code. The possibilities of this nascent technology are super inspiring, and we're excited to see what people build on it.

This is huge, and I think it gets at what most artists really want: a turn-key, Shopify-style solution for selling their own digital art and collectibles on the blockchain. From early on, most of the artists I spoke with were concerned that their carefully crafted personal brand might be thrown off if they were just tossed in with dozens of other artists at random on a catch-all marketplace. For example, if I paint impressionistic floral still lifes and you paint bloody skulls, I may not want my work to be juxtaposed next to your work (or maybe I do?), but existing blockchain marketplaces do not give artists that level of control.

I personally can’t put to rest the idea of launching an Artnome gallery for digital art built on the Pixura platform. I’d like to launch a heavily curated digital gallery with a small number of works by an even smaller number of artists. I’d want my gallery to celebrate artists as heroes - the way I read about them and experienced them when I was growing up. In an age when it is no longer popular to think of artists as geniuses, I still see great art and artists as transcendent and worthy of our greatest admiration. To be honest, I miss the romantic myth-making put behind artists like the abstract expressionists that made them feel so much larger than life to me. Curation, of course, flies in the face of blockchain and decentralization - but we already have several great distributed blockchain art markets, and I want to build a house of worship for amazing contemporary digital artists, dammit. I’ll get off my soap box - but feedback is welcome here, both for and against.

One thing that would hold me back from building a blockchain market is how complicated blockchain still is for the average person to grasp. Getting your first cryptocurrency requires connecting your bank account and downloading third-party browser plugins like MetaMask in order to transact. While a recent Twitter poll I ran with 193 respondents suggests almost 75% of people are open to collecting digital art on the blockchain, most people would agree that user experience is the number one thing slowing new user adoption of cryptocurrency and collectibles.

Twitter poll taken from my @artnome account

Luckily, startups like Portion.io are investing heavily in UX (user experience design) to help streamline the process. Portion is a blockchain Dapp that allows anyone to be their own auction house on the blockchain. It is still early in their development, but they officially launched their beta last month with a collection of digital sneakers from artist Robb Harskamp. The first thing I noticed when buying my Harskamp Air Jordan 3 Retro Tinker from Portion is that I did not need MetaMask - and I didn’t miss it.

Air Jordan 3 Retro Tinker, Robb Harskamp - 1/1, Artnome digital art collection

Buyers can either "safely generate a new wallet" in the app or use MetaMask "when the functionality is built in"; either way, Portion has effectively removed a step for new buyers, making it easier to get started. I spoke to Portion CEO Jason Rosenstein about the importance of simplifying UX for blockchain markets:

Currently, there are as many crypto users as there were internet users in the year 1994. For blockchain projects to truly take off, I believe the key will be taking the cryptic out of crypto. There will come a point when people won’t need to know that the underlying technology of a particular application is interacting with the blockchain. For example, when you send an email, not many people give a shit about the amazing underlying TCP/IP protocol. For crypto projects to succeed in the future, they must cater to the general population by improving UX/UI and investing in R&D to reduce barriers to entry.

Interface for the new Portion.io Dapp

Though Portion.io launched with a digital marketplace, their plan is to quickly move into the physical space, as well, allowing anyone to auction digital and physical goods with the many advantages that come with transacting via cryptocurrency.

Indeed, the physical art market dwarfs the relatively new and niche market for collecting digital art. This was never more apparent to me than at the social hour after Christie’s blockchain conference in London. Everyone there worked in the art trade in one form or another, and they were all hungry to learn how blockchain related to the trade for art and other physical luxury goods.

The BAC (Blockchain Art Collective) has an excellent head start on building out a solution for authentication and tracking of physical art that combines IoT (internet of things) and blockchain technologies. I spoke with Jacqueline O'Neill, executive director, about what makes BAC unique:

Blockchain Art Collective has chosen to tackle the physical art space first and foremost, as our blockchain and trusted IoT solutions have been in development with our tech partner Chronicled since 2014, and they bring a lot of unprecedented value to the art world.

What anyone from an artist to a gallery to an auction house to an art logistics company should understand is that we now have the capacity to create the same level of physical scarcity - which relies on a variety of security-related, technological features made possible by blockchain and trusted IoT - for physical art in the way we are seeing digital scarcity for digital art.

This one-to-one, tamper-evident, and encrypted physical-digital link improves the not-so-unfamiliar barcoding system for managing any volume of art assets by combining the role of the artist's signature, a physical or digital certificate of authenticity, and a physical or digital catalogue raisonné into a single, secured identity that can stay with an artwork over the course of its life.

Additionally, we can now connect that unique artwork with a rich digital life that protects its authenticity, tracks its provenance, and unlocks new vehicles for artists and arts institutions to monetize their artworks.
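BAC does not spell out its protocol here, but the general pattern for a tamper-evident physical-digital link is easy to sketch: bind the chip's identity to the certificate data with a cryptographic hash and anchor that fingerprint on chain. Everything below (the field names, the sample values) is hypothetical, not BAC/Chronicled's actual schema.

    import hashlib, json

    # Hypothetical certificate bound to an NFC chip's unique identity.
    # Field names and values are illustrative, not BAC/Chronicled's schema.
    certificate = {
        "chip_id": "nfc-tag-0001",        # identity stored in the tag
        "artist": "Jane Doe",             # hypothetical artist
        "title": "Untitled No. 7",
        "provenance": ["studio", "gallery consignment"],
    }

    # Any edit to the certificate, or a clone with a different chip ID,
    # yields a different digest, making tampering evident.
    digest = hashlib.sha256(
        json.dumps(certificate, sort_keys=True).encode()
    ).hexdigest()

    print("Fingerprint to anchor on chain:", digest)

The point of anchoring only the digest is that the blockchain record stays small and public while the certificate itself can remain private; verification is just recomputing the hash and comparing.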

Blockchain Art Collective tagging sticker

A large part of what I find compelling about BAC is that it was spun out of Chronicled, which has been deploying decentralized supply chain ecosystems and building protocol-driven solutions to enhance global trade across key industries since 2014. That experience and know-how, combined with O’Neill’s art background and passion for the solution, give it a real shot at catching on as a widely used solution for physical art.

Conclusion - Is Art Blockchain’s Killer App?

So what would it look like if art were blockchain’s killer app? What would it take to successfully integrate blockchain into the art world and solve real-world problems? We would probably want to see some of the biggest institutions, like Christie’s, experimenting in a high-profile way with blockchain. Check. We might also want to see some of the world’s most influential living artists, people like Ai Weiwei, experimenting with blockchain as subject matter and as a medium. Check. And given that it is still early days, we’d want an army of startups and creative technologists working around the clock on R&D to improve existing solutions and invent new ones. Check.

I have been trying to tell people for the better part of a year now that the blockchain use case for art has very little to do with the cryptocurrency craze of late 2017. The popular opinion among most innovators in this space is that the volatility and press that came with the bull market may have done more harm than good for those trying to build out actual solutions. So perhaps we should not be surprised at all by the signs of growth and strength in blockchain and art. It may not be getting as much press as it did a few months ago, but I would not take my eye off the blockchain and art space.


The Truth Behind Christie’s $432K AI Art Sale

October 29, 2018 Jason Bailey
Left: Portrait of Edmond Belamy, auctioned off at Christie’s for $432,500. Right: outputs from Robbie Barrat's 2017 art-DCGAN project run by Tom White

Christie’s just sold the first piece of AI art to be offered at a major auction house for $432,500. The piece was consigned by Obvious, a group of three friends from France with no formal art training.

There have been a lot of questions surrounding the creation of the work sold at auction, including:

  • Does AI artist Robbie Barrat deserve credit for this work?

  • Why did Obvious exaggerate the role of the algorithm?

  • Did Obvious really say they were considering “patenting ‘their’ algorithm”?

  • What role did Obvious actually have in making the Portrait of Edmond Belamy?

  • How does Obvious view AI and art moving forward?

In this interview, I speak with Hugo Caselles-Dupré, the tech lead from Obvious, and ask the questions above to cut through speculation and get answers straight from the source. I believe Hugo is being candid and transparent in this interview (even when it is not in his favor).

My interview with Hugo is extensive and gives his unfiltered point of view. Because I have given Hugo over 6,800 words, before we dive into the interview, I’d like to share some thoughts from other members of the AI art community to offer some balance to this story.

I asked respected AI artist Mario Klingemann who deserves the credit for the Portrait of Edmond Belamy. He shared:

“The question of who deserves the credit here is not so easy to answer. One important aspect is that this work was made in the context of art and not in that of science, so there is no obligation to cite prior work or list the tools and frameworks that you use to create your art.

“When it comes to Ian Goodfellow's GAN, I see that as a production tool, similar to Photoshop in digital art or a brush in painting. As far as I know, Ian does not identify as an artist or label his creation as an artwork, so while I would see it as good etiquette to mention that his or other researchers' work was used in one's work, I would not go so far to give him artistic credit for all the creations that get made by using it.

“Now, with Robbie's contribution, it's something else - he curated the training data set, trained the model, and put it on GitHub. So in the end, Obvious just had to fork it, have it produce a number of images based on random feature vectors, and finally make their selection. So you could say that Robbie did two thirds of the work involved in this process.”

02472, Mario Klingemann, created using GANs

Another highly respected artist, Tom White, has been exploring AI and art since the mid-1990s. Tom shared his belief that Obvious used a similar codebase and the same training set.

Outputs from Robbie Barrat's 2017 art-DCGAN project run by Tom White

Indeed, Tom ran Robbie’s model himself and was able to come up with images that look almost exactly like the Portrait of Edmond Belamy. It is Tom’s opinion that the Portrait of Edmond Belamy is a work of “appropriation.”

Hugo’s thoughts on this are covered in detail in the interview below. He did ask that I include this video as partial proof that Obvious did train their own GAN.

Lastly, this is the second part of my interview with Hugo. In part one we focused on how Obvious went from failing to sell their work on eBay to making $432,500 on a single print at Christie’s. It is not required that you read the first part of the interview to proceed with the second part, though I do recommend it if you have the interest and the time.

I hope you enjoy the interview.

Interview With Hugo Caselles-Dupré

Does Robbie Barrat deserve the credit for the Portrait of Edmond Belamy?

JB: Okay. So I am going to ask the hard questions, but I think this is important because I know you want to clear the air with this interview. Would you say that Robbie deserves credit for a percentage of the work you created? Or would you say he built the camera and you used the camera? How do you think about this?  

HC: Yeah. I think this is a good question. We ask ourselves this question a lot, too. And we're like, 'Okay, what can we do with this?' But in the end, the fact that we did this physical piece and we signed with the formula is something that we wanted to do, and I think it has a lot of responsibility in the exposure of our project, too. So we owe him a lot, and that's what we said to him, that we wanted to be up front with him. So we owe him a lot. We wish him success. And we hope that if people like our project, they'll go to his project, too, and check out what he's doing. So we think he deserves something in this whole situation. And then, I considered even more. What can we really do?

JB: It’s just a matter of making it clear, and this interview is the exact right place to do it because we go a bit deeper. So could you have made the Belamy project without using Robbie’s code?

HC: Yeah, yeah, I really think so, because we had our eyes on many different data sets. We already knew that we wanted to do something like a classical art movement, like portraits or something like this. We already had this in mind. And so when we saw this, it was like, 'Okay, this is really convenient, so we can try working with this.' Yeah. We would have definitely found a way to do it ourselves. Before going over his code, I was already doing lots of things with GANs for my master's degree, so I already had lots of things going on with GANs. It was just a matter of collecting the data sets, and we would find a way because we have some scraping abilities. We used his code mostly for the scraper, for getting all the data with the art. And in the end, the code that created the Belamy collection -- the actual GAN [implementation] -- is the one that I talked to you about.

JB: The PyTorch DCGAN?

HC: I was already doing this with my master's degree. I was already involved in the GAN coding.

JB: You used the scraper mostly to gather the images?

HC: Yeah. Gather the data. So it's a Python script that when you run it, it collects all the data, like for the portrait class, for example. So you can collect all the data of the portraits.

JB: Where did you get the images?

HC: In the wikiart.com.

Why did you say, “Creativity is not only for humans,” implying that AI was autonomously making the work, even when you knew that was a false statement?

JB: What about your narrative that “creativity isn’t only for humans”? Were you playing up the machines and now saying that is not what you meant?

HC: Yeah. Exactly. I think that's what happens when you're doing something and nobody cares, then you’re just goofing around and doing really clumsy stuff. And then when everybody has this view, then they go back to what you did before and then you have to justify it. We kept justifying, because we still think that this part of the GAN operator that creates the images is really interesting and there is some form of creativity there … and we just thought it was cool to just do it like this. For us, it was just a funny way to talk about it.

JB: You didn't know you were going to be under the microscope.

HC: If we had known we were going to have 400 press articles on what we do, we most definitely would not have done that. But at [that] moment we were like, 'Yeah, it’s silly, okay, whatever, let's put this.' But retrospectively, when we see that, we are like, 'That's a big mistake.'

JB: All you can do is admit the mistake. What creative behavior do GANs exhibit? Many feel they don’t exhibit creative behavior.

HC: For me, the fact that you give it a certain number of examples and then you can continue to see results in the latent space, for me, the gap has to be [bridged]. So necessarily, there's some kind of, like, inventing something. So I guess there is some kind of creativity for me… because creativity is a really broad term, so it can be misunderstood, because creativity is something really related to humans. But at the basic, low level, it was given a set of images, and it can create images that do not belong to the training set. So that's something that is transformed by the model, and there's some kind of creativity. So it's just the way you interpret the word "creativity." Maybe from certain perspectives you can say it's creativity.

JB: So it sounds like you believe it is dependent upon your personal definition of creativity? Some people say GANs just approximate distributions and that is not really creative - but it sounds like you think it is creative?

HC: Yeah. It's like, whatever you think creativity is, if we fit on the same definition, we are obliged to agree on something. So if we go to the same definition that creativity is something like, let's say, this ‘Concept A,’ then GANs will fit this concept. Or not? It's just a point of view thing, I guess -- and I understand that people can argue that [it’s] not great, we understand that, but it's just a point of view.

Did you claim you were going to “patent” the algorithm even though it was not yours?

JB: So there was an article where you are quoted saying you decided not to patent your algorithm. You mentioned to me that you never actually said that. And the formula on the front of your painting is by Ian Goodfellow, but you don’t credit him there.

HC: Yeah. Yeah. “Belamy” translates to “Goodfellow” in French. So I think this argument is really not good, because we said many, many, many times that “Belamy” is the French translation of “Goodfellow,” because we admire Goodfellow and that he created GANs, and so we put the formula there. So it's a mathematical expression -- it's not ours, it's not his. It does not belong to anybody. So it's exactly like GANs, and we have respect to pay to Goodfellow because he created this paper, but it's open source. So we never thought about copyrighting the GAN algorithm. It doesn't make sense. Because for me, as a researcher in machine learning, it's really ridiculous to think that; you cannot put a patent on a theorem or an algorithm, because it's part of the general knowledge of humankind, and anybody can use it. So yeah. There are more and more phrases in articles [that] we never said.
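For reference, the expression signed in the corner of the canvas is the min-max objective from Goodfellow et al.'s 2014 GAN paper, in which a generator G and a discriminator D play a two-player game (rendered here in LaTeX notation):

    \min_G \max_D V(D, G) =
        \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
      + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

The discriminator D learns to tell training images x apart from generated ones, while the generator G maps random latent vectors z to images that fool it.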

What contributions did Obvious make in creating Portrait of Edmond Belamy?

JB: So somebody else wrote the GAN code you used, correct? Did you use DCGAN for the Belamy painting?

HC: Yeah, yeah. Exactly. So we used DCGAN. It turns out this is the implementation available in PyTorch, the Soumith Chintala repository, so we used that. We tried both variants, because in my research I already had the code for many different types of GANs. And in the end, when I did the full search, DCGAN was fine. It was not [about] technological performance; we were just like, 'Hey, it's just this new way of using GANs.' I guess something like BigGAN is interesting for AI art, but it's also research. Like, there's an actual technological innovation there, and we didn't claim to do something like this. We just wanted to have a regular GAN that worked well and allowed us to do what we wanted to do.

Because right now we are working on a project with 3D GANs, and I guess this time the technological innovation is a bit better, I guess. We are in contact with some researcher at the Max Planck Institute to use one of their models in order to create and train a GAN. And in this project, I think we are getting more involved in how it works. But since it was our first project … everybody's got to start somewhere, so you start with this. It seems like a reasonable idea. This project seems good. So let’s roll with it. So yes, we used this GANs, which we did, and we curated the assets in order to have the best result that we got. We tried many super resolution algorithms, and so we tried one with GANs, we tried others that don't really use machine learning techniques, more traditional techniques. And in the end, we found an enhancer, and so that worked really, really great and that gave really beautiful results, so we were like, 'We think it's really cool, and we are just going to stick with this.'

So yeah, we just tried a bunch of things, and when we thought the result was correct enough for our first project, we said, 'Okay, now let's try to show it to the world' and maybe use it to finance our further research and see where we can go with it. Because the actual first idea was, like, 'Okay, let's try this.' If we managed to get a little bit of [exposure] and people were interested, then we would start new projects and continue with that. If nobody cared, we were just going to stop working; my two friends were planning on going back to their jobs -- I have my Ph.D., they have jobs -- and we would go on with our lives. And the fact that it blew up really changed everything.

I guess, yeah, a really big misconception [about it is] that it's just our first project, so we wanted to do this.
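For readers who want to see what "a regular GAN that worked well" looks like in code, here is a minimal generator in the spirit of the stock PyTorch DCGAN example Hugo references. The 64 × 64 layer configuration is that example's default, not necessarily Obvious's exact settings.

    import torch
    import torch.nn as nn

    # Minimal DCGAN generator, modeled on the stock PyTorch example;
    # sizes are the 64x64 defaults, not necessarily Obvious's settings.
    class Generator(nn.Module):
        def __init__(self, nz=100, ngf=64, nc=3):
            super().__init__()
            self.main = nn.Sequential(
                nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
                nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
                nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
                nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
                nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf), nn.ReLU(True),
                nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
                nn.Tanh(),  # pixel values in [-1, 1]
            )

        def forward(self, z):
            return self.main(z)

    # Sample random latent vectors and generate candidate images,
    # from which the humans then curate.
    g = Generator()
    z = torch.randn(16, 100, 1, 1)
    candidates = g(z)  # 16 images, each 3 x 64 x 64

The workflow Klingemann describes earlier maps directly onto the last three lines: sample "random feature vectors," generate a batch of candidates, and select by hand.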

JB: Some of the engineers I have worked with would call this using off-the-shelf technology. There is not a lot of technical innovation going on here on your part. If it’s not technical, then where is the innovation in the Belamy project?

HC: So for this project, we guess the innovation is … we presented it in such an easy, not subtle way. Since it's really easy to comprehend, I think that's where the innovation is. And since it has resonated with so many people, there must be something here that is different from what was done before. But at first, we wanted to do something original, something unique. But you can't really control what people think of what you are doing. So yeah, maybe the fact that it was really accessible was the key. But we don't really know. So for further projects, we have lots of ideas.

But also one thing that must really be considered here is that we don't have any money. We don't have any computational power, so we spent lots of money just trying this first project, and when we felt we had enough results, we stopped there, because it was costing us money and time. We couldn't really afford to do something really innovative, because if you don't have the computational power, you just can't. So of course, I knew about Progressive GAN from the day they posted it on Reddit, and I wanted to try it the day after, but I just couldn't. It's exactly the same thing with the BigGAN paper: it's like, 'Okay, it requires like 512 GPU cores,' something we don't have, we don't have the budget for this. So for now, if you want to train this, you just can't. So yeah, we want to do this innovative stuff, but we've got to start somewhere to get some financing and continue working, having some credibility, having opportunities to get access to more computational power. It was a way to get the means of doing something really innovative. At the time we created the Belamy Family, we didn't have the means to do something really creative -- or, I can’t really say that. It was really hard for us to try something really innovative, because when you try something really innovative -- and I see it in my research, too -- you need to try and fail a lot. So if you fail, you've trained that model for nothing, and then you have to pay for it. So we couldn't really afford that.

Why didn’t you open source your code?

JB: A few more quick questions. I let some folks know I was interviewing you and asked if they had any questions. Someone is asking if you really want to make this understandable to the public, why don’t you open source the code?

HC: Yeah, okay, why not? We can do it, but it's already open source. Like I told you, we used DCGAN, so yeah, it's already online. But we can do it if some people are interested. Mainly we just want to point to the different things that we used, and that's what we did in our Medium blog post. So there is the DCGAN PyTorch repository. We also were inspired by the art-DCGAN repository of Robbie Barrat. …We can release the version of the data set that is curated, but yeah, it's not really interesting. We just removed all the paintings that have double faces or that were really like a real portrait. But we are not against open sourcing. For me, open sourcing this would be like taking someone's code and open sourcing it; it's not yours, so just point to where you got the code, and I got this code from Robbie Barrat's GitHub and Soumith's GitHub. You can just use it. These tools are already available.

JB: So open sourcing it would imply that you are taking more credit than you want to take because you did not actually write any code?

HC: Yeah, yeah, yeah. I think you're making a good point. If I were in the shoes of Robbie Barrat, I wouldn’t like to see my code on something that Obvious released, because, yeah, it's my code. And we already talked about him with lots of journalists, saying we were inspired by what he did, and on our main blog post, the link to his GitHub is available. It's one of the first things we talk about. And we were also really up front with him about this -- like, when we did this, we were like, ‘Okay, we need to send him a message and ask him if he's okay with that,’ and he told us he was okay. He told us, “Okay, I thought you were using just the code but not training models.” And then we said, “No, no, we trained our own models. We tweaked the hyper-parameters to make it work.” We really had fun with it, and in the end, he said it's totally okay. And he asked us to make references to his code, and that's what we did on our website. We really did not want to steal his ideas or his code; we wanted to be really honest with him.

Where are GANs going from here?

JB: Given that you have some passion around art, how do you look at AI and art in general? What is AI adding to art, who is making interesting work, how do you position it relative to the larger sphere of art and your own passion for it?

HC: I think one of the reasons we got so much exposure is that AI art is revealing what people think about AI, revealing the fear and the misconceptions around it. And that's why it gets so much attention. So in the art spectrum, I would say this is really interesting because it is showing something about society, and in this way, I guess art is a great way to reveal the mood of society and what people are thinking right now. So this is really representative of the current atmosphere around AI and all the misconceptions. I think that's one of the reasons AI art is also interesting: it goes to show something about the humans of today. GANs were created in 2014, [and] the first results were not that great. Now, every six months we get a really big leap in the technology. So we got GANs, then we got DCGAN, then we got Progressive GANs, which gave the first results with faces that were realistic; that was a big leap. Then BigGAN really, I think, is a huge leap, too, because the diversity of images is really big. So from the technology point of view, since the researchers are moving really fast, I think there will be more and more possibilities around this technology.

And it's not only GANs. GANs are a huge example and something really interesting right now, but there will be more breakthroughs in the future, because people are putting lots of effort into AI research. So I think there will be more and more tools created, and that should create new artists and new art and new artistic approaches using these tools. Because we really do think this tool is something incredible. When we talk about photography, we really mean it: when photography first appeared, it was a technology for highly qualified engineers, and we have the same thing with AI tools right now. So maybe AI could be something like photography, which sparked a whole new art movement. We hope it's that way, but we can't really know for sure. I don't know if it will last as long as photography or be as important, but I think there is a good chance.

JB: I look at something like Google Deep Dream, and once the code became open source, anyone could add photos. When I added photos, I was at first amazed, thinking, “This is better than Salvador Dali,” but after about 10 of them it loses its novelty. So nobody gets excited about Deep Dream images anymore because they realize even their grandmother can do it with the click of a button. How will GANs be different from Deep Dream?

HC: I see what you're getting at. I think here again we can compare it to photography. Anyone can take their cell phone and photograph anything. So what makes photography really incredible or really interesting? I think, as in photography, you have to add craftsmanship, which will be [amplified] by your tool, and the message that is conveyed. When you see the photographs of Weiwei with his middle finger and things like this, you see that anyone could have taken that photograph, but the way he did it was really relevant and attached to a strong message. So that's why it's really important. And I think that as time goes on, we will see that the artists with the best ideas, or the most creative ways of using the tools, will eventually be recognized for that, and not just for using something really new. And that's exactly something that could be said here, too: 'Okay, you are just using Deep Dream for GANs.' We totally agree with that. In order for people to start getting cameras and taking photos, and for the great artists of tomorrow to rise, you first need to present the technology.

Also, I think that what we do may spark people to get to know this tool and maybe be more creative than us. And we don't care; we're not in a competition. We want to make this technology shine, so that more and more people know about AI and about machine learning -- I'm passionate about machine learning, so I want people to know about it. I want them to know how great it is and how interesting it is. And I think in the end, you cannot fool people: artists will eventually get sorted out, and the best will naturally rise. This process has played out for a million years, and it's always the same thing. We hope and believe that the best artists will get what they deserve and the exposure that they really deserve.

So I think it's the craftsmanship that will be the determining factor in why AI art is interesting in the future. I don't think it will be a succession of technological innovations and things like this. I think with what we have right now, you can already explore in so many ways that there are things to be created that are potentially masterpieces. And you need that work and that dedication to find these masterpieces.

JB: When you project forward from your description, it sounds like humans are going to become more and more important in differentiating good GAN art, not less and less. The public has this dystopian vision that AI is going to replace artists. But what you just described is the opposite.

HC: I totally agree with that.

JB: What are some examples of humans making good GAN art? What is good GAN art?

HC: Yeah, what is an example of good GAN art? To reproduce something that was done before is something that has been done a lot through the history of art. I think that's a bit of what we do. Like, portraits have been something really important in art for a long time, and reinventing portraits and seeing them from a different perspective is something that is interesting. So we were inspired by that, and we thought it was a really striking way to show why this is interesting.

Like you are saying, I don't think machines will replace artists and things like that. In the end, art is made for humans, so it needs to have this human part. And I guess that's one thing that is in our work: it is really goofy and human in a way, so I guess that's why it resonated with people, too. Like, if a machine made something really striking only for machines, we couldn't comprehend it, so it would stop being interesting for us. And in the end, it's people that enjoy art, not machines, so it doesn't make any sense for the human part to be totally removed from the art process.
