Artnome
Blog

Exploring art through data using the Artnome database. 

 

Artnome Turns 2, Announces Partnerships and Advisory Board

July 23, 2019 Jason Bailey
Wayne Thiebaud, Happy Birthday, 1962

Artnome turns two this month and we are celebrating by announcing two exciting new partnerships and a brilliant new board of advisors to help us navigate the next two years! Both our partnerships and our advisory board align perfectly with our two primary goals of improving the art historical record and promoting artists working at the intersection of art and tech through interviews and articles. 

As many of you know, Artnome co-curated a show on the history of generative and AI art with Georg Bak called Automat und Mensch, which had a successful opening last month in Zurich.

From Left: Robbie Barrat, Herbert W. Franke, Mario Klingemann, Kate Vass, Jason Bailey, Georg Bak

Working with Kate Vass and her team was a fantastic experience, and I am excited to announce our partnership, making Kate Vass Galerie the official gallery of Artnome! Kate’s most recent shows on blockchain art and generative art, respectively, line up perfectly with the topics and artists we cover and love. 

Under this new partnership, I will continue to curate for the Kate Vass Galerie, make introductions for up-and-coming artists, and link to and advertise the Kate Vass Galerie as the official Artnome gallery. In keeping with both of our missions and goals, we are looking at adding a blockchain-based gallery to make collecting digital art accessible to the next generation of collectors.

Jared Tarbell, Substrate.GZB, 2019

Artnome’s second partnership is with Art Recognition, another amazing Swiss company with strong female founders. Particle physicist Dr. Carina Popovici and her co-founder Christiane Hoppe-Oehl have developed AI that assigns a probability of forgery based on photographs of an artwork. Specifically designed to augment, not replace, human connoisseurship, Art Recognition produces a heat map alerting human experts to areas of potential concern. I could not be more thrilled to team up with Art Recognition to help improve authentication and fight forgery in the art market.

 
Example heat map from Art Recognition. The regions highlighted in blue represent the salient features which support the prediction that the artwork is fake, whereas areas highlighted in red show evidence against the classifier’s decision.

As Artnome’s primary partners, Kate Vass Galerie and Art Recognition provide funding to help offset the cost and time I put into writing articles for Artnome, gathering data, and creating analysis in order to make it freely available to the public. Put more simply, when you support Kate Vass Galerie and Art Recognition, you directly support Artnome in our mission to fight art forgery and highlight underrepresented artists whom we feel deserve more attention!

Last but not least, I have asked an amazing list of industry experts, all of whom have already played important roles in the growth of Artnome over the last two years, to become formal advisors as Artnome moves forward and continues its growth. These are all people for whom I have the deepest respect and who have already provided critical guidance and opened up opportunities for Artnome. 

Anne Bracegirdle, Former Associate Vice President, Art+Tech at Christie’s


Anne reached out to me in 2018 to invite me to moderate two panels at Christie’s inaugural Art + Tech event in London. That was a huge leap of faith, given I had zero experience presenting or moderating. Having “moderator at Christie’s” on my resume has opened countless doors and created new opportunities and friendships ever since. Anne invited me again to speak in 2019, this time on AI and art, which has further raised the profile of Artnome and helped solidify our brand as experts in art and tech. Anne has great passion for and commitment to the digital art space and will play a key role in raising its profile as “the” important genre for our generation.

Bernadine Bröcker Weider, Co-founder & CEO, Vastari


I’ve had the pleasure of attending several conferences around the world with Bernadine, and she brings amazing energy and leadership everywhere she goes. Bernadine is the CEO and founder of Vastari, the largest online facilitator of international touring exhibitions and loans between private collectors, museums, and exhibition producers. Vastari facilitated over 456 matches of content to venues in 2018 and works with thousands of museums around the world, including eight of the top ten most visited.

Nanne Dekking, CEO & Founder at Artory and Chairman of the Board of TEFAF


I’ve had the pleasure of sharing the stage with Nanne at several conferences, and we immediately hit it off based on our common mission and principles. Nanne is an eloquent advocate for change in the international art market and is using his long art career to improve the art historical record and to bring transparency to the art world. As Founder and CEO of Artory, Nanne has developed the first standardized data collection solution by the art world, for the art world.

Nanne was asked two years ago to become the chairman of the board of trustees of TEFAF. In this role, he has contributed greatly to making the vetting rules for authentication and the selection criteria more transparent and free of commercial interest. Prior to Artory, Nanne held senior roles with Sotheby’s and Wildenstein & Co.

Jessica Fjeld, Assistant Director, Cyberlaw Clinic, Harvard Law School


Though I did not anticipate it, I have met many talented art lawyers since starting Artnome. None of them came across as understanding art, law, and tech as well as Jessica does. I interviewed Jessica for a recent article exploring why copyright for AI art is so complicated. She navigated the intersection of art, tech, and law with ease while putting even the most complex aspects in terms that anyone could understand. I am thrilled to have her unique perspective and invaluable advice moving forward.

Ahmed Hosny, Machine Learning Research Scientist, Dana-Farber Cancer Institute


Ahmed’s “Green Canvas” project was the inspiration for my interest in machine learning and art, and he is solely responsible for my initial explorations of blockchain and art. Ahmed is a tech visionary and the first person I go to when I run out of ideas or have questions about new technologies and how they work. You can learn about his amazing work using machine learning to identify and treat cancer, along with other important projects, on his website or LinkedIn profile.

Marion Maneker, Editorial Director at Art Media Holdings (Penske Media Corporation)


I cold called/emailed Marion several years ago after stalking him on his excellent site Art Market Monitor to tell him I had the world’s largest database of complete works. Rather than hang up on me, he agreed to meet with me in person and educated me on the art market. Several years later, he is still educating me, and it has shaped the analytics I am developing from my data to make them far more relevant than they otherwise would be. If you are interested in the art market, Marion’s site is the best place to learn about it and to follow important news and developments.

Thanks for your support over the last two years! If you have questions, suggestions, or concerns you can always reach me at jason@artnome.com.


Robbie Barrat Interview At Christie's With Artnome

July 12, 2019 Jason Bailey
Interview on Stage with Artist Robbie Barrat

Artnome has two missions:

  • Improve the art historical record through better data to help fight against forgery.

  • Highlight work by artists who deserve more attention, including artists working with tech and artists from underrepresented communities.

Thanks to my good friends at Christie’s, Anne Bracegirdle and Marisa Kayyem, I was able to share both of Artnome’s missions on stage to a live audience at Christie’s NYC last month.

First I presented on machine learning and analytics in relation to the art market; then I interviewed my favorite AI artist, Robbie Barrat, on stage. Christie’s did a phenomenal job recording the presentations, and I am excited to share them with you below.

Machine Learning and Analytics: Predictions for the Art Market


An Interview with Robbie Barrat and Jason Bailey

Again I am super grateful to Christie’s for giving me this platform to shine a light on art and tech. I encourage you to watch all the presentations from the summit here.

If you found the topics in the videos interesting, I would also encourage you to sign up for our newsletter below. And as a teaser, be on the lookout for some big announcements from Artnome in the next few weeks to celebrate our second birthday!

Subscribe

Sign up with your email address to receive news and updates.

Machine Learning for Art - Deep Kitsch or Creative Augmentation?

July 8, 2019 Jason Bailey
The Untouchables, Pen and Ink drawing by Marco Marchi, colorized with style2paints in Runway ML

Machine learning is at its best when used as a tool for augmenting human capabilities, not for replacing them. And while we may not all be able to build our own machine learning models from scratch, new tools like Runway ML and Joel Simon’s forthcoming Artbreeder are opening up access to these machine learning models for everyone. Will this flood our screens with infinite images of deep kitsch? Or can machine learning augment human creativity on a larger scale and point towards a new direction for art?

Animation by Alexander Reben using real + drawn faces model by Joel Simon in Artbreeder.

My most important mentor, my high school art teacher Marco Marchi, taught me that creativity can start with new tools, but it should never stop there. This is especially true with machine learning models that can apply eye-candy filters at the push of a button, only to then devolve into derivative and contrived visual effects.

Marco also taught me that the beauty in making art is in the discovery process. Likewise, art appreciation is about unpacking the artist’s discovery process and finding your own discoveries along the way.

The future of work according to The Jetsons

There is not much discovery in pushing a button to apply a single filter or machine learning model, nor is there much to unpack, but I believe it can be a starting point for a more interesting artistic journey if used as a tool for augmentation.

Marco passed away recently, and I have been thinking about him a lot and revisiting his work. Over the weekend I played around with style2paints, a machine learning model for colorizing sketches, and tried it on several of Marco’s sketches.

Earth Gift, pen and ink drawing by Marco Marchi

Earth Gift, pen and ink drawing by Marco Marchi, colorized in style2paints

I wondered what Marco would have thought of these new machine learning tools. I decided that he would approach them with openness and curiosity rather than judgment and suspicion. With that in mind, I decided to do the same. In the rest of this article, I take style2paints for a spin, documenting and sharing my creative process along the way, while trying to discover whether machine learning for the masses will augment creativity, produce more kitsch, or both.

AI as a tool for augmenting creativity and introducing unpredictability

After playing with Marco’s ink drawings, I decided to run other images with simpler compositions through style2paints to try and learn its quirks and breaking points. Etchings and drawings from online databases like the MET’s open access collection provided me with plenty of interesting fuel.

Portrait of Charles Meryon, 1853, Félix Henri Bracquemond

Portrait of Charles Meryon, updated in style2paints

I pretty quickly figured out that recognizable images with fewer details and less shading actually produced the most realistic outcomes. But the most realistic results are rarely the most interesting.

I find breaking tools is the fastest way to get to their creative potential. By playing with a combination of filters for hue, brightness, and grayscale in Runway ML, I was able to break the style2paints model a bit.
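If you want to experiment with the same trick outside Runway ML’s interface, the filter stack is easy to approximate offline. Below is a rough Pillow sketch of the hue/brightness/grayscale combinations described above; the input path, hue shift, and enhancement factor are placeholders to play with, not the settings I used.

```python
# A rough offline approximation of the hue/brightness/grayscale filter
# combinations described above, using Pillow in place of Runway ML's filters.
import numpy as np
from PIL import Image, ImageEnhance, ImageOps

img = Image.open("sketch.png").convert("RGB")  # hypothetical input drawing

# Rotate the hue channel in HSV space.
hsv = np.array(img.convert("HSV"))
hsv[..., 0] = (hsv[..., 0].astype(int) + 40) % 256
img = Image.fromarray(hsv, "HSV").convert("RGB")

img = ImageEnhance.Brightness(img).enhance(1.4)  # overexpose slightly
img = ImageOps.grayscale(img).convert("RGB")     # collapse back to grayscale

img.save("sketch_broken.png")  # feed this into style2paints and compare
```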

Portrait of Guido Arnot, 1918, Egon Schiele

Portrait of Guido Arnot, updated in style2paints

There are known problems with machine learning models showing skin tone bias in favor of lighter-skinned people, sometimes called “whitewashing” (due to disproportionate or biased training data skewed towards Caucasians). With certain filter combinations in Runway ML, you can actually approximate the reverse effect and produce a wider variety of skin tones, as seen in these Egon Schiele drawings.

Standing Nude Facing Right, 1918, Egon Schiele

Standing Nude Facing Right, updated in style2paints

I then tried some more unusual subject matter as an input to see if I could break the model that way - for example, Thomas Rowlandson’s Comparative Anatomy, which comically compares the human head to that of an elephant and a bull.

Comparative Anatomy, 1800-’25, Thomas Rowlandson

Comparative Anatomy, updated in style2paints

I loved the results from these cartoonish images. It seemed the fewer details I provided the model and the less realistic the characters, the more interesting the results were.

I thought about other sources of simplified, minimalistic inputs like cartoons and coloring books. Pretty quickly I wondered what I could get by grafting coloring book images onto historic etchings.


I started in Photoshop, but the results were a bit disjointed. I then printed the images out and drew on them directly using pen and marker to unify the images and make them my own, and then re-ran the model.


Drawing directly on the printouts by hand was producing the most interesting results. Eager to explore, I made a quick dumb sketch of a monster staring at Homer Simpson to see how the model would handle it.


To my surprise, it produced a pretty compelling ghost from my crappy doodle.


I now had an image that felt really satisfying and unique. Sure, it is not going to the Louvre anytime soon, but I was developing a visual language and an understanding of the tool that gave me some control while leaving a delightful amount of unpredictability.

I also really liked the idea of blending highbrow historical etchings from the MET database with lowbrow cartoons from children’s coloring books. I tried a few others, including a mash-up of a Picasso drawing and Garfield that I called Picarfield.

Picarfield Glitched, Jason Bailey, 2019

The clean version of Picarfield was kind of boring. But once I started breaking the model a bit more, it produced some glitchy versions full of artifacts, which I actually like in this case.


I often find running towards things that repel you and exploring them more deeply is a great trick for prompting creativity. One of my favorite painters, Albert Oehlen, is a master at this. In a recent interview Oehlen shared:

If someone stands in front of one of my paintings and says, 'This is just a mess', the word 'just' is not so good, but 'mess' might be right. Why not a mess? If it makes you say, 'Wow, I've never seen anything like that,' that's beautiful.

Oehlen’s Computer Paintings transform the Mac paint/inkjet aesthetic into high art.

Albert Oehlen’s Prix Ars Electronica (1991)

I ultimately decided creating analog work to feed into the model was the more interesting direction. Introducing handmade images feels like a good way to fight back against the often cookie-cutter output of machine learning models. I made a super creepy contour drawing of my wife (sorry, sweets, you don’t really look like this) and ran that through style2paints.

Screen Shot 2019-06-30 at 5.24.53 PM.png
style2paints - June 30th 2019 at 5.27.04 PM.png

And then “put a bird on it”… just because.

style2paints - June 30th 2019 at 5.51.27 PM.png

To redeem myself, I did a few more accurate (and flattering) life drawings of Erin. I found it more than a little ironic that playing with AI models on my computer had led me to do some life drawing for the first time in years.


In the end I didn’t create any masterpieces (there is a reason I spend more time writing than making art these days), but the process was an adventure, and the more I experimented, the closer I got to making interesting work. I think process and exploration are the secret sauce that makes any art interesting.

The reason I admire and write about artists like Helena Sarin, David Young, and Robbie Barrat so regularly is that they are going beyond simply applying models: they are deeply exploring the creative process, and it shows in their work.

Pocos Frijoles, an artisan coffeeshop, Helena Sarin

Helena is prolific and generous in sharing her process on Twitter and will soon be launching a new site. It is fun to watch her evolve as she explores different directions and influences, continuing to blend her handmade paintings and drawings with machine learning tools.

Nude Study, Robbie Barrat, 2019

Robbie recently looked to traditional methods of artistic training as part of his process, sharing “I’m trying to get better at using neural networks for making artwork by engaging in more traditional exercises (figure drawings this time).”

(b62a,unknown01,1) from the Tabula Rasa series, David Young, 2019

David Young breaks down machine learning models to the fewest possible components as a way of “revealing the materiality of AI.” As he describes them on his site, “These images are an exploration of how a machine learns. They were generated from no more than a handful of training images.”

All three artists use machine learning models to make art, but they could not be more different. That is because the tools are only a starting point - it is their process which makes their work rich and unique.

On the opposite side of the spectrum we have “artists” giving their machine learning models human names and putting wigs on robots to reinforce disingenuous claims that AI is replacing human artists. This is sad, as art has great potential to educate an anxious and confused public on the actual capabilities of machine learning. When the AI hype dies down and we head into the next AI winter, we should see fewer uninformed dystopian AI art projects. These projects will thankfully be lost to history while those using AI and ML as a tool instead of subject matter should continue to grow and break new ground.

I am also optimistic that with Runway ML and Artbreeder, we will see some more traditional (analog) artists bringing deeply established creative process to these new (digital) tools. Those more traditional artists may not have the technical chops to train their own machine learning models, but I am hoping that a rich sense of artistic process will expand the work that we see being created today into new and exciting directions.


This article was written in loving memory of my high school art teacher Marco Marchi. Marco gave purpose to the lives of thousands of students by being a friend and mentor and teaching us to think creatively, independently, and to challenge the status quo. The idea that I could someday be like him when I grew up helped me make it through high school and into college. Rest in peace, Marco.


Bye Bye Camera - an App for the Post-human Era

June 24, 2019 Jason Bailey

In a climate where endless hype has people paranoid about AI stealing humans’ jobs, a new app called Bye Bye Camera uses neural networks to eradicate humans from the world altogether. Bye Bye Camera simply removes people from photos and fills in the background. The app is being launched today by an artist who goes by “Damjanski” and his art collective Do Something Good.

According to Damjanski:

I’ve created this project together with two of my longtime collaborators, Andrej and Pavel, from Russia. A couple of years ago I created a collective called Do Something Good where I connected all the people I’ve collaborated with online. By now we’re 16 people around the world from different fields and collaborate on different projects.

The app takes out the vanity of any selfie and also the person. I consider Bye Bye Camera an app for the post-human era. It’s a gentle nod to a future where complex programs replace human labor and some would argue the human race. It’s interesting to ask what is a human from an Ai (yes, the small “i” is intended) perspective? In this case, a collection of pixels that identify a person based on previously labeled data. But who labels this data that defines a person immaterially? So many questions for such an innocent little camera app.

At a high level, Bye Bye Camera works by combining the YOLO object detection model with a neural network that analyzes the background and tries to repaint it.
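The app’s own code isn’t public, but the general recipe is easy to sketch: find the people, mask them out, and repaint what’s behind them. Here is a minimal Python approximation, substituting torchvision’s pretrained Mask R-CNN for YOLO as the person detector and OpenCV’s inpainting for the repainting network; the file names are hypothetical.

```python
# A minimal sketch of the Bye Bye Camera recipe: segment the people,
# then inpaint the background behind them. Mask R-CNN stands in for
# YOLO here, and cv2.inpaint for the app's repainting network.
import cv2
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

img = cv2.imread("photo.jpg")  # hypothetical input
tensor = torch.from_numpy(img[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    pred = model([tensor])[0]

# Union of all confident "person" detections (COCO class 1).
mask = np.zeros(img.shape[:2], dtype=np.uint8)
for label, score, m in zip(pred["labels"], pred["scores"], pred["masks"]):
    if label.item() == 1 and score.item() > 0.5:
        mask |= (m[0].numpy() > 0.5).astype(np.uint8)

# Dilate so the fill covers the mask's soft edges, then repaint.
mask = cv2.dilate(mask, np.ones((15, 15), np.uint8)) * 255
result = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("byebye.jpg", result)
```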

Artnome editor Erin Bailey disappears into unruly hedges

Sure, this functionality has existed in other photo apps and tools, but in Damjanski’s hands, this feels more like an artwork than a business venture. For one thing, other tools are used to make corrections - Bye Bye Camera just wipes everyone out!

Damjanski reinforced my perception of his app, sharing with me:

A lot of friends asked us if we can implement the feature to choose which person to take out. But for us, this app is not a utility app in a classical sense that solves a problem. It’s an artistic tool and ultimately a piece of software art.

The idea of “app as artwork” may seem strange at first. It makes more sense when you remember that most serious AI artists feel strongly that it is the models they are training, and not the output, that should be considered the artwork. Once you arrive at this conclusion, hijacking the Apple App Store and Google Play Store as your own personal art gallery makes a lot of sense. Again, from Damjanski:

First and foremost I see this as an artistic tool for me to create more art. It’s almost like a new ‘brush’ I am using. A year ago we created another tool called “No Shutter App” that only takes pictures while the phone’s shutter loads. In both cases it’s a way for me to enhance my artistic practice. But we also like opening it up to the public and see what other people create with it. A commercial success is not the primary goal here.

Demo photo provided by Bye Bye Camera

The results from Bye Bye Camera are often eerie and remind me of work by one of my favorite photographers, Lewis Baltz. Baltz created gorgeous photographs of generic office parks completely void of humanity.

Lewis Baltz, South Corner, Parking Area, 23831 El Toro Road, El Toro (1973)

While Baltz’s photos are sans humans from conception, there is, of course, also a long and rich history of manual and mechanical removal of humans from photographs for a variety of reasons.

In Soviet Russia it was common practice for censors to remove people from photographs as part of propaganda efforts. The image below was taken near the Moscow Canal while Nikolai Yezhov was serving as water commissar. After Yezhov fell from power, he was killed and his image was systematically removed from the photograph.

Voroshilov, Molotov, and Stalin with Nikolai Yezhov; the censored version with Yezhov removed

It is important to remember that in the early days of Soviet censorship, the photograph still had an air of truthfulness to it. So carefully removing a person from a photograph had the haunting effect of visually rewriting them out of history.

In today’s post-truth society where photography as a reliable record of reality has long since dissolved, erasure of humans from photographs takes on a different meaning. In 2007, artist Michael Somoroff erased the figures from photographer August Sander’s famous photographic series People of the 20th Century, creating his own series titled Absence of Subject.

August Sander, Pastrycook (1923)

Michael Somoroff, Absence of Subject (2007)

Unlike the Russian censors, Somoroff’s goal was not to rewrite history, but rather to reimagine it through a postmodern lens while paying homage to Sander. By removing the human figures intended as the primary point of interest, he pushes the background into the foreground, creating an entirely new image in the process.

Regular Artnome readers will recall that I used a similar process in my previous article in an attempt to create a neutered version of David Hockney’s Portrait of an Artist (Pool with Two Figures) by using algorithms to remove and replace the human figures.

Jason Bailey, Olive Synchronism (After Hockney), version 2 (2019)

My goal was to make a copyright-free version of famous paintings by systematically identifying and removing the “visually important” parts and replacing them using Photoshop’s content-aware fill. Though the goals of Somoroff, Damjanski, and myself are different, the approaches are quite similar. This is not all that surprising, as these new AI/ML-driven tools have built-in affordances that encourage appropriation and radical remixing. We can expect this trend to only grow more extreme over the next few years, manifesting in deepfakes and other forms of extreme photographic and cinematic modification.

If Damjanski’s name sounds familiar to you, it may be because you remember him from MoMAR, his guerrilla art exhibition in the Jackson Pollock gallery at MoMA. The project used AR (augmented reality) to replace Jackson Pollock paintings with work from generative artist David Kraftsow’s YouTube glitch series.


According to Damjanski, MoMAR is “an unauthorized gallery concept aimed at democratizing physical exhibition spaces, museums, and the curation of art within them.”

Since we think of Artnome mascot Frida as being a human, we were delighted to see that Bye Bye Camera feels the same way!

While Bye Bye Camera does not co-opt a physical space as MoMAR did, Damjanski is introducing his art into a public distribution channel, the app store, which we do not typically associate with being a “gallery.” With more than 93 million selfies posted daily, I, for one, welcome Bye Bye Camera as perhaps our last best hope of stemming the avalanche of endless selfies in favor of some more Baltz-ian subject matter in my Instagram feed.


Mass Appropriation, Radical Remixing, and the Democratization of AI Art

May 26, 2019 Jason Bailey
Huckleberry Objectivity (after Rauschenberg), Jason Bailey - 2019

The tools for using artificial intelligence and machine learning to create art are currently complex enough that only a small number of highly technical people have mastered them. This came up on a recent panel I moderated on AI art at CADAF (Contemporary And Digital Art Fair) in NYC.

Specifically, I asked the panel if you “need a degree from MIT and to be a white guy over 40” to create art with artificial intelligence and machine learning, as many of the panels I attend on AI art have seemed particularly homogenous. The question was, of course, somewhat rhetorical, as I am well aware of great work being done with AI by non-white and non-male artists. But the balance for panels so far does seem to skew in that direction.

One way to increase diversity in art created with artificial intelligence is to continue to invest in making code open source. However, not everyone can or will learn to code; with the development of new tools like Runway ML, they may not have to.

I have been following Runway ML since May 2018, when I read cofounder Cristóbal Valenzuela’s excellent essay “Machine Learning En Plein Air: Building Accessible Tools for Artists.” In the essay, Valenzuela compares the barriers artists currently face in using AI to the barriers that would-be artists faced in creating their own pigments and paints prior to the invention of the collapsible paint tube.

Valenzuela points out that many credit the portability of the paint tube with making it practical for artists to paint en plein air (outdoors) in oils. The great painter Pierre-Auguste Renoir even credited the invention of the paint tube with triggering Impressionism and helping to usher in modern art at large.

“Without colors in tubes, there would be no Cézanne, no Monet, no Pissarro, and no Impressionism.” - Renoir

Like Valenzuela, I believe the democratization of artificial intelligence for artists and designers will drive a revolution in aesthetics. Specifically, I believe we will enter an era of mass appropriation and radical remixing of visual materials like we have never seen before.

In this blog post, I argue that it is the nature of AI and ML as art-making tools that they require an enormous amount of visual material as fuel to train their models. It is also their nature that the resulting artworks are essentially radical remixes of the work used to fuel them.

We can already see this trend of “appropriation” and “remixing” of visual material in the work of accomplished AI artists. Mario Klingemann’s Lumen-award-winning piece The Butcher’s Son, for example, was trained on a large number of pornographic images.

The Butcher’s Son, Mario Klingemann, 2018

Klingemann explained in an interview with Fast Company in 2018 that he chose to train on pornography because it is “one reliable and abundant source of data that shows people with their entire body.” He then added that sports would have been another source, but he is not that into sports.

Similarly, AI artist Robbie Barrat has famously trained models on well-known nude portrait paintings and even trained a Pix2Pix model on the complete Balenciaga online fashion catalog.

AI Fashion, Robbie Barrat, 2018

Barrat’s most recent work remixes appropriated imagery in a slightly different way. For his series of artworks Corrections, Barrat starts with images of classical paintings by old masters and applies a custom “flesh finding” algorithm.

 
Saturn Devouring His Son (After Peter Paul Rubens), Robbie Barrat, 2019

Barrat shared the genesis of his Corrections in a recent interview I did with him for the catalog of the Automat und Mensch exhibition.

When I was in France in Ronan’s studio [French painter Ronan Barrot], I saw that he had a scene that he was not satisfied with, and he covered over the parts that he didn’t like with bright orange paint. And then he filled those parts back in. I thought that was really interesting, and he called those “corrections.” And during our confrontation, he also corrected a lot of my AI skulls. I was really fascinated with the painting over the parts you don’t like and redoing it. I thought that was a really cool way of doing it.

So I basically am teaching the neural networks to do pretty much the same thing. I’m using Pix2Pix, and it is basically trying to learn the transformation from a patch of a painting with the center part missing back to the full patch of the painting. If you look up “neural inpainting,” you’ll find this.
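To make the setup Barrat describes concrete, here is a small sketch of how such neural-inpainting training pairs could be constructed: the network’s input is a patch with its center blanked out, and its target is the intact patch. This is my own illustration of the general technique, not Barrat’s actual code, and the patch size and hole fraction are arbitrary choices.

```python
# Build (input, target) pairs for a Pix2Pix-style inpainting model:
# the input is a painting patch with its center blanked out, and the
# target is the intact patch the network learns to restore.
from PIL import Image

def make_inpainting_pair(path, size=256, hole_frac=0.4):
    target = Image.open(path).convert("RGB").resize((size, size))
    masked = target.copy()
    hole = int(size * hole_frac)
    x0 = y0 = (size - hole) // 2
    # Blank the center region; the network learns to fill it back in.
    masked.paste((255, 255, 255), (x0, y0, x0 + hole, y0 + hole))
    return masked, target

masked, target = make_inpainting_pair("old_master_patch.jpg")  # hypothetical file
```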

While Barrat and Klingemann both typically appropriate and remix from large public data sets of images, other AI artists like Helena Sarin, David Young, and Anna Ridler choose to train their models on their own photography and hand-drawn or painted artwork. Anna Ridler, for example, shot a large corpus of tulip photos to train her Mosaic Virus video piece, which draws “historical parallels from ‘tulip-mania’ that swept across Netherlands/Europe in the 1630s to the speculation currently ongoing around crypto-currencies.”

Anna Ridler, tulip photos used to train Mosaic Virus

Even when training models using their own materials and avoiding appropriation, the process of using neural networks to create art is ultimately still one of remixing.

Some artists might argue with me here and point out that neural nets are creating entirely new works, not remixing old ones. But there is no question that the visual materials that artists curate and train the models with have a huge impact on the end results. It is this process, in particular, that I am referring to as “radical remixing” - radical in that I agree it is far different from traditional visual remixing through techniques like collage or even sampling in music.

Once you see the pattern of appropriation and remixing present in the work of AI art pioneers, it becomes easier to imagine what this might look like once the tools become more accessible to artists and designers at large. I believe it will start a new aesthetic revolution that will change the world through art, advertising, and political propaganda - all heavily based on mass appropriation and radical remixing of visual imagery.

How close are we to the democratization of AI for artists? I decided to be the guinea pig and jump into Runway ML to see how challenging it would be to play with these tools and create new images.

Runway ML - Artificial Intelligence for Augmented Creativity

I have wanted to explore Runway ML and write an article about it for many months now, but I thought it would take a lot of time to learn how to use it. I was wrong. In less than an hour I was up and running, creating multiple projects. Pretty quickly I became convinced that Runway ML is onto something huge and will be the Photoshop for the next generation of artists and designers.

Before I share more about the projects that I built using Runway ML, I should explain a bit about how the tool is constructed at a high level. The team at Runway ML does not necessarily write the actual algorithms for artists and designers to use. Instead, they have designed a framework to make it possible to integrate the latest algorithms from academic research into an intuitive interface that does not require programming skills on the part of the artist or designer.

For example, the first thing I tried in Runway ML was to colorize an early black-and-white scene from The Wizard of Oz. Runway ML has added Jason Antic’s excellent DeOldify model into their interface for color correction. It was super easy: I just loaded the clip in Runway ML and applied the filter. I didn’t even need to read the instructions or documentation.

The whole process took less than 15 minutes, and most of that was processing time. The interface was intuitive enough that I could just use it.
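For those who prefer to skip the interface entirely, the same DeOldify model can be run directly from Jason Antic’s repository. A minimal sketch based on the repo’s example notebooks; exact entry points may differ between versions, and the clip path is hypothetical.

```python
# Running DeOldify on a clip directly, outside Runway ML. Based on the
# repository's example notebooks; details may differ by version.
from deoldify import device
from deoldify.device_id import DeviceId

device.set(device=DeviceId.GPU0)  # DeviceId.CPU also works, just slowly

from deoldify.visualize import get_video_colorizer

colorizer = get_video_colorizer()
# render_factor trades color vibrancy against flicker and artifacts.
colorizer.colorize_from_file_name("oz_clip.mp4", render_factor=21)
```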


With some newfound confidence that I could use the tools in Runway ML, I began to explore and run other models. I started thinking more about what visual materials I would want to play with. This is just the nature of Runway ML - you think, ‘I’ve got access to these cool models - what visual material can I apply them to?’ The tools in Runway ML simply do not do much until you feed them visual material; hence, my belief that we are hurtling towards an era of mass appropriation and radical remixing spurred by the inevitable democratization and adoption of these new tools.

My favorite visual materials are the artworks from 20th century art. Unfortunately, all the work created since 1923 (the work I love most) is still under copyright. With Robbie Barrat’s Corrections in mind, I wondered if there was a way to identify the visually important parts of these artworks, not to correct them, as with Barrat’s work, but to surgically remove their “hearts,” leaving only the parts that are unimportant enough to be legally shared with the masses as new works of parody.

Runway ML had exactly the model I was looking for: “Visual-Importance,” which trained “neural networks on human clicks and importance annotations on hundreds of designs” to identify the “visually important” and “relevant” areas of images based on human perception. So I decided to run some copyrighted images from 20th century art through the model to identify the important parts of the images and then isolate and remove them.

I’m not a lawyer, but I remembered from my interview with Jessica Fjeld, assistant director of the Cyberlaw Clinic at Harvard Law School, that fair use of images is largely based on the “amount and substantiality” of the portion of the work you are pulling from. You are most likely to run into problems if you take the most memorable aspect, often referred to as the “heart” of the work. My entire process is based on using the most advanced technology we have at our disposal to surgically remove the “heart” and to remix the “visually unimportant” scraps into a new work of parody.

I started by running David Hockney’s wonderful Portrait of an Artist (Pool with Two Figures), which sold for $90M at Christie’s last fall, through the model. The heat map below, produced in Runway ML, shows the most “important” and “relevant” areas of the painting highlighted in white.


I then took this image into Photoshop and ramped up the contrast to isolate the important areas from the less important areas in clear blocks of white on black.


I erased the “visually important” areas of the artwork, leaving only the irrelevant portions. I was actually really happy with these results and almost stopped there.


To go a step further in transforming the work and making it my own, I used Photoshop’s content-aware fill. It uses an algorithm to fill in all the white spaces from the remaining areas of the work - the areas that were deemed “visually unimportant” by the neural net in Runway ML.
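My pipeline ran through Runway ML and Photoshop, but the same steps can be approximated in a few lines of OpenCV. A sketch, with hypothetical file names and cv2.inpaint as a crude stand-in for content-aware fill:

```python
# Approximating the "visual neutering" pipeline with OpenCV.
import cv2

img = cv2.imread("hockney.jpg")
heat = cv2.imread("importance_heatmap.jpg", cv2.IMREAD_GRAYSCALE)
heat = cv2.resize(heat, (img.shape[1], img.shape[0]))

# Step 1: "ramp up the contrast" -- threshold the heat map into a
# hard mask of the visually important regions (white on black).
_, mask = cv2.threshold(heat, 128, 255, cv2.THRESH_BINARY)

# Step 2: erase the important areas and fill them from the leftovers.
neutered = cv2.inpaint(img, mask, inpaintRadius=7, flags=cv2.INPAINT_NS)
cv2.imwrite("neutered.jpg", neutered)
```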

Olive Synchronism (After Hockney), version 2, Jason Bailey, 2019

Though I love Hockney and the original painting, I believe my version is a new work which has an entirely different feel. I am quite fond of it. It has a vibe similar to Lewis Baltz, one of my favorite photographers who is known for finding beauty in desolation. The new version is most notably absent of the emotion, narrative, and price tag present in Hockney’s original painting.

I am, of course, not the first artist to create art from art. There is a long tradition of artists painting other artists’ paintings within their own painting. These paintings about paintings were especially popular in the 17th and 18th centuries.

The Art Collection of Archduke Leopold Wilhelm in Brussels, David Teniers the Younger, 1650

Marcel Duchamp famously painted a mustache onto Leonardo Da Vinci’s Mona Lisa in 1919 and named the work of parody L.H.O.O.Q., which spoken in French translates roughly to “she has a hot ass” in English. Duchamp considered the work a “rectified ready-made,” a new work transformed by his addition of facial hair and the new title.

L.H.O.O.Q., Marcel Duchamp, 1919

Artists have long used direct appropriation of visual materials as a core part of their work - AI is just a new tool with deep potential to take it to another level. Collage was raised to an art form by Picasso and Braque in the 1910s, and other artists like Kurt Schwitters, Hannah Höch, John Heartfield, Jess Collins, and Richard Hamilton spent their careers mastering it.

Roy Lichtenstein’s entire body of work was based on appropriating images from comic strips. Lichtenstein would enlarge, isolate, and crop elements directly appropriated from pop culture to create new works of art. His first appropriated work was directly lifted from a comic strip of Mickey Mouse and Donald Duck fishing. He titled the 1961 work Look Mickey.

I decided to extend the chain of appropriation and identify and remove the important elements from Lichtenstein’s appropriated work below.

Sketch for Nut Aestheticism (after Lichtenstein, after Disney), Jason Bailey - 2019

Nut Aestheticism (after Lichtenstein, after Disney), Jason Bailey - 2019

In this case I think I like my erased work better than my final version with the missing areas automatically filled in. The version with the white splotches would make a really cool graphic on a t-shirt.

Notably, I am also not the first artist to erase portions of another artist’s work and to claim that the results are a new work of my own authorship.

In 1953, artist Robert Rauschenberg asked himself if it was possible to create a new artwork entirely through erasure (as I have). He started exploring this idea by erasing his own drawings. Feeling the results were not sufficiently creative, he decided to ask Willem de Kooning, whom he looked up to as an artist, if he could erase one of his drawings instead. After some hesitation, de Kooning gave Rauschenberg a drawing created with mixed media that he was fond of, but also knew would be a real challenge for Rauschenberg to erase.

Robert Rauschenberg, Erased de Kooning Drawing, 1953. Traces of ink and crayon on paper, with mat and hand-lettered label in ink, in gold-leafed frame, 25 1/4 x 21 3/4 x 1/2 inches (64.1 x 55.2 x 1.3 cm)

de Kooning chose well, and it took Rauschenberg two months to erase the piece. Rauschenberg finished it by putting it in a gilded frame and having his friend, artist Jasper Johns, add an inscription reading “drawing [with] traces of drawing media on paper with label and gilded frame.” The work now hangs in SFMOMA.

Rauschenberg was also no stranger to appropriation. In 1979, he was sued by photographer Morton Beebe for using a photo Beebe had taken of a cliff diver in 1971 in his own 1974 piece titled Pull. In a letter responding to Beebe, Rauschenberg argued:


Dear Mr. Beebe,
I was surprised to read your reaction to the imagery I used in Pull, 1974. Having used collage in my work since 1949, I have never felt that I was infringing on anyone’s rights as I have consistently transformed these images sympathetically with the use of solvent transfer, collage and reversal as ingredients in the compositions which are dependent on reportage of current events and elements in our current environment hopefully to give the work the possibility of being reconsidered and viewed in a totally new concept.

I have received many letters from people expressing their happiness and pride in seeing their images incorporated and transformed in my work. In the past, mutual admiration has led to lasting friendships and, in some cases, have led directly to collaboration, as was the case with Cartier Bresson. I welcome the opportunity to meet you when you are next in New York City. I am traveling a great deal now and, if you would contact Charles Yoder, my curator, he will be able to tell you when a meeting can be arranged.
Wishing you continued success,
Sincerely
Robert Rauschenberg

Perhaps not surprisingly, Rauschenberg’s estate has been one of the more liberal estates when it comes to opening works under copyright for scholarly use and public consumption (which is partly why I felt it was okay to use the erased de Kooning drawing above in this post).

In homage to Rauschenberg as a great appropriator and master remixer of visual imagery, I ran his 1964 work Buffalo II, which recently sold for $89M at auction, through my visual neutering process.

Huckleberry Objectivity (after Rauschenberg), Jason Bailey - 2019

I love Rauschenberg, but I think I actually like my “visually unimportant” version even more than the original.

Rauschenberg passed away in 2008, but from what I know of him, I believe he would appreciate my posthumous collaboration in the spirit of his own visual remixing and appropriation.

Enthused by the results from processing Rauschenberg’s largely abstract Buffalo II, I was curious about how my process would handle a completely abstract work. Since de Kooning seemed okay with Rauschenberg’s erasure of his drawing, I figured he would not have minded if I ran his painting Interchange through my process, as well.

Persimmon Precisionism (after De Kooning), Jason Bailey - 2019

De Kooning’s Interchange is best known for reportedly selling for $300M in 2015. Mine is quite nice, as well - I don’t miss the heavier black lines. I feel like my version almost has a Diebenkorn feel to it.

Conclusion

Historically, when new tools for copying, manipulating, and multiplying existing images become available, we see an upsurge in appropriation and remix-based art. We saw this with Andy Warhol and Robert Rauschenberg, who co-opted screen printing for fine art, and in photography with artists like Sherrie Levine and Richard Prince.

Series of people who do not actually exist created using GANs - https://www.thispersondoesnotexist.com/

With AI, we have tools sophisticated enough to generate endless convincing images of people who do not exist and to map our faces and physical movements onto people who do exist using motion transfer, as if they were our own personal puppets. Add to this the fact that we are making these powerful tools available to the most tech-savvy generation ever, one that “gets” artistic appropriation as seen in its love of KAWS, and you have the makings of an artistic revolution.
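The “endless faces” part is already a commodity: the site cited in the caption above serves a newly synthesized face on every request. A tiny sketch, assuming the site still returns a JPEG from its root URL:

```python
# Pull a few never-before-seen GAN faces from the site cited above;
# each request returns a freshly generated image.
import requests

for i in range(3):
    resp = requests.get(
        "https://www.thispersondoesnotexist.com/",
        headers={"User-Agent": "artnome-demo"},  # some hosts reject empty agents
    )
    with open(f"nobody_{i}.jpg", "wb") as f:
        f.write(resp.content)
```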

Think of Photoshop’s impact on visual culture for everything from politics to pop culture over the last 20 years. If you multiply that by ten, you will get my approximation of the impact accessible tools for using AI could have on art and design in the next 20 years.

Subscribe To the Artnome Newsletter

Sign up with your email address to receive news and updates.

50 Famous Artists Brought to Life With AI

May 21, 2019 Jason Bailey
Pablo Picasso with Jacqueline Roque

I’m working on a longer article about democratizing AI for artists, but in the process of writing that article, I started using Runway ML and Jason Antic’s deep learning project DeOldify to colorize old black-and-white photos of artists - I couldn’t stop. So I decided to share an “eye candy” article as a preview of my longer piece.
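Before the eye candy: if you want to reproduce these colorizations yourself without Runway ML, DeOldify’s image colorizer can be called directly. A minimal sketch based on the repository’s example notebooks; version details and the photo path are assumptions.

```python
# Colorizing a single photo with DeOldify directly, based on the
# repository's example notebooks; details may differ by version.
from pathlib import Path

from deoldify import device
from deoldify.device_id import DeviceId

device.set(device=DeviceId.GPU0)  # or DeviceId.CPU

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=True)
result = colorizer.get_transformed_image(Path("picasso_bw.jpg"), render_factor=35)
result.save("picasso_color.jpg")
```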

When I was growing up, artists, and particularly twentieth century artists, were my heroes. There is something about only ever having seen many of them in black and white that makes them feel mythical and distant. And something magical happens when you add color to the photo: these icons turn into regular people who you might share a pizza or beer with.

That distance begins to collapse a bit and they come to life. The Picasso photo above, for example, always made me think of him as this cool guy who hung out in his underwear all the time. But the colorized version makes him seem a bit frail and weak, and maybe even a tinge creepy.

Photos of artist couples, in general, seem to really hammer home their humanity. I think it is because so many photos of artists seem staged or posed. But when we catch them with their spouse or lover, they are their relaxed selves for a candid moment. You can almost imagine inviting them over to play cards.

Lee Krasner and Jackson Pollock

Joan Miró and Pilar Juncosa

Alfred Stieglitz and Georgia O’Keeffe

Willem and Elaine de Kooning

Other photos feel even more magical and distant after the deep learning auto colorization. The image below with Frida Kahlo crouching next to a deer, for example (my favorite), feels somewhat otherworldly. Likewise with the famous photo of Salvador Dali flying through the air and Yayoi Kusama photographed with her spotted horse.

Frida Kahlo

Salvador Dali

Yayoi Kusama

I also really enjoy watching the algorithm trying to figure out how to colorize the actual paintings from the artists. It turns James Ensor into a bit of a pale zombie, but has him painting in a neon palette fit for a blacklight. Kandinsky’s palette shifts almost entirely to purples and blues and takes on an almost tribal feel.

James Ensor

Wassily Kandinsky

Jackson Pollock

Georges Braque

Raoul Hausmann and Hannah Höch

There are some known issues with AI and machine learning models being overly trained on white people and thus struggling to properly represent people with nonwhite skin tones. You can see this a bit in the colorized image of Picasso, in which he appears more pale than the olive/bronze tone we are used to seeing in known color photos. Jason Antic, the developer working on DeOldify, takes this very seriously and just yesterday tweeted the following:

On the question of skin tone bias in DeOldify: We take this issue seriously, and have been digging into it. We're going to be overhauling the dataset to make sure it's driving more accurate decisions in the next few weeks. Overall, there seems to be a red bias in everything.

"Everything" includes ambiguity in general object detection. It can even be seen in Caucasian colorings (see example below- left is original, right is DeOldify). So overall it seems that DeOldify needs a better dataset than just ImageNet. We're prioritizing it -now-.

Example of bias toward red from Jason Antic’s Twitter stream

That being said, it's a challenging problem and it'll be hard to verify. Everybody is different, so making this work for all cases will probably be something we'll need years to perfect. So please keep an eye out for updates and keep trying it out for yourself!

That said, it does a reasonable job of not “whitewashing” non-Caucasian artists, considering it has a ways to go before perfecting skin tones, regardless of color.

Jacob Lawrence

Alma Thomas

Wifredo Lam

Rufino Tamayo

Aaron Douglas

Kazuo Shiraga

Isamu Noguchi

The algorithm also seemed to struggle with the oldest images. This makes sense to me given there is less fidelity and, therefore, less input to guide the algorithm. With less guidance, the algorithm sometimes has to get creative, as with the Monet photo below.

Claude Monet

Paul Cézanne

Auguste Rodin

Paul Gauguin

Many of my favorite DeOldified photos are the ones that show artists we are familiar with but either rarely or never have seen photographed in color.

Egon Schiele

Gustav Klimt

Edvard Munch

Leonora Carrington

Hilma af Klint

Piet Mondrian

Henri Matisse

I also really enjoy the photos of the artists in their studios and at work. This younger-looking Francis Bacon photo is among the most convincingly colorized photos in the batch I converted.

Francis Bacon

Alice Neel

Agnes Martin

Helen Frankenthaler

Robert Motherwell

Bridget Riley

Louise Bourgeois

Barbara Hepworth

Hans Hofmann

Mark Rothko

Jean Dubuffet

Giorgio de Chirico

Frank Auerbach

Alberto Giacometti

Eva Hesse

Louise Nevelson

Hope you enjoyed seeing these artists in color as much as I did. In the next article, the one I set out to write when I got distracted, I will go into more detail on Runway ML and how it is making these remarkable new AI tools accessible to everyday artists and designers.


Solving Art's Data Problem - Part One, Museums

April 29, 2019 Jason Bailey
Joseph Siffred Duplessis (French, Carpentras 1725–1802 Versailles). Benjamin Franklin (1706–1790). 1778. Oil on canvas. Oval, 28 1/2 x 23 in. (72.4 x 58.4 cm). The Metropolitan Museum of Art. The Friedsam Collection, Bequest of Michael Friedsam. 32.100.132. https://www.metmuseum.org/art/collection/search/436236

I recently came back from a conference in Bahrain that focused on, among other things, artificial intelligence and machine learning in art. I am as excited as anybody about the potential to apply these new tools to art and art history, but we do not have all that much data about art in a format that is clean, accessible, and easy to analyze. Moreover, without quality data, these new machine learning tools do not add much value to the discourse and use of art.

Lack of data has caused other problems, as well. People debate the exact number (which is likely unknowable), but many suggest that 15-20% of the art in museums and on the market is either forged or misattributed. A lack of quality data on art in an easily accessible format contributes to this problem.

So how do we solve the problems around quantity, quality, and accessibility of data in art? This question has been my focus for the last five years as I have built out the Artnome database of artists’ complete works along with new analytics that can only be derived from such a database. However, tackling a problem of this scale requires collaboration and effort from many different experts and groups attacking the problem from many different angles, including museums, collectors, estates, galleries, and auction houses.

In this first part of my series on art and data, I speak with Neal Stimler, Senior Advisor at the Balboa Park Online Collaborative. Neal served over a decade at The Metropolitan Museum of Art in New York City in successive positions. He worked on rights and permissions, designed digitization workflows for The Met’s collection at scale, oversaw partnerships with the Google Cultural Institute and Wikimedia communities, among other organizations, and was the project manager for The Metropolitan Museum of Art’s Open Access program that launched in 2017. Neal’s expertise in cultural heritage has deep roots in data and digital asset management, but it also incorporates areas of practice that include copyright policy, education, public engagement, operations management, and cross-reality technologies.  

JB: Thanks for joining us, Neal. Let’s start with the basics. What is Open Access?

NS: The term open access is derived from open academia, where the standard is a Creative Commons Attribution license or better. Open-Access (OA) content - whether we are talking about a piece of art, a piece of writing, or another work - is free of most copyright and licensing restrictions and is often available to the user without a fee. For a work to be OA, the copyright holder grants everyone the ability to copy, use, and build upon the work without restriction. I recommend the essential book Open Access by Peter Suber and Creative Commons’ overview on the topic. The video that most inspired my work in Open Access was “A Shared Culture.” A key aspect of engaging Open Access, too, is awareness of and dedication to supporting the public domain.

The adoption of open access in museums and the GLAM sector is more recent than in the academy. In the cultural heritage sector, professionals and supporters center around the GLAM-Wiki and OpenGLAM communities of practice. These communities advocate for open-access policies for data, digital assets, and publication resources from galleries, libraries, archives, and museums (GLAMs). Practitioners within and external to cultural institutions build tools to make these world heritage resources available to the public for uses ranging from commercial to creative to scholarly.

JB: What is involved with a museum making its collection available online? How long does it take for a museum to transition from being closed to open access [OA]?

NS: Some resources to consult in this process include The Rights and Permissions Handbook (American Alliance of Museum OSCI 1st Edition; Rowman and Littlefield, 2nd Edition), “Copyright Checkpoint,” and the “Copyright Cortex.” Some museums may also consider RightsStatements.org and International Image Interoperability Framework (IIIF) to address back-end rights management and image services. The “Collections As Data” project and “Museum APIs” wiki may also be useful resources.  

After performing a thorough rights assessment on the assets in question, and after consulting with licensed legal counsel in their jurisdiction, museums then need to build tools that provide mass self-serve access to data and digital assets. These tools typically come in the form of a museum's collection online website, a public application programming interface (API), and a GitHub repository of data in .CSV and .JSON formats. Data should be offered under the same permissions and legal frameworks as the associated image assets.
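
To make this concrete, here is a minimal sketch - Python with the requests library - of the kind of self-serve access such an API provides. It uses The Met's public Collection API; the object ID is arbitrary, and the field names are the ones the API documents.

import requests

MET_API = "https://collectionapi.metmuseum.org/public/collection/v1/objects/{}"

def fetch_object(object_id: int) -> dict:
    """Fetch a single object record as JSON from The Met's open access API."""
    response = requests.get(MET_API.format(object_id), timeout=30)
    response.raise_for_status()
    return response.json()

record = fetch_object(436535)  # an arbitrary painting record
print(record.get("title"), "-", record.get("artistDisplayName"))
print("Public domain:", record.get("isPublicDomain"))
print("Image:", record.get("primaryImage"))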

Importantly, for a data set to be useful to the broadest spectrum of the public, it must include not only identifying or “tombstone” data for objects, but also rich contextual data like object descriptions, provenance, bibliography, artist biographies, or other data that help users to interpret and understand objects.  The API serves application developers and partners, while .CSV and .JSON formatted data mainly supports researchers and scholars. Open-access content should be hosted in partnership with crucial aggregation platforms such as Wikidata, Wikimedia Commons, and Internet Archive. Other partners and aggregators might be impactful given the nature of the type of collections. Museums, too, should be mindful to evaluate and make decisions with respect to cultural and ethical considerations of open access in collaboration with communities and scholars.  
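
For researchers on the .CSV side, a sketch like the following - pandas, run against a local copy of MetObjects.csv from The Met's GitHub repository - shows how quickly the tombstone-versus-contextual completeness Neal distinguishes can be gauged. The column names are assumptions based on that file as of this writing.

import pandas as pd

df = pd.read_csv("MetObjects.csv", low_memory=False)

# Tombstone fields are nearly always present; contextual fields are sparser.
for column in ["Title", "Artist Display Name", "Object Date", "Medium"]:
    filled = df[column].notna().mean()
    print(f"{column}: {filled:.0%} of records populated")

# Restrict to unambiguously reusable records (assumes the column parses as boolean).
public_domain = df[df["Is Public Domain"] == True]
print(len(public_domain), "public domain objects out of", len(df))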

The process from being “closed” to going open access depends on an institution’s preparedness. An advanced level of digital transformation is required for an institution to manifest policies and deliver the necessary tools to provide quality open access services to the public. An absolute commitment to open access and sincere leadership at the executive and upper-management levels are required for open access initiatives to succeed. Open access should represent a broader philosophical shift across all aspects of the museum’s operations and programming. An internal working group or project team drawn from relevant areas across the organization should be assembled, led by a project manager who owns the project vision and has ultimate decision-making authority. Partnering with allied organizations engaged with an institution’s users, and working directly with Creative Commons, is strongly recommended as the best-practice approach.

Attributed to Duncan Phyfe (American, born Scottish, 1770–1854). Armchair. 1810–15. Made in New York, New York. United States. Mahogany and brass. 32 3/4 x 20 7/8 x 17 3/4 in. (83.2 x 53 x 45.1 cm). The Metropolitan Museum of Art. Gift of C. Ruxton Love Jr. 60.4.3. https://www.metmuseum.org/art/collection/search/268

JB: What are the benefits of institutions implementing open access policies?

NS: The benefits of museums adopting open access policies are certain, clear, and proven. First, museum users expect open access by default. Museums need to redefine their obligation to “access” in the 21st century. The collection is not theirs; they hold it in the public’s trust, and that comes with responsibilities to serve a broad spectrum of users.

Museums employing a clear Creative Commons standards-based policy and well-developed technology platforms in their open access initiatives may receive a significant positive public response. Museums may also see an increase in traffic on their sites at the time of launch. This traffic can extend over the long tail by placing data and digital assets onto partners’ platforms, where engaged communities of practice make use of the content. Two crucial partners are the Wikimedia platforms and Internet Archive, which authentically serve engagement goals with user communities, as well as provide analytics.

Second, digital humanities and other researchers, as well as data scientists, can perform new models of research and publication with unambiguously marked open content. Open access content enables the building of new intersectional and multimodal knowledge systems that are not possible with the restrictions of “closed” content.

Third, open access museums find that their collections, having been opened, become the go-to sources for data and images by journalists and scholars seeking quickly accessible, high-quality, and confidently rights-cleared content for their publications. Simply put, open access data and images are used, and closed data and images are increasingly not used due to the omnipresent burdens of time, money, and process needed to solve rights issues.  

Fourth, museums that make the transition to open access improve operational efficiency, save money on operations (the image request process), and reduce friction for the benefit of users. Image revenue and licensing as a business for public domain artworks continue to decline. Staff who previously wasted resources manually processing burdensome rights-clearing requests for works in the public domain may now focus on rights cataloging for newly acquired and backlogged objects; can build more accurate and complete collection records; and can increase the amount of comprehensive data that provide greater possibilities for the use and interpretation of collections.

Carleton E. Watkins (American, 1829-1916). Bridal Veil, Yosemite. c. 1865-1866. Albumen print from wet collodion negative. Image: 40.1 x 52.4 cm (15 13/16 x 20 5/8 in.); Matted: 61 x 76.2 cm (24 x 30 in.). The Cleveland Museum of Art. Andrew R. and Martha Holden Jennings Fund. 1992.12. http://www.clevelandart.org/art/1992.12

JB: What can users do today with open access collection content?

NS: We do not yet know the full extent of what is possible. Let’s examine several examples and potential applications for how users can engage with open access collections content as a guide.

Art

The Next Rembrandt, a collaboration between ING and Microsoft with advisement from the Technical University of Delft, Mauritshuis, and Museum het Rembrandthuis, produced a “new” Rembrandt painting, using data to algorithmically generate a composite portrait based on defining characteristics of Rembrandt’s style. The project drew upon many data sources, including data and images of Rembrandt portraits, which are largely in the public domain. Without further clarification, this project cannot be considered an open access example per se, in that the research data, code, and final image do not appear to be available for reuse by others under an open access license. It does, though, provide a useful example of how public domain collections can foster creative potential for making new art and re-interpreting art history through data. Future examples could be made with open access artworks and data. Watch the video.

Artificial Intelligence and Machine Learning

Andrew Lih and members of the Wikimedia community used “The Distributed Game,” in a specific iteration called “Depicts,” to assist AI and machine learning in tagging images from The Metropolitan Museum of Art’s open access collections. The new data created through this effort helps build standardized data on the decentralized Wikidata platform, where all can benefit from it, rather than the data being confined solely to The Met’s collection online. This is a breakthrough for museums and scholars worldwide. Lih stated the project was, “...a powerful demonstration of how to combine AI-generated recommendations and human verification. Now, with more than 3,500 judgments recorded to date, the Wikidata game continues to suggest labels for artworks from The Met and other museums that have made their metadata available.” In conclusion, Lih wrote, “One benefit of interlinking metadata across institutions is that scholars and the public gain new ways to browse and interact with humanity's artistic and cultural objects.”
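
For readers who want to try this interlinking themselves, here is a sketch - Python against the public Wikidata SPARQL endpoint - that pulls paintings in The Met's collection carrying a "depicts" statement. The property and item IDs (P31 instance-of, P195 collection, P180 depicts, Q3305213 painting, Q160236 The Metropolitan Museum of Art) are standard Wikidata identifiers; the query shape is just one way to ask the question.

import requests

QUERY = """
SELECT ?painting ?paintingLabel ?depictsLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 ;    # is a painting
            wdt:P195 wd:Q160236 ;    # in The Met's collection
            wdt:P180 ?depicts .      # carries a 'depicts' statement
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "open-access-demo/0.1"},
    timeout=60,
)
for row in response.json()["results"]["bindings"]:
    print(row["paintingLabel"]["value"], "->", row["depictsLabel"]["value"])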

Bots

Creative developer Andrei Taraschuk is an art fan who makes Twitter art bots for individual artists to share their work on social media. Taraschuk also created art bots for each curatorial department that The Cleveland Museum of Art made available with its #CMAOpenAccess program. These artworks are now being shared more widely than any one institution could do within the confines of their own social media program. Watch Andrei’s Ignite Boulder 37 talk, “Enriching Social Media Through the Power of Art, Bots and AI.”

Commercial Art Platforms

Artsy is a unique platform in the art data environment for collecting and discovering art because of its museum partnerships, research in the Art Genome Project, and its incorporation of open access images and data from third-party providers. Artsy presents art information from the marketplace along with related works in museum collections. Artsy's collections online website is a rare opportunity to examine and find artworks in museum collections with similar works currently for sale. Artsy's approach is valuable for the history of collecting and studying connoisseurship at the intersection of the art market and art history on a nuanced digital platform. Artsy, for example, incorporates open access artworks from The Cleveland Museum of Art. Artsy also has a focus on open source software development, and its public API provides educational and non-commercial access to images and information for historical and public domain artworks.

Data Visualization

Open access museum collection data can be interpreted and perhaps better understood through computational methods such as data visualization. A key leader in museum data is Jeff Steward at Harvard University Art Museums. Jeff’s 2015 "obJECT" lecture, which is part of the Sightlines series of The Digital Futures Consortium, gives an excellent overview of how museum collection data can be creatively visualized. Watch a video of the “collection blooms” visualization. Read more on Harvard University Art Museums’ Index and explore the API and GitHub pages. In addition, The Tate, from 2013 to 2015, developed a digital strategy and open access digital collections data initiatives. Key figures included John Stack, Elena Villaespesa Cantalapiedra, and Richard Barrett-Small. Data researcher Florian Kräutli created visualizations and provided analysis of the data for Tate and The Museum of Modern Art. The Cleveland Museum of Art partnered with Pandata to visualize its collection at the launch of Cleveland’s open access initiative.
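
In the same spirit, a first visualization of an open collection can take only a few lines. This sketch counts objects per accession decade from The Met's open access CSV; the AccessionYear column name is an assumption based on that file, and any open collection CSV with a year field would work the same way.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("MetObjects.csv", low_memory=False)

# Coerce the year field to numbers, dropping unparseable values.
years = pd.to_numeric(df["AccessionYear"], errors="coerce").dropna()
decades = (years // 10 * 10).astype(int)

decades.value_counts().sort_index().plot(kind="bar", figsize=(10, 4))
plt.title("Objects accessioned per decade")
plt.xlabel("Decade")
plt.ylabel("Number of objects")
plt.tight_layout()
plt.show()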

Design

Open access museum images have been used in design collaborations with the Rijksmuseum and Etsy, as well as the National Gallery of Denmark and Shapeways. The Rijksstudio Awards by Rijksmuseum featured a top 30 finalist submission by Dr. Andrea Wallace called the “Pixel and Metadata” dress, where the museum collection data itself became a design object.  

Online Learning

Smarthistory is one of the most accessible online learning resources for public and digital art history. It is an open educational resource, or OER. Its mission is to “open museums and cultural sites up to the world” through blog posts, essays, images, timelines, and videos on art history. Smarthistory has a deep corpus of content that serves learners at high school and undergraduate university levels, as well as lifelong learners. Its content is clearly communicated, well researched, and critically engaged, making it a reliable and progressive learning platform. Smarthistory uses Creative Commons legal tools for the licensing of its publication overall, as well as utilizes Creative Commons designated images to populate its essays and videos. Imagine if museums treated their websites like Smarthistory, using Creative Commons legal tools for content so that others could more freely build, create, and share art online. New types of art publications could be created algorithmically and by humans in the future with a more open approach modeled on and expanded from Smarthistory. Smarthistory was founded by Dr. Beth Harris and Dr. Steven Zucker, who are the executive directors. Dr. Naraelle Hohensee is the managing editor.  

Katsushika Hokusai (Japanese, 1760-1849). Under the Wave off Kanagawa (Kanagawa oki nami ura), also known as The Great Wave, from the series "Thirty-Six Views of Mount Fuji (Fugaku sanjurokkei)." 1830-33. Color woodblock print; oban. 25.4 × 37.6 cm (10 × 14 3/4 in.). The Art Institute of Chicago. Clarence Buckingham Collection. 1925.3245. https://www.artic.edu/artworks/24645/

JB: What copyright and data frameworks are the museums you are working with using? What are those frameworks? It seems like institutions have areas of consensus, but also differences in their approaches to open access.

NS: Working in open access means building resources and working “in the commons.” An institution does not have to undertake the open access process in isolation and risk creating a bespoke policy that does not follow the established practices of leading open access institutions and allied organizations like Creative Commons. Creative Commons provides the most widely used, interoperable, and globally standardized legal framework for open access. The Creative Commons Zero Public Domain Dedication is the most open and permissive tool, as well as being the most commonly used by leading cultural institutions that seek to assertively remove as many barriers as possible to foster the use, reuse, and remix of their collections. Note that it would not be considered open access if a museum applied a Creative Commons Attribution license to digitized objects in the public domain.

Some GLAM institutions have implemented conditions that require users to “share-alike,” meaning that creators who use “share-alike” content must offer their new creation or derivative work under the same conditions as the source material. While the “share-alike” concept may appear more progressive, it may potentially hinder the freedom of expression, individual liberty, and interpretation of others with its dependent contingencies. Share-Alike was intended to help build and expand the commons, but it may more often act as a deterrent, causing users to look elsewhere for content that can be used without undue burden on their creative production and consistent with other harmonious terms like Creative Commons Zero. Furthermore, museums may not have a right to license under share-alike, therefore creating confusion for both institutions and users. The application of other licenses like share-alike or non-commercial should only be considered for works created by the institution, where they hold the copyright as opposed to the digitization of underlying works that are in the public domain.

Some museums, early in the development of open access, created specific policies for open access in their terms and conditions or by using the statement “public domain.” It is important that cultural institutions understand that the concepts and legal framework for “public domain” are determined by a range of factors and are often dependent on country-specific or national definitions. Some institutions may use the Creative Commons Public Domain Mark for collection images and data, but this tool does come with considerations around works that may have a “hybrid” public domain status, meaning a status that is “public domain in some jurisdictions but may also be known to be restricted by copyright in others.”

Museums especially should opt for Creative Commons Zero when applicable to digitized collections or museum produced content because it, as stated on the Creative Commons website, “provides the best and most complete alternative for contributing a work to the public domain given the many complex and diverse copyright and database systems around the world,” and “clarifies the status of your work unambiguously worldwide and facilitates reuse.” The commons of the Internet is a realm of production beyond any one nation or group. Museums doing open access should desire to see their collections engaged and used assertively on a global scale.
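
What an unambiguous, machine-readable rights declaration buys you is easy to show. In this sketch the record's field names are hypothetical, but the license URIs are the canonical Creative Commons and RightsStatements.org identifiers - which is exactly what makes a downstream reuse check trivial.

record = {
    "object_id": 12345,
    "title": "Example digitized public domain painting",
    "image_url": "https://example.org/images/12345.jpg",
    # CC0: the digitization is dedicated to the public domain,
    # removing ambiguity for reuse worldwide.
    "image_rights": "https://creativecommons.org/publicdomain/zero/1.0/",
    # For in-copyright works a different statement would apply,
    # e.g. "http://rightsstatements.org/vocab/InC/1.0/".
}

def is_openly_reusable(rec: dict) -> bool:
    """True if the image carries a CC0 public domain dedication."""
    return rec["image_rights"].startswith(
        "https://creativecommons.org/publicdomain/zero/"
    )

print(is_openly_reusable(record))  # True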

Margareta Haverman (Dutch, Breda 1693–1722 or later). A Vase of Flowers. 1716. Oil on wood. 31 1/4 x 23 3/4 in. (79.4 x 60.3 cm). The Metropolitan Museum of Art. Purchase. 71.6. https://www.metmuseum.org/art/collection/search/436634

JB: I always get super excited every time another museum makes its collection open access, but to be honest, it is not always clear how to engage with this content. I feel like in addition to making data and assets available, we are missing the tools to make it easier for the average person to consume, filter, and mine all of this data for exciting insights and to tell their own stories or do their own research using the content. Do you agree? Are you aware of efforts to make museums’ collections easier to analyze and consume?

NS: Baseline content elements and tools for museums to deliver open access are identified in this text, and they are more mature than people may realize. The GLAM sector does need to improve tools for working collaboratively at scale with decentralized and distributed data and digital assets at the peer level. What is needed between museums and partners are highly automated and sustainable pipelines for digital assets to connect and be distributed online to partners and, subsequently, end users. In terms of tools for end users, there are exemplary artists, developers, and scholars working with museum content. Those creators, whether independent or institutionally affiliated, have the tools they need to create in their contexts. Active partnership with museums can maximize creative output and benefit makers. Museums also need to do the due diligence of documenting and sharing open access projects made from their collections that they admire, to inspire others and build a greater corpus of relevant examples.

The first plateau for any museum to reach is to make data, images, and publications open access. After that best practice step, museums must understand and commit to the future development of open access initiatives for the long term as being equal to exhibition-making, collecting objects, conservation, and scholarly publishing. Open access is a pillar of both museum content development and community engagement. Open access is not a “set it and forget it” scenario. Open access requires not only ongoing operational and technical maintenance, but sincere incorporation into the programmatic functions of a museum such as education, public programs, and scholarly publishing. The answer to the public engagement question for the long term with open access museum collections is not one-time contests, festivals, or hack-a-thons. These short-term tactics will not achieve a museum’s goal of deep and authentic engagement with users because they do not scale and are not part of annually budgeted programmatic efforts.

The critical opportunity for museums is to co-produce knowledge systems and experiences of collections built in collaboration with users. I’ve written about this in detail recently in a paper for the Museums and the Web 2019 conference in Boston, addressing the historical development of collections online and “Wikification.” Users need to see their contributions manifested, reflected, and impacting the ways that museums carry out their missions at the data level, whether on a museum’s collection platform, a third-party site such as Wikimedia, or a user’s independent creative project. Museums need to commit to working together on tool development and resources that work well beyond small consortia and self-selecting peer groups. The impact and scale of museum collections come to fruition when they are part of an ecosystem of content on popular and commercial applications familiar to, and widely adopted by, users with a diverse range of interests and skills.

Making museum data easier to analyze, consume, and create with for users is a necessary part of the hard work of digital transformation that responsible museums must do. Museums can remain relevant by providing essential services for cultural production and consumption in the digital world. Museums must prioritize an operational philosophy and practice that efficaciously meets the transactional customer expectations of not only millennials, the rising dominant global generation, but also the successive-born digital generations who will have even higher levels of synchronicity between digital and physical lives. Artificial intelligence and machine learning, along with human interaction, have the potential with open access to help museums make more meaningful user connections through accessible, multilingual, and translated content, as well. Commercial businesses have already prioritized customer needs with new technology developments. Museums also can optimize the use of these tools to offer potential benefits for human connectivity and greater mutual understanding, especially when engaged with museum content with open access.

JB: Have you spoken to museums that are afraid to make their collections open access? If so, what drives the fear and how do you overcome this?

NS: Yes, I am in frequent conversations with museum clients about how to make the open access transition for their institutions. Most fear in this regard stems from the same conditions that undermine positive improvement in other aspects of business and life: uninformed anecdotes, too much self-focus, and a misguided sense of tradition that says “this is the way we have always done it” or “we are limited by an edge case.” While there may be real on-the-ground obstacles to taking on open access for an institution, it is important to face change and have the will to move forward. In addition to pointing to the major open access success stories of leading institutions, I encourage executive leaders and staff throughout GLAM communities to remember their missions and responsibilities to the public they serve. Open access is “mission critical” for museums.

William Henry Fox Talbot (British, Dorset 1800–1877 Lacock). A Scene in a Library. Before March 22, 1844. Salted paper print from paper negative. Image: 13.3 x 18 cm (5 1/4 x 7 1/16 in.). The Metropolitan Museum of Art. Gilman Collection, Gift of The Howard Gilman Foundation. 2005.100.172. https://www.metmuseum.org/art/collection/search/283066

JB: In addition to museum collection data, there are catalogue raisonné data and gallery and auction records. The New York Public Library defines catalogue raisonné as “a comprehensive, annotated listing of all the known works of an artist either in a particular medium or all media.”  In a perfect world, we would have an artist-level view of all of the works an artist has created, where they currently reside, and where they have been in the past. How could catalogue raisonné data be useful working across museums and with estates, galleries, libraries, or auction houses in a unified and decentralized manner?

NS: Catalogue raisonné data is particularly interesting because, in aggregate with open access, it has the potential to transform how the history of collecting and provenance are studied across public and private collections over time. Catalogue raisonné numbers are facts that are not copyrightable. The difficulty in many cases is that this data is mostly still available only in print format. In the case of 20th- or 21st-century artists, the data remains the purview of artists’ estates or representatives, whose primary interest is the accounting and value promotion of a particular artist’s work rather than building shared knowledge through comparative research with other artists or collections. Catalogue raisonné projects published in print, or in a digital format with restrictive or closed access, are prime examples of costly, inefficient, and outdated knowledge production processes. Moreover, they are “data silos.”

If catalogue raisonné numbers and data were published as open access, they would provide richer cataloging records for museum collections around the world through shared bibliographic data and enable museums to focus energy on creating new catalogue records for new or unprocessed collections. Catalogues raisonnés are collaborative publications in which academics and curators work together to produce knowledge, though typically in an enclosed, invite-only process. The perspectives contributed by external and independent scholars to a catalogue raisonné entry are not often incorporated at the same level of authority as internal curatorial knowledge within museum collections online, and may only appear as citations or when absorbed into the summary knowledge presented to the public in an object description or label. Wikidata can act as a unified and decentralized platform where catalogue raisonné numbers and data could have a broader impact. From Wikidata, catalogue raisonné data could be used by museums as well as auction houses, collectors, and scholars. Wikimedia contributor Jane Darnell mentioned to me in a tweet that she digitizes catalogue raisonné data from old publications for use on Wikidata as related to the WikiProject sum of all paintings. Jane shared examples of catalogue raisonné and Wikidata work on paintings catalogued by Hofstede de Groot and paintings by Bartholomeus van der Helst.

Some examples of digital catalogues raisonnés include the SFMOMA Rauschenberg Research Project, the Pieter and Jan Brueghel sites, and Artifex Press. A model that points to a more progressive future is the Paul Mellon Centre for Studies in British Art’s catalogue raisonné of the artist Francis Towne. On its copyright page, the Francis Towne catalogue provides nuanced details about the rights status of the overall publication and elements within it. The Towne catalogue, as a whole publication, is offered under a Creative Commons Attribution-NonCommercial 3.0 Unported license, along with acknowledgment of sourcing open access images under Creative Commons Zero. The online publication provides a search filter to find open access images directly within the catalogue that may be downloaded at high resolution and reused under the terms of the source image. For other museum examples, although not catalogues raisonnés, consult Ancient Terracottas: From South Italy and Sicily in the J. Paul Getty Museum and The Digital Walters. These digital publications offer downloadable, rich content packages and use Creative Commons legal tools.

Challenges concerning the current state of catalogues raisonnés speak to ongoing difficulties in the education, training, and skills development of digital art history and scholarly practice. Art historians still work mainly in outmoded modes of knowledge production; compiling catalogues raisonnés can be made more collaborative, transparent, and synchronized by treating them not only as digital-first publications but as open access ones, extending beyond images and text into data and code. The code can be published under companion open source legal tools, alongside Creative Commons licensed content, and may work in conjunction with a Creative Commons Zero Public Domain Dedication.

Rembrandt van Rijn (Dutch, 1606-1669). Self-Portrait Leaning on a Stone Sill. 1639. Etching and drypoint on cream laid paper. Sheet: 20.6 x 16.3 cm (8 1/8 x 6 7/16 in.); Platemark: 20.4 x 16.1 cm (8 1/16 x 6 5/16 in.). The Cleveland Museum of Art. Bequest of Mrs. Severance A. Millikin. 1989.244. http://www.clevelandart.org/art/1989.244

JB: How do museums manage rights and permissions issues? Do museums own the copyright for the images of their collections? Can people feel free to modify or share the images they find through the open access initiatives for these museums?

NS: Rights and permissions management for art museums can involve many roles internal and external to an organization. Internally, it may include rights and permissions managers, collections managers, legal counsel, registrars, curators, conservators, and in important cases, even museum directors. External to an organization are artists’ estates and their representatives, who may be the exclusive agents representing an artist’s rights for copyright use requests. For works in copyright, art museum staff work in close coordination with artists, estates, and their representatives to review use and permission requests on a case-by-case basis. Loans and other restrictions can apply to works as well, often defined on a contractual basis between parties. It is crucial to distinguish the fees charged by art museums for digitization from fees charged for rights and permissions requests. Assessing fees for digitization may be appropriate to cover the costs of museum staff labor (e.g., handling objects, photography, post-production), time, and resources.

The rights and permissions process is highly manual, labor-intensive, time-consuming, and often costly for both the museum and the end user. Fees are assessed based on a variety of factors. The rights and permissions process within art museums can act more like “gatekeeping,” denying public access to the use of artworks, either at the behest of the specific institution or of the rights holder. A significant limitation of the process across the GLAM sector is that it is primarily focused on processing image requests, leaving no standard mechanisms for other content packages such as code, data, text, and multimedia assets. Another limitation is that these requests are typically handled through email or online web forms that take days to weeks to process.

Users need to understand the details of licenses and terms-of-use statements because these details vary between objects and institutions. It is prudent for users to cross-check multiple databases and do thorough image rights research as part of their process. Open access is part of the necessary reform of this rights and permissions landscape. Unambiguous and legally operative terms like Creative Commons Zero make the ability to use and reuse clearer for users. The public should have confidence using open access museum content in their creative projects, in line with the terms of use.

Marie Denise Villers (French, Paris 1774–1821 Paris (?)). Marie Joséphine Charlotte du Val d'Ognes (1786–1868). 1801. Oil on canvas. 63 1/2 × 50 5/8 in. (161.3 × 128.6 cm). The Metropolitan Museum of Art. Mr. and Mrs. Isaac D. Fletcher Collection, Bequest of Isaac D. Fletcher. 17.120.204. https://www.metmuseum.org/art/collection/search/437903

JB: Improving art data to preserve and protect our art historical records is something I think about a lot. I worry that we may not get there in my lifetime. How would you describe your view of the need to improve art data? How does this look? How long do you think it will take us to get there? What are the biggest stumbling blocks to improving art data? How do we overcome them?

NS: We are already on the way to improving the quality of art data in the broadest sense. The GLAM sector continues to make steady progress in its commitment to open access around the world, with new institutions joining the open access wave all the time. Just think about what has been achieved already and where we are right now. Some of the world’s leading and most significant institutions have made the open access transition with sincere public declarations and celebrations of their collections. Those institutions that lag behind must be held to account by their directors, boards, and staff to implement an open access future. Open access is a plateau that institutions must reach as soon as possible if they wish to participate in the next tier of digital, educational, and culturally relevant efforts that are inextricably interlinked with global technological innovation. Much has been achieved. More is to be done.

I see a future of open art data where entire ecosystems and suites of content (e.g., code, data, images, multimedia assets, and texts) are circulating in creative production between humans and machines, or what Director of MIT Media Lab Joi Ito refers to as “extended intelligence.” I can imagine a landscape where museum publishing becomes increasingly automated by bots pulling from open access texts, which is an exciting opportunity, but also speaks to the urgent need to improve infrastructure and copyright policy to expand our possibilities for making an inclusive and boundary-traversing art history. I see new applications being built by the commercial sector in partnership with museums that improve the user experience of exhibitions and collections. I imagine new commercial products being made in brand partnerships with new businesses that increase revenue and operational sustainability for museums. The road will be built collaboratively with iterative joint efforts from commercial and prosocial actors. Wikimedia platforms can have a vital role to play as a shared and unified, yet decentralized, third space where the integrated knowledge systems can be formed as they have not been before.

The biggest stumbling blocks are apathy, doubt, and fear. Museums and those allied across the cultural heritage communities can overcome these obstacles with dedication, mutual support, and ultimate concern for our users: the public. Museums, too, must prioritize users' liberty and individual self-actualization. As Merete Sanderhoff, Curator and senior advisor at the National Gallery of Denmark, stated in “The Only Way is Open,” open access aims to make “human creativity from all times and all corners of the world accessible to all citizens, to foster new knowledge and inspire new creativity.”   

Vilhelm Hammershøi (Danish, 1864-05-15 - 1916-02-13). Interior in Strandgade, Sunlight on the Floor. 1901. Oil on canvas. 46.5 x 52 cm. The National Gallery of Denmark. The Royal Collection of Paintings and Sculptures. KMS3696. https://www.smk.dk/en/highlight/stue-i-strandgade-med-solskin-paa-gulvet-1901/

JB: Is there anything else you want to share, Neal?

NS: I want to thank my colleagues Nik Honeysett, Daniel Brennan, Michael Weinberg, and Ryan Merkley for their constructive feedback on this interview. Thank you, Jason, for the invitation to collaborate on this project. Those interested in working with me as a consultant can send me a message via the contact page of my website, on Twitter, or via LinkedIn.

Comment

Giving Generative Art Its Due

April 17, 2019 Jason Bailey
Mantel Blue, Manolo Gamboa Naon (Personal collection of Kate Vass), 2018

I have long dreamed of attending an art exhibition that presents the full range of generative art, starting with the early analog works of the late 1950s and ranging all the way up to the new AI work we have seen in just the last few years. To my knowledge, no such show has ever existed, and simply attending one would be a dream come true for me.

So when the Kate Vass Galerie proposed that I co-curate a show on the history of generative art, I thought I had died and gone to heaven. While I love early generative art, especially artists like Vera Molnar and Frieder Nake, my passion really centers on contemporary generative art. So pairing up with my good friend Georg Bak, an expert in early generative photography, was the perfect match. Georg brings an unmatched passion for and detailed understanding of early generative art that firmly plants this show in a deep and rich tradition that many have yet to learn about.

As my wife can attest, I have regularly been waking up at four in the morning and going to bed past midnight as we race to put together this historically significant show, unprecedented in its scope.

I couldn’t be more enthusiastic and proud of the show we are putting together and I am excited to share the official press release with you below:


Invitation for Automat Und Mensch (Man and Machine)

Invitation for Automat Und Mensch (Man and Machine)

“This may sound paradoxical, but the machine, which is thought to be cold and inhuman, can help to realize what is most subjective, unattainable, and profound in a human being.” - Vera Molnar

In the last twelve months we have seen a tremendous spike in interest in “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself - a history that has largely been overlooked in the recent mania for “AI art,” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.

Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us - ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.

Though slow to gain traction in the traditional art world, generative art produces elegant and compelling works that extend the very same principles and goals that analog artists have pursued from the inception of modern art. Geometry, abstraction, and chance are important themes not just for generative art, but for much of the important art of the 20th century.

Every generation claims art is dead, asking, “Where are our Michelangelos? Where are our Picassos?” only to have their grandchildren point out generations later that the geniuses were among us the whole time. With generative art we have the unique opportunity to celebrate the early masters while they are still here to experience it.

 
9 Analogue Graphics, Herbert W. Franke, 1956/’57

 

The Automat und Mensch (Man and Machine) exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.

 
Correction of Rubens: Saturn Devouring His Son, Robbie Barrat, 2019

 

Emphasizing the deep historical roots of AI and generative art, the show takes its title from the 1961 book of the same name by German computer scientist and media theorist Karl Steinbuch. The book contains important early writings on machine learning and was inspirational for early generative artists like Gottfried Jäger.

We will be including in the exhibition a set of 10 pinhole structures created by Jäger with a self-made pinhole camera obscura. Jäger, generally considered the father and founder of “generative photography,” was also the first to use the term “generative aesthetics” within the context of art history.

10 Pinhole Structures, Gottfried Jäger, 1967/’94

We will also be presenting some early machine-made drawings by the British artist Desmond Paul Henry, considered to be the first artist to have an exhibition of computer-generated art. In 1961 Henry won first place in a contest sponsored in part by the well-known British artist L. S. Lowry. The prize was a one-man show at The Reid Gallery in August 1962, which Henry titled Ideographs. In the show, Henry included drawings produced by his first drawing machine from 1961, adapted from a wartime bombsight computer.

Untitled, Desmond Paul Henry, early 1960s

The show features other important works from the 1960s through the 1980s by pioneering artists like Vera Molnar, Nicolas Schoeffer, Frieder Nake, and Manfred Mohr.

We have several generative works from the early 1990s by John Maeda, former president of the prestigious Rhode Island School of Design (2008-2014) and associate director of research at the MIT Media Lab. Though Maeda is an accomplished generative artist with works in major museums, his greatest contribution to generative art was perhaps his invention of “Design By Numbers,” a platform for artists and designers to explore programming.

Casey Reas, one of Maeda’s star pupils at the MIT Media Lab, will share several generative sketches dating back to the early days of Processing. Reas is the co-creator of the Processing programming language (inspired by Maeda’s “Design By Numbers”), which has done more to increase the awareness and proliferation of generative art than any other single contribution. Processing made generative art accessible to anyone in the world with a computer. You no longer needed expensive hardware, and more importantly, you did not need to be a computer scientist to program sketches and create generative art.

This ten-minute presentation introduces the Process works created by Casey Reas from 2004 to 2010.
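
Processing’s pitch was exactly this brevity: a few lines of geometry plus chance yield a finished sketch. As a rough analogue - plain Python writing an SVG file rather than Processing itself, with every parameter chosen arbitrarily - here is a Molnar-style grid of squares disturbed by chance.

import random

CELL, GRID, JITTER = 40, 10, 6
shapes = []
for row in range(GRID):
    for col in range(GRID):
        # Chance enters twice: a random offset and a random rotation.
        x = col * CELL + random.uniform(-JITTER, JITTER)
        y = row * CELL + random.uniform(-JITTER, JITTER)
        angle = random.uniform(-10, 10)
        shapes.append(
            f'<rect x="{x:.1f}" y="{y:.1f}" width="{CELL - 8}" height="{CELL - 8}" '
            f'fill="none" stroke="black" '
            f'transform="rotate({angle:.1f} {x:.1f} {y:.1f})"/>'
        )

size = GRID * CELL
header = f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
with open("sketch.svg", "w") as f:
    f.write(header + "".join(shapes) + "</svg>")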

Among the most accomplished artists to ever use Processing are Jared Tarbell and Manolo Gamboa Naon, who will both be represented in the exhibition. Tarbell mastered the earliest releases of Processing, producing works of unprecedented beauty. Tarbell’s work appears to have grown from the soil rather than from a computer and looks as fresh and cutting edge today as it did in 2003.

Substrate, Jared Tarbell, 2003

Argentinian artist Manolo Gamboa Naon - better known as “Manolo” - is a master of color, composition, and complexity. Highly prolific and exploratory, Manolo creates work that takes visual cues from a dizzying array of aesthetic material from 20th century art to modern-day pop culture. Though varied, his work is distinct and immediately recognizable as consistently breaking the limits of what is possible in Processing.

aaaaa, Manolo Gamboa Naon, 2018

With the invention of new machine learning tools like DeepDream and GANs (generative adversarial networks), “AI art,” as it is commonly referred to, has become particularly popular in the last five years. One artist, Harold Cohen, explored AI and art for nearly 50 years before we saw the rising popularity of these new machine learning tools. In those five decades, Cohen worked on a single program called Aaron that involved teaching a robot to create drawings. Aaron’s education took a similar path to that of humans, evolving from simple pictographic shapes and symbols to more figurative imagery, and finally into full-color images. We will be including important drawings by Cohen and Aaron in the exhibition.

AI and machine learning have also added complexity to copyright, and in many ways the laws are still catching up. We saw this when Christie’s sold an AI work in 2018 by the French collective Obvious for $432k that was based heavily on work by artist Robbie Barrat. Pioneering cyberfeminist Cornelia Sollfrank explored issues around generative art and copyright back in 2004, when a gallery refused to show her Warhol Flowers. The flowers were created using Sollfrank’s “Net Art Generator,” but the gallery claimed the images were too close to Warhol’s “original” works to show. Sollfrank, who believes “a smart artist makes the machine do the work,” argued that the images created by her program were sufficiently differentiated. She responded to the gallery by recording conversations with four separate copyright attorneys and playing the videos simultaneously. In this act, Sollfrank raised legal and moral issues regarding machine authorship and copyright that we are still exploring today. We are excited to be including several of Sollfrank’s Warhol Flowers in the show.

 
Anonymous_warhol-flowers, Cornelia Sollfrank

 

While we have gone to great lengths to focus on historical works, one of the show’s greatest strengths is the range of important works by contemporary AI artists. We start with one of the very first works by Google DeepDream inventor Alexander Mordvintsev. Produced in May of 2015, DeepDream took the world by storm with surreal acid-trip-like imagery of cats and dogs growing out of people’s heads and bodies. Virtually all contemporary AI artists credit Mordvintsev’s DeepDream as a primary source of inspiration for their interest in machine learning and art. We are thrilled to be including one of the very first images produced by DeepDream in the exhibition.

 
Cats, Alexander Mordvintsev, 2015

 

The show also includes work by Tom White, Helena Sarin, David Young, Sofia Crespo, Memo Akten, Anna Ridler, Robbie Barrat, and Mario Klingemann.

Klingemann will show his 2018 Lumen Prize-winning work The Butcher’s Son. The artwork is an arresting image created by training a chain of GANs to evolve a stick figure (provided as initial input) into a detailed and textured output. We are also excited to be showing Klingemann’s work 79543 self-portraits, which explores a feedback loop of chained GANs and is reminiscent of his Memories of Passersby, which recently sold at Sotheby’s.

 
The Butcher’s Son, Mario Klingemann, 2018

 

Automat und Mensch takes place at the Kate Vass Galerie in Zürich, Switzerland, and will be accompanied by an educational program including lectures and panels from participating artists and thought leaders on AI art and generative art history. The show runs from May 29th to October 15th, 2019.

Participating Artists:

Herbert W. Franke

Gottfried Jäger

Desmond Paul Henry

Nicolas Schoeffer

Manfred Mohr

Vera Molnar

Frieder Nake

Harold Cohen

Gottfried Honegger

Cornelia Sollfrank

John Maeda

Casey Reas

Jared Tarbell

Memo Akten

Mario Klingemann

Manolo Gamboa Naon

Helena Sarin

David Young

Anna Ridler

Tom White

Sofia Crespo

Matt Hall & John Watkinson

Primavera de Filippi

Robbie Barrat

Harm van den Dorpel

Roman Verostko

Kevin Abosch

Georg Nees

Alexander Mordvintsev

Benjamin Heidersberger

For further info and images, please do not hesitate to contact us at: info@katevassgalerie.com  

3 Comments

Autoglyphs, Generative Art Born On The Blockchain

April 8, 2019 Jason Bailey
Collection of four Autoglyphs

If you are a regular Artnome reader, you know we are big on blockchain and generative art. So of course I was super excited when my good friends Matt Hall and John Watkinson of CryptoPunks fame gave me a sneak peek of Autoglyphs, their new project, which creates old-school generative art that literally lives on the blockchain.

In this post I nerd out with Matt and John about Autoglyphs, grilling them with all kinds of questions including:

  • What are Autoglyphs and how do they work?

  • How do Matt and John manage to actually put art on the blockchain?

  • Did early generative art serve as inspiration for Autoglyphs?

  • Why did they create just 512 out of billions of possible Autoglyphs?

  • What are the differences between Autoglyphs and CryptoPunks?

  • Do Matt and John think of themselves as artists?

  • What makes a good Autoglyph?

Autoglyphs are unusual because, traditionally, the actual image files associated with blockchain art like CryptoPunks, CryptoKitties, or Rare Pepe are stored in a database somewhere “off chain,” meaning off of the blockchain. Artists typically address this “off chain” storage by including a reference on the blockchain - usually a URL or cryptographic hash - that points to the image file for your artwork. For example, even though the image of my CryptoPunk is composed of relatively few pixels, it actually lives “off chain” on the Larva Labs server at:

https://www.larvalabs.com/public/images/cryptopunks/punk2050.png


This means the actual artwork does not technically benefit from any of the tamper-proof advantages like “decentralization” or “immutability” typically associated with the blockchain (unless you think of the token itself as the artwork instead of the image). Put another way, there is nothing stopping someone from altering, moving, or removing the image from the location the hash is pointing to. If that were to happen, all you would be left with is an immutable record stating that you own an artwork, with no way of actually seeing it.
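
A short sketch makes the limitation tangible: the on-chain record can prove what the file should be, but it cannot stop the file at that URL from changing or disappearing. The stored digest below is a hypothetical placeholder for whatever hash a project records at mint time.

import hashlib
import requests

IMAGE_URL = "https://www.larvalabs.com/public/images/cryptopunks/punk2050.png"
STORED_HASH = "..."  # hypothetical digest recorded on chain at mint time

image_bytes = requests.get(IMAGE_URL, timeout=30).content
current_hash = hashlib.sha256(image_bytes).hexdigest()

if current_hash == STORED_HASH:
    print("Image matches the on-chain record.")
else:
    print("Image moved or altered - the record alone cannot restore it.")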

Perhaps you are thinking, “Why not just store the image on the blockchain? It is, after all, a database, right?” Well, blockchain is great for a lot of things, but storing large image files is not one of them. Unless you can make art with a super tiny footprint, it is impractical to store traditional image files like JPEG or PNG on the blockchain.
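
Back-of-the-envelope arithmetic shows why. Assuming the EVM’s roughly 20,000-gas cost to write each new 32-byte storage word, and the roughly eight-million-gas block limit John cites below, even a modest PNG swallows multiple entire blocks.

GAS_PER_WORD = 20_000        # approximate SSTORE cost for a new 32-byte word
BLOCK_GAS_LIMIT = 8_000_000  # the figure cited in the interview below

def blocks_to_store(num_bytes: int) -> float:
    words = -(-num_bytes // 32)  # ceiling division
    return words * GAS_PER_WORD / BLOCK_GAS_LIMIT

for size_kb in (5, 50, 500):
    blocks = blocks_to_store(size_kb * 1024)
    print(f"{size_kb:>4} KB image ~ {blocks:.1f} full blocks of gas")
# A 50 KB PNG alone needs roughly four entire blocks' worth of gas,
# before any transaction overhead.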

This is what makes Autoglyphs so damn cool. Matt and John decided to accept the storage limitations of the blockchain as a challenge to see what they could create that could actually be stored “on chain.”

A. Michael Noll, Computer Composition with Lines, 1964, digital computer and microfilm plotter

Piet Mondrian, Composition with Lines, second state, 1916-17, oil on canvas, ©Kröller-Müller Museum

I love this idea because it is a throwback to the compute and storage challenges that the earliest generative artists like Michael Noll and Ken Knowlton faced when trying to create art on computers in the early 1960s. As you will see, this is not lost on Matt and John, who are huge fans of early generative art and decided to embrace the aesthetic and run with it. With that, let’s jump into the interview.

Autoglyphs - An Interview with Matt Hall and John Watkinson


Jason Bailey:  Thanks for chatting, guys. I have a bunch of questions, but I’m happy to start with the easy one. What was the impetus or inspiration behind Autoglyphs?

John Watkinson: There is a lot of talk of art on the blockchain. With the CryptoPunks, all of the ownership and the provenance is permanently and publicly available, and those rules are set and fixed. And yet there's still a bit of an imperfection there in that the art comes from outside of the blockchain and stays out there, and it's just referenced by a smart contract. We don't have any complaints about the CryptoPunks, but it felt like there was an opportunity to go further. With Autoglyphs, we asked ourselves, “Can we make the entire thing completely self-contained and completely open and operating on the blockchain?”

JB: So the decision to literally store the artwork on the blockchain comes with some pretty hardcore restrictions, right? What sort of parameters are you now boxing yourself into once you make that decision?

JW: You have to have very small and efficient code generating the work. The actual output of the work has to be a very small amount of data or text because you can't have a large amount of data on the blockchain. So a small amount of efficiently running code, and fairly small, efficient output.

Those were the constraints, and they were pretty extreme. For a while I thought we couldn't do it, or couldn't do it in a way that was satisfying for us. I was sort of exploring various generators - just binary image generators - and trying to make them more efficient. I got to one that I thought was pretty good, and then I experimented with trying to turn it into a smart contract, and I just couldn't get it to work. It was hitting limits and wasn't working at all.

Then I tried it a few months later and just pushed it a little further and just got there. Still, the transaction fee of making an Autoglyph is going to be about half of an Ethereum block. So an Ethereum block is about eight million gas, so that's how much computation can happen in one mined block of Ethereum, and this is going to be three million gas, so it's almost half a block.

That means that the transaction fees will be relatively expensive - between one and two dollars - depending on the price of gas. So it's a pretty hefty transaction. If we went much more than that, we would already be outside of feasibility. If we went over eight million, it would be completely impossible, you wouldn't be able to do it.
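
For readers who want to check the arithmetic, here is a rough sketch. The 3 gwei gas price and $150 ETH price are my own assumptions (roughly where things stood in early 2019), not Larva Labs' figures:

```typescript
// Rough fee estimate for minting one Autoglyph.
const gasUsed = 3_000_000;  // ~3M gas per glyph, per the interview
const gasPriceGwei = 3;     // assumed gas price
const ethUsd = 150;         // assumed ETH price

const feeEth = (gasUsed * gasPriceGwei) / 1e9; // gwei -> ETH
console.log(feeEth);          // 0.009 ETH
console.log(feeEth * ethUsd); // ~$1.35, in the "one to two dollars" range
```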

JB: Got it. Dumb question: Does the code for generating the image live on the blockchain? Or is there actually an image on the blockchain?

JW: The code lives on the blockchain, and in fact, when you ask the blockchain for the image, it will just generate it again for you. That part happens on an end node, so that doesn't cost any actual money or gas. But whenever you say, “Give me the image for Autoglyph five,” it will just generate it again for you based on the seed information that was created in the transaction.

Matt Hall: It's probably also worth making the distinction between the image and the instructions to generate different representations of it. The actual image you see on the website is not generated on the blockchain. The art, the instructions for how to write it are on the blockchain, but we make an SVG or PNG file on the web server. If that was your question, then no, the actual image data doesn't come off the blockchain, but there's an ASCII representation of exactly what that is on there. It's an ASCII art representation of the glyph.


JB: Nice. That was going to be my next question. I love ASCII art, and I assumed that it was generating some sort of ASCII format. So the ASCII art version of the image is an image made out of text and is actually on the blockchain. But in addition to that, you're generating PNGs or JPEGs for end user convenience that you've got hosted at Larva Labs? Is that a fair way to put it?


JW: Yes. We're generating the image, and we basically created instructions on how to do that. So in the source code for the actual smart contract, if you scroll down a little bit below that big ASCII art “Autoglyphs,” you'll see that there are these little instructions. For every ASCII art character, it tells you how to draw it. We generate image files that way. But the idea is that anyone can generate it - kind of like a Sol LeWitt instruction set for creating a drawing. If you own a glyph, then you can make it at any scale, with any materials you want. You can make your Autoglyph using these instructions.
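
To give a feel for what such an instruction set enables, here is a toy renderer that turns an ASCII glyph into an SVG. The symbol-to-mark mapping below is a simplified illustration, not the contract's actual scheme - consult the Autoglyphs contract source for the real instructions:

```typescript
// Toy renderer: each ASCII character becomes a mark inside its grid cell.
const CELL = 10; // cell size in SVG units; scale freely, per the instructions

function cellMarks(ch: string, x: number, y: number): string {
  const [l, t, r, b] = [x, y, x + CELL, y + CELL];
  const [cx, cy] = [x + CELL / 2, y + CELL / 2];
  switch (ch) {
    case ".":  return "";                                                  // draw nothing
    case "O":  return `<circle cx="${cx}" cy="${cy}" r="${CELL / 2}"/>`;   // circle bounded by the cell
    case "|":  return `<line x1="${cx}" y1="${t}" x2="${cx}" y2="${b}"/>`; // centered vertical line
    case "-":  return `<line x1="${l}" y1="${cy}" x2="${r}" y2="${cy}"/>`; // centered horizontal line
    case "\\": return `<line x1="${l}" y1="${t}" x2="${r}" y2="${b}"/>`;   // top-left to bottom-right
    case "/":  return `<line x1="${l}" y1="${b}" x2="${r}" y2="${t}"/>`;   // bottom-left to top-right
    case "X":  return cellMarks("\\", x, y) + cellMarks("/", x, y);        // both diagonals
    case "+":  return cellMarks("|", x, y) + cellMarks("-", x, y);         // centered cross
    default:   return "";
  }
}

function glyphToSvg(rows: string[]): string {
  const marks = rows
    .flatMap((row, j) => [...row].map((ch, i) => cellMarks(ch, i * CELL, j * CELL)))
    .join("");
  const w = (rows[0]?.length ?? 0) * CELL;
  const h = rows.length * CELL;
  return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${w} ${h}" stroke="black" fill="none">${marks}</svg>`;
}

// Because only the instructions live on chain, anyone can re-render a glyph
// at any scale or in any material - just reinterpret the same symbols.
console.log(glyphToSvg(["O.|", "X+-", "/\\."]));
```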

Sol LeWitt's written instructions for Wall Drawing #49

JB: Great. That was going to be my next question. Is it a bit like a Sol LeWitt, where essentially, if Larva Labs, God forbid, goes out of business and you decide that you no longer want to support the interface, people will have everything they need built into this little bit of blockchain code to infinitely generate these Autoglyphs? Will Autoglyphs outlive us all?

Sol LeWitt, Wall Drawing 87, June 1971

JW: Yes, that's the idea. They'll be able to make their Autoglyphs and follow these instructions to render them. We have a little pen plotter, and we're going to physically render some of our Autoglyphs with it, which is kind of just for fun. They're well set up for plotting that way.

MH: We were kicking around different versions of this and then we saw this show at the Whitney. It is a retrospective of a bunch of digital art. They have early generative art and all sorts of different stuff. There was this big Sol LeWitt piece, and they were explicit about how this piece had been executed by an assistant at the gallery, but that's in keeping with the intention of the artwork and the instructions of the artist. We thought that was good, it was perfect, because we can't do a lot of things we want to do directly on the blockchain, but we can have the spirit of it be completely self-contained.

Sol LeWitt (1928-2007), Wall Drawing #289, 1976

By providing the instructions on the blockchain, the art can now be rendered very large and detailed. For example, we could have stored these as tiny pixel graphics, graphs, something like that, but then you're limited to that. This way they can operate at any scale and in any material.

JB: It does feel like a throwback to some of the early generative art. I'm thinking like Ken Knowlton and Michael Noll. Other than Sol LeWitt, were there other artists who inspired the Autoglyphs? Or do they just look like old-school generative art due to the storage limitations of the blockchain?

Ken Knowlton, from the pages of Design Quarterly 66/67, Bell Telephone Labs computer graphics research

JW: A little bit of both. We definitely needed to clamp down the parameters pretty hard because of the technical requirements, but we'd been getting into the early pioneering digital art of the '60s and early '70s stuff. It's definitely an homage to Michael Noll and Ken Knowlton and that kind of stuff, which we really love. Only once we got to this digital art world via the CryptoPunks did we really realize how much of all this stuff had been explored in the '60s. It’s almost humbling how much ground was covered so quickly in digital art in the '60s and early '70s.

JB: Yeah, I love early generative art. It looks like from the Autoglyphs site that the algorithm, while it had to be simple by definition, is capable of generating billions of unique artworks, but then there are 512 that ultimately will be produced before it stops, right? So how do those 512 get selected among the billions of possible works? And second part of the question, why 512?

MH: Good question. They're going to be randomly seeded. There's a random seed that goes into the algorithm to generate them, and if you operate the contract manually, you can specify the seed manually - but you can't reuse an existing seed that's already been used to make a glyph. We debated whether to limit it or not, whether to make it so that everyone and anyone can come and get their glyph. There are a few arguments in each direction, but ultimately when you make generative art like this, the generator kind of is the artwork a little bit, and there's so much it can express.

It's basically a very tiny generator. If you scroll down in that source code, the core of the generator is the draw function, which is only about forty lines. So we said, “At what point does a generator kind of play itself out, where you've seen everything?” You could make more, but it's just going to be like, “Oh, that's similar to that one, that's similar to that one,” so how much surprise and variety can it really deliver? So we found that threshold.

We made it a power of two just to keep it nerdy. But that was around the threshold where we said, “This is about the right amount of these things in order to fully explore the generator but not make them all worthless because there's a myriad of other similar ones. This should be enough to discover cool surprises and get a sense of what it can generate and have a good collection out there, but not hit it too hard and destroy all the mystery of it.”

Drawing code for Autoglyphs

JB: Sweet. And then you mentioned on the site that 128 of the Autoglyphs are already claimed, so who claimed them?

JW: We're going to claim those. We want to have a decent chunk that we can explore and mess around with, and we want to display them in large groups together. That's how many we're taking for ourselves and the rest are going to be up for grabs.

MH: It's a similar model to the CryptoPunks, where we wanted to convert ourselves into the same kind of relationship to the artwork as everyone else. So we just become owners after the thing is launched, and we like how that sort of played out on CryptoPunks. People ask, “Why don't you take a cut of all the sales?” Well, we didn’t take a cut of the CryptoPunks, so we want to just be the same as everybody else. We felt that that still was the right way to go with this.

JB: Right. It's experimental and you're along for the ride with the same level of risk as everybody else, right?

JW: Yes, exactly. That informs the sale price for the rest of them and where that money goes, and then we don't feel like we need to claim the sale price of those things. We can donate that because we have a portion of the artwork.

JB: Got it. No, that totally makes sense, and I'll come back to the charity stuff, too. For me, at least, CryptoPunks was sort of stealth generative art, meaning that most people don't know what generative art is, and they didn't need to in order to love CryptoPunks, right? I think part of the appeal of CryptoPunks was that anybody could look at them and get it and fall in love with them, like, “Oh, cool, look at all these different cool characters.”

You also received interest from art nerds like me, and you were in that awesome show with the Kate Vass Galerie. Are you worried at all that the Autoglyphs may not have the same broad appeal? Or maybe you didn't even assume that there was going to be a broad appeal for CryptoPunks, either, kind of going back to your assumption of these things being experiments?

A collection of CryptoPunks

JW: Yes, I think that's what it is. We didn't expect it with the CryptoPunks; we don't really expect that here. We know people like you and the other people we've met who are into this stuff, and we know that there will be at least a narrow appreciation of this for the same reasons why we dig it. But no, we don't necessarily expect it to have as broad appeal as the CryptoPunks, just because they were a little more consumer-friendly, just easier to engage with, easier to understand. You didn't necessarily need to know that they were generative, you just liked them, like, “I want one that looks like me.” You're not going to find an Autoglyph that looks like you, so…

MH: If you do, that'd be cool!

JB: I like that challenge — that's the first thing I'm going to do when I get off the call.

Autoglyph #130 - the Autoglyph I believe looks most like my inner self

JW: Yeah, it's more like a Rorschach image.

MH: You see your true self in the Autoglyphs.

JW: Exactly. Yeah, you see your emotional self. We took the attitude, “Let's not worry about that; let's just kind of do experiments that we like and we think are cool and resonate with us.” But there's no doubt that we were like, “Let's keep the size small here, because the audience might just be smaller, and that's fine.” It doesn't need to have as big or as wide a variety of people owning it or as high a transaction volume as the CryptoPunks.

JB: Got it.

MH: I think it's fair to say that we're starting to think a little longer term about these things, too, now that we're coming up on two years since the CryptoPunks launch. We thought CryptoPunks might be just a blog post, a couple weeks of interest and the end of it - and it's still going strong. And then seeing this generative art from the '60s, which shares some similarities with the very limited computing ability we have to work with, it just felt like, “There's cool stuff to explore here that could have appeal long term.” It's okay even if it doesn't have the broad appeal at the moment; it's fine.

JB: What are you guys? Do you think of yourselves as artists, and had you in the past, or has that changed in the last few years?

JW: It's funny you ask the “what are you guys” question, because we've been looking at each other the last couple weeks asking the same question. What are we, what are we doing here? We're quite a wide variety of things, and this is one of them.

And obviously it’s almost a loaded term: We're artists now, I guess. And especially looking at generative art from the ‘60s with fresh eyes. There were a lot of people working at Bell Labs and just experimenting and trying things out. Then in hindsight we can look back at that and be like, “Man, that's really cool art that really predates this whole digital art thing.” And they were just engineers, they were nerds just expressing themselves. I think we put ourselves in that camp happily, so not claiming that we're career artists or that's what we’re trying to promote ourselves into, but claiming the ability to express ourselves and make things just like anyone else.

I don't know, Matt. Is that how you feel about it?

MH: Yeah. I felt more comfortable with that term when I found out the history of technicians becoming recognized as artists because they have the skills necessary to operate something new.

JW: And they were thinking about it more than anyone else.

MH: Yeah, just familiar with it, and would see the limitations and the strengths in how they're utilized. So I feel pretty comfortable in that category.

JB: So the CryptoPunks were initially free. Autoglyphs are coming in at like $27.69, with proceeds going to the charity 350.org. Could you maybe share a little bit of the thinking behind that? Why 350.org?


MH: Even with the CryptoPunks, where we gave away 9,000 of them, a large number went to a few early people who just got on it and automated the process, so we wanted to avoid that. We wanted to have a better distribution of people, and we felt the best way to get that was to have some price associated with generating them.

JW: So then the solution there was, “Let's donate that money to charity,” and then if the whole set sells out, then it will be a pretty good total.

MH:  So if we can sell out of these things it'll be about $10k to 350.org, which is a good organization for trying to move power generation over to renewables. It felt like the right fit in all of those dimensions.

JB: Great, yeah. A softball question, then, for each of you: what makes a good Autoglyph?

JW: I think with a generator you kind of get a sense of what it makes, and then you get surprised by a few things. So I always like the ones that are just like, “Whoa, that's not what I expected.” Once you look through 40 or 50 of them, you can always tell which ones are crazy or weird looking, and it's always fun when one breaks out of expectation. Those are the ones I like. I also like ones with diagonal lines. For some reason, ones that are just made out of diagonal lines are the most appealing.

MH: I think we both like the ones where, because the symbol sets are simple, you get the sense of a pattern that's not actually there. There are ones that look like they have curves in them, but they don't. I like that a lot. I also like ones that look different at different scales. When they're zoomed out, they look like one thing, and then as you zoom in, it dissolves. It's something we're trying to figure out now as we work on physical representations of them: how thick should the lines be, what's the ideal viewing distance, where do these patterns resolve? I think that's my answer.

JB: Cool. And then anything you want to share on the launch process? I think you mentioned the date in the email, but are there plans to show the physical works anywhere specific?

JW: Yeah. We're going to launch them first just on the web and on the blockchain, and then we'll figure that out next. I think we do want to show a bunch of the glyphs that we claimed for ourselves, maybe at one of the art shows in New York in May. We're going to figure out which one's the best to do that for. We haven't totally figured that out yet. We first just want to put it up; we still want it to be an experiment that pops up on the internet and not a gallery-type launch or anything like that.

JB: Thanks for your time guys! I think Autoglyphs are awesome and can’t wait to add some to the Artnome collection!



Why Is AI Art Copyright So Complicated?

March 27, 2019 Jason Bailey
Left, GANbreeder image by Danielle Baskin. Right, GANbreeder image painted on canvas commissioned by Alexander Reben

Despite claims that machines and robots are now making art on their own, we are actually seeing the number of humans involved with creating a singular artwork go up, not down, with the introduction of machine learning-based tools.

Claims that AI is creating art on its own and that machines are somehow entitled to copyright for this art are simply naive or overblown, and they cloud real concerns about authorship disputes between humans. The introduction of machine learning as an art tool is ironically increasing human involvement, not decreasing it. Specifically, the number of people who can potentially be credited as coauthors of an artwork has skyrocketed. This is because machine learning tools are typically built on a stack of software solutions, each layer having been designed by individual persons or groups of people, all of whom are potential candidates for authorial credit.

This concept of group authorship that machine learning tools introduce is relatively incompatible with the traditional art market, which prefers singular authorship because that model streamlines sales and supports the concept of the individual artistic genius. Add to that the fact that AI art - and more broadly speaking, generative art - is algorithmic in nature (highly repeatable) and frequently open source (highly shareable), and you have a powder keg of potential authorial and copyright disputes.

The most broadly publicized case of this was the Edmond Belamy work that was sold by the French artist collective Obvious through Christie's last summer for $432k. I have already explored that case ad nauseam (including an in-depth interview with the collective). I cite it here only to point out that a large number of humans were involved in creating a work that was initially publicized as having been created by a machine.

In this article we look in detail at the recent GANbreeder incident (which we outline below) that has received some attention in the mainstream press. This is another case where the complexity of machine learning has driven up, not down, the number of humans involved with the creation of art and led to a great deal of misunderstanding and hurt feelings.

For this article I spoke with several people involved in the incident:

  • Danielle Baskin, the artist who alleges that Alexander Reben used her and other people’s images from GANbreeder

  • Alexander Reben, the artist accused of using other people’s GANbreeder images

  • Joel Simon, the creator of GANbreeder

I was also lucky enough to speak with Jessica Fjeld, an attorney with the Harvard Cyberlaw Clinic, who has written about and researched issues involving AI-generated art relative to copyright and licensing. She is the first lawyer I have spoken with who truly understands the nuances of law, machine learning, and artistic practice.

The GANbreeder Incident

Danielle Baskin's GANbreeder feed, including a time stamp for the image in question

GANbreeder is the brainchild of developer Joel Simon. Simon created a custom interface to Google's BigGAN so that non-programmers can collaborate on generating surreal images, combining pictorial elements of the user's choosing to “breed” child images. If you are not sure what GANs (generative adversarial networks) are, you can check out this earlier article we wrote covering the topic.

Let's look at a super simple GANbreeder example. I clicked a few buttons in the GANbreeder interface and chose to cross an agaric mushroom with a pug. GANbreeder then outputs six images with varying degrees of influence from both the mushroom and the pug. Results below:

Six GANbreeder outputs crossing an agaric mushroom with a pug

You can get more sophisticated and breed many things against each other in combinations, but the tool is dead simple (thanks to Joel Simon’s great design) and literally anyone can use it in seconds without training.
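
Under the hood, “breeding” in a BigGAN-style model can be thought of as blending the vectors that condition the generator. Here is a minimal sketch of that idea - illustrative only, not GANbreeder's actual code, with a stubbed-out stand-in for the model itself:

```typescript
// "Breeding" as vector arithmetic: blend the parents' conditioning vectors.
function lerp(a: number[], b: number[], t: number): number[] {
  return a.map((v, i) => v * (1 - t) + b[i] * t);
}

interface Parent {
  classVector: number[]; // e.g., "agaric mushroom" vs. "pug"
  latent: number[];      // the random vector that picks a specific image
}

// Hypothetical stand-in for a real BigGAN forward pass.
function generate(classVector: number[], latent: number[]): number[][] {
  // A real implementation would run the generator network here;
  // we return an empty "image" so the sketch stays self-contained.
  return [];
}

function breed(a: Parent, b: Parent, mix: number): number[][] {
  // mix = 0 leans toward parent a; mix = 1 leans toward parent b.
  return generate(
    lerp(a.classVector, b.classVector, mix),
    lerp(a.latent, b.latent, mix)
  );
}
```

Varying `mix` across a handful of values is one way to get children with “varying degrees of influence” from each parent.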

It was Simon's vision that people would collaborate using GANbreeder and expand the tool through other creative uses. Along those lines, and with Simon's support, conceptual artist Alexander Reben wrote a scraper for GANbreeder that automatically grabbed images and stored them locally on his PC. Once they were local, Reben applied a custom selection algorithm that would choose images that he liked or disliked based on his body signals.

Reben believed the images he scraped from GANbreeder were randomly generated (as he states in this early interview with Engadget). He then sent the images selected via his body signals to a painting service in China where anonymous artists created painted versions on canvas. He called the project amalGAN.

amalGAN, Alexander Reben

Reben then shared the painted images widely on social media in support of his upcoming gallery shows. This triggered an avalanche of anger and frustration from other GANbreeder users. They began to complain that Reben had stolen images that they had created using the GANbreeder system.


Reben acknowledged that he did not realize the images were being created by humans. It was his understanding that the images were automatically generated at random by the algorithm.

At the time of my interview, Reben could not confirm that his scraper had not included exact images by other artists, but he believes a tiny percentage (3 out of 28) of his images were subtle variations of works that other artists had created.

The first person to call Reben out on this on Twitter was artist and serial entrepreneur Danielle Baskin. Baskin is a GANbreeder power user who often stayed up until 5:00 a.m. breeding images. She even started a service called GANvas where people could select images on GANbreeder and she would print them on canvas and ship them to customers around the world.

Volcano Dogs, Danielle Baskin on GANbreeder

When I spoke with Baskin about her experiences with GANbreeder, she was careful to state that she felt she was “discovering” images on GANbreeder vs. “creating” them.

I feel like I am discovering them, not creating them. They all exist; you’re finding them. That is why I view the image as having an intelligent force behind it. It’s like I am discovering someone’s works.

Then why get so upset with Reben for “discovering” a similar image? Baskin explained the source of her frustrations with Reben’s work.

I thought that the whole project was so awful. Like, it was just so bad that it couldn’t have been real, but that it was a statement. Then I learned that it was real and I was like, “F*ck this project.”

Not that it is a competition or something. But he sort of took all the things I had in progress and had been thinking about for a long time and was immediately able to get a gallery show and sell work and stuff. And he didn’t present a clear story as to what he was doing. So that upset me. All these things were on my mind because I was so obsessed with GANbreeder.

It’s like you are writing a history book and you have been researching your subject matter for a year, and someone publishes a history book on the same subject matter, but they barely researched it and were able to sell tons of their books on Amazon. Someone took your content and got all this credit for it, but it wasn’t even good.

The gene sequence Danielle Baskin used to create the disputed image

It was clear to me that Baskin was not a fan of Reben’s work. But I wondered if she thought he had done anything malicious or with bad intentions. I also wondered if she felt he had resolved the issue. Project aside, what did she think of him as a person?

When I met him in person, I realized Alex has built an incredible community of artists that use technology and he is a great person. It’s funny because I hate his art, but I like him - but I don’t like him as an artist.

In giving him the benefit of the doubt and in talking with him, I think he genuinely didn’t know how it [GANbreeder] worked. He thought when he refreshed the home page it was totally random images from latent space; he had no idea that other people created the images. He knew the creator of GANbreeder, so maybe he thought that Joel would have explained that to him if it were the case that it was created by other people.

I told Reben that it looked to Baskin and others like he was trying to take shortcuts, or was at least trying to remove himself from the work in some aspects. He partially objected and explained:

There was still a lot of work with me training the data sets on the art that I like and I didn’t like. The real idea was that all of the work was done before the art was made. And the actual art making process was just two simple steps back and forth. Everything involved with that is complicated, involving servers and building computers and learning algorithms and all that sort of stuff.

The interesting thing is that a lot of effort and knowledge went into making the code. A lot of the creativity was compressed into that code, whereas now that the code is made, it is a tool for me to… Like I said in one of the reports, I can now lay in a hammock and look at a screen and be able to just use this system to produce output.

I asked him specifically what the amalGAN project was about.

The project was, to me, about human/machine collaboration and how many steps and layers of abstraction I could add. On my website I have like seven steps of human to machine, human to machine, back and forth. I had the idea that the final step is basically the machine giving a human somewhere else the activity of using their brain to upscale the image, using their brain to interpret how to turn pixels into paint. It is basically like the machine using human knowledge to execute rather than being printed out on a printer. To me, that is conceptually interesting because it has to do with that human/machine collaboration.

I then asked Reben how he felt about the issue with Baskin and others in the GANbreeder community and what, if anything, he had done to reconcile it.

I’m sorry this happened out of mostly my ignorance of the system. I probably should have done a bit more research. When I learned there was an issue, I changed my system so it would never happen again. I’m sorry people feel this way. I think I did as much as I could at the time to get permission from Joel and to address as many concerns as I could by inviting people over to discuss. I do have the disclaimer on my website and again in my talks that some images may have come from the GANbreeder community. I have no way to verify that because there are no records of who made what.

I think most reasonable people at this point, including Baskin, acknowledge that it was done unknowingly. However, it could have become more serious - Baskin shared with me that she had considered sending Reben a cease and desist letter.

This exchange of course opens up all kinds of legal questions, and it is here that I believe things actually become interesting. For example:

  • Does Reben have the legal right to use an image that is either similar to or the same as the one that Baskin created in GANbreeder?

  • Would Reben’s work meet the legal definition of a “derivative work”?

  • How much would Reben need to change the image for it to be considered fair use? Is turning it into a painting enough?

  • What if it was the same image, but he used it as a conceptual component instead of as an aesthetic component?

  • Does it matter that Joel Simon’s intention for GANbreeder was for artists to build on each other’s works?

  • As the developer of the interface/tool, does Simon deserve some ownership over the works?

  • What about the folks who created BigGAN or the folks who designed the graphics cards? Do they deserve credit?

To help navigate all of this, I spoke with Jessica Fjeld, the assistant director of the Cyberlaw Clinic at Harvard Law School. I share the majority of our interview because I believe Fjeld does an excellent job of shedding light on an incredibly murky topic. It is my hope that sharing her explanations might help other AI artists avoid sticky situations around copyright and authorship moving forward.

The Legal Implications - Jessica Fjeld


Fjeld patiently walked me through several concepts that helped me to better understand how law interacts with the new AI-generated works. Like me, Fjeld believes that all the talk about whether machines deserve copyright is overblown and distracts from real issues surrounding increased complexity of human attribution. Unlike me, she can explain the reasons why and the implications within our legal system. Fjeld explains:

Mostly the question that gets asked is, “Will AIs get smart enough that they can own their own copyright?” To me that is not that interesting because I think AGI (artificial general intelligence) is a ways out. If we get AGIs and we decide to give them legal personhood the way we give to humans and corporations, then yeah, they can have copyright, and if we decide not to do that, then no, they can’t, end of question.

In the meantime, what we really have are sophisticated, interesting tools that raise a bunch of questions because of the humans involved in collaboration making stuff with them. So we get these complicated little knots. But they are not complicated on a grand philosophical level, like, “Can this piece of software own copyright?” They are just complicated on the level of which of these people involved do [own copyright], and what parts of it.

I asked Jessica what the legal implications were in the GANbreeder incident. Disclaimer: Alexander is a past client of Jessica’s, but she is not currently representing him in relation to the GANbreeder incident.

It is a fascinating question. I have tooled around a little bit with GANbreeder myself, so I can understand it. One thing that is important to note is that copyright protects original expressions that are fixed. So “original,” “fixed,” and “expression” are the key terms here.

Something has to be new, and obviously, much of what is on GANbreeder is. Part of what makes it an exciting website is you get some of these really unfamiliar feelings - sometimes eerie, sometimes funny.

Then the next word we learned about is “expression.” Copyright does not protect ideas; it only protects particular expressions of those ideas. So if I had the idea to put “dog, mountains, and shell” into GANbreeder, and I got an image that was similar to the one that someone else is now using, that is not protectable. The exact image, maybe; but a very similar one, no. And something that is very interesting about GANbreeder, as I was tinkering with it: if you have it create a child on the scale from similar to different, and you say to make it very similar, a lot of the child images that come out are very, very similar. There may be individual pixels or a slight shift in the orientation, but at a casual glimpse, you wouldn't even necessarily see [the difference].

It's especially interesting because of the timing of when Alex took these images, when all the works on GANbreeder were unsigned because there were no accounts. It's a little hard to say. If you were thinking about pursuing an infringement case, you would really have to prove the exact image had been copied rather than a similar idea where, say, one is orange and one is red.

I asked Fjeld how different a work had to be to be considered original.

In GANbreeder, if you keep making tiny changes, eventually you are going to get something that does have what we would call “originality” in copyright. But it is really hard to say when that happens. And in a lawsuit, it will just be a fact-specific inquiry: Is this the same or is it not? And we have this concept of derivative works for works that are very similar. It can be an infringement to make something that is extraordinarily similar, but not just a mere reproduction.

I asked Fjeld if it mattered that Joel Simon’s intention for GANbreeder users was to build upon each other’s existing works. Wasn’t Reben simply using the tool as intended? It turns out there is a thing called an “implied license.” Fjeld explains:

The other piece around how GANbreeder encourages folks to draw on other people’s work I think brings up another interesting question, which is that it’s largely settled law, particularly in the Ninth Circuit in the U.S., that you can grant a non-exclusive license to use your work in an implied way, so it doesn’t have to be explicit.

U.S. copyright law does require that if you are going to dispose of your right to the work - either giving an exclusive license to someone else or selling your copyright - you have to have a writing. But for an implied, non-exclusive license, you don't have to have a writing. And at least some courts have upheld that it can just be implied - you don't even have to have a conversation about it.

And when I look at GANbreeder, because of the way it’s set up, because of the way the whole system is architected, it gives you an image created by someone else and encourages you to iterate on it. It certainly looks to me like there is an implied license to do that within the context of the site. Anyone who is creating work there understands that other people are going to use it as a basis to make their own work.

Now, when courts look for implied licenses, it is again a fact-specific inquiry. I think with regard to what Alex did, the question is whether people understood the implied license they were given to cover not just monkeying around with it in the context of the GANbreeder app, but also integrating it into this other system, having it painted by anonymous painters in China, and showing it in a gallery. They might not have anticipated that, and that's probably where the issue comes in.

There was an implied license to do something, but the scope of that implied license wasn’t totally clear. Then that is complicated because it is a site that is architected with a thousand models and images in it, so you are essentially navigating the points in a multi-dimensional space created by that number of models and can have any combination of those thousand images. But it creates a lot of very similar images.

So the combination of the fact that the scope of the implied license wasn’t very clear and the fact that people may have an attachment to their ideas or individual expressions and then may see a very similar one… it is my understanding that Alex’s project shouldn’t have directly just reproduced anyone else’s; it would have started with someone else’s, and then he tweaked it based on his body signals.

I wondered why Reben’s work would not be considered derivative and asked Fjeld if she thought it could legally be considered so.

I would say that yes, there is an argument that Alex’s works could be considered derivative of existing works on the GANbreeder website. There remains the question of the implied license because the derivative work is a copyright infringement, but if the use is licensed, then there is no infringement.

There is also a question of what the damages would actually be, because in copyright, you can get statutory damages if you register your work in a narrow window around its creation or before the infringement happens. If you don’t do that - and to my knowledge, none of the GANbreeder images have been registered - then what you get is actual damages. And it’s not totally clear what the damages would be for folks that anonymously created images on a website and then later found that someone had them painted and displayed them in a gallery.

*I also don’t know if there have been any sales. There is the image that Alex used and whether there is a derivative work in that process, and then he takes this further step and has them painted into oil paintings, which, again, I think is another tweak. So there is a series of manipulations of the underlying content.

*Note: There have not been any sales.

I asked Jessica if she thought these “manipulations” by Reben pointed towards “fair use” (a term I had heard in the past but did not fully understand).

Yes, they do steer me more to think about fair use. As I have heard Alex presenting this work, he really emphasized that for him, it really isn't about the outputs; they are not the artwork at all. For him, the artwork is the process by which he trained this series of systems to produce the artwork, test them against his own preferences, title them, etc. For him, the interesting thing is the process by which he tried to design a bunch of algorithms to take himself as far as possible out of the creation process. The expression of them is that he ends up putting his name on painted images in a gallery. But even putting his name on them is a little complicated in regard to what he was thinking about the artwork.

When we think about fair use, one of the main factors that courts consider is how transformative the use is. And because of the underlying theme of the work, I do think there is a strong argument that we could think of it as fair use, since we want to incentivize this kind of exploration of the space. The way that Alex talks about it, there is an argument that, ethically speaking, it should be clearer that despite the paintings being up at a show with his name on them, he doesn't really think of himself as the author of them in a certain way. The use is transformative because it is making this point of how far we can push toward algorithmic authorship.

You could think about the Richard Prince vs. Patrick Cariou case. They are both fine art photographers, but Prince is a conceptual artist, an “appropriation artist,” he calls himself, and Cariou is a more traditional fine art photographer.

Left, one of Patrick Cariou's photographs of Rastafarians; right, a painting from Prince's 'Canal Zone' series

Cariou had gone to Jamaica and taken this book of photographs and done a gallery show all of Rastas. He spent all of this time investing in these relationships to produce these images. Prince used them in a gallery show in which he manipulated them a little bit. One of the classic images that gets shown a lot is a full image of a guy hanging out in a jungle setting, and Prince very roughly cut out an electric guitar and pasted it in on top of him. A lot of the original image was still there with this crude-looking addition on top. And Prince won that case as fair use because the argument he made is that he had transformed the content. Yes, Cariou was also a photographer that had a gallery show, but Prince was using it in this conceptual, imaginary space. I think you could think of Alex’s work in a similar way.

Fjeld shared my belief that the driving force behind some of the confusion in AI-generated art was that more people, not fewer, are typically involved. We talked a bit about the developer behind GANbreeder, Joel Simon, and what rights he had, if any, to the works.

In GANbreeder you can click a button and it’s possible to get the coolest thing that GANbreeder ever produced. And how much do we want to think it is in line with the goals of copyright if someone is just clicking a button and the software is producing it… how much do we want the person to click the button to be the person to get the rights? Do we think that Joel, who set up the system, gets some rights? Do we think the people who put in the work to create these models that took thousands and thousands of hours of computing power should get some rights?

There used to be a doctrine in copyright called “sweat of the brow” where courts had an instinct that they wanted to protect people’s investment of time, and that has been rejected. So the notion that people who spent time to create the model should earn rights in the outcomes isn’t the state of copyright in the U.S. right now. But there is something in there that ethically feels to us like if you just click a button once, you are involved in that creation, but maybe you shouldn’t be the person who gets all the rights.

I found Fjeld’s explanations both fascinating and much needed in this space. It was a welcome reprieve to hear a lawyer talk about these issues that we keep seeing coming up in the AI art space without over-focusing on the red herring of whether the machine deserves copyright.

Conclusion

Regardless of what the law says, we all answer to the court of public opinion, and it hasn't been particularly kind to Alex Reben over the GANbreeder incident. I think the animosity towards Reben stems from folks not liking that he appears, on the surface, to be doing less work than other artists yet getting more attention - a common complaint lodged against conceptual artists. But more importantly, I think people can see with their own eyes that at least one of his works looks exactly the same as an image created by Danielle Baskin, and a few others are similar to images made by other members of the GANbreeder community.

I like Alex and consider him a friend. I also like Danielle and plan on following her work moving forward. So I thought back to what I learned from Jessica Fjeld about how important it is that Alex's work not be the exact same as Danielle's. This seemed like a pretty easy thing to figure out, so I compared the two images using James Cryer's excellent tool Resemble.js, which can compare two images and highlight the differences.
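
For anyone who wants to reproduce the comparison, here is roughly the check I ran, sketched against the resemblejs npm package's Node API (the file names are placeholders, and I am quoting the options from memory, so double-check the Resemble.js docs):

```typescript
import * as fs from "fs";

// Node entry point of Resemble.js for comparing two images.
const compareImages = require("resemblejs/compareImages");

async function diff(pathA: string, pathB: string): Promise<void> {
  const result = await compareImages(
    fs.readFileSync(pathA),
    fs.readFileSync(pathB),
    { ignore: "antialiasing" } // tolerate aliasing from screenshots
  );
  console.log(`Mismatch: ${result.misMatchPercentage}%`);
  // Write an image highlighting wherever the two differ.
  fs.writeFileSync("diff.png", result.getBuffer());
}

diff("reben.png", "baskin.png");
```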

GANbreeder image claimed by Alexander Reben

GANbreeder image claimed by Danielle Baskin

Analysis of both images from Reben and Baskin highlighting the lack of differences, using Resemble.js

Other than a little bit of aliasing (I took a lower-resolution screenshot of Baskin's image), they look exactly the same to me. I shared my new findings with Alex and asked if he would consider removing the image from his website in light of the new evidence. He did one better and called Baskin to discuss the best way to move forward. Reben then crafted the following statement, which he first ran by Baskin for approval.

I spoke to Danielle by phone to work out what she thought would be fair for me to do to move past this issue, given all the information we have at this time. We landed on giving her a credit under the artwork on my website as "Original GANbreeder image sourced from Danielle Baskin" and making the credit for GANbreeder more obvious on the page. If any other images happen to arise with a similar issue, I'll have to deal with them on a case-by-case basis. But since the images from the website at that time have no authorship information and may be randomly generated, there may be no other issues apart from the few which were already identified. I'm also only concerned with images which are basically the same, not images which are similar and "bred" from a like set of "seed words," as this use aligns with the spirit of the website. Of primary concern to both of us was to put this issue to rest so that GANbreeder can continue to be used as a creative tool and grow from what was learned.

Score one for the court of public opinion.

As always, if you have questions or ideas you can reach me at jason@artnome.com.


Blockchain Art 3.0 - How to Launch Your Own Blockchain Art Marketplace

February 27, 2019 Jason Bailey
Warhol Thinks Pop, Hackatao - 2018

The most frequent question we are asked at Artnome is, “How can I get my artwork onto the blockchain?” Finally, with the development of what I am calling blockchain art 3.0, we are seeing new tools that enable artists to tokenize their own art and sell it on their own marketplace.

In this article I am going to show you how I set up a blockchain-based marketplace in less than an hour without coding. But before we dive into a tutorial on how to use these new applications and speak with the teams behind them, we first look at the evolution that led up to this point.

If you are eager to just learn about the new applications, you can skip the history and go to the part that describes these new offerings and how you can use them to create your own tokenized artwork on both the Bitcoin and Ethereum blockchains.

How We Got To Blockchain Art 3.0

Joe Looney presenting the Rare Pepe Wallet at RareAF, NYC, 2018

Before we jump into blockchain art 3.0, let's take a look at the evolution of blockchain art that got us to where we are today. While people have been making art “about” the blockchain since its inception, I consider blockchain art 1.0 the period when folks first started exploring “digital scarcity” and gave birth to the idea of selling art on the blockchain.

A big problem with producing and selling digital art is how easily it can be duplicated and pirated. Popular opinion is that once something is copied and replicated for free, the value drops and the prospect of a market disappears. Most collectors feel that for art to have value, it needs to have measurable and provable scarcity.

Blockchain helps solve this for digital artists by introducing the idea of "digital scarcity":  issuing a limited number of copies of an artwork, each associated with a unique token issued on the blockchain. This provable scarcity is the same concept that enables tokens like Bitcoin and Ethereum to function as currency.
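
To see why the token creates scarcity even though the file does not, consider a toy ledger - a deliberately simplified sketch, not an actual token standard like ERC-721:

```typescript
// Toy edition ledger: a fixed number of tokens, each with exactly one owner.
class Edition {
  private owners = new Map<number, string>();

  constructor(artist: string, public readonly size: number) {
    // The artist issues a fixed, provable number of tokens - no more can exist.
    for (let id = 1; id <= size; id++) this.owners.set(id, artist);
  }

  ownerOf(id: number): string {
    const owner = this.owners.get(id);
    if (!owner) throw new Error(`No token ${id} in this edition of ${this.size}`);
    return owner;
  }

  transfer(id: number, from: string, to: string): void {
    if (this.ownerOf(id) !== from) throw new Error("Only the owner can transfer");
    this.owners.set(id, to); // the previous owner no longer holds token `id`
  }
}

// Anyone can copy the image file endlessly, but only ten tokens exist.
const pepes = new Edition("joe", 10);
pepes.transfer(3, "joe", "collector");
console.log(pepes.ownerOf(3)); // "collector"
```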

Blockchain art 1.0 was the Wild West, as there was no real blueprint yet for artists and technologists to work from. Though often overlooked by the mainstream media, there is no question for me that blockchain art 1.0 started with Joe Looney's Rare Pepe Wallet. You can read my in-depth description of Rare Pepe Wallet in this earlier post, but for now all you need to know is that Rare Pepe Wallet pioneered the possibilities of buying, selling, trading, editioning, gifting, and destroying digital artworks on the blockchain. Joe and the Rare Pepe community not only conceived of the first such market, they were the first to prove it could work at scale, selling over $1.2 million worth of digital art.

Smooth Haired Pepe, 1/1000

On the immediate heels of the success of Rare Pepe Wallet, we saw the development of several other experimental projects, each trying new things. CryptoPunks, Dada.NYC, and CurioCards were all very different from each other (and from Rare Pepe Wallet), as no real template for how art on the blockchain should work had been established. It is noteworthy that all of these blockchain 1.0 solutions were driven by the “decentralized” ethos, developed more as creative communities than as businesses built to make money. For me, this is the Golden Age of blockchain art - the era that attracted the OGs, the weirdos (said lovingly), and the truly creative mavericks in the space, who were motivated more by creative experimentation than by any obvious financial benefit.

London Tacos, from left: Matt Hall (CryptoPunks), John Zettler (Rare Art Labs), Judy Mam (Dada.nyc), John Crain (SuperRare), Charlie Crain (SuperRare), Jon Perkins (SuperRare), Bea Ramos (Dada.nyc)

Blockchain art 2.0 started after CryptoKitties exploded and people saw that there was actually an opportunity to make money with digital art on the blockchain. A half dozen or so blockchain art marketplace startups launched with fairly similar functionality to one another. They were almost all based on Ethereum, featured slick professional interfaces, and streamlined the tokenization of art.

These 2.0 marketplaces were run more like businesses than the experimental grassroots community projects of the blockchain 1.0 days. They often have investors, legal advisors, advertising budgets, and corporate titles within their organizations. The most successful, however, seem to be those capable of building communities and providing a collector base for lesser-known artists through unified marketing. Blockchain 2.0 offerings include SuperRare, KnownOrigin, Portion, RareArtLabs, and DigitalObjects. The offerings were similar enough on the surface that it was really the artists these startups were able to attract that separated them from one another, more so than the tech.

Example of a clean blockchain art 2.0 interface (SuperRare), which contrasts with the DIY 1.0 UX/UI

There were several approaches to recruiting artists. The most successful seemed to be dropping gallery commissions for primary sales and adding a commission for artists on the secondary market (SuperRare), and consistent grassroots recruiting (KnownOrigin). Recruiting collectors, on the other hand, has proven a bit more difficult, as the launch of these blockchain art 2.0 offerings has coincided with the decline of the cryptocurrency markets. Most people have either temporarily jumped out of the cryptocurrency market to stop the bleeding or they are holding on, waiting for the market to recover before spending their currency.

That said, there are certainly more than a handful of highly dedicated cryptoart collectors, and the artists themselves have formed a tight community and frequently collect each other’s work. This is not unlike the behavior we have seen between artists in movements of the past, where bartering and trading artworks was common among them.

Launching a Blockchain Art Marketplace

Some early user-created markets from the Pixura platform

Throughout the development of blockchain art 1.0 and 2.0, many artists have wanted to tokenize their own work and offer it on a blockchain market where they control the look, messaging, and user experience. This makes sense, as artists work hard to brand their own image, and the promise of the blockchain was supposed to be that they could sell their own work without a middleman.

Until now, I have pointed artists to a half dozen marketplaces where the artist had no control over how their work was displayed or which artists their work was displayed next to. While some were fine with this and embraced the experiment, the lack of artistic control was a deal breaker for many other artists who cared greatly about the context in which their work was shown. For example, if an artist is producing what they consider to be serious generative artworks, they may not want their work sandwiched randomly between floral still lifes and a thousand digital images of Bitcoin/Ethereum symbols. For many artists, this type of context matters a great deal.

With blockchain art 3.0, artists can take control over the entire process aided by tools that make it easy to tokenize artwork and do not require coding skills or technical knowledge. I cover two such tools in this article:

  • Pixura Platform (beta) - Allows anyone to issue and sell virtual items on the Ethereum blockchain, including art and rare digital collectibles

  • Freeport.io (pre-alpha) - Allows people to collect, create, and trade cryptogood assets issued on Counterparty using the Bitcoin blockchain

Far from competing with each other, these are two complementary solutions functioning on the Ethereum and Bitcoin blockchains, respectively. What makes these two tools stand out for me is that both were developed by people who have already experienced success at scale in building their own active blockchain-based art marketplaces. It also helps that I know the developers behind both solutions personally, think highly of them, and am comfortable recommending their tools.

If you favor Ethereum and you are looking for something you can use starting today, Pixura is for you. They do charge fees, which may not scale as well for some use cases, but with those fees comes the support from a responsive team of experts working on the project full time.

If you prefer Bitcoin, want to pay zero fees, or want/need a completely open-source solution, Freeport.io is for you. There is some functionality already built into Freeport, but it might be another month or so before it is fully functional - so that is another aspect to take into consideration.

We go into more details below. Feel free to jump to the solution you think might best apply to you.

Pixura - Tokenize Art and Launch a Blockchain Art Marketplace With Ethereum

The Pixura Platform is built by the same team, on the same codebase, behind the SuperRare marketplace. They have been fast to launch, eager to solicit user feedback, and quick to add meaningful features. As a result, SuperRare is among the fastest-growing platforms from the blockchain art 2.0 era, and artists have earned roughly $100K in the first year of the platform's existence.

According to a recent interview with Pixura/SuperRare CPO Jon Perkins:

Pixura is a wide open platform – anyone can launch a smart contract and create their own NFTs without writing any code. We’ve already seen a bunch of interesting projects get created in one week, and I expect to see hundreds more by the end of the year. We are also working on some exciting collaborative partnerships, which will be announced later in the year.

I decided to launch my own marketplace on Pixura to see how easy/difficult it would be. I was pleasantly surprised - the entire operation from start to finish took less than an hour (including visual customization) and cost me under $30.

I put together this short tutorial to walk you through the process. The tutorial assumes you already have a MetaMask wallet account and at least $27 in Ethereum in your wallet.

Here are the steps to launch your own blockchain art marketplace on Pixura:

  • First go to the Pixura mainnet link: https://platform.pixura.io/

  • Then click on the “Launch a Collection” button

  • Choose “Ethereum Mainnet” to launch a functioning marketplace

  • Sign in to Pixura via your Gmail account

  • Connect to MetaMask

  • Launch your smart contract

  • Pay the $25 fee (plus gas) to launch your marketplace

  • Confirm smart contract deployment

  • You can check Etherscan for the transaction details

  • Click on your project (on the right side of the screen)

  • Click on “Add New Collectible”

  • Name your collectible and add an image

  • Add as much custom metadata as you like (this is a nice feature)

  • Price and launch your collectible

  • Customize the look of your marketplace

You can see the results of my marketplace here. I have a bunch of ideas for what I actually want to do with my marketplace, but it is just a couple of test images for now.

Hopefully you found the Pixura interface for creating a marketplace to be as user friendly as I did. I think its simplicity is its strong point. I am also a big fan of the ability to add new properties, and I know that the Pixura team provides great tech support.

While I like that Pixura gives me more branding autonomy than putting my work directly into SuperRare, there is still a sense that my marketplace is one of many marketplaces within Pixura. This is similar to how I might have my own Etsy shop, but it still lives next to all the other Etsy shops on the Etsy parent site. In some ways this is a plus because people coming to see other Pixura marketplaces have a higher likelihood of stumbling onto my marketplace.

But what if I want a marketplace with 100% branding control where nobody else’s logo shows up and I am not clustered with other marketplaces? Pixura assured me that a feature to run a completely white-labeled version is on the roadmap for the near future. But there are some other options as well.

If you are a little more technical, looking for complete autonomy from branding, want to avoid paying any fees, prefer an open-source solution, and can get by without a lot of tech support, then you may want to wait a month or so to explore Freeport as an option.

Freeport.io - Tokenize Art and Launch a Blockchain Art Marketplace With Bitcoin

At the time of this writing, Freeport is pre-alpha and has about a month to go before it will be ready for marketplace creation. I decided to include it anyway because it offers a really nice counterpart to the Pixura platform.

Freeport is the brainchild of Joe Looney, the developer behind Rare Pepe Wallet. Joe is creating Freeport as a completely open-source solution (MIT License), so if you are technical, you can use all the code to do whatever you want with it. But Freeport is specifically designed with less technical people in mind. With just a little Bitcoin, from a single interface you will be able to:

  • Create your asset (CounterParty)

  • Upload your art (Imgur)

  • Attach it to the asset (CounterParty)

  • Search a directory (DigiRare.com)

  • Put orders up to sell through the DEX (decentralized exchange)

Joe has brilliantly structured Freeport to use several existing best-in-class, off-the-shelf solutions, including Imgur, CounterParty, and DigiRare. These decisions were born out of necessity to simplify maintenance and upkeep (Joe is building Freeport for fun in his free time), but this strategy may turn out to be Freeport’s greatest strength. As Joe puts it:

The Bitcoin blockchain is good at creating scarce digital assets (via Counterparty) and then allowing the uncensorable transfer of them. It is not good for storing images. Even with IPFS, any projects utilizing it are generally running their IPFS node for storage. The only way to guarantee that images stored via IPFS are available is to maintain a node and host them yourself, and at that point what are you really even doing? With Freeport, as the developer I don’t need to run any additional software to host images because an image hosting service (Imgur initially) will be hosting them for me. My plan is to also include options to use other hosting services and eventually allow artists to specify custom image locations.

Since I am building Freeport in my free time, I don’t want the responsibility of curating questionable content. One of the problems with something like IPFS or self-hosted storage is that you, the developer, maintain that responsibility. To eliminate that additional work, I’ve leveraged a hosted storage service that has its own code of conduct. It also demonstrates that “decentralized storage” is a fun thing to have, but it’s not absolutely necessary. Immutability is achieved by including a hash of the image as well as the image location (Imgur URL) as part of the asset information stored on the Bitcoin blockchain via Counterparty. If Imgur were to become unavailable, the artist has the ability to update the image location; however, the hash remains unchanged. This means if the artist changes the contents of the image, it is obvious from the record that it’s not the original. Imgur is great at providing the means for everyone to see the image initially and for the foreseeable future. However, over time, it becomes the responsibility of the issuer and asset holders to retain the image themselves.
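
To make Joe’s point concrete, here is a minimal sketch of the check a collector could run themselves: hash the image downloaded from the current location and compare it to the hash recorded on-chain. This is my own illustration, not Freeport’s actual code, and I am assuming SHA-256 purely for the example.

import hashlib

def hash_image(path):
    """Hash the raw bytes of an image file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The hash stored in the asset record on-chain (illustrative value).
onchain_hash = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

# The image fetched from the current location (e.g., the Imgur URL).
if hash_image("skull.png") == onchain_hash:
    print("Image matches the on-chain record.")
else:
    print("Image was altered or replaced; this is not the original.")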

Looney also takes advantage of CounterParty on the back end for token issuance, along with Bitcorns creator Dan Anderson’s excellent DigiRare site, which is designed to provide a directory for viewing all art and collectibles on the Bitcoin blockchain.

While Freeport is still a few weeks off from launching, you can install the beta as a Chrome browser extension and be among the first to use it when it is ready for prime time.

To install Freeport.io:

  • Download the Chrome extension

  • Go to chrome://extensions/ in your Chrome browser

  • Make sure “Developer Mode” is selected and click on “Load Unpacked”

  • Select the directory “Chrome Extension”

Be sure to follow Looney on Twitter at @wasthatawolf for updates on the additional functionality in Freeport as it becomes available.

Summary

Hopefully you found this article/tutorial helpful and you are off to the races building your own marketplace and tokenizing your own art and collectibles. I don’t think you can go wrong with either Pixura or Freeport. Hopefully I have outlined the differences between the two well enough that you know which one is right for you. Here is a quick summary:

  • Availability

    • Pixura is live and you can launch a marketplace today

    • Freeport is in pre-alpha and will be ready in roughly a month

  • Blockchain

    • Pixura lives on the Ethereum blockchain

    • Freeport lives on the Bitcoin blockchain

  • Support

    • Pixura provides support to paying customers

    • Freeport: Joe provides support when he can (this is his side project)

  • Fees

    • Pixura charges $25 to launch a market, $1 to launch a collectible, and takes a 3% fee for all transactions on your marketplace

    • Freeport is a community project with zero fees

  • Architecture

    • Pixura utilizes the same proprietary code used on SuperRare

    • Freeport leverages a combination of solutions (Bitcoin, CounterParty, DigiRare, Imgur) and is open source under the MIT license

Conclusion

It is a really exciting time for those of us who have been following the development of art and collectibles on the blockchain. You no longer need to understand the complexities of writing your own smart contracts to launch your own digital art and collectibles marketplace, and that should be huge in driving mainstream adoption for creators.

However, I believe the next big problem is going to be growing the number of collectors. One of the great advantages of participating in a marketplace like SuperRare as an artist is they do all the marketing for you. I think some artists may realize that putting their art “on the blockchain” does not necessarily translate to more sales. You still need to find someone interested in buying/collecting your work. And the number of people who know how to buy art using cryptocurrency is even smaller than the number of people who know how to buy art with fiat (regular currency). An increase in the number and variety of digitally scarce objects we can collect could bring in new collectors to the market, but it could also flood the market and reduce demand.

I’m optimistic that an increase in “scarce digital goods” in the gaming market could help drive adoption and understanding for the blockchain art market as well. At least in the short term, I think we’ll see a spike as people explore these new tools and innovate in ways that nobody has thought of yet. And hopefully we’ll see a bit more of the weird blockchain 1.0 spirit come back to the community.

Thanks for reading, as always if you have questions or ideas you can reach out to me directly at jason@artnome.com.


AI Artist Robbie Barrat and Painter Ronan Barrot Collaborate on “Infinite Skulls”

February 6, 2019 Jason Bailey

It is early in the year, but the most compelling show for art and tech in 2019 may already be happening. AI artist and Artnome favorite Robbie Barrat has teamed up with renowned French painter Ronan Barrot for a fascinating show that lives somewhere in the margin between collaboration and confrontation.

L’Avant Galerie Vossen emailed Robbie last April after seeing his AI nude portraits and asked if he would be willing to fly out to Paris to work with Ronan. Robbie agreed and flew out last July to meet with Ronan, and the two have been working together ever since. The show, titled “BARRAT/BARROT: Infinite Skulls,” opens Thursday, February 7th, and literally features an “infinite” number of skulls.

Why Skulls?

For the last two decades, it has been artist Ronan Barrot’s tradition to use the remaining paint on his palette to paint a skull each time he stops, interrupts, or finishes a painting. As it was explained to me, the skulls are a side process of the main painting, like cleaning out your motor after driving for miles and miles. Ronan now estimates that he has painted a few thousand of these, and this massive visual data set of painted skulls was perfect for AI artist Robbie Barrat to use in training his GANs (generative adversarial networks).

GANs are made up of two neural networks, which are programs loosely modeled on the human brain. In our case, we can think of these neural networks as two people: first, a "generator," whom we will think of as an art forger, and second, a "discriminator," whom we will think of as an art critic. Now imagine that we gave the art forger a book of 500 skulls painted by Ronan as training material he could use to create a forgery to fool the critic. If the forger looked at only three or four of Ronan’s paintings, he might not be very good at making a forgery, and the critic would likely catch him pretty quickly. But after looking at enough of the paintings and trying over and over again, the forger may actually start producing paintings good enough to fool the critic, right? This is precisely what happens with GANs in AI art.
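
For readers who want to see the forger-and-critic game in code, below is a heavily simplified sketch of one GAN training step, assuming PyTorch. Real image GANs use convolutional networks and many more tricks, but the adversarial logic is the same.

import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# The "forger": turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The "critic": guesses whether an image is real or forged.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One round of the game. real_images: (batch, image_dim), scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Teach the critic to tell real paintings from forgeries.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Teach the forger to fool the critic.
    fakes = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()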

In Robbie’s words:

I trained the network on the skulls. They are all the same shape, the same size, the same orientation, and they are all looking the same way. The results were good, but they were very similar to Ronan’s original skulls. We have the show chopped up into different epochs, and that is Epoch One, training directly on his skulls.

For Epoch Two I thought about how the coolest part about using GANs is that you’re getting a weird machine viewpoint of artwork. But feeding in all the skulls with the same layout is sort of like you are telling the machine how to look at the paintings. You’re giving it a very fixed perspective and a very normal perspective that we have already seen before.

So for Epoch Two, I basically played around with feeding the machine the skulls completely independent of any rotation or perspective, so the machine sees skulls that are all flipped around and stretched out. I’m using the same model, but the number of skulls in the training set jumped from 500 to 17,000 skulls. And the results are really, really good. It makes these really strange images that you would never expect. You can tell that they are skulls, but they really are not familiar. Ronan really loves those. He really likes to correct some of the skulls. He’ll say something like, ‘I like this one but it’s not right,’ or ‘There is never an image I am completely satisfied with,’ so he corrects it. He also does interpretations of them.

I also think that the Epoch Two skulls raise very interesting questions about authorship - since the network has learned exclusively from Ronan, but the outputs don't strongly resemble his work.

I asked Robbie about Ronan’s initial reaction to his work and how the relationship played out.

We are like opposites. He does not like the fact that my work is digital. He said the pixel is sad. And he really was skeptical about it. And right after I visited Paris, he was a little bit hesitant about whether he wanted to do the show, because the French painters have the conception of technology and capitalism being the enemy. But now he is really excited about the show. But I think what is important to remember is that this is more like a confrontation than a collaboration. There are collaborative parts of it, but we really are sort of at odds.

Ronan explained to me that at first, he could not see where Robbie was making any decisions in the AI process. Like many, he thought that the “AI” and the “machine” were doing all the work and making all the choices. But quickly after working with Robbie and seeing that there is “choice and desire” in his work, he decided “the pixel is no longer sad.” But Ronan adds:

Of course it is not the same, I am not expecting the same thing from AI as I am from a painting. Both worlds are contiguous, but not the same. They are not the same rules. I hate the very idea of naturalism. As if everything was equivalent to everything else. I love the idea that there are two sets of rules, which allow us to play differently.

Ronan also pointed out that he does not keep all of his skull paintings. He curates them and many times he paints over the ones he does not like. He sees this curation process as not entirely unlike Robbie’s process of choosing which of the AI skulls to keep from the nearly unlimited number he can produce using GANs.

While the two have come to understand and respect each other’s working methods, there is a lot of interesting dialogue between them on what is an actual painting vs. something that is just an image of a painting. According to Ronan:

There is always a difference between a painting and an image of a painting. And now [using GANs] there is an image of a painting that does not exist.

Sometimes I dream about the painting I want to do, and when I have done it, it is completely different. This indicates the direction, but you have to make your own way. And that is why the paintings will be presented as one by Ronan and then one by Robbie. Because then they become a mirror. And the question is, who is mirroring who? Originally they were skulls, but they become real vanities because of this idea of the mirror. With traditional vanities there is always a skull in the mirror which gives you the idea of time passing. Originally when I showed my skulls, each one was a painting on its own. But when paired with the works by Robbie, it creates a kind of double.

Simon Renard de St. André, Vanitas. Unknown.

Interestingly, Robbie agrees with Ronan that the individual images being produced by the GAN are just images of paintings (and in some cases, images of paintings that do not exist). But Robbie adds that he sees the trained GAN itself as the artwork. According to Robbie:

Ronan is right when he says that the AI skulls are "images of artwork" instead of artworks themselves. In my opinion, the actual artwork is the trained GAN itself, and the outputs are really just fragments or little glimpses of that (the trained GAN is almost just a compressed version of all the possible AI skulls).

Robbie often compares his process of working with GANs to that of the artist Sol LeWitt, who is famous for writing out “rule cards,” or algorithms, for humans to execute to create his drawings.

Sol LeWitt Rule Card

Robbie explains:

The Sol LeWitt metaphor applies in multiple ways in GAN art. The data set is like the rule card, with rules created through curation - and the network interprets these to make art. But additionally, the network itself is also like the rule card, and the individual generations are just different interpretations/executions of those rules. This is in line with the idea that the individual works are just "tokens" of something larger - they're shadows of the network, the actual artwork.

At the same time, if the network itself is the piece of art, it's a very strange one, since it cannot be viewed or comprehended entirely (unlike the set of rules responsible for traditional generative artworks). We can only get small glimpses of it at a time. I'm not aware of any other type of art where this is true.

I have a lot of admiration for Ronan and his work - it seems almost unfair to Ronan to compare his work to the "images of artwork" output by the network. There's something present in the process of a traditional painter that I feel I'm missing as an artist - I'm not sure if it's dedication, rigor, the use of simple tools and not some complex machine, or something else entirely. Without being overly dramatic, there is something very honorable about how a very traditional painter operates; especially today when everything else is surrounded by technology. In short, I think that if I had to choose between the two types of skulls regardless of process or context, I would choose Ronan's skulls as my favorite. At the same time the Epoch Two AI skulls raise so many questions that I'm interested in - so including process/context, I'm more interested in them.

I’m an artist, I make work. But I am not the best at art history, I don’t have any traditional training, I don’t know how to paint or sketch or anything like that. I definitely do sympathize with Ronan’s view of digital work. Maybe he has seen a lot of low-quality digital work or he just doesn’t like the medium. It makes me wish that I was better at non-digital art.

I asked Ronan if he sees Robbie’s work as art or as inspiration for art.

Robbie introduced his own decisions and desires and changed the training images and the algorithms to make the work closer or further from the work I have done so far. It’s always interesting to bring something from outside the box into the realm of art. In the beginning, that can be seen as a threat. But in the end, it helps whatever is going on. If there is choice, if you can dream a little, it’s art. The skulls lend themselves well to AI and art because of the idea of the vanity of death. They therefore remain in ambiguity. And it is a disturbing ambiguity, the uncanny. Some will say it is about death and some will say it is about whatever, but I like maintaining this ambiguity in art. In the beginning I was worried that it was not possible to be free with AI. You can never say “that is not art, it is only a tool.” You have to find how to be free every time.

I asked Robbie how he finds this “freedom” in GANs and what makes good GAN art. He shared:

I really don’t like work that relies too heavily on the medium, like a watercolor painting where the whole interesting thing about the painting is that it is a watercolor and it relies on watercolor effects. My mom always called those “medium turds” or “watercolor turds.” I think the same applies to GANs where if it is reliant on the medium and the medium is the cool thing, then that’s not really art - it’s more like a tech demo. I think that the people that are making really cool work with GANs are using it in ways that are not obvious.

For example, in the show we have a box with a peephole in it, and when you look in, it will generate a skull and it will display it for like five seconds and then it will add an input vector to the “do not use list.” So basically you are going to be the only person to ever see that skull… ever. I think that is cool because it’s different and it’s new and it’s not too reliant on the GAN just being a GAN.
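
The installation’s actual software isn’t public, but the mechanics Robbie describes are easy to picture in code: sample a fresh point in latent space, show the one skull it generates, and retire that vector forever. A rough, illustrative sketch:

import torch

latent_dim = 64
do_not_use = []  # latent vectors that have already been shown once

def show_unique_skull(generator):
    """Generate a skull no one will ever see again."""
    z = torch.randn(1, latent_dim)  # a fresh point in the GAN's latent space
    image = generator(z)            # the skull only this viewer sees
    do_not_use.append(z)            # log the vector so it is never reused
    return image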

You Can’t Hand Someone an Apple and Call Yourself a Chef.

Not only is the artwork from Infinite Skulls of higher quality than anything I have seen from AI so far, but the confrontation between the two artists and the resulting work forged through their conflict are also the perfect visual symbol for the clash between AI and the traditional world at large.

I rarely anthropomorphize artificial intelligence and machine learning and prefer to think of these new technologies as augmenting human capabilities rather than replacing humans. But others have pushed me, asking, “Who is augmenting who?” in the relationship between AI and artists. If the relationship between AI and humans is symbiotic, then who is the host and who is the parasite? Though it may sound harsh, I think it is natural that people should ask a similar question about the relationship between Ronan and Robbie, even if there is no clear answer.

While the two artists end up getting along and respecting each other’s methods in the end, each has to see the other as fuel, a raw material, an ingredient to consume for their own artistic self-preservation. In both cases the artists are actively consuming the other’s work into their own as an ingredient, which is a different relationship than mere inspiration.

Ronan frames Robbie’s work as “photos of paintings that do not exist yet,” ostensibly because he himself has yet to create them, emphasizing that he is not happy with any of the works Robbie’s GAN produces until he “corrects” them. Note that Ronan also called Robbie and his AI “a guest in the studio” several times during our interview, which suggests a more passive role than that of an equal in artistic collaboration. Describing the relationship further, Ronan says, “It was like having a new guest in a jazz club,” again casting Robbie as a guest, or a “muse,” and not as a member of the band on the stage.

Similarly, Robbie has to treat Ronan’s 500 skull paintings like unrefined wheat, grinding them down and refining them to sufficiently anonymize them. He wrote a program to randomize Ronan’s paintings by stretching and flipping them, generating a less recognizable set of 17K training images from the initial 500 works, before he could create art sufficiently different from Ronan’s to call his own. Both must make a sacrifice of the other to produce their own work.
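
Robbie’s exact script isn’t published here, but the kind of augmentation he describes (random flips and stretches expanding 500 images into 17,000) can be sketched in a few lines of Python with Pillow; the folder names and stretch ranges below are placeholders.

import random
from pathlib import Path
from PIL import Image

SRC, DST = Path("skulls"), Path("skulls_augmented")
DST.mkdir(exist_ok=True)

sources = list(SRC.glob("*.jpg"))
for i in range(17_000):
    img = Image.open(random.choice(sources)).convert("RGB")
    # Random flips remove the fixed orientation of the originals...
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_TOP_BOTTOM)
    # ...and random stretching breaks their fixed proportions.
    w, h = img.size
    img = img.resize((int(w * random.uniform(0.6, 1.4)),
                      int(h * random.uniform(0.6, 1.4))))
    img.save(DST / f"skull_{i:05d}.jpg")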

Ronan is rightfully proud to have painted two thousand skulls in the last two decades, but Robbie and his GAN can produce billions of skulls seemingly overnight, transforming Ronan into a sympathetic, man vs. machine, John Henry-like character.

It’s tempting to cast the story as two artists who overcome their many differences (age, language, tools) and some initial friction to collaborate on works that are as much by one as by the other. But to ignore the dynamic tension between the two artists is to miss much of what is interesting in the work. It is fitting that they landed on the theme of the skulls as vanities (traditional artworks designed to remind us of our own mortality) as it serves as an excellent thematic umbrella. After all, we all eventually return to the soil, only to become the ingredients in someone else’s narrative.


2019 Art Market Predictions

January 27, 2019 Jason Bailey
President Barack Obama, Kehinde Wiley, 2018

Feel free to continue reading our 2019 predictions but please note that we have also recently published our 2020 art market predictions.

I’m a little late on my art market predictions this year, but I had too much fun with my 2018 art market predictions to keep my crystal ball in the closet. This year, I go deep on two trends that I think will dramatically transform the art market, not only in 2019, but for the next decade: first, increased diversity/inclusion in the art world, and second, the digital transformation of art.

I believe we are on a massive collision course between populations that are becoming increasingly diverse and an art history and art world that is still very white and very male.

Many have told me that nothing changes fast in the conservative art world. However, I am predicting nothing short of a “Moore’s Law” of diversity in art. I believe 2019 will bring double the protests and market shift towards equality that we saw in 2018, and this doubling will continue annually until we reach visible signs of parity. I theorize that continued pressure on art museums will drive rapid cultural change, which will then trickle down and transform the art market.

Equally radical, I believe rapidly evolving technology, specifically digitization, is shaping human lives faster and more dramatically than any other series of events in history. I predict that digital transformation of the art world will lead to the beginnings of the dematerialization of art (as is already happening with books and music). And I argue that rather than a rise in the commoditization of art, we are actually seeing the early beginnings of a move away from ownership by traditional definitions.

I predict that museums, galleries, and auction houses will realize improving diversity/inclusion and focusing on the rapidly shifting intersection of art + tech is the key formula for increasing interest, engagement, and participation in the arts.

The rest of this post dives into why I hold these two beliefs. I try to take a first-principles look at art and its function in society, including its use in museums and private collections. I then take a look at what I believe are two important macro trends — a strong push for diversity and inclusion, and the digital revolution — and make predictions around the impact of those trends on art, its value, and its function in society.

Museums Are Serving an Increasingly Diverse Population

Source: William H. Frey analysis of the U.S. Census population projections released March 13, 2018, and revised September 6, 2018

According to the Brookings Institution, the U.S. population is projected to become “minority white” by 2045. Additionally, Europe’s population as a percentage of the global population has been shrinking, moving from 28% in 1913 to 12% in present day, and is predicted to be just 7% by 2050.

As minority groups increasingly move towards forming a collective majority in the U.S. and Europe, it becomes increasingly important for museums to evaluate their collections and hiring policies to make sure they reflect the public they serve. This includes not only racial diversity, but also working to correct longstanding gender inequalities. There are many signs that there is still a lot of work to be done on both fronts and increasing pressure to get it done faster.

  • 85% of artists in major U.S. museums are white

  • Work by women artists makes up only 3–5% of major permanent collections in the U.S. and Europe

  • Less than 3% of museum acquisitions over the past decade have been of work by African American artists

  • Among museum curators, conservators, educators, and leaders, 84% are white, 6% are Asian, 4% are African American, 3% are Latina/o, and 3% have a mixed-race background

  • 46% … of U.S. museum boards are all white

  • 93% of U.S. museum directors are white

  • The top three museums in the world — the British Museum (est. 1753), the Louvre (est. 1793), and The Metropolitan Museum of Art (est. 1870) — have never had female directors

Sadly, but not surprisingly, the art market reflects these same biases.

  • 80% of the artists in NYC’s top galleries are white (and nearly 20% are Yale grads)

  • 77.6% of artists in the U.S. making a living from their work are white

  • Only five women made the list of the top 100 artists by cumulative auction value between 2011-2016

  • The discount for women’s art at auction is 47.6%; even removing the handful of “superstar” artists that skew the data, the discount is still significant at 28%

  • There are no women in the top 0.03% of the auction market, where 41% of the profit is concentrated

  • Overall, 96.1% of artworks sold at auction are by male artists

Despite the art world being disproportionately white, we are seeing trends of increased engagement across all minority groups in attending U.S. museums and galleries between 2012 and 2017.

Source: National Endowment for the Arts, The 2017 Survey of Public Participation in the Arts

Rather than backing away in frustration, people who feel that museums are not representing the public they serve are increasingly taking the fight into the museums themselves. Here are just a few of the protests that were held in museums in 2018 alone:

  • The Brooklyn Museum hiring a white woman as chief curator for its African collection

  • Artist Michelle Hartney put up alternate wall labels at the MET highlighting Picasso’s and Gauguin’s poor treatment of women

  • Protests of the MET changing their admission policy as classist and nativist

  • Demonstrators filling the Whitney to protest its vice chairman’s ties to a tear gas manufacturer

  • Artists in a protest art show asked to have their work removed from the exhibition when the museum rented out its atrium to a defense contractor

  • Protests at the British Museum over an exhibit sponsored by BP

  • Digital artists held a guerrilla AR (augmented reality) exhibit in the MoMA making a statement against elitism and exclusivity

  • Photographer Nan Goldin led the charge to shame the Sackler family for its role in getting people hooked on OxyContin by staging protests in the Sackler wings of several museums, leaving pill bottles and staging “die-ins”

Nan Goldin and P.A.I.N. (Prescription Addiction Intervention Now) protesting the Sackler involvement with the Harvard Art Museums

If it’s not bad enough that the majority of work in art museums is by white males, much of the work that is not by white males was stolen during colonization. A recent report estimates that 90% of African art is outside of the continent.

Between the 1870s and early 1900s, Africa faced European colonization and aggression through military force which included mass looting of African art and cultural artifacts. This art was brought back for display in museums in European countries, as well as in the U.S. There has been increased pressure to return the stolen art back to Africa, and in 2018, we saw several protests on this front. The group Decolonize This Place took the protest to the Brooklyn Museum with signs that read, “How was this acquired? By whom? For whom? At whose cost?” and protestors at RISD demanded a sculpture looted from the Kingdom of Benin be returned.

Decolonize this Place activists protesting in the Brooklyn Museum

French President Emmanuel Macron set a new precedent when he commissioned research on how to handle France’s ~90,000 artworks from Africa. The result was a 109-page report recommending that France give back to Africa all works in their collections that were taken “without consent” from former African colonies.

France, of course, was not alone in colonization. Hundreds of thousands of African artifacts are housed in the U.K., Germany, Belgium, and Austria. The British Museum alone has over 200,000 items in its African collection. I predict pressure to return these artifacts (in the cases where they were ill-gotten) will only increase. I don’t expect people will settle for the “long-term loans” of works back to Africa that many museums are proposing in lieu of complete repatriation.

When Museums Signal Inclusion and Diversity, Good Things Happen

Museums have a lot of work to do to increase diversity and inclusion, but good things happen when they do, even when it is just symbolically.

In 2018, Beyonce and Jay-Z shot a video for their track Apeshit in the Louvre. Before you write this off as insignificant, you should know that it had an immediate and enormous impact, with Louvre officials crediting the video for helping boost attendance by 25% over 2017 to an all-time record of ten million visitors in 2018.

No doubt Beyonce and Jay-Z resonate strongly with a young and diverse audience (with over 100 million albums sold combined), and their video likely brought some fresh faces to the Louvre.

Similarly, many of the most-heralded art exhibitions of 2018 featured female artists, suggesting a strong appetite for some diversity in our museums and galleries. These include:

  • Hilma af Klint - Guggenheim

  • Tacita Dean - National Gallery, National Portrait Gallery, and Royal Academy

  • Adrian Piper - MoMA

  • Berthe Morisot - Barnes Foundation

  • Anni Albers - Tate Modern

  • Vija Celmins - SFMOMA

  • Tomma Abts - Serpentine Sackler Gallery

Museums that want to see growth in attendance should follow the example set by the Louvre and others by finding public ways to signal that they are open to both artists and visitors of all races and genders, even if they still have work to do in diversifying their collections and staff. Showing some self-awareness can go a long way while en route to solving the problems long term.

Continued Pressure on Art Museums Will Drive Rapid Cultural Change That Will Transform the Art Market

The relationship between museums (as culture drivers and tastemakers) and galleries and collectors is highly interdependent. We know from studies that artists see major boosts in the market for their work when they are shown in major museum exhibitions.

“Auctions of valuable pieces tend to coincide with successful exhibitions.” Ahmed Hosny, Machine Learning For Art Valuation: An Interview with Ahmed Hosny

Given this dependency, I believe that once museums accelerate the diversification of the work they show (under pressure from an increasing number of protests), we will see the value of the art rise dramatically in the market.

We are already seeing some early signs of the market correcting for its indefensible biases. Last May, Kerry James Marshall broke the record for the top-selling work by a living African American artist when his piece Past Times sold for $21.1M at Sotheby’s.

Past Times, Kerry James Marshall, signed and dated '97

Likewise, Jenny Saville set the record for the most expensive work sold by a living female artist when her 1992 painting Propped sold for $12.4M.

Propped, Jenny Saville, 1992

I believe these two records falling in the same year is just a very small signal of a massive market correction that will happen over the next two decades as we mature as a society and learn to see people as equals, regardless of race or gender. Those who move quickly to increase diversity will flourish, and those who don’t will risk losing their audience and becoming irrelevant.

Digital Transformation and the Dematerialization of Art

Phantom 5, 2018, Jeff Bartell

"Art is an experience, not an object." - Robert Motherwell

The second major force that I believe will shape the art world in 2019 (and for the next decade to come) is a strong trend towards embracing art + technology, and specifically around the digital transformation of art.

Source: International Telecommunications Union

We are living during arguably the most dramatic technological transformation in human history, and with half the world online now, I believe the future of art is inevitably digital. With music, we saw the evolution from physical media like cassettes and CDs, to dedicated hardware like iPods and MP3 players, and finally to streaming services like Spotify and Pandora.

Source: IFPI Global Music Report 2018

We saw the same trend in publishing, with physical books losing market share to e-books and e-readers. Those devices were just an intermediary step to streaming audiobooks, which is now the fastest-growing sector of publishing by far.

Source: APA (Audio Publishers Association)

Despite rapid shifts towards digitalization in other fields, most of us still think of canvas on a wall when we hear the word “art.” This is ironic given the fact that Americans spend an average of 11 hours a day looking at screens and almost no time looking at the walls of their homes.

Source: TEFAF Art Market Report Online Focus 2017

Galleries are struggling or closing down precisely at a time when interest in art is rising on Instagram and at international art fairs. But increased interest does not always mean increased sales. Writer Tim Schneider captured this shift in his review of Art Basel Miami last year when he asked:

…if the fastest, and perhaps only, organically growing audience for art is more interested in being around it for a week, a few days, or even a night at a time rather than in owning it for a high price for much longer, what does that mean for everyone else?

I think we are seeing some early signs that art consumption is shifting away from physical ownership, as we saw with books and music, and toward the experiential, ushered in by the digital.

For centuries, physical ownership of art was required to enjoy it. Art was a sign of wealth and power, and collecting art was about saying “I own this art.”

LUIGI FIAMMINGO – Portrait of patron Lorenzo de’ Medici, called The Magnificent, c. 1550

With the increased availability of the internet, we have seen a rise in the social media consumption of art. Sharing selfies at museums and art fairs on social media signals your taste and sophistication without having to own physical artworks. And while a few dozen people may see the art you purchased at a gallery and hung on the walls of your home, hundreds to thousands of people instantly see the art selfies you share on your social profile. This has enabled art appreciation to be less about saying “I own this art” and more about saying “I like this art.”

Me at the Boston ICA hoping some of Albert Oehlen’s “coolness” will transfer to me in this selfie posted on Instagram

I believe as we become increasingly digital, the new message we send going forward will be “I support this artist.” As with the previous stages of “owning” and “liking,” “supporting” publicly links you back to the art and artists who you enjoy in a highly visible way. And having methods for supporting artists that do not require you to purchase or commission whole works of art greatly expands the pool of potential participants.

Few of us show off our CD collections these days; instead, we consume music through streaming and go to concerts where we take selfies and buy t-shirts that we share on social media as patronage proof points. I expect art to move in that direction, and would argue that it already has.

Physical possession of works that are created digitally provides no real advantage. Again, it is the same dynamic of dematerialization we are seeing in music and books. I gain very little by having a physical CD for every album I have access to on Spotify or a physical book for every story I have access to on Audible. Neither would be practical. With streaming, what I have lost in fetishizing tangible objects, I have gained in access to a number of albums and books nobody could have dreamed of 25 years ago.

Does an art streaming service in the mode of Audible or Spotify sound ludicrous? Well, generation one of art streaming has been around for almost a decade and has over one billion users.

Source: Instagram

I’m talking about Instagram, of course. But Instagram is really just the Napster of art streaming, as it falls short of supporting most artists. Nevertheless, it is a solid proof point of our insatiable appetite for the digital consumption of art. I predict we will soon see a combination of the proven distribution and consumption model of Instagram paired with patronage models like Patreon and Kickstarter.

Last year, we saw a lot of experimentation around new models for funding artists from several promising startups exploring blockchain. Dada.nyc, where I am an advisor, has over 160K registered artists in its community. Over 100K drawings have been produced on its social media platform, where artists communicate with each other through drawings.

I started this drawing when I was in bed with Lyme disease in my knee. Artists from around the world responded. The conversation continues months later.

The Dada team is carefully working through how to create a market that does not just duplicate the current physical art market. They want to avoid building a system where only a few can afford to collect and an even smaller number of people are rewarded for their creative work. Dada dislikes the collecting of art as speculation and is constantly evaluating new models of patronage that can enable artists to focus on creating their work. Their goal is for the entire community to benefit each time patrons provide monetary support and to blur the lines between “patrons” and “artists,” as they believe creating art is beneficial for everyone.

Another blockchain art market that experienced significant traction and growth in 2018 is SuperRare. They provide a new revenue stream for artists (which helps fund creative projects) while giving patrons the ability to discover, buy, sell, and collect unique digital creations by artists from around the world.

Screenshot of my digital art collection in SuperRare

SuperRare completed almost 6,000 transactions, generating 602.76 ETH to date (over $70K) in less than a year since their launch.

https://www.dapp.com/dapp/SuperRare

Sure, these are not Instagram numbers just yet, but having a handful of startups (other notables include Portion.io, Known Origin, R.A.R.E. Art Labs, and Digital Objects) prove out the model is an important first step in building out any new market. It is also telling that despite the cryptocurrency crash, which devastated the majority of blockchain companies, all of these blockchain art markets are still in business and experiencing growth.

There are two important things to note about digital art markets like the ones above:

  • You don’t need to own the work to experience it. When I buy a work on SuperRare, I see the same image that everyone else can see for free.

  • Because of this, the joy in collecting digital art does not derive from denying other people access to art, but instead, in increasing access to art and artists you enjoy and want others to appreciate, as well.

Digital art is highly replicable and transmissible, so there is no benefit to keeping it to yourself. In fact, the value of the work (as with all art) only goes up as you share it more broadly. The message with collecting in a digital age is less and less “I want you to know how powerful I am - I own this thing that nobody else can own” and is instead “I want you to know I support this artist because their work is awesome, and I’m excited to share it with as many people as possible.”

So why buy digital art if everyone else can see the same image for free? It’s simple: Because you can’t expect artists to continue creating if you don’t support them. I believe it’s not the art itself that we should revere, but the people making it. Too often we celebrate and fetishize individual works of art long after the geniuses that created them have died penniless. Rather than cater to speculative or extrinsic values, I predict we will see several new digital art streaming services built on the intrinsic pleasure we derive from art. We’ve learned we don’t need to own a physical, re-sellable book in order to enjoy a great novel, or a piece of vinyl to appreciate music. The same will be true for art.

It is important to remember that ownership and speculation on the part of collectors is not a necessary ingredient to producing great art. That is just one way of making sure artists have enough money to survive and continue working, and there is a good chance it is not the most effective (nor the best) model for artists.

Of course, many artists will choose to continue to work in traditional physical media like painting and sculpture. We will always have amazing museums full of physical artwork, and I couldn’t be more thankful for that. There will always be galleries for buying and selling physical art, same as we still have brick-and-mortar bookstores and music stores. But I think the digital transformation of art is inevitable and coming faster than most people expect. I also strongly believe that this shift is healthy and presents an opportunity to reframe (pun intended) how we treat artists and consume art.

Summary and Conclusion

Lots of people I know don’t like making or reading predictions. The primary complaint I hear is that predictions are either “boring and accurate” or “entertaining but outrageous.” I, on the other hand, love making predictions. I feel like the process is similar to making art in that I can use both my imagination and my powers of observation and reasoning to show the world how I see things as they could be.

As a pseudo-futurist and techno-optimist, I relish the idea that we can build a world where an increasing number of people can participate in the joys that art has given me in my life. I believe the macro forces and trends that I am seeing in the world support that idea.

But I don’t want to let myself off the hook too easily here, so what am I actually saying that is measurable here in terms of predictions?

  • First, a “Moore’s Law” of diversity in art. Inclusion and diversity in art will double annually until we reach parity, as measured by:

    • An increase in the price of works sold by women and minorities at auction;

    • An increase in the number of women and minorities in positions of power at museums and in the art trade;

  • Second, an increase in the number of people interested in art without a corresponding increase in the number of collectors;

  • Third, the launch of at least one art streaming service in 2019 and a shift towards this model over the next decade;

  • And fourth, a shift in the key topics across artists, art journalism, and art fairs toward diversity and tech for all of 2019.

I hope you enjoyed this year’s predictions! Whether you agree or disagree, I am always excited to hear from Artnome readers. Leave your thoughts in the comments below or hit me up on Twitter at @artnome or e-mail me at jason@artnome.com.


How Rembrandt and Van Gogh Mastered The Art of the Selfie

January 13, 2019 Jason Bailey
An average of every self-portrait painted by Rembrandt

I recently read that the average millennial will take an astronomical 25,000 selfies in their lifetime — almost one per day. This got me thinking about the history of selfies. Before the invention of the camera, artists were the only ones capable of making selfies (I know, what a tragedy, right?). So in a weird way, you could argue Rembrandt — known for painting an enormous number of self-portraits — was the Paris Hilton of his day. Sound crazy? It’s not.

Self-portrait in a hat with white feathers, Rembrandt, 1635

Portrait in a hat with pink feathers, Paris Hilton, 2018

Though the number is somewhat contentious, Rembrandt was known to have created close to 100 self-portraits (over 40 of them as paintings). That may not sound like much by today’s selfie standards, but it’s huge when compared to the painters of his day — especially when you consider that it accounts for 10% of his total artistic output. Think about it: he is a painter by trade, and 10% of his time on the job was spent making paintings of himself. If that’s not some Paris Hilton-level selfie action, then I don’t know what is.

The Old Masters Loved Showing Their Bling

Self-Portrait with a Sunflower, Anthony van Dyck, c. 1633 (note the flexing of the bling he received from his patron, the English monarch Charles I)

Maybe you are thinking, “Jason, Rembrandt is the greatest painter of all time. Surely he was motivated by something more noble than the vanity that motivates today’s selfie-snapping celebrities.” Well, actually, not so much. Consider this: old master portrait artists were often given gold necklaces by their wealthy patrons. This became such a big deal that the painter Titian decided to bling out his selfies by including the gold chains he received from his patron, the Emperor Charles V. It kicked off a fad among portrait artists including Van Dyck, Vasari, Rubens, Bandinelli, and others.

Self-Portrait, Titian, 1546 (wearing the golden chain that was given to him by the Emperor Charles V in 1533)

Self-Portrait, Baccio Bandinelli, 1530 (wearing a gold chain with a pendant bearing the symbol of the chivalric Order of St. James)

Not unlike today’s hip hop artists, these chains were a status symbol — they showed that an artist had “arrived” and was at the top of their game. Unfortunately, Rembrandt had no wealthy patrons when he was first starting out. Undeterred, he decided to “fake it till he made it” and painted imaginary gold chains on his self-portraits to suggest that he had a higher status and more power than he actually did. Talk about doctoring a selfie for purposes of vanity and status.

Self-Portrait with Beret, Gold Chain, and Medal, Rembrandt, 1640

Jay-Z: Portrait with Ball Cap, Gold Chains, and Brooch

The Dutch Love Their Selfies

Selfie with Van Gogh’s 1888 Self-Portrait Dedicated to Paul Gauguin (on a research visit to Harvard Art Museum)

“They say—and I am willing to believe it—that it is difficult to know yourself—but it isn’t easy to paint yourself, either.” - Vincent van Gogh

In terms of selfies, Van Gogh was not far behind Rembrandt, having painted 35 self-portraits in just one short decade of activity. That is roughly 4.2% of his total output and more than three self-portraits a year!

Van Gogh believed that painting could be reinvented through portraiture and fantasized about building a colony of artists working together. He also knew that Japanese wood block printers often exchanged prints among each other and encouraged his besties Gauguin and Bernard to exchange self-portraits with him.

As Van Gogh wrote:

It clearly proves that they [Japanese wood block printers] liked one another and stuck together, and that there was a certain harmony among them [. . .] The more we resemble them in that respect, the better it will be for us.

Van Gogh had essentially come up with an old-school social network where people could share and comment on each other’s selfies, not unlike Instagram or Snapchat. He wrote to his brother Theo sharing his thoughts on the self-portraits he received from Gauguin and Emile Bernard. Here is what Van Gogh’s comments would have looked like on Instagram (using actual quotes from correspondence between the artists).

Sadly, Van Gogh and Gauguin’s friendship famously soured, and Gauguin sold the portrait Van Gogh painted for him for about three hundred francs after making a few restorations.

Van Gogh’s portraits function like a visual diary. While his early works do not feature gold chains, they are painted in a dark Rembradt-esque palette and feature conservative clothing and a pipe, suggesting Van Gogh may still have been at least a little preoccupied with keeping up appearances.

Self-Portrait with Dark Felt Hat, Vincent Van Gogh, Paris, 1886

Self-Portrait with Pipe, Vincent Van Gogh, Paris, 1886

Self-Portrait, Vincent Van Gogh, Paris, 1886

By 1887 (just one year later), we see Van Gogh rapidly exploring self-portraits in the style of other artists, including influences from Impressionism, Pointillism, and Japanese woodblock prints. I believe we are also seeing a shift from portraits focused on external appearance towards portraits capturing his own psychological inner life.

Self-Portrait, Vincent Van Gogh, Paris, 1887

Self-Portrait with Bandaged Ear, Vincent van Gogh, Arles, 1889

Self-Portrait, Vincent van Gogh, Saint-Rémy, 1889

Averaging Rembrandt and Van Gogh Self-Portraits

It dawned on us that, with so many selfies, Rembrandt’s and Van Gogh’s self-portraits make for a pretty cool data set. After brainstorming with Artnome data scientist Kyle Waters, we decided to create “average” self-portraits of each artist by combining their paintings into a single image. Kyle settled on an approach similar to the technique he employed in averaging Van Gogh’s paintings to show why Van Gogh changed his color palette.

We started by importing all the self-portrait images for Rembrandt and Van Gogh from the Artnome database and resized them all to the same 400 x 400 pixel dimensions. I know, kind of a sin to change the aspect ratios of famous paintings, but this made it easier for us to “add” the images together, i.e., taking the red value of the top-left pixel in Self-Portrait with Bandaged Ear and adding it to the red value of the top-left pixel of Self-Portrait Dedicated to Paul Gauguin, and so on.

We then calculated the simple arithmetic average by dividing the summed pixel values by the total number of paintings.
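
For those who want to try this at home, here is a minimal sketch of the averaging step using Pillow and NumPy. This is my reconstruction of the approach described above, not Kyle’s actual code, and the folder name is a placeholder.

from pathlib import Path
import numpy as np
from PIL import Image

paths = list(Path("self_portraits").glob("*.jpg"))

# Resize every portrait to the same 400 x 400 grid so the pixels line up.
stack = np.stack([
    np.asarray(Image.open(p).convert("RGB").resize((400, 400)), dtype=np.float64)
    for p in paths
])

# Sum the red/green/blue values at each pixel and divide by the painting count.
average = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(average).save("average_self_portrait.png")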

We were pretty psyched with the results. You can check them out below.

The average Rembrandt self-portrait

The average Van Gogh self-portrait

While there is a lack of detail, I actually love the results. You can definitely make out which is Rembrandt and which is Van Gogh. Rembrandt’s composite features an earthy brown color palette, while Van Gogh’s yellows and blues average a greenish hue, with patches of orangey-red where his hair and beard were most commonly depicted. It is also clear from the composites that Rembrandt preferred to paint himself looking to our right, whereas Van Gogh most often looks to our left.

We thought the effect was pretty cool, so we tried it on a few other thematic subcategories, including Van Gogh’s portraits of Madame Ginoux and his sunflower paintings, creating an average-based image for each.

Visual average of every portrait of Madame Ginoux by Van Gogh

Visual average of the sunflower paintings by Van Gogh

As with the self-portraits, these were visually interesting, as you can still make out some of the features and shapes without one clear painting dominating.

Conclusion

Next time someone gives you a hard time for spending 15 minutes fussing with filters on your selfie, remind them that Rembrandt spent a full 10% of his career perfecting selfies. Who knows? Maybe there is even a market for an “old masters” app that lets people add gold chains to their selfies.

As always, thanks for reading. As we mentioned, this was inspired by Kyle Waters’ excellent work in averaging Van Gogh’s paintings for our post about the shifts in his color palette. We have a third post in the series that will focus on establishing “the average” Van Gogh work using several different techniques. Sign up for our newsletter and we’ll be sure to alert you when it goes live.

If you have questions or suggestions, you can always reach me on Twitter at @artnome or email me at jason@artnome.com.


Painted Portraits Inspired By Neural Net Trained on Artist’s Facebook Photos

January 9, 2019 Jason Bailey
Crazy Eyes, Liam Ellul, 2018

I’ve come to learn that if a person can run neural networks and has a deep interest in art, they are probably a pretty creative and interesting person. Australian artist Liam Ellul is no exception.

Ellul recently shared on Twitter a new portrait series he is working on called Just Tell Me Who To Be. His portraits are simultaneously of nobody in particular and yet also everybody in his life. The series explores identity through four 12”x12” acrylic-on-cotton paintings. Each painting was painted directly on printouts of images Ellul created by training a GAN (generative adversarial network) on 10,000 photographs from his Facebook account.

Without going into a full description of how GANs work (you can find that here), the process involves a neural network inventing new images based on a set of training images provided by the artist. So in Ellul’s case, he is essentially asking the GAN, “If I give you photos of all the important people and moments in my life, can you go and invent me some new people and moments?” After training, the GAN can generate a near-endless supply of new images by sampling points from its “latent space,” which you can see explored in the animated GIF below.
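To make “sampling the latent space” a little more concrete, here is a rough sketch (my illustration, not Ellul’s code; the 100-dimensional latent size and the generator are assumptions in the style of a standard DCGAN). Interpolating between two latent vectors is what produces the smooth morphing between faces seen in the GIF:

```python
import torch

def interpolate_latents(z_start, z_end, steps=30):
    # A straight-line walk between two points in latent space; decoding
    # each intermediate point with the trained generator yields one frame
    # of a smooth morph between two invented faces.
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    return (1 - alphas) * z_start + alphas * z_end

# Two random points in a DCGAN-style, 100-dimensional latent space.
z_a = torch.randn(1, 100)
z_b = torch.randn(1, 100)
path = interpolate_latents(z_a, z_b)

# With a trained generator G (not shown here), each step becomes a frame:
# frames = [G(z.view(1, 100, 1, 1)) for z in path]
```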

Small snippet from the Interpolation video from Ellul’s GAN trained on his personal Facebook and Google photos.

Ellul then shortlisted a dozen of the faces seen in the GIF above, printed them out, and laid them around his apartment for a few days. According to Ellul:

After extracting all the faces from my archives — the data preparation was somewhat manual — I found myself looking at thumbnails most of the time. Pre-processing in this context reminded me of mixing colors on a palette, but instead of colors, I was mixing forms. It became pretty clear which ones had the strongest hold on me — then I painted them. The common thread was that the outputs I chose gave me an impression of something I identified with in a really deep way. Like, out of the latent space, it touched on something that I couldn’t have represented unless I saw it first.

Like Dropping Your Family Photo Album Into a Blender

Self Portrait (now alive), Liam Ellul, 2018 - GIF alternating between the GAN printout and the finished painting

Self Portrait (now alive), Liam Ellul, 2018

Self Portrait (now alive), Liam Ellul, 2018 - (reference image)

Some AI artists make their own training image sets (notably Anna Ridler, with her painstaking photographic collection of tulips, and Helena Sarin, who trains GANs on her own drawings and paintings), but it is rare. For practical reasons (scale and availability), most AI artists select large public data sets to train GANs. However, because these public data sets are widely available, as are the GANs used to process them, there are signs that the results are becoming increasingly homogenous.

Ellul bucks this trend by not only using his own materials, but by using the most personal materials possible: photographs from his own life’s relationships, experiences, and memories, which are no doubt loaded with personal meaning and associations. He owns the material, in the truest sense of the word, as he has quite literally lived it. From Ellul:

It was a surprising realization just how much data I have created over my life and how effectively it can be harnessed in the creative process. Some look like me physically, but the face and expression I would never pull in a photo — it’s this surreal look that captures a feeling and encourages me to express it. Others look like a blend of me and a friend with similar surreal expressions.

Once I was happy with the outputs of the model, I spent a long while just watching the waves of eerily familiar faces that it produced. Often, I’d recognize a face as my own or fused with a close friend – despite never being captured with that expression – certain frames would perfectly resonate with a part of me when I saw them.

Number 3, Liam Ellul, 2018

Number 3, Liam Ellul, 2018

Number 3, Liam Ellul, 2018 (Source image from GAN)

Number 3, Liam Ellul, 2018 (Source image from GAN)

Fascinated by Ellul’s use of GANs as a departure point or inspiration for creating physical paintings, I asked him about both his artistic and technical background.

Ellul shared that he has been creating portraits as a sort of visual journal since his grandfather first taught him to draw with charcoal when he was 10 years old (though he later switched to painting in acrylic). He initially went to school for law but realized “it wasn’t something I wanted to do professionally,” and he eventually shifted his focus to a rapidly growing interest in analytics. This led to Ellul and a friend launching “a small company focused on agricultural crop analysis and research.” It was there that Ellul learned about neural networks while testing predictive models for plant growth. Again from Ellul:

The first time I saw a GAN was in 2017 in Alec Radford’s GitHub repo where he showed the generation of bedrooms, faces, and album art. My brain broke. Then mid-last year I saw the incredible high-resolution faces you could get with GANs; something clicked in my brain and I felt compelled to do this portrait series.

Self Portrait (With My Friends), Liam Ellul, 2018

Self Portrait (With My Friends), Liam Ellul, 2018 (Source image from GAN)

Ellul now works in strategy and product development at Microsoft and creates his artwork on the side. I asked Ellul if he had any upcoming projects and, if so, what was next:

Yes! I love the adventurous nature of this area and the experience of running through a personal gauntlet to get these paintings out! In terms of what’s next, I have two ideas bubbling away that are very much still coming together. Network design and exploring ways networks can be linked together is something I will put more time into as I develop my approach. I am also going to see if I can make the switch from acrylics to oils!

Conclusion

While the purist in me loves seeing work created digitally staying digital, I suspect we will increasingly see artworks executed in a variety of media as GANs come into their own as a tool for augmenting creativity (imagine what a GAN-inspired sculpture might look like). I think this is an interesting direction, and I’m encouraged by the exploration and work of artist/technologists like Ellul and his recent portraits.

As always, feel free to reach out to me at jason@artnome.com with any questions or suggestions. You can also hit me up on Twitter, my social media of choice, at @artnome.


DeepDream Creator Unveils Very First Images After Three Years

January 2, 2019 Jason Bailey
Cats, one of the first DeepDream images produced by its inventor, Alex Mordvintsev

In May of 2015, Alex Mordvintsev’s algorithm for Google DeepDream was waaay ahead of its time. In fact, it was/is so radical that its “time” may still never come.

DeepDream produced a range of hallucinogenic imagery that would make Salvador Dalí blush. And for a month or so, it infiltrated all of our social media channels and all of the major media outlets, and it even became accessible to anyone who wanted to make their own DeepDream imagery via a variety of apps and APIs. With the click of a button, I turned a photo of my wife into a bizarre gremlin with architectural eyes and livestock elbows.

Image I made using a DeepDream app in August 2015, just three months after it was invented by Alex Mordvintsev

And then — “poof” — DeepDream just kind of disappeared. It is the nature of art created with algorithms that when the algorithms are shared with the public, the effect quickly hits a saturation point and becomes kitsch.

I personally think DeepDream deserves a longer shelf life, as well as a lot of the credit for our current fascination with machine learning and art. So when Art Me Association, a non-profit organization based in Switzerland, recently asked if I wanted to interview Alex Mordvintsev, the developer behind the DeepDream algorithm, I said “yes” without hesitation.

And when Alex shared that he recently found the very first images DeepDream had ever produced and then told me that he had never shared them with anyone, I could hardly contain myself. I immediately asked if I could share them via Artnome. Well, to be honest, I first asked if I could buy them for the Artnome digital art collection (collector’s instincts), but it turns out Google owns them and has let Mordvintsev share them through a Creative Commons (CC) license. Something tells me that Google probably doesn’t need my money.

Father Cat, May 26, 2015, by Alexander Mordvintsev

For me, Mordvintsev’s earliest images from May 2015 are as important as any other image in the history of computer graphics and digital art. I think they belong in a museum alongside Georg Nees’ Schotter and the Newell Teapot.

Custard Apple, May 16, 2015, by Alexander Mordvintsev

Why do I hold so much reverence for the early DeepDream works? DeepDream is a tipping point where machines assisted in creating images that abstracted reality in ways that humans would not have arrived at on their own. A new way of seeing. And what could be more reflective of today’s internet-driven culture than a near-endless supply of snapshots from everyday life with a bunch of cat and dog heads sprouting out of them?

I believe DeepDream and AI art in general are an aesthetic breakthrough in the tradition of Georges Seurat’s Pointillism. And to be fair, describing Mordvintsev’s earliest DeepDream images as “just a bunch of cat and dog heads emerging from photos” is about as reductive as calling A Sunday on La Grande Jatte “a bunch of dots.”

That Mordvintsev did not consider himself an artist at the time and saw these images as a byproduct of his research is not problematic for me. Seurat himself once shared: “Some say they see poetry in my paintings; I see only science.” Indeed, to fully appreciate Mordvintsev’s images, it is also best to understand the science.

I asked Mordvintsev about the origins of DeepDream:

The story behind how I invented DeepDream is true. I remember that night really well. I woke up from a nightmare and decided, at 2:00 AM, to try an experiment I had had in mind for quite a while. That experiment was to make a network add details to a real image, to do image super-resolution. It turns out it added some details, but not the ones I expected. I describe the process like this: neural networks are systems designed for classifying images. I’m trying to make one do things it is not designed for, like detect traces of the patterns it is trained to recognize and then amplify them to maximize the signal in the input image. It all started as research for me.
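In code terms, that amplification is typically implemented as gradient ascent on the input image itself. Here is a minimal sketch in the spirit of DeepDream, not Mordvintsev’s original code; the network, layer cutoff, input file, and step count are all arbitrary choices of mine:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Truncate a pretrained classifier at a mid-level layer; deeper layers
# amplify more complex patterns (eyes, fur) than shallow ones (edges).
net = models.vgg16(weights="DEFAULT").features[:20].eval()

# "photo.jpg" is a placeholder for any input image.
img = transforms.ToTensor()(Image.open("photo.jpg").convert("RGB").resize((224, 224)))
img = img.unsqueeze(0).requires_grad_(True)

# Optimize the *image*, not the network weights.
optimizer = torch.optim.Adam([img], lr=0.02)
for _ in range(40):
    optimizer.zero_grad()
    activations = net(img)
    # Maximize the layer's response: nudge the image toward whatever
    # patterns the network already faintly detects in it.
    loss = -activations.norm()
    loss.backward()
    optimizer.step()

transforms.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dreamed.jpg")
```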

I asked Alex what it was like to see his algorithm spread so quickly to so many people. I thought he might have regretted it getting “used up” by so many others, but he was far less shallow than me in this respect and took a broad-minded view of his impact:

I should probably have been involved in talking about it at that moment, but I was more interested in going deeper with my research and wanted to gain a deeper understanding of how things were working. But I can’t say that after three years of research that I understand it. So maybe I was over-excited in research at the moment.

I think it is important that everyone can participate in it. The idea that Iana, my wife, tries to convey is that this process of developing artificial intelligence is quite important for all the people and everyone can participate in it. In science, it isn’t about finding the answer, it is more about asking the right question. And the right question can be brought up by anybody.

The way I impacted society [with DeepDream] is that a lot of people have told me that they got into machine learning and computer vision as a result of seeing DeepDream. Some people even sent me emails saying they decided to do their Ph.D.s based on DeepDream, and I felt very nice about that. Even the well-known artist Mario Klingemann mentioned that he was influenced by DeepDream in an interview.

Indeed, I reached out to artist Mario Klingemann to ask him about the significance of DeepDream for him and other prominent AI artists. He had this to say:

The advent of DeepDream was an important moment for me. I still remember the image of this strange creature that was leaked on reddit before anyone even knew how it was made and knew that something very different was coming our way. When the DeepDream notebook and code was finally released by Google a few weeks later, it forced me to learn a lot of new things; most importantly, how to compile and set up Caffe (which was a very painful hurdle to climb over), and also to throw my prejudices against Python overboard.

After I had understood how DeepDream worked, I tried to find ways to break out of the PuppySlug territory. Training my own models was one of them. One model I trained on album covers which, among others, had a "skull" category. That one worked quite nicely with DeepDream since it had the tendency to turn any face into a deadhead. Another technique I found was "neural lobotomy," in which I selectively turned off the activations. This gave me some very interesting textures.

Where I had seen sharing the code to DeepDream as a mistake, as it quickly over-exposed the aesthetic, Mordvintsev saw a broad and positive impact on the world which would not have been possible without it being shared. Mordvintsev also took some issue with my implication that DeepDream was getting “old” or had been “used up.” It turns out that my opinion was more a reflection of my lack of technical abilities (beyond using the prepackaged apps) than a reflection of DeepDream’s limitations as a neural net. He politely corrected me, saying:

Maybe you played with this and assumed it got boring. But lately, I started with the same neural network, and I found a beautiful universe of patterns it can synthesize if you are more selective.

I was curious why so many of the images had dog faces. Alex explained to me that he was using a network pretrained on ImageNet, a standard benchmark dataset for image classification that was established around 2010. ImageNet includes 120 categories of dog breeds to showcase “fine-grained classification.” Because so much of the network’s capacity goes to telling dog breeds apart, its outputs are strongly biased toward dogs. Alex points out that others have applied the same algorithm to models trained on MIT’s Places Image Database; those images tend to highlight architecture and landscapes rather than the dogs and birds favored by ImageNet-trained networks.

I asked Mordvintsev if he now considers himself an artist.

Yes, yes, yes, I do! Well, actually, we are considering my wife and I as a duo. Recently, she wanted to make a pattern for textiles that tiled well, and I sat down and wrote a program that tiled. And most generative art is static images on screen or videos, and we are trying to get a bit beyond that to something physical. We recently got a 2.5D printer that makes images layer by layer. I enjoy that a lot. But our artistic research lies mostly in this direction: moving away from prints into new mediums. Recently, we had our first exhibition with Art Me at Art Fair Zurich and we had sponsorship from Google. We are interested in showing our art to the world and trying to explain it to a wide audience.

Alex and Iana Mordvintsev prepping to show their latest work at Art Fair Zurich

While I appreciated DeepDream from the beginning, I felt it became kitsch too quickly as a result of being shared so broadly. Speaking with Alex makes me second guess that. It’s now clear to me that Alex did the world a service by making his discovery so broadly available and that he still sees far more potential for the DeepDream neural net (and he would know). There are some critics who just don’t “get” AI art, but as Seurat said: “The inability of some critics to connect the dots doesn't make Pointillism pointless.”

Above: Alex Mordvintsev’s NIPS Creativity Art Submission

As always, thanks for reading! If you have questions, suggestions, or ideas, you can always reach me at jason@artnome.com. And if you haven’t already, I recommend you sign up for the Artnome newsletter to stay up to date with all the latest news.


How Artnome Stumbled Into Writing About The Three Biggest Art Stories of 2018

December 31, 2018 Jason Bailey

Reading all the “2018 art year in review” articles over the last few days has really helped hammer home for me that the three most talked-about stories in art in 2018 were:

  • Blockchain and art

  • The Banksy shredding

  • The AI art sold at Christie’s

While I don’t think of Artnome as a traditional art news site, we had some of the earliest stories on all three topics. We also ended up near the top of Google’s search results for all three stories. How did this happen?

Well, it was mostly dumb luck. But I always enjoy and appreciate it when blog authors and entrepreneurs candidly share the stories and the data from behind the curtain. So this post will blend our 2018 year in review with a backstage look at Artnome, warts and all.

Part One: Blockchain, Art, and “Being There”

Someone really needs to remake the movie Being There because it is a great cultural touchstone, but most people are too young to get the reference these days. In this rags-to-riches movie, a simple-minded, sheltered gardener named Chance, who knows only about gardening and what he’s learned from daytime TV, is forced to leave his home and enter the great big world after his employer passes away. Through a series of hilarious twists, Chance the gardener becomes “Chauncey Gardner” after he is mistaken for an upper-class gentleman. The story culminates with Chance giving the president of the United States basic gardening advice, which the president takes as sage wisdom on the nation’s economy.

In 2018, I felt like the Chauncey Gardner of the art world, only instead of learning all I know from television, I had the internet. It started at the very end of 2017 when I spent half a day researching and writing about blockchain and art. I published a short blog post called The Blockchain Art Market Is Here and went to bed thinking nobody would ever read it. The next day I searched “blockchain art” on Google, only to discover my article had somehow stumbled to the top of the search results (where it stayed for most of the year). Within days I was getting dozens of emails with detailed questions about art and blockchain.

Traffic for my article The Blockchain Art Market Is Here followed a similar trajectory to the crypto markets… down and to the right

Then came interview requests and invitations to speak on panels at conferences all around the world as an expert in blockchain and art. I had somehow become an accidental expert despite knowing very little. To fix this, I went to a lot of conferences and spoke with a lot of folks who were much smarter than I am. I wrote a few more articles, started a podcast, and I learned enough to be able to moderate two panels in London at Christie’s Art + Tech event. It helped that my panels were loaded with brilliant, dynamic, cutting-edge thinkers. All I really had to do was get out of their way.

Blockchain had exploded in the art world, and I had just written the right article at the right time. Whether you cared about blockchain or not, you were forced to express your opinion or risk being left out of the conversation.

Then the cryptocurrency market started to tank, and the only people left talking about blockchain and art either truly believed in it and were building really cool stuff or were late to the party and did not realize the hype train had left the station.

With no more requests for speaking engagements, I went back to doing what I enjoy most: writing about the crazy stuff at the intersection of art and tech that I find fascinating.

Part Two: AI Art Gets Awesome

Robbie Barrat, AI Generated Nude Portrait #1, 2018

Early in 2018, I became obsessed with the Twitter feed of @DrBeef_, a hyper-creative teenage artist named Robbie Barrat from West Virginia. Back in April, we became friends after I interviewed him and purchased some of his AI Nudes. I was a huge fan, and Robbie was (and still is) really generous in helping me better understand how artists are using GANs (generative adversarial networks) to make really cool new art.

However, almost nobody read my interview with Robbie (AI Art Just Got Awesome) in the first week, so I didn’t really think much of it. Then two months after I initially published the article, I noticed a spike in the number of people reading the interview.

Traffic for my interview with Robbie Barrat, AI Art Just Got Awesome

Two things had happened. First, a bunch of other media outlets had picked up on Robbie’s work. Second, Christie’s was heavily promoting that it was going to be the first to sell an AI artwork at auction. The work they were selling was by a French art collective called Obvious, whom I’d also been friendly with on Twitter.

Portrait of Edmond Belamy, 2018, Obvious

Unfortunately, Obvious had made some poorly thought out public claims about the AI being responsible for making the art, implying no real human involvement. The media ate that up and ran like crazy with it, undercutting the brilliant work that many AI artists had been doing for years by further suggesting humans had no role in creating AI art. Additionally, Obvious had borrowed heavily from the work of Robbie Barrat and did not do a great job of crediting him. This made them pariahs in the AI art community.

Still, I sympathized with Obvious. There was no way they could have predicted that they would end up on the world stage having their every word and action scrutinized. So when Hugo Caselles-Dupré, the tech lead from Obvious, confided in me that the media’s version of the story was out of control and he wanted to come clean with the real story to smooth things over with the AI art community, I obliged. The interview, initially published under the title The AI Art At Christie’s Is Not What You Think, went for well over an hour and was the first article where Obvious acknowledged that they borrowed heavily from Robbie Barrat.

Traffic from my interview with Obvious technical lead Hugo Caselles-Dupré titled The AI Art At Christie’s Is Not What You Think

The interview drew more attention when it was cited by many other outlets, including The Verge, The Art Newspaper, Artsy.net, and Smithsonian.

In less than a year, I had stumbled from becoming an accidental expert in blockchain at the exact right time to becoming an accidental expert in AI art at the exact right time. When I had first started writing about AI art and GANs in April, I had no reason to believe anyone in the mainstream would care. Now I am headed to Bahrain this March to moderate two panels on AI and creativity with a bunch of the artists I really admire, including Robbie Barrat. Life is strange and unpredictable.

Part Three: Myth Busting Banksy


The self-shredding Banksy was the perfect “art” story for the mainstream media. If you only know one or two living artists, chances are Banksy is one of them. And sadly, the most popular stories surrounding art typically focus on two areas: first, works that are either intentionally or accidentally destroyed; and second, works that sell for far more or far less than expected. So when the Banksy painting at Sotheby’s went up in value by shredding itself mid-auction, we had the perfect storm.

I felt a bit like an ambulance chaser writing about the Banksy shredding, but as a blogger interested in art and tech, it felt natural for me to write about the device and how I thought it worked (or didn’t work). This was a quick article - I polled my father and brothers (who are all engineers) on their thoughts and pumped out an article in an hour or two - and it became the most popular Artnome article of the year, with over 43K page views.

Traffic from my article Myth Busting Banksy

You’ll notice this article did not have the SEO staying power of the others. Seemingly everyone weighed in on it for about a week and then forgot about it. In this case, the enormous spike in traffic was largely due to other better-known outlets picking up our story and linking back to it, most notably, Boing Boing and the AV Club (for which we are always grateful).

Part Four: To The Moon! …Maybe

Around early October, I started thinking I was on my way to 100K visitors a month, which felt mind-blowing for a blog that averaged one post a month and mostly focused on data and art history (the above-mentioned stories notwithstanding). I fantasized about the traffic growth I might get if I wrote more frequently. To find out, I stayed up later on weeknights and wrote on both weekend days instead of just one. Aaaaand… my traffic came crashing back to earth.

Pageviews by month across all of Artnome since the site started in June of 2017

I had made two mistakes: A) I misread three good months of increasing traffic as a solid trend, and B) I assumed more content would automatically mean more traffic. This is a pretty bad mistake for a guy who has spent almost two decades in digital marketing for his day job. It’s much easier to see what happened if you look at it from a weekly or even daily view instead of monthly.

Pageviews by week across all of Artnome since the site started in June of 2017

Pageviews by day across all of Artnome since the site started in June of 2017

As becomes obvious on the daily pageviews chart, the bulk of my record-breaking months for traffic came from one or two days - not a sign of smooth and steady growth, just a few outliers. But I fell victim to seeing what I wanted to see rather than what was there.

Part Five: Onward… The Good News

So buried under the outliers, there is actually some really solid growth for Artnome in 2018. And it is built not on the huge number of people chasing stories about shredded Banksy paintings, but on really intelligent and creative people looking to learn more about art, tech, and data.

In fact, six of the top seven Artnome posts this year really had nothing to do with news at all. In particular, two articles I wrote on generative art, Why Love Generative Art and Generative Art Finds Its Prodigy, performed extremely well and continue to drive traffic.


In many ways this is a relief. If the secret sauce to growing Artnome was to race against thousands of other news outlets to write a high volume of short articles about artworks getting damaged, I couldn’t compete (and wouldn’t want to).

Instead, I think there is a large audience of folks who want articles that go a bit deeper into tech and art, whether it is:

  • Using data to highlight new discoveries like our recent post on Van Gogh’s shift in color palette

  • Providing an in-depth look at the arms race for compute power in AI art

  • Drawing attention to the need for better data and analytics on art

  • Showing how forgery and misattribution flourish in the absence of good data

  • Highlighting innovators who are trying to make the world better for artists

  • Sharing stories about artists and art movements who don’t get nearly as much attention from the art world as they deserve

As for growing Artnome, I think I will listen to the sage wisdom that Chauncey Gardner gave to the president of the United States on growing the economy in the movie Being There.


Thanks for reading. If you have thoughts or questions, you are always welcome to hit me up at jason@artnome.com. Here’s to wishing you and your family a happy and productive 2019!


Artist Cryptograffiti Sets Auction Record For Least Expensive Art

December 28, 2018 Jason Bailey

“I’m excited about a future where micropayments are omnipresent. Artists paid by the view, writers by the poem, musicians by the listen.” - Cryptograffiti

In a recent auction designed to sell to the lowest bidder, artist “Cryptograffiti” sold his elegant work Black Swan, a collage made from a single dollar bill, for $0.000000037, making it the least expensive artwork ever sold at auction. To understand why, we recently spoke with the artist.

Like Banksy and Shepard Fairey, Cryptograffiti’s origin story begins with street art, only his has a uniquely Silicon Valley twist. Around 2011, Cryptograffiti left a job at Apple to launch a startup inspired by a Myspace feature called “Top Eight.” His startup’s product allowed you to share your favorite photos in a tangible piece of hardware:

The “Top Eight” was really fascinating to me because this was back when social networks were really picking up steam. The psychology behind why that was such a popular feature was really interesting. I thought that if I could make a product that encapsulated that, then the product would also be popular… kind of a modern take on lockets. You can wear a photo, and then there was an app so you can also tell people which photo you were wearing and why.

It turned out that the person helping Cryptograffiti develop the app was really into Bitcoin and was interested in being paid in cryptocurrency. This got Cryptograffiti looking at Bitcoin and blockchain much more closely.

It was pretty clear to him that blockchain and cryptocurrency would eventually make the old banking system obsolete, so he began exploring this idea of making art using materials from the dying banking system (old credit cards and paper money) to “help explain this new era that was coming” ushered in by currencies like Bitcoin.

Cryptograffiti then had an “a ha” moment in late 2012 when he learned about micropayments.

I started hearing about micropayments as being the future: essentially being able to pay for things in little bits that wouldn’t be possible otherwise because of the minimum fees that come with credit cards…

…I have artists in my family and I was aware of the trials and tribulations that they had in the traditional art world. So I started to think of different ways that crypto and micropayments could be used specifically for artists as new revenue channels, and that got me to thinking about doing street art with the QR codes attached, and if people liked the work, then they could send over some Bitcoin. There were a number of different things that made me want to go all in, and in 2013, my startup was only doing “okay,” and it just didn’t seem as fascinating as this new world that was laid out before me. So I just decided to really jump in. It was super risky, but I’m really glad that I did.

Seattle, example of Cryptograffiti’s street art made from credit cards and using QR codes to accept tips in Bitcoin

I asked Cryptograffiti to help me understand micropayments a little better, because while I love the idea in theory, I couldn’t understand how it would work if the transaction fees associated with making payments using cryptocurrencies would exceed the amount of money being spent with a micropayment.

A lot of it depends on how overloaded the system is. Back in 2012 and 2013, there was just not as much congestion going on. But if you look at a year ago, December 2017, the fees were sky high for Bitcoin because there were a lot of transactions happening and miners could pick who they wanted to work with, so settlement times were slower and the fees were higher. A lot of this comes down to a scaling issue, but there are solutions in the works. That’s part of why I wanted to do something with the Lightning Network with my art, because there is so much talk about price in the mainstream media and really not much discussion outside crypto circles about some of the solutions that people are working on.

Which brings us back to Cryptograffiti’s Black Swan setting the auction record for least expensive artwork sold at auction. The auction was designed to reward the lowest bidder (instead of the highest bidder) to draw attention to the increasing viability of micropayments now made possible by the Lightning Network. The Lightning Network speeds up Bitcoin transactions while reducing transaction costs. As Cryptograffiti describes it:

The “Black Swan” was a fun idea I had knowing that it would not be lucrative, to help spread awareness about the Lightning Network. For those that don’t know, the Lightning Network is a payment channel layer on top of Bitcoin to help alleviate some of these payment scaling issues. So essentially, you can open up a channel with someone, make payments with them, and it will get settled up with the blockchain later on when the channels are closed, and so it helps with the congestion. There are no fees and it’s very quick. It’s really groundbreaking stuff. If it works, then it is going to bring about this era of micropayments that I yearned for from the beginning. Doing things like paying for reading an article or paying by the song or by the view for an artwork, these are all interesting ideas that haven’t been able to happen yet because of the payments to middlemen like credit cards.

Cryptograffiti’s Black Swan shown next to a tiny potted plant.

The Black Swan itself is clever and aesthetically appealing. It’s a tiny work, measuring in at 1.44 in x 1.75 in (3.66 cm x 4.44 cm), and it features Cryptograffiti’s signature style of collage using older forms of physical currency, in this case, a single dollar bill. But I find the larger performance to be the most engaging part of the work.

For example, the video Cryptograffiti created to promote the artwork reminds me of a bootstrapped version of the massive marketing campaigns put out by Sotheby’s and Christie’s to promote and elevate works they bring to auction.

The special protective case, the white glove treatment, and the soundtrack (Mozart’s Eine Kleine Nachtmusik) all create a hilarious parody of the exclusivity and seriousness with which we treat important artworks at auction, which in turn drives home the absurdity of selling Black Swan for as little as possible in an auction designed as a race to the bottom.

Beyond the brilliant marketing campaign, I see the auction itself as an essential part of the artwork. For me, Black Swan is a conceptual or performance art piece with the swan itself serving as just one part of the performance.

Cryptograffiti’s Black Swan selling at auction for $0.000000037

In a world where it is nearly statistically impossible for artists to be “discovered” and self-shredding Banksys and controversial AI art capture headlines, artists seeking a larger audience for their work could learn from Cryptograffiti. In many ways, Cryptograffiti’s savvy Black Swan marketing campaign is indistinguishable and inseparable from the artwork itself.


New Data Shows Why Van Gogh Changed His Color Palette

December 24, 2018 Jason Bailey
Vincent Van Gogh, Wheatfield With a Reaper, September, 1889

When most people think of Van Gogh, the first color that comes to mind is a warm, radiant, golden yellow. Yellow sunflowers, yellow fields of grain, even the yellow moon in Starry Night.

But Van Gogh’s paintings did not start out that way. As late as 1885, roughly halfway through the short decade Van Gogh spent painting, he was still in his Dutch period, painting works like The Potato Eaters, which feature dark, muddled grays, browns, and greens.

Vincent Van Gogh, The Potato Eaters, 1885

Curious about this shift from dark to light, we decided to use data visualization techniques to better isolate the moment of transition to a bright yellow color palette.

As a first step, we sorted every Van Gogh painting by year and then calculated the simple arithmetic average for each year by dividing the summed pixel values by the number of paintings. In layman’s terms, we created the “average” Van Gogh painting for each year he was active. We were hopeful that this could pick up some of the subtleties in his shifting color palette over time. There were too few works to get a solid average in the first two years, so we shortened the range to 1882-1890.
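A minimal sketch of that per-year grouping in Python (the folder layout is hypothetical, and Kyle's actual pipeline may differ):

```python
import glob

import numpy as np
from PIL import Image

def average_image(paths, size=(400, 400)):
    # Pixelwise arithmetic mean: sum the resized images, divide by the count.
    total = np.zeros((size[1], size[0], 3), dtype=np.float64)
    for p in paths:
        total += np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
    return Image.fromarray((total / len(paths)).astype(np.uint8))

# Hypothetical layout: one folder of paintings per year, e.g. van_gogh/1888/*.jpg
for year in range(1882, 1891):
    paths = glob.glob(f"van_gogh/{year}/*.jpg")
    if paths:
        average_image(paths).save(f"average_{year}.png")
```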

The “average” Van Gogh painting for each year, 1882-1890

Looking at the series of images, we see an unmistakable shift towards a lighter yellow palette starting in 1888.

We are not the first to notice this. There are two popular theories about why Van Gogh shifted his color palette:

  • Illness/medication leading to “yellow vision”

  • Influence from the French Impressionists while working in Paris

We will briefly look at both theories below and then offer up our own.

Did Van Gogh Suffer From “Yellow Vision”?

One popular theory behind the shift in Van Gogh’s color choices is that he might have suffered from xanthopsia, or “yellow vision.” Xanthopsia is a “color vision deficiency in which there is a predominance of yellow in vision due to a yellowing of the optical media of the eye.” When caused by glaucoma, this can also include halos and flickering, which many think explains why Van Gogh depicts light as radiating outward, as in The Night Cafe (1888) and The Starry Night (1889).

Vincent Van Gogh, The Night Café, 1888

Vincent Van Gogh, The Starry Night, 1889

Others believe that Dr. Gachet, the physician who treated Van Gogh in his final months at Auvers-sur-Oise, may have treated Van Gogh’s seizures with digitalis extracted from the foxglove plant, which is also known to cause yellow-blue vision and halos as a side effect.

Vincent Van Gogh, Portrait of Dr. Gachet, 1890 (note the foxglove plant shown in the portrait)

Another frequently cited reason for the shift in Van Gogh’s color palette was his move to Paris in 1886. It is generally assumed that he was inspired by the bold use of color by the French Impressionists.

We were not convinced by the medical explanations for the shift in Van Gogh’s color palette, and we could not think of any French Impressionists who painted with colors nearly as bold as Van Gogh’s, so we decided to take a look at some other possibilities.

Did Van Gogh Use More Yellow Because He Moved to a Sunnier Climate?

Van Gogh was a restless soul and moved around quite a bit. He also spent a lot of time painting outdoors, especially in his later years. Since Van Gogh famously struggled with mood swings, we thought location, and more importantly weather patterns, might have influenced his use of color.

To test this, we created composite images averaging every painting Van Gogh created from each of the major locations he worked from and compared them to weather patterns from those regions. We think the results are quite remarkable.

Composite “average” paintings and regional weather patterns for each location: The Hague, Arles, Nuenen, Saint-Rémy, Paris, and Auvers-sur-Oise

Look at the spike in sunshine in Arles as compared to all previous locations! We feel pretty confident that it was the warm weather and bright colors of southern France that influenced Van Gogh’s shift towards bolder colors, not “yellow vision” or exposure to the French Impressionists as previously thought.

Not only did Van Gogh literally see the world bathed in yellow sun while in Arles and Saint-Rémy, he was also able to get outside more often because there were more sunny days. I don’t think it is unreasonable to assume exposure to the sun and outdoors may also have lifted his mood, causing him to brighten his palette in response, as well.

Charting the average painting by location also surfaced some interesting questions of timing. To make these clearer, we created the chart below. Each bar is colored using the average color of the paintings created in that region, and the chart shows the order in which Van Gogh lived in each region along with the total number of paintings he produced there.
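Here is a sketch of how a chart like this can be drawn with matplotlib; note that the counts and colors below are placeholder values for illustration, not the real Artnome figures:

```python
import matplotlib.pyplot as plt

# Placeholder data: locations in chronological order, with made-up
# painting counts and average RGB colors (the real numbers come from
# the Artnome database).
locations = ["The Hague", "Nuenen", "Paris", "Arles", "Saint-Remy", "Auvers-sur-Oise"]
counts = [120, 190, 225, 185, 150, 75]
avg_colors = [(0.35, 0.32, 0.25), (0.30, 0.27, 0.18), (0.45, 0.42, 0.33),
              (0.58, 0.50, 0.26), (0.54, 0.48, 0.28), (0.48, 0.45, 0.30)]

plt.figure(figsize=(9, 4))
plt.bar(locations, counts, color=avg_colors)  # bar color = average palette
plt.ylabel("Number of paintings")
plt.title("Van Gogh's output by location, bars colored by average palette")
plt.tight_layout()
plt.savefig("vangogh_locations_chart.png")
```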

Chart: Van Gogh’s locations in chronological order, with each bar colored by the average painting color and sized by the number of paintings produced there

Note that from this chart we can very clearly see that Van Gogh’s palette shifted after, not during, his time spent with the French Impressionists. Let’s call that myth busted.

It is also clear from the chart that Van Gogh’s palette turned yellow years before he became a patient of Dr. Gachet at Auvers-sur-Oise. Second myth… also busted.

This leaves our theory of increased exposure to sunlight, which the data supports. Of course, correlation doesn’t necessarily imply causation, and we can’t say for sure that the weather caused Van Gogh’s palette to switch. But we feel it is a stronger hypothesis than the ones currently out there, and Van Gogh certainly left us plenty of clues to support our belief that increased sun was his inspiration for using more yellow. Our favorite:

“How wonderful yellow is. It stands for the sun.” - Vincent Van Gogh

Vincent Van Gogh, Vase with 15 Sunflowers, 1888

Conclusion

At Artnome, we are big believers that data and new analytical tools can and should be used to provide new context for important art and artists. In this case, we feel relatively confident that by using data visualization, we have ruled out the two most popular explanations for Van Gogh’s shift in color palette: first, illness/medication; and second, Impressionism. We have also used weather data and geo-data to propose a more reasonable theory behind the switch to a bright yellow palette: When Van Gogh moved to the south of France, he experienced a significant increase in sunny days. His world became dramatically brighter and more yellow; he simply painted what he saw (while adding his own artistic license).

This article would not be possible without the hard work and analysis of Artnome data scientist Kyle Waters. He has made an enormous contribution to Artnome in a short amount of time, and we are excited to continue working with him. In an upcoming article, he will go into more detail about how he calculated the average images for Van Gogh.

Also, if you like this article, you may also enjoy our other nerdy art articles on:

  • Inventing the Future of Art Analytics

  • Quantifying Abstraction

  • Searching All 1800+ Of Munch’s Paintings With Machine Learning

As always, thanks for reading and for your support. Feedback is always welcome, and you can contact us at jason@artnome.com.




Machine Learning Art: An Interview With Memo Akten

December 16, 2018 Renée Zachariou

Memo Akten, Learning to see: We are made of star dust (#2), 2017

“A deep neural network making predictions on live camera input, trying to make sense of what it sees, in context of what it’s seen before. It can see only what it already knows, just like us. Trained on images from the Hubble telescope. (not 'style transfer'!)”


“If we do not use technology to see things differently, we are wasting it.”
- Memo Akten

I met Memo Akten before he grabbed the train to London, where he is currently developing some exciting projects and pursuing a PhD in machine learning.

R: Memo, I first contacted you for an article I was writing on artificial intelligence and the art market (you can read it here). The timing was too tight, though, so I’m glad we’re meeting today to discuss your art practice more broadly! But let’s start with AI anyway: as an artist who has long been active in this field, I am curious to hear your analysis of the field as it stands now. Can you briefly explain who you are and what you do?

M: Broadly speaking, I work with emerging technologies as both a medium and a subject matter, looking at their impact on us as individuals, as extensions of our mind and body, and their impact on society, culture, tradition, ritual, etc.

Simple harmonic motion #12 for 16 percussionists. Live at RNCM

These days I’m mostly thinking about machines that learn, machines that think; perception, cognition, bias, prejudice, social and political polarization, etc. The current rise of big-data-driven, so-called ‘AI’ acts as a rather apt mechanism through which to reflect on all of this.

I generally try to avoid using the term ‘AI’ - unless I’m specifically referring to the academic field - as it’s very open to misinterpretation and unnecessarily egregious disagreement over terminology. Once, after a panel, I had a member of the audience approach me, and rather angrily explain to me that AlphaGo (DeepMind’s software which beat the world champion Go player) could not be considered ‘AI’ because it had no ‘sense of self,’ which is okay, I guess. But it’s also why instead I say these days I work with machine learning, a term that’s easier to define – a system which is able to improve its performance on a particular task as it gains experience. More specifically, I work with deep learning, a form of machine learning which is able to operate on vast amounts of ‘raw,’ high-dimensional data, to learn hierarchies of representations. I also think of it as the process of extracting meaningful information from big data. A more encompassing term which can refer to what we usually mean by ‘AI’ these days is ‘data-driven methods or systems,’ and specifically ‘big-data-driven methods or systems.’ 

R: So what you’re interested in is not the technology itself, but the effect on society? If, let’s say, pigeon catching was the latest tech revolution, would you be working on that instead? 

M: If it impacted our world in such a massive way as the current big-data-driven systems do, I probably would. For example, I’m also very interested in the blockchain, but I do not feel it is as urgent a topic. Maybe it will be in a few years… (especially with the energy consumption!).

R: AI-generated art surely feels like a hot topic right now with the recent market hype around the Obvious sale at Christie’s [an AI generated painting that fetched $432,000 in October, 2018]. What do you make of it? 

M: First, I’d like to set the context for this discussion by bringing to attention the fact that the art market is a place where, with the right branding, you can sell a pickled shark for $8 million. The art market is ultimately the purest expression of the free, open market. The price of an object is determined by how much somebody is willing to pay for it, which is not necessarily related to its cultural value.

I decided not to talk about this before the auction because I feel the negative press and pushback from other folks in the field created too much controversy and fueled the hype. Articles came out daily with opinions from experts, and I’m sure all of this hype inflated the price [the painting was initially estimated at $8-10 thousand].  

There’s a spectrum of approaches to the practicalities of making work in this field with generative deep neural networks:

  • Train on your own data with your own (or heavily modified) algorithms

  • Train on your own data with off-the-shelf (or lightly modified) algorithms (e.g. Anna Ridler, Helena Sarin)

  • Curate your own data and use your own (or heavily modified) algorithms (e.g. Mario Klingemann, Georgia Ward Dyer)

  • Curate your own data and use off-the-shelf (or lightly modified) algorithms

  • Use existing datasets and train with heavily modified algorithms

  • Use existing datasets and train with off-the-shelf (or lightly modified) algorithms (this is what Obvious has done)

  • Use pre-trained models and algorithms (e.g., most DeepDream work, the recent BigGAN, etc.)

Personally, I think it is possible to make interesting work at each of these points on the spectrum (and I have tried every single one!). But as you get towards the end of the spectrum, you’ll need to work harder to give it a unique spin and make it your own. And I think a very valid approach is to conceptually frame the work in a unique way, even if using existing datasets, or even pre-trained models.

Robbie [Barrat], a young artist, was very upset that Obvious stole his code (which was open source with a fully permissive license at the time). It’s true that they used his code, especially to download the data. But it’s important to remember that the code which actually trains and generates the images is from [ML developer/researcher] Soumith Chintala, which Robbie had forked [copied] from. And the data is already online and open (in fact, I had also trained the exact same models on the exact same data, and I know others did, too). What actually shapes the output and defines what the resulting images look like is the data - which is already out there and available to download - and the algorithm - which, in this case, is a Generative Adversarial Network (GAN) implemented by Chintala. Anybody who puts that same data through that same algorithm (whether it’s Chintala’s code, or other implementations, even in other programming languages) will get the exact same (or incredibly similar) results.

I’ve seen some comments suggesting that the Obvious work was intentionally commenting on this issue of authorship, perhaps in a lineage of appropriation art, similar to Richard Prince’s Instagram Art, etc. But I don’t think that is the case, judging by Obvious’ interviews and press release. Instead, Obvious seems to be going down the ‘can a machine make art?’ angle, which is a very interesting question. Lady Ada Lovelace was already writing about this in 1843, and there have been countless debates, writings, musings, and works on this since then. So personally, I would look for a little bit more than just a random sample from a GAN as a contribution to that discussion. Like I mentioned, what somebody is willing to pay for an artifact is not necessarily related to its cultural value. If a student were to make this work, I would try to be very positive and encouraging, and say, “Great work on figuring out how to download the code and to get it to run. Now start exploring and see where you go.”

On a side note, I’m not a huge fan of the label ‘AI art,’ because I’m not a fan of the term ‘AI,’ but beyond that, because the term ‘AI art’ is somehow infused with the idea that only the art being made with these very recent algorithms is ‘AI art’, whatever that means. I definitely do not consider myself an ‘AI artist.’ If anything, I’m a computational artist, since computation is the common medium in all of my work. People make art by writing software, and have done so for 60 or so years (I’m thinking John Whitney, Vera Molnar, etc.), or even more specifically, Harold Cohen was making ‘AI art’ 50 years ago. In a tiny corner of the computational art world, Generative Adversarial Networks (GANs) are quite popular today, because they’re relatively easy to use, and for very little effort, produce interesting results. Ten to fifteen years ago, I remember Delaunay triangulation being very popular, because again, for relatively little effort, you could produce very interesting and aesthetically pleasing results (and I’m guilty of this, too). And in the ‘80s and ‘90s, we saw computational artists using Genetic Algorithms (GA), e.g., William Latham, Stephen Todd, Karl Sims, Scott Draves, etc. (On a side note, GA is a subfield of AI. So technically they are all AI artists, too.) Computational art will continue, it will grow, and the tool palette available to computational artists will expand. And it’s fantastic that new algorithms like GANs attract the attention of new artists and lure them in. But I will just avoid the term ‘AI art’ and call them computational artists or software artists or generative artists or algorithmic artists.

R: That’s it for market sentiment, then. Let’s focus on your practice again. What projects are you currently working on?

M: There’s a few angles that I’m pursuing, all very research-oriented. First is a theme that I’ve been investigating for a while now, which is looking at how emerging technologies – in this case, deep learning – can augment our ability to creatively express ourselves, particularly in a realtime, interactive manner with continuous control - analogous to playing a musical instrument, like a piano. How can I create computational systems, now using deep learning, that give people meaningful control and enable them to feel like they are able to creatively and even emotionally express themselves?

From a more conceptual angle, I’m interested in using machines that learn as a way to reflect on how we make sense of the world. Artificial neural networks [systems of hardware and/or software very loosely inspired by, but really nothing like, the operation of neurons in biological brains] are incredibly biased and problematic. They’re complicated, but can be very predictable, as well. Just like us. I don’t mean artificial neural networks are like our brain. I mean I just like using them as a mirror to ourselves. We can only understand the world through the lens of everything that we’ve seen or heard or read before. We are constantly trying to make sense of everything that we experience based on our past experiences. We see things not as they are, but as we are. And that’s what I’m interested in exploring and exposing. Some of my work tries to combine both of these (and other) themes. My Learning to See series, for example, is both a system for realtime expression, a potential new form of filmmaking and digital puppetry, and ultimately a demonstration of this extreme bias. One who has only ever seen thousands of images of the ocean will see the ocean everywhere they look.

As a more distilled version of this perspective, in 2017 I made a Virtual Reality (VR) piece FIGHT!. It doesn’t use neural networks or anything like that, actually. It uses the technology of VR, but is about as opposite to VR as is possible, I think. In the headset, your eyes are presented with monocularly dissimilar (i.e., very different) images. Your brain is unable to integrate the images together to create a single cohesive 3D percept, so instead the two rival images fight for attention in your conscious awareness. In your mind’s eye, you will not see both images blended, but the two rival images flicker back and forth as they alternate in dominance. In your conscious experience, your mind will conjure up animated swipes and swirly transitions – which aren’t really there. And this experience is unique and different for everybody, as it depends on your physiology. Everybody is presented with the exact same images, but everybody “sees” something different in their mind. And it’s impossible for me to know or see or ‘empathize’ with what you see. And of course, this is actually always the case, not just in this VR experience, but in daily life, in everything that we experience. We just forget that and assume that everybody experiences the world in the same way we do.

While I’m interested in these themes from a perceptual point of view, the underlying motivation behind these kinds of subjective experiences is to expose and investigate cognitive bias and polarization. I come from Turkey, which is currently torn in two over our current president. In the UK, where I’ve been living for 20 years, the Brexit referendum has also radically split society. There seems to be a trend where people in one camp attribute the other camp’s political views to them being ‘stupid.’ E.g., I’m very much for remaining in the EU, but it disturbs me when I see other ‘remainers’ believe that the only possible explanation for somebody voting to leave the EU is that they’re either stupid or racist (or both). I can’t see the world in such simple black-and-white terms. I’m sure many (or at least some) leavers have a line of reasoning more intricate than just being ‘stupid’ or ‘racist,’ even if I don’t agree with it. And if we refuse to acknowledge that, we can’t have a discussion, and we’ll never be able to reconcile our differences. We’ll be driven further apart, and ultimately things will only get worse.

R: Can you tell us a bit more about the PhD you’re currently doing at Goldsmiths, University of London? Is it purely technical?

M: My idea going into the PhD was very ambitious. I wanted to weave together art, neuroscience, physics, information theory, control theory, systems theory, perception, philosophy, anthropology, politics, religion, etc., but that turned out to be a bit too ambitious, at least for a first PhD. Now it’s narrowed down to something more technical. And like I mentioned before, for the past few decades I have been trying to create systems that enhance the human experience, particularly of creative expression. What I’m interested in are realtime, interactive, closed feedback loops with continuous control.

This is also how we sense the world. E.g., our eyes are constantly scanning, receiving signals, moving, receiving signals, moving. And the brain integrates all of that information, and that’s how we perceive and understand the world. This is also how we embody our tools and instruments, through action-perception loops. This is how we can embody something like a bicycle or a car, or from a creative self-expression point of view, it’s how we embody something like a piano: we hit a key, hear a note, feel it and respond to it. Eventually, we get to a stage where we don’t think about what we’re playing, we just feel it, it becomes an extension of the body, and the act of playing becomes an emotional act in itself. I don’t feel a tool like Photoshop has that level of immediacy or emotional engagement, once you click on the menu dropdown, etc…

I am looking to use deep learning in that context, to achieve meaningful, expressive continuous control. The way generative deep learning mostly works right now is, for example, you run training code on a big set of images, then you run the generation code, and it generates images. It’s like a black box where you can only press one button: ‘generate something.’ Of course, there are some levels of control you could have. You can control the training data you feed it, you can pick an image and tell the code to create similar images. And in recent years, there have been more ways of controlling the algorithm. But very few of these methods are immediate, realtime closed feedback loops with continuous control. This is both a computational challenge and a system design challenge, as current systems are simply not built with this in mind (though it is a growing field, so that’s very exciting).
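
To make that contrast concrete, here’s a minimal sketch – my own illustration, with assumed names and sizes, not a description of any particular system – of swapping the single ‘generate’ button for one continuous control parameter:

```python
# A sketch of continuous control over a generative model: instead of sampling
# one random latent vector per button press, a parameter t smoothly steers
# the generator between two points in latent space, the way a slider or
# sensor might, every frame.
import torch

latent_dim = 512                  # assumed latent size (StyleGAN-like)
z_a = torch.randn(latent_dim)     # two anchor points in latent space
z_b = torch.randn(latent_dim)

def frame_for(generator, t: float) -> torch.Tensor:
    """Render one image for a continuous control value t in [0, 1].

    Spherical interpolation (slerp) keeps the steered vector on the same
    shell as typical Gaussian samples, which tends to look smoother than
    straight linear blending.
    """
    cos_omega = torch.clamp(torch.dot(z_a, z_b) / (z_a.norm() * z_b.norm()),
                            -1.0, 1.0)
    omega = torch.acos(cos_omega)
    z = (torch.sin((1 - t) * omega) * z_a
         + torch.sin(t * omega) * z_b) / torch.sin(omega)
    return generator(z[None])  # one image per control value

# In a realtime loop, t would be read from a mouse, slider, or sensor each
# frame -- closing the action-perception loop described above.
```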

R: We’ve talked a lot about machine learning, how about we flip that on its head: can machines teach us something? 

M: Yes, definitely! We can look at today through an anthropological timescale: what’s happening in 2018 is not disconnected from what happened 100 or 10,000 years ago. When Galileo took a lens and made a telescope to look at the stars, he literally allowed us to look at the world in a whole new light. We cannot be the same after that. Well, that would have worked better if the Church hadn’t stepped in. If we do not use technology to see things differently, we are wasting it.

Take word embeddings, for example [a set of techniques that maps words and phrases to vectors of real numbers]. There’s a well-known model trained on three billion words of Google News. The program does not know anything to begin with – it doesn’t know what a verb is, it has no idea of grammar – but it eventually creates semantic associations. So it learned about gender, for example, and you can run mathematical operations on words, like king – man + woman => queen. It’s learnt about the prejudices and biases encoded in three billion words of news, a reflection of society. Who knows what else is in that model. I wrote a few Twitter bots to explore that space, actually: @wordofmath and @wordofmathbias.
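
For anyone who wants to try this, the arithmetic can be reproduced in a few lines – a sketch assuming the gensim library and the standard pretrained Google News vectors file:

```python
# A sketch of the king - man + woman arithmetic, using the publicly
# available pretrained Google News word2vec vectors. The .bin file must be
# downloaded separately; the filename below is its usual distribution name.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Vector arithmetic over embeddings: king - man + woman ~= ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# The top hit is typically 'queen' -- and the very same arithmetic also
# surfaces the biases the model absorbed from the news text it was fed.
```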

But even Google autocomplete is a really powerful way of looking at what our collective consciousness is thinking or feeling. I wrote a poem about this in 2014. It’s a collaboration with Google (the search engine, not people working at Google), the keeper of our collective consciousness. And actually it’s more a collection of prayers.
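
As a rough illustration of how one might probe that space – hedged, because it relies on an unofficial endpoint that may change or vanish – here is a tiny sketch:

```python
# A small sketch of probing autocomplete as a mirror of collective queries.
# This uses an unofficial, undocumented Google suggest endpoint; treat it
# as an assumption, not a stable public API.
import json
import urllib.parse
import urllib.request

def completions(prefix: str) -> list:
    url = ("https://suggestqueries.google.com/complete/search"
           "?client=firefox&q=" + urllib.parse.quote(prefix))
    with urllib.request.urlopen(url) as resp:
        # Observed response shape (not guaranteed): [query, [suggestions...]]
        return json.loads(resp.read().decode("utf-8", errors="replace"))[1]

for line in completions("why do we "):
    print(line)  # each completion reads like one found line of the 'poem'
```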

A very powerful project in this realm, one I really like, is by Hayden Anyasi. He was disturbed by the way newspapers selected images to accompany news stories, so he created an installation that takes a picture of your face and then creates a news story about you, based on the data it was trained on: a large dataset of newspaper articles. So if you’re an attractive young white woman, the story generated might be about winning some contest or something. If you’re a young black man, the story is more likely to be about crime. Some people might think that this just reflects reality, but unfortunately, that expectation is exactly the problem: there are situations where images have been selected to accompany stories not because they were related to the story, but simply because they matched the expectation. In Hayden’s own words: “A young man's face was used as the lead image in a story about a spate of crimes. Despite being cleared of any involvement, his picture was later used again anyway. Did his face meet the standard expectations of what a criminal should look like?” It’s easy to dismiss these things when you’re not affected, but when you see it like this, this kind of art punches you.

R: Speaking of scary things, there’s a lot of anxiety around technology these days. Would you say you’re a techno-optimist?

M: I’m definitely not very optimistic. I’m not worried about the singularity or the ‘intelligence explosion’ or robots taking over. To me, that seems more like a marketing trick that’s good for business, to sell books, and to get funding from people who are so rich that the only things which scare them are so-called ‘existential risks’ which will affect all of humanity, even people as rich and powerful as themselves. On a related note, though, autonomous weapons are indeed a major genuine concern, and algorithmic decision-making systems are already in use and proving to be hugely problematic. I do believe algorithms could have the potential to be less prejudiced and fairer than humans on average, but they have to be thoroughly regulated, open source, open data, and preferably developed by non-profit organizations who are doing it only because they believe they can develop fairer systems which will be beneficial to everybody. And by ‘they’ I am referring to not just computer scientists, but a diverse team of experts across many disciplines, backgrounds, and life experiences who collectively have a much greater chance of thinking about and foreseeing the wider impact of these systems once deployed. Closed-source, closed-data systems developed by poorly regulated for-profit companies are an absolute recipe for disaster.

But I worry more about “unknown unknowns” that can come out of nowhere and have a huge impact. Here’s a dystopia for you: what if, in the future, the link between genotype and phenotype [how a particular trait is coded in our DNA and expressed through environmental conditions] were mastered (it is something that is being heavily researched right now)? And imagine that, combined with CRISPR (or its successor), there were a service which allowed you to boost your baby’s IQ to 300+. And imagine that this service was incredibly expensive, something which only a select few could afford. What kind of world would that be? I don’t necessarily believe that this exact scenario will happen, but I’m sure we will face similar situations.

On the other hand, if we are ever to cure Alzheimer’s or leukemia, it will undoubtedly be with the help of similar data-driven methods. Even the recent detection of gravitational waves produced by colliding neutron stars was a massive undertaking in data analysis, extracting a tiny blip of signal from a massive sea of background noise. Machine learning encompasses the act of extracting meaningful information from data, so any breakthrough in machine learning will impact any field which is data-driven. And in this day and age, everything is data-driven: physics, chemistry, biology, genetics, neuroscience, psychology, economics, and even politics. So it’s impossible to predict the unknown unknowns. Who knows, maybe someday we’ll be able to photosynthesize!

But I do have a streak of optimism. However, what I’m optimistic about is not technology but us, and a potential shift in values. If we look at the overall evolutionary arc of human morals going back thousands of years, there seems to be a trend towards expanding our circle of compassion to be more inclusive. We used to live in small tribes, and neighboring tribes would be at war. We’ve now expanded those tribes to the size of countries. This is still far from perfect, especially with the current rise of nationalism, but the overarching long-term trend is a positive one, if it carries on in the same direction (and that is a big open ‘if’). We’ve now legally recognized that half of the population – women – are the equals of men and deserve the same rights, whether it be for voting, working, healthcare, education, etc. It’s quite shocking that this has happened only so recently, in the last hundred years or so, and the effects have unfortunately not yet fully permeated our culture and day-to-day lives, but I think it’s inevitable that they will. Likewise, we’ve abolished slavery, and we legally recognize all humans to be equal. Again, unfortunately, this happened shockingly recently, so we are absolutely nowhere near a level where day-to-day practice is satisfactory. But again, hopefully, the overall long-term trend is moving in a desirable direction. And this last century has even seen massive efforts to include non-human animals in our circle of compassion, whether it be vegetarianism, veganism, or animal rights in general.

So while I’m not overly optimistic, the only glimmer of hope I can see for the future is not any particular technology saving us, but a gradual shift in values towards prioritizing the well-being of all living things, as opposed to just a select few at the massive expense of others. The big open question, apart from whether this will happen at all, is how soon it will happen – and how much damage we will have inflicted before we realize what we’ve done.

R: Thanks a lot for the chat! To wrap up, do you have any reading recommendations for digging deeper into machine learning art?

M: A few years ago I collated a list of resources which I had used to get up to speed.

At the time, there weren’t many introductory or beginner-friendly materials; it was mostly academic books and full-on online university courses. But in the past few years, as deep learning became really popular, loads of new ‘beginner-friendly’ materials came online. So my list is probably quite out of date, but for those willing to invest time, I’m sure a lot of it will help build a strong foundation.

But since I collated that list, a fantastic new resource has become available: Gene Kogan’s Machine Learning for Artists. It’s full of amazingly useful, beginner-friendly info. Another resource which I have not personally used, but have heard very good things about, is fast.ai.
