Human art is dead.

Lex Sokolin
6 min read · Sep 13, 2015

Long live software art.

How neural networks were taught to surpass the greatest human artists in less than a month.

All Your Painters Are Belong to Us

The first photograph ever was taken in 1826. It was crude and took several days of exposure to achieve a poorly composed and grainy image of a roof.

First recorded photograph (Wikipedia)

Painters could not have seen it coming, this cold, metallic threat of photography. How could a technology ever rival the vivid human energy captured by a master painter? But here we are, nearly two hundred years later. Portraiture is dead. Glass lenses and digital hard drives capture the human visage in bits and bytes, by the millions every day.

We stand today on yet another precipice. This quiet revolution is a thief in the night. Its guide is a timid developer geeking out on the similarities between the human brain and machine learning. It is the engineer teaching a software brain the sense of image recognition.

“A Neural Algorithm of Artistic Style”

On August 26, 2015, a team of data scientists succeeded in capturing creativity itself (academic paper link). Using an approach similar to the one behind Google’s Deep Dream, which visualized phantom animals in photos, the team identified and unearthed the styles of human expression across several key variables.
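At the heart of the paper is a simple separation: the *content* of an image lives in a network’s raw feature maps, while its *style* lives in the correlations between those feature maps (their Gram matrices). Here is a minimal numpy sketch of that idea; the random arrays stand in for CNN activations (the actual paper extracts them from VGG-19), and the loss weights are illustrative, not the paper’s values.

```python
import numpy as np

def gram_matrix(features):
    # Style representation: correlations between feature channels.
    # features has shape (channels, height, width).
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gen_features, style_features):
    # Mean squared difference between Gram matrices.
    return np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2)

def content_loss(gen_features, content_features):
    # Mean squared difference between the raw feature maps.
    return np.mean((gen_features - content_features) ** 2)

# Toy feature maps standing in for real CNN activations.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))
generated = content.copy()  # start the output image from the content

# The total loss trades content fidelity off against style similarity;
# gradient descent on the image itself minimizes this in the real method.
alpha, beta = 1.0, 1000.0
total = alpha * content_loss(generated, content) + beta * style_loss(generated, style)
```

The single knob worth noticing is the alpha/beta ratio: turn beta up and the output drifts toward pure texture; turn it down and you get the photo back.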

Within a few days, software artists around the world deployed neural network implementations of the algorithms that could render any image with the skill of a master. A few days thereafter, a Twitter bot was launched (link), giving this Promethean power to the masses. By September 8th, a massive amount of existing artistic style was indexed and automated (link). From Kandinsky and Picasso, to Egyptian hieroglyphs and Japanese prints, our fledgling AI has learned how to see and render the world as would a human artist.

Studies of style by Kyle McDonald. Left picture dictates style, picture of Marilyn and mountain is the target. See source.

Robot Artists Have An Intuition

What are the next logical steps in this evolution? Doubters will surely raise myriad objections. For example, aren’t these transformations simply a visual filter applied to existing content? Or, isn’t teaching an algorithm to “mine” for art something other than creating it?

Not so fast.

What is the creative process itself? The creative process, beyond mere rendering and illustration, is the combination of styles and influences in new and surprising ways. To take a famous example, here is Picasso sitting in his studio with artifacts of African art, which led to his signature abstracted style in Les Demoiselles d’Avignon on the right.

African art filled Picasso’s Bateau-Lavoir studio in 1908. (Musee Picasso photo) and Picasso’s Les Demoiselles d’Avignon (MOMA, New York). The pervasive influence of tribal arts — particularly African — on modern painters and sculptors has been recognized for many years and referred to as Modern Primitivism. Source.

This masterpiece is a combination of the following variables:

  • Tribal art style
  • Cubist art style
  • A real life image or recollection of four women
  • A trained sense of visual aesthetics

Take now the beautiful but perhaps crude output of the neural networks.

Could an artificial mind produce this type of outcome?

My answer is a resounding yes. First, train the network on two existing art styles. Second, apply them both to an image of four nudes. Third, take the outputs and refine the strength of the different variables until the results are in line with a human sense of visual aesthetics.
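The three steps above can be sketched as a single objective with a weighted blend of two style targets, where the weights are precisely the knobs a human tunes in step three. A minimal numpy sketch follows; the array names and weights are hypothetical stand-ins for CNN feature maps, not anything from the paper itself.

```python
import numpy as np

def gram(f):
    # Gram matrix of a (channels, height, width) feature map.
    c, h, w = f.shape
    m = f.reshape(c, h * w)
    return m @ m.T / (h * w)

def blended_style_loss(gen, style_a, style_b, w_a, w_b):
    # Weighted sum of two style losses: the w_a/w_b ratio decides
    # how much of each style survives in the output.
    loss_a = np.mean((gram(gen) - gram(style_a)) ** 2)
    loss_b = np.mean((gram(gen) - gram(style_b)) ** 2)
    return w_a * loss_a + w_b * loss_b

rng = np.random.default_rng(1)
tribal = rng.standard_normal((4, 8, 8))  # stand-in for one style's features
cubist = rng.standard_normal((4, 8, 8))  # stand-in for the other style
image = rng.standard_normal((4, 8, 8))   # stand-in for the subject image

# Sliding the weights moves the result between the two styles.
mostly_tribal = blended_style_loss(image, tribal, cubist, 0.9, 0.1)
mostly_cubist = blended_style_loss(image, tribal, cubist, 0.1, 0.9)
```

In the real pipeline these losses would be computed on deep-network features and minimized by gradient descent on the image; the human contribution is exactly the choice of weights.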

And what about this type of creative endeavor in a style that we have never seen or experienced?

Of course. With thousands of artists and approaches indexed, infinite permutations are possible.

Could it render this new experience not once, in an arbitrary rectangular frame, but over time in augmented or virtual reality?

Within two weeks of the publication of this groundbreaking paper, the answer is a loud, fundamental YES.

A flavor of the future to come from Kyle McDonald. Source here.

Machine learning does not provide “the right answer.” It gives a set of outcomes with associated probabilities, much like human intuition. The AI has to be taught, refined and rewarded for what we deem the right behavior or aesthetic sense, very much like a human child.

But what we see today is already unbelievable. It is the development of artistic intuition and a skill to render it across any form factor that carries software and sees the world.

Role of the Human

What then is left to us? What should an artist focus on to create new and challenging work, when our algorithmic sisters are faster, smarter and infinitely more educated?

Excerpt from Henner’s 51 Military Outposts. This is a process of algorithmic generation and human selection of aesthetic work.

A quote comes to mind from a curator of digital art in relation to the work of Mishka Henner. Henner had used algorithms to find 51 US military bases using Google Maps, and selected striking and beautiful abstract compositions of the subject matter. Google Maps is infinite. And like a needle in a haystack, meaning and beauty must be found by a trained eye.

“Today the camera is connected to a complex network of software, protocols and online platforms,” said Katrina Sluis, curator of digital art at the Photographers’ Gallery in London. “When computers are taking photographs for other computers to view and interpret en masse, the role and significance of the individual image has shifted.” Artists like Mr. Henner who rely more and more on the robotic gaze of the Google Street View camera draw our attention to questions of privacy and surveillance. Ms. Sluis refers to them as “Web archaeologists” navigating an “increasingly computational culture” to find the element of human experience within it.

And so it must be with the neural networks, who will play and create beyond our wildest imagination. But it is up to us to teach and guide them, to frame their intuition into a brush that reaches for the sublime.

Updates:

(1) An algorithm has been written to combine artistic styles. It took less than a week to solve the Picasso dilemma:

Original Image on the left. (source)

(2) By September 28th, 2015, an iPhone app called Dreamscope had incorporated the neural network stylings of a set of learned artists as filters. Thousands of people have signed up and are generating software artwork.

Interface from Dreamscope Editor.

Lex Sokolin is a New York-based artist, designer, and entrepreneur. He explores form, line, and narrative using the language of urban abstraction and new media. Recent work, including illustration, photography, and software art, can be seen at urban-aesthete.tumblr.com and urbanaesthete.com.

If you’ve enjoyed this post, please hit “recommend”. Feel free to say hello and introduce yourself on Twitter.
