Technology’s Impact on Figurative Art

by David Molesky

Kazu Hiro working on his bust of Abraham Lincoln, 2015.

During my 25 years in the field of figurative painting, I have observed two sharply conflicting schools of thought on the use of technology in studios. As a Generation Xer who grew up before the Internet and cell phones, I feel nostalgia for simpler times. Still, I acknowledge both points of view.

I first experienced this divide in 2006, when I transitioned from two years of mentorship under the California classicist David Ligare (b. 1945) to an apprenticeship with Norwegian painter Odd Nerdrum (b. 1944). Ligare’s primary visual references are 35-millimeter color slides that he takes of his subjects during the “golden hour” just before sunset. The photographic specificity of this Mediterranean light as it reflects off Monterey Bay is a central concept in Ligare’s work.

With Nerdrum, by contrast, there is a strict taboo against reference photography; he warns that its use is “like a virus in the imagination of the painter.” Nerdrum and his circle prefer painting figures from life, situating them in spaces dreamed up by the imagination. Although the subject matter of both Ligare and Nerdrum could be called classical, their attitudes toward technology are diametrically opposed.

Luddites are a rarity among artists these days because most artists see technology as a powerful tool. Photo editing, gaming software, and 3D printers can save time and money by speeding up creative processes, facilitating the making of sanctioned copies, and improving precision beyond human capabilities. It is only natural for us, as tool-using sapiens, to want to employ something that might bring us greater success and convenience. In the end, however, tools can never make up for a lack of skill and imagination.

Editing software such as Photoshop is now used widely by painters of my generation. These programs enable the combining and manipulating of multiple images as layers within a single composition. Imaginative realists such as Martin Wittfooth (b. 1981) create complex arrangements of figures and animals as mockups in Photoshop before they begin painting. Their densely packed compositions would be nearly impossible to stage in real-life photoshoots.

After gathering their own photographs of friends or pets, as well as images found on the Internet, these artists piece them together into digital collages. The component images’ hard edges can be blended, depending on how much time and effort the artists choose to spend at the computer before beginning the image on canvas. The collage proves handy at various stages of the painting process: as a guide for the initial composition transfer (which often involves a digital projector), and as a primary reference viewed on a monitor.
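For readers curious what this collaging step looks like in code, here is a minimal sketch using the Python imaging library Pillow rather than Photoshop; the file names, position, and blur radius are hypothetical, and blurring the cutout’s alpha channel stands in for feathering a layer mask:

```python
from PIL import Image, ImageFilter

# Hypothetical source files standing in for an artist's reference photos.
background = Image.open("landscape_photo.jpg").convert("RGBA")
figure = Image.open("model_photo.png").convert("RGBA")

# Soften the figure's hard cutout edge by blurring its alpha channel,
# the rough equivalent of feathering a layer mask in Photoshop.
alpha = figure.getchannel("A").filter(ImageFilter.GaussianBlur(8))
figure.putalpha(alpha)

# Composite the softened figure onto the background at a chosen position.
background.alpha_composite(figure, dest=(200, 150))
background.convert("RGB").save("collage_mockup.jpg")
```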

A NEW WAY TO SCULPT

The pace of technological change keeps accelerating. Photo editing software was invented only three decades ago, yet already it seems “old school.” Some contemporary artists make it a point to stay current with the newest tools available. Born in Japan and based in Los Angeles, Kazu Hiro (b. 1969) is a pioneer in this regard. Having discovered special effects makeup while still in high school, he is entirely self-taught and is now a global leader in hyper-realistic sculpture and special effects prosthetics. After devoting 25 years to the film industry, Hiro left to focus on his own double-life-size busts of historical figures such as Abraham Lincoln. Recently Hollywood convinced him to return for two films, Darkest Hour and Bombshell, each of which garnered him an Oscar.

Although Hiro’s process involves many state-of-the-art materials and technologies, he begins the old-fashioned way: by hand-sculpting a life-size bust in clay. Upon completion, this is captured by a 3D scanner, and its geometry is recorded as an “OBJ file.” Hiro opens this data in the “slicing” program for his 3D printer, where he can easily enlarge the sculpture to twice its original size.
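What the slicing program does at this step can be pictured in a few lines of code. The Python sketch below uniformly scales the vertex coordinates in an OBJ file by a factor of two; the file names are hypothetical, and real slicers of course handle this internally:

```python
SCALE = 2.0  # double life size, as in Hiro's workflow

def scale_obj(src_path: str, dst_path: str, scale: float = SCALE) -> None:
    """Multiply every geometric vertex in an OBJ file by a uniform factor."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            if line.startswith("v "):  # geometric vertex line: "v x y z"
                _, x, y, z = line.split()[:4]
                dst.write(f"v {float(x) * scale} {float(y) * scale} {float(z) * scale}\n")
            else:
                dst.write(line)  # faces, normals, etc. pass through unchanged

scale_obj("bust_scan.obj", "bust_scan_2x.obj")
```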

With programs like Cinema 4D and ZBrush, Hiro can manipulate this doubled-in-size scan to create files used to 3D-print the two plastic elements required to create a negative mold: a double-size copy of the original sculpture and an outer shell called a pour case. He then pours liquid silicone into the thin space between the inside of the pour case and the outside of the enlarged sculpture. When the silicone “cures,” what remains is a negative mold that has precisely recorded the outside of the printed sculpture.

Hiro admits the technology is good but not perfect. Sometimes the 3D printer can miss data so that details get diffused. To correct for technology’s shortcomings and to pack even more detail into the final piece, Hiro makes a second clay version. He starts by placing a half-inch-thick layer of clay into the negative mold. This layer will sit upon a plaster support core to prevent it from becoming misshapen. Once the clay is removed from the mold, Hiro works into the expanded surface.

Satisfied with the larger clay, Hiro takes another 3D scan. This is used to print a negative mold of the larger sculpture (a “jacket”), as well as a core slightly smaller than the sculpture. Hiro pours a silicone skin into the space between the jacket and the core to create a replica of the enlarged clay sculpture. He makes further enhancements using the makeup techniques he developed in Hollywood, including skin tones evoked with homemade silicone paint and the addition of hair.

DRIVEN BY DATA

Painters also turn to 3D-imaging programs to design their 2D compositions. Born in Italy and based in Los Angeles, Nicola Verlato (b. 1965) has been using video game and animation software since he saw the film Tron (1982) and noticed the resemblance between the vector graphics used for its special effects and Brunelleschi’s perspective drawings from the Renaissance. Like Kazu Hiro, Verlato starts in the old-school manner, in his case sketching with pencil. With ZBrush he translates his handmade marks into 3D data. Using gaming software, he can then turn these digital models in space and illuminate them with virtual lighting. It is even possible to explore different perspectives, high or low, near or far, by moving the point of view around the virtual space in relation to the digital model.
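The geometry involved is the same pinhole perspective Brunelleschi formalized. Purely as an illustration (this is not Verlato’s actual toolchain), the Python sketch below rotates a model’s vertices and projects them onto a 2D picture plane, so that changing the hypothetical `yaw` and `camera_z` parameters explores new viewpoints:

```python
import numpy as np

def project(points: np.ndarray, yaw: float, camera_z: float, focal: float = 1.0):
    """Rotate a point cloud about the vertical axis, then project it to 2D."""
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    p = points @ rotation.T
    p = p + np.array([0.0, 0.0, camera_z])  # move the viewpoint nearer or farther
    return p[:, :2] * focal / p[:, 2:3]     # divide by depth: perspective foreshortening

# A unit cube standing in for a digital model.
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
print(project(cube, yaw=0.4, camera_z=5.0))
```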

Verlato has noticed an overall shift toward “dematerialization,” in which everything is converted into digital language. As an artist, he is working hard to reverse this trend. Verlato rematerializes digital data gathered from the Internet, such as written stories, music, and film, coalescing it into paintings and sculptures. In his view, the resulting physical objects rightly belong further up the hierarchy of material experience.

Verlato has also been exploring the world of virtual reality. In his VR project The Merging, he has created an interactive experience for museumgoers that connects reality and the digital world. He hopes it will help engage a wider audience for art, especially viewers unable to visit museums and younger ones already immersed in electronic devices.

Other VR projects also hope to reinvigorate public interest in painting. One example is The Night Cafe — An Immersive VR Tribute to Vincent van Gogh (2015). Its video quality is intentionally shaky, almost vibrating, to simulate being inside a moving, changing painting. The 2017 film Loving Vincent shares some of these characteristics, but because its animation frames were hand-painted, its detail is far more convincing. Digitally generated images often lack the textural detail and natural variation that result from handcraft. Mark-making is evidence of the creative process and helps us imagine the artist at work, thus enhancing our powers of empathy. Take, for example, Rembrandt’s rugged impasto, or the hollows left by Rodin’s thumb as it moved through clay.

Though Loving Vincent required squads of hard-working artists to create that handmade look, most tech entrepreneurs are replacing artists with software-based logic systems called algorithms. The Next Rembrandt, a project spearheaded by the Dutchman Bas Korsten and supported by such corporate giants as Microsoft and ING, made headlines worldwide in 2016 when it claimed to have digitally resurrected this Dutch master. Its premise sprang from the thought that “if you can take historical data and create something new out of it, why can’t you distill a painter’s artistic DNA out of his surviving artwork and create a new work out of that?”

The project team analyzed 346 of Rembrandt’s paintings to determine his most common attributes of subject matter and composition. Using this data, they arrived at the most unremarkable example possible: a portrait, painted in the period 1632–42, of a Caucasian man with facial hair, aged 30–40, wearing a wide-brimmed black hat, black shirt, and white collar, facing right. The team then collected high-resolution scans of every Rembrandt painting that matched that description. Analyzing this data, they generated a “painting” of a right eye, then of other facial features, which they assembled by averaging geometric facial points.
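That averaging step can be pictured as computing the mean position of corresponding landmark points across the source scans. Here is a minimal NumPy sketch, assuming each scan has been annotated with the same set of facial landmarks (the coordinates below are invented):

```python
import numpy as np

# One (num_landmarks, 2) array of facial landmark coordinates per source
# painting, already aligned to a common frame; the values here are invented.
landmark_sets = [
    np.array([[120.0, 88.0], [131.0, 90.0], [125.5, 97.0]]),  # painting 1
    np.array([[118.0, 86.5], [129.0, 91.0], [124.0, 96.0]]),  # painting 2
    np.array([[121.5, 89.0], [132.0, 89.5], [126.0, 98.0]]),  # painting 3
]

# The "average" geometry: the mean position of each corresponding point.
mean_landmarks = np.stack(landmark_sets).mean(axis=0)
print(mean_landmarks)
```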

The Next Rembrandt also made use of newly developed methods for scanning painting surfaces to analyze the texture of brushstrokes, which the team attempted to replicate by 3D-printing multiple layers on a flat surface. The results might convince those unaccustomed to studying paintings, but even Korsten admitted, “I think the expert eye sees that this isn’t a real Rembrandt.” Those familiar with the magic of a true Rembrandt surface, especially from his late period, will find that this “averaged” look fails to convey the snowflake-like uniqueness of his handling.
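The project’s exact printing pipeline has not been published in detail, but the basic idea of building relief on a flat surface can be sketched as quantizing a scanned height map into discrete print passes. Everything below (the data, the layer thickness) is illustrative, not the team’s actual method:

```python
import numpy as np

# A fake height map of brushstroke relief, in millimeters.
height_map = np.random.default_rng(1).random((256, 256)) * 0.2
layer_thickness = 0.02  # mm of material deposited per printing pass

num_passes = int(np.ceil(height_map.max() / layer_thickness))
# Each boolean mask marks where material is deposited on that pass;
# stacking the passes approximates the scanned brushstroke relief.
masks = [height_map > i * layer_thickness for i in range(num_passes)]
print(f"{num_passes} passes needed")
```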

WHO’S THE ARTIST?

While some entrepreneurs use algorithms to “resurrect” the dead, others are “teaching” computers to become artists in their own right. Based in Washington, D.C., the American Pindar Van Arman (b. 1974) makes paintings using robotic arms guided by a “creative” algorithm that analyzes, extracts, separates, and assigns computational data to the style and content found in existing images. It can then mix and match these data sets into new pairings to create new images. One such algorithm is Style Transfer, developed in 2015 by Germany’s Bethge Lab. Using a computerized model of our brain’s visual system, which the developers call “convolutional neural networks optimized for object recognition,” this algorithm separates visual components in a way akin to our “human operating system.”
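The core idea of the 2015 method can be sketched briefly: the features a recognition network computes describe an image’s content, while the correlations among those features (Gram matrices) describe its style, so the two can be separated and recombined. The code below follows that published scheme in simplified form, using PyTorch and a pretrained VGG19 (torchvision 0.13 or later assumed; image preprocessing omitted):

```python
import torch
from torchvision.models import vgg19, VGG19_Weights

# An object-recognition network, used only as a fixed feature extractor.
cnn = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()

def features(img, layer_ids=(1, 6, 11, 20)):
    """Collect activations at a few convolutional stages for a (1, 3, H, W) image."""
    feats, x = [], img
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i in layer_ids:
            feats.append(x)
    return feats

def gram(f):
    """Channel-to-channel correlations: the 'style' representation (batch of 1)."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_content_loss(result, content_img, style_img, style_weight=1e6):
    """Content compares raw features; style compares Gram matrices."""
    rf, cf, sf = features(result), features(content_img), features(style_img)
    content_loss = torch.mean((rf[-1] - cf[-1]) ** 2)
    style_loss = sum(torch.mean((gram(a) - gram(b)) ** 2) for a, b in zip(rf, sf))
    return content_loss + style_weight * style_loss
```

Minimizing this loss with respect to the pixels of `result` yields the now-familiar images of one photograph repainted in another artist’s style.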

Style Transfer requires massive amounts of computer memory and loading time, so most users scale down their image files to approximately 1000 x 1000 pixels. If sent to a printer, such files look unimpressive, which is why Van Arman instead uses robotics to create paintings. In addition, he has developed a software program, Cloudpainter, that remembers its past work and tries to improve it, causing its style to evolve over time. He explains, “Cloudpainter and the robots can see what they are doing because they use cameras to watch their work and make adjustments. I am trying to replicate human creativity, and now my computers are on the precipice of creative autonomy.”
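Van Arman has not published Cloudpainter’s internals, so the following Python sketch of the see-and-adjust loop he describes is purely hypothetical; the camera and robotic arm are simulated with invented stand-in functions:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((64, 64))  # the image the system is aiming for
canvas = np.zeros((64, 64))    # a blank simulated "canvas"

def capture_canvas() -> np.ndarray:
    """Invented stand-in for the camera watching the work."""
    return canvas

def paint_stroke(y: int, x: int) -> None:
    """Invented stand-in for the robotic arm: a perfect corrective dab."""
    canvas[y, x] = target[y, x]

def feedback_loop(max_strokes: int = 5000, tol: float = 0.01) -> None:
    """Paint, photograph, and correct wherever the work differs most from intent."""
    for _ in range(max_strokes):
        error = np.abs(target - capture_canvas())
        if error.mean() < tol:   # close enough to the intent: stop
            break
        y, x = np.unravel_index(error.argmax(), error.shape)
        paint_stroke(y, x)       # adjust the worst-matching spot

feedback_loop()
```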

Indeed, Van Arman genuinely believes his computers can be creative: “If I wanted to make something beautiful, I would just use a printer, but I am trying to get something more interpretive with more serendipity.” Skeptics dispute this possibility, of course. Ken Goldberg, an engineering professor at the University of California, Berkeley, points out, “As soon as you inject any kind of randomness into a program, you get behavior that you may not predict, but there is a distinction between that and saying the robot is being creative now.”

Recently Van Arman succeeded in creating a computer-generated painting he feels is authentically abstract, rather than directly representational or completely random. When asked about emotional content, he replies, “Obviously a robot cannot make emotional art until it is itself emotional. But that doesn’t mean, when we look at an artwork, we can’t get emotions from it.”

HOPE FOR THE FUTURE

Unfortunately, we still evaluate the power of artificial intelligence (AI) by its ability to deceive us. This precedent was set by the World War II code breaker Alan Turing, whose Turing Test evaluates a machine’s ability to exhibit intelligent behavior and deems it successful when that behavior becomes indistinguishable from a human’s.

The humans who use AI technologies to create artworks see the algorithms as artists themselves. The way they discuss their process is revealing. Uttering the words “I teach robots how to paint” or “A new Rembrandt painting has been created by a 3D printer” reveals a fundamental departure from reality. The human who uses an algorithm to create a painting is an artist employing technology as a tool. The computer cannot be considered an artist unless, of course, it becomes able to invent its own mythologies, as imagined in Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?

We humans experience our consciousness and our intelligence together. Computers have intelligence but no consciousness. Despite their best efforts, AI artists are finding that human creativity, even in the comparatively ancient fields of painting and sculpture, is among the most difficult capacities to replicate and automate. The Israeli historian Yuval Noah Harari, author of the bestseller Sapiens: A Brief History of Humankind, notes, “Eventually computers will do everything faster, better, and safer than humans. The things most immune right now are the creative jobs.” With that, most creatives can exhale a sigh of relief.

Our attitude toward technology is one of the strongest forces shaping the global culture of the future. Indeed, it can be a positive force in our lives. Rather than using technologies to replace or replicate ourselves, we can use them to help us spend more time deepening our understanding of the qualities of being human that distinguish us so powerfully from machine intelligence.
