Maker Profile: Cosmo Wenman's 3D-Printed Art

By Norman Chan on March 18, 2013 at 9 a.m.

Cosmo Wenman is a California artist who has embraced 3D printing in a unique way. His sculptures are 3D-printed replicas of ancient works of art, capturing the look and feel of those works of old but reproduced in a way that could only be done with the advancements (and limitations) of modern technology. Here's how he does it.

Today's desktop 3D printers are more comparable to dot-matrix printers than high-dpi laser printers; there's a lot of room for improvement in the nascent but fast-growing technology. It's easy to imagine where that technology is headed: finer resolution printing, faster print heads, different materials, multiple colors. But that doesn't mean that the 3D-printed objects of today can't be more than disposable prototypes and knickknacks. It doesn't mean they can't be beautiful works of art.

His Horse of Selene piece, for example, was created by meticulously scanning a marble sculpture from the British Museum, then splitting the resulting model into 29 pieces and printing them on a MakerBot Replicator. You can even download the file to make one yourself. Just as ancient sculptors modeled their works after real people and objects, using the techniques and materials available at the time, Cosmo is doing the same to those pieces using his own invented techniques, propagating a system of artistic (and mechanical) reproduction.

We met Cosmo at this year's CES, where his work, back from the 2012 3D Printshow in London, was on display at the MakerBot booth. His enthusiasm for 3D printing was palpable, as was his interest in how 3D printers change the possibilities of media ripping and remixing. I followed up with Cosmo a few weeks ago and had the opportunity to pick his brain about his 3D-print patinas, the academic and artistic applications of 3D-printed museum replicas, and why he isn't concerned with the current imperfections of 3D printer technology.

How did you first get interested in 3D scanning and 3D printing?

I really see them as two sides of the same coin; 3D scanning and 3D printers are new physical input and output channels for digital design. For me they really come together through my longtime interest in economic “network effects”—the dynamics that kick in when a device gets more valuable to each owner the more people there are who own one. If you picture a 3D scanner, a 3D design interface, and a 3D printer as a unified, consumer-friendly device, I think the analogy to phones, fax machines, and personal computers is clear, and the potential for their rapid, widespread adoption is pretty tantalizing. Products and customs that exhibit or exploit network effects can flirt with exponential growth in value and social significance, and that’s very interesting to me. (Since this correspondence, MakerBot has announced the Digitizer 3D scanner as a consumer product in development.)

As a practical matter, I’d been watching 3D printers for a couple years, but I had zero interest in troubleshooting or maintaining a fussy machine—too much swearing. When the MakerBot Replicator came out in early 2012, I bought into the hype that it would be a machine I could just print with, without any hassle, and I’ve been blown away by its reliability. And, around that time, Autodesk released their free 123D Catch photogrammetry scanning system, which makes use of normal digital photographs to create 3D models. So I’ve been using them both, concurrently, almost from the start, which to me seems like a very natural coincidence, as these technologies really complement each other and appear to be tracking each other’s progress with adoption, capabilities, and ease of use.

How do you go about scanning objects in public places, like the British Museum? Do you just walk in and take a bunch of photos with a digital camera from multiple angles?

I don’t have the patience or temperament (or credentials) to even contemplate initiating a months-long back and forth with a big institution to ask—the wrong person, most likely—for permission to scan, let alone try to duplicate and publish, an artifact or piece of artwork. So I just walk in the front door and play dumb. Sometimes museum staff ask why I’m taking so many photos. I tell them I’m doing a detailed study of the piece—which is true! I’ve also had staff think I was taking photos of the room itself and its security features, and how pieces were attached to the walls or floor. I explain that I’m just taking photos of the piece itself, and I offer to show them the shots. No problems so far.

When it comes to scanning, what is your process? How much work is done on site versus after the fact on a computer, and what references do you use to tweak a scan?

I use my eight-year-old Sony R1 camera, which has a nice lens and a big sensor, but any digital camera will work (there’s even an iPhone app for the 123D Catch system).

I usually take at least 200 to 250 photos per subject, but...there are a few tricky sculptures that I’ve taken nearly a thousand photos of.

Once I have a subject selected, I walk around it and find a zoom setting that will keep the entire object in frame from all sides, at all angles, for the entire shoot. That’s because zooming in and out seems to make the 123D Catch system work much harder to interpret the results. I take photos along a continuous path from one position to the next, progressing around the subject several times, holding the camera at medium, high, then low angles; the software seems to prefer that to a random succession of positions.

In my experience, Catch seems to only make use of ninety or so photos and, if I recall correctly, only three megapixels of data per photo. But once I’ve already gone to the trouble of getting to a museum or somewhere out of the way, I usually take many, many more photos than that, and use my camera’s highest megapixel settings. I do my best to future-proof my scanning work so that I will be able to throw everything at the more powerful apps that will come down the line in the next couple years. I usually take at least 200 to 250 photos per subject, but if it’s a subject I really like, or if it’s very complex, I’ll take a lot more, just to make sure I have plenty to choose from later on. For a few tricky or special sculptures, I’ve taken nearly a thousand photos each.

At the end of the shoot, I usually put something like a cellphone or lens cap next to some distinct feature or straight edge of the subject and take a few more shots with it in frame. Later on, when I’m editing the model, I can use it as a scale reference. Or I’ll use a measuring tape if I remember to bring one. That’s it for on-site work.
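The scale-reference trick boils down to simple arithmetic: measure the reference object in the raw mesh, divide its real-world size by that measurement, and apply the resulting factor to the whole model. Here is a minimal sketch in Python; the lens-cap width and mesh measurements are hypothetical numbers for illustration, not values from Wenman's files:

```python
def scale_factor(reference_real_mm, reference_model_units):
    """Factor that converts raw photogrammetry-mesh units to millimeters,
    derived from one object of known size captured in the scan."""
    return reference_real_mm / reference_model_units

# Hypothetical example: a 52 mm lens cap measures 0.4 units in the raw mesh.
factor = scale_factor(52.0, 0.4)

# Any other measurement in the mesh can now be converted to real size.
muzzle_width_units = 1.5               # measured in the editing software
muzzle_width_mm = muzzle_width_units * factor
print(factor)           # 130.0 (mesh units -> mm)
print(muzzle_width_mm)  # 195.0 mm
```

The same factor is what you would feed to your modeling software's uniform-scale tool before cutting the model up for printing.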

Afterwards, I’ll usually upload 70 or so photos into the 123D Catch application—if it’s going to get results, they’ll show up with that many shots. If it works, I’ll add more, up to around a hundred. Sometimes it’ll squeeze better results out of those extra photos and sometimes not so much. You can also give the system manual guidance if it is having trouble making sense of a scene, but I don’t spend much time doing that; in my experience, adding a few guide-points can help a little, but adding a lot of them doesn’t make for big improvements.

Processing with 123D Catch can be hit and miss. I don’t think any photogrammetry system handles shiny or very dark objects well. Featureless backgrounds seem to be a problem too. But when it works, it works very well, and the whole process can seem kind of spooky and surreal. My understanding is that 123D Catch is in a sort of public beta, and based on what it can do so far, I have high hopes for even better results soon.

After Catch, I’ll export the resulting mesh as an .obj file and bring it into MeshMixer, which is another free Autodesk application. It has some great tools for inspecting and patching holes and tiny errors in the model mesh that would otherwise create problems down the road.

From there I bring the repaired .obj file into Blender, the powerful, free, open-source 3D modeling and animation application. That’s what I use to edit the model: sculpting as needed for added detail, or deleting parts I don’t want. I’ll usually refer to my photos while I’m trying to clean up any details that didn’t scan well.

Blender is also what I use to scale the model, cut it into pieces sized for my printer’s build volume, and export the printable pieces as .stl files.

Then I use ReplicatorG to orient the pieces—frequently turning them upside down to eliminate the need for printed supports for features that have overhangs. I set the print settings in ReplicatorG too; I usually print stuff completely hollow, four walls thick and around 0.18mm layer thickness, but sometimes much thicker for larger prints. Then I generate the G-code, then the .s3g files, and then, finally, copy those onto an SD card, put it in the printer, and start printing. Then, lunch!
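Those slicing settings translate directly into the scale of the job. A quick back-of-the-envelope sketch, using the 0.18mm layer height and four shells mentioned above; the 0.4mm extrusion width and the 150mm piece height are assumptions for illustration, not figures from the interview:

```python
import math

LAYER_HEIGHT_MM = 0.18    # layer thickness from the settings above
NUM_SHELLS = 4            # "four walls thick"
EXTRUSION_WIDTH_MM = 0.4  # assumed extrusion width; typical for this class of printer

def layers_needed(piece_height_mm, layer_height_mm=LAYER_HEIGHT_MM):
    """Number of printed layers for a piece of the given height."""
    return math.ceil(piece_height_mm / layer_height_mm)

def wall_thickness_mm(num_shells=NUM_SHELLS, extrusion_width_mm=EXTRUSION_WIDTH_MM):
    """Approximate solid wall thickness of a hollow print."""
    return num_shells * extrusion_width_mm

print(layers_needed(150))   # a 150 mm-tall piece -> 834 layers
print(wall_thickness_mm())  # -> 1.6 mm of plastic around the hollow interior
```

Hundreds of layers per piece, times 29 pieces for something like the Horse of Selene, is why a print run ends with lunch rather than a coffee break.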

What's been your experience with home 3D printers like the MakerBot? You seem to embrace its limitations as a benefit in your work.

I’m not sure I embrace the limitations so much as I don’t worry about them. If I need to print something at 100 microns or less, I’m pretty sure I could do it with my Replicator. But I haven’t even tried.

I really see 3D printers as dematerializers: they radically decrease the value of an object’s physical properties relative to its design, the design’s provenance, and its meaning.

I went to a Maker Faire a couple years ago specifically to see the 3D printers that were going to be there. I remember pestering MakerBot co-founder Bre Pettis with twenty variations of “when is the resolution going to get better?” I’d read various 3D printer people talking up “New Aesthetic” type appreciation for seeing the print layer lines in their prints, and how cool those process artifacts were. At the time I thought that was just so much rationalization, but I’ve since changed my mind. Everything shows artifacts of manufacture, and my prints’ quality is just an artifact of what’s practical, for me, in early 2013. It’ll be different, maybe better, when someone else prints it, or when I print it again six months or a year later. It doesn’t matter. Now, with lots of hands-on time, especially combined with scanning, I really see 3D printers as dematerializers: they radically decrease the value of an object’s physical properties relative to its design, the design’s provenance, and its meaning.

I’m not trying to make anything remotely close to perfect. I try to make things that evoke something more than the object—that “spookiness” in the scanning process I mentioned; the weird, Promethean powers spreading via easy scanning, design and printing, and their intersection with fine art and antiquities—distant, painstakingly preserved and curated, soon-to-be-formerly rare artforms on a collision course with popular culture. That is much more interesting to me than the layer height or build volume of a particular print or printer. For art objects, to me, fixating on print quality really, really misses the big picture.

Your Alexander the Great and Horse of Selene models got a lot of attention. What were your goals when you started that project?

My goal was simply to show that consumer-grade 3D printers can produce objects of art worthy of display. I printed them life-size, because I thought doing so would be jarring; it would help break consumer-grade 3D printing out of the toy and trinket realm and make it all seem more real somehow. But I chose those archetypical subjects in particular to try to advance the idea that with 3D scanning and 3D printing, private collectors and museums have an opportunity to turn their collections into living engines of cultural creation. They can digitize their three-dimensional collections and project them outward into the public realm to be adapted, multiplied, and remixed. If I can do it with just a camera and some free software, the Getty, or the Louvre, or a wealthy collector can do it too. In fact, they’ve already done a lot of the scanning, they just haven’t done much of the publishing.

Photo Credit: Cosmo Wenman

But they should, in my opinion, because these technologies offer a way to break great art out of mausoleum-like settings and put it where it can come alive and reach and influence many more people, in a vibrant, lively, and anarchic popular culture.

In the last couple months, those pieces from the British Museum and others I’ve done since have given me entrée to a few collectors, who get my elevator pitch that the first group of collectors to scan and freely publish their collections have an opportunity to be among the most influential art patrons of the next several hundred years.

Photo Credit: Cosmo Wenman

You've mentioned that you're interested in how 3D printing relates to media remixing. Can you explain a little about that?

There’s no telling what artforms will rise up out of that mess of sharing, copying, remixing, and piracy in the coming years.

So much music today is shaped by sampling, which is all made practical by the mass digitization of music and heavily influenced by that magical moment when Napster, the .mp3 file format, cheap storage, and mp3 players all started hitting the consumer market around the same time. I remember ripping all my CDs and sharing them, and downloading all the stuff I didn’t have yet. I feel the same impulse with 3D scanning, and hope for a similar mad scramble to digitize and share 3D artwork. If there’s even a chance of that happening, I want to be a part of it.

There are millennia of beautiful physical forms that can be digitized, propagated, and remixed over and over again in perpetuity. That’s a lot of raw material to work with—the basis for an unlimited combinatorial explosion of adaptation and novelty. But there’s no telling what artforms will rise up out of that mess of sharing, copying, and remixing (piracy too) in the coming years. Maybe the world's back catalog of 3D art will show up lit in pixels on our screens, rematerialized in our living rooms, or embedded in our architecture or clothing. Mass scanning and publishing are the first steps towards finding out.

Photo Credit: Cosmo Wenman

Let's talk about your finishing process. How did you develop the metal material finishes and color patinas that make your models so striking? What is involved in that process?

I am working with several manufacturers to develop a line of finish products tailored for 3D printing. The goal is to offer users a large palette of materials, colors, and effects to use to transform plastic, resin, and ceramic prints into convincing and substantial finished objects. It all needs to be easy to use, but of a much higher quality than craft-store faux finish kits. The prints I’ve shown publicly so far have been coated with real bronze, brass, and iron, with authentic patinas, all achieved with room-temperature, easy-to-use materials. Right now it’s all under the working name “Alternate Reality Patinas,” and I’m working on bringing it to market very soon.

What do you think is the practical potential of 3D printing for fabricating at home and in the classroom? How about for the art and history world?

There’s huge potential for hands-on study of models that just wouldn’t be practical without 3D printing. One of the coolest prints I’ve done recently was for Louise Leakey’s educational initiative AfricanFossils.org, which is interested in getting good quality, affordable reproductions of fossils into classrooms so students can examine them directly. 3D printing them directly in the classroom seems like a natural solution. As a proof of concept, I printed and bronzed their scan of KNM-ER 406, a 1.7-million-year-old hominid fossil, for them.

And recently at CES, I saw a blind man examining my life-size Parthenon horse head scan/print with his hands—an application that hadn’t even occurred to me until that moment. Try doing that kind of art history study with the original.