A montage of scribbly cartoon faces, each conveying a distinct personality, would make any parent proud of their child's artistic creation...except a child didn't produce these faces; a computer algorithm did.

Meet the randomly generated caricatures of the Weird Faces Study, the product of a computer algorithm developed by media artist Matthias Dörfelt. With an interest in both illustration and coding, Matthias designed the algorithm to follow a systematic order for constructing a cartoon face, even adding noise to the strokes to elicit the sense of a hand-drawn sketch. Surprisingly, each face is not only recognizable but carries human qualities captured in the rudimentary sketches.
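Dörfelt's own implementation isn't shown here, but the noise trick can be sketched in a few lines of Python: perturb the radius of each sampled point on an otherwise perfect circle, and the stroke starts to wobble like a hand-drawn line. (The function name and jitter values are illustrative assumptions, not taken from the project.)

```python
import math
import random

def wobbly_circle(cx, cy, r, points=60, jitter=0.05):
    """Sample a circle outline, perturbing each point's radius with
    random noise so the stroke reads as hand-drawn, not geometric."""
    pts = []
    for i in range(points):
        a = 2 * math.pi * i / points
        rr = r * (1 + random.uniform(-jitter, jitter))  # noisy radius
        pts.append((cx + rr * math.cos(a), cy + rr * math.sin(a)))
    return pts

outline = wobbly_circle(0, 0, 100)
```

Feeding these points to any line-drawing routine produces a head outline that looks sketched rather than plotted; raising `jitter` makes the hand look shakier.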

Describing the project on his website, Matthias writes: "Even though the faces look hand-drawn, they are entirely expressed by algorithmic rules. Each face is random, each face is unique. Still, they look similar to my actual hand drawn faces." As his journal shows, this isn't an overstatement.

1. draw head shape
2. draw fold inside the head shape
3. find shape center, draw nose
4. draw eyes based on the nose position and radius to make sure that they don’t overlap
5. draw eyebrows based on the eye positions and radii
6. draw mouth based on eyes and nose to make sure they don’t overlap
7. draw cheeks based on head outline and head radius
8. draw ears on the head outline
9. draw hair on the head outline
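The project itself isn't written in Python, but the journal's ordering can be sketched as a constraint chain: each feature's position is randomized within limits set by the features already placed, which is what keeps the random faces legible. All names and numeric ranges below are illustrative assumptions; shapes are reduced to centers and radii, and drawing (plus the cheeks and hair of steps 7 and 9) is omitted.

```python
import random

def generate_face(head_r=100):
    """Place features in the journal's order, each constrained by the
    ones before it; y grows downward, head centered at the origin."""
    face = {"head": (0, 0, head_r)}
    # 3. nose at the shape's center, with a random size
    nose_r = random.uniform(8, 18)
    face["nose"] = (0, 0, nose_r)
    # 4. eyes spaced off the nose so the circles can't overlap it
    eye_r = random.uniform(8, 14)
    eye_dx = nose_r + eye_r + random.uniform(4, 12)
    eye_dy = -random.uniform(20, 35)
    face["eyes"] = [(-eye_dx, eye_dy, eye_r), (eye_dx, eye_dy, eye_r)]
    # 5. eyebrows just above each eye, offset by that eye's radius
    face["brows"] = [(x, y - r - random.uniform(4, 10), r)
                     for (x, y, r) in face["eyes"]]
    # 6. mouth below the nose, clear of nose and eyes
    mouth_w = random.uniform(20, 45)
    face["mouth"] = (0, nose_r + random.uniform(25, 40), mouth_w)
    # 8. ears sit on the head outline itself
    face["ears"] = [(-head_r, 0, 12), (head_r, 0, 12)]
    return face
```

Every call yields a different face, yet the ordering guarantees the eyes never collide with the nose and the mouth always lands below it, which is exactly why each random result still reads as a face.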

Matthias also states that his motive was to "create something procedural that has a truly individual artistic touch to it and is not instantly recognizable as a generative art piece."

Facial identification is a vital function of the brain, so much so that an area of the temporal lobe known as the fusiform gyrus is dedicated to this task alone. The frequency with which people recognize faces in rocks on the beach, water stains, surfaces of planets, and clouds stands as evidence that our brain is primed to identify faces. It's no wonder, then, that computer scientists have spent years building algorithms that recognize faces from minimal features, for purposes as diverse as security systems, advertising, robotics, and artificial intelligence.

This is possible because the human face has specific patterns and areas of contrast that are resolvable even by suboptimal cameras. These predictable patterns are relied upon by the facial recognition technology that makes Facebook photo tagging possible, as well as by newer surveillance systems that can search through 36 million images per second to match faces. Instructing a computer to draw these features within standard constraints, in a way that passes our brain's face test, was just a matter of time and ingenuity.
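One classic way detectors exploit those contrast patterns is with Haar-like features, as in the Viola-Jones framework: simple brightness comparisons between rectangular regions, such as the eye band of a face being reliably darker than the cheek band below it. A minimal sketch of that single comparison (the toy image and band positions are illustrative, not a full detector):

```python
import numpy as np

def eye_band_contrast(img):
    """Haar-like test: in a face image, the eye row tends to be darker
    than the cheek row just below it. Returns mean brightness of the
    cheek band minus the eye band; clearly positive is face-like."""
    h = img.shape[0]
    eyes = img[h // 4 : h // 2]        # upper-middle band: eyes/brows
    cheeks = img[h // 2 : 3 * h // 4]  # band just below: cheeks
    return float(cheeks.mean() - eyes.mean())

# A toy 8x8 "face": bright skin with a dark eye band across it.
face = np.full((8, 8), 200.0)
face[2:4] = 60.0  # dark eye band
print(eye_band_contrast(face))  # prints 140.0
```

Real detectors evaluate thousands of such rectangle comparisons at many positions and scales, which is why the face's predictable light/dark layout matters more than fine detail.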

On the surface, Weird Faces may seem like just another cool media project that mashes together technology and art. But considering the relative ease with which the human face can not only be procedurally defined but imbued with a breadth of qualities in simple characterizations, it doesn't take much to see how close we are to true digital storytelling, that is, entertainment entirely generated by computers.

The market for digital storytelling will likely be huge. After all, animated films are now mainstream money makers with broad audiences, strong character development, extensive merchandising, and potential for serialization and spin-offs. They have minimal language barriers since characters are voiced over, and they typically are written for universal audiences in completely imagined worlds, which avoids many cultural barriers.

Yet, animated film budgets have grown significantly over the years. Consider how the original Toy Story (1995) had a budget of $30 million. What about 15 years later, when Toy Story 3 was released? $200 million. In an era of automation lowering costs across industries, it's only a matter of time before a computer-made movie is cheaper to produce than a human-made one.

And if you think that computers can't tell stories or be funny, wait a few years. In time, those abilities will become a reality too.

Taken together, the convergence of these technologies will lead us to a time when a computer can generate an entertaining story and animate it with very little human involvement. In light of that, the Weird Faces Study offers just a glimpse of what we very well may be laughing at in the near future.

Discussion
—
7 Responses

DigitalGalaxy · March 5, 2013 at 7:05 pm


That’s kinda neat, but there is a point at which the whole point of artistry is human creative expression. Like the hamburger making machine a while ago. Sure, it would be great to have robots making food, but if you really want to go to a nice restaurant, you want something made by a chef; it has those little touches. If you go to an art gallery, you want to see art that expresses emotion; it’s debatable whether or not robots can emulate that, now or in the future.

Not knocking the program, it would be neat to see a “computer generated cartoon”. But, at some point, we have to see the limits of computers; and the only limits that can be seen are the limits of human emotion, creativity, and cognition.

David J. Hill

AI will start by mimicking basic human behaviors, then the more complex ones. In time, all the little touches will be programmed in as well. After all, what is the limiting factor that could prevent this progression?

A robot will be able to emulate human emotion if humans are able to effectively translate emotion into algorithms with appropriate triggers. While current attempts to do this are elementary, the software will become more sophisticated in time, just as operating systems have.

I think one of the underlying presuppositions about robots and emotion is that we assume robots must conjure up feelings from somewhere (mind, soul, etc.) and because they lack this, they won’t be able to truly feel. This is rooted in how we think our emotions work. Based on all the new findings from brain research, I think it’s safe to say that our own emotions don’t work the way we currently believe they do.

Once the brain is reverse engineered, there’s a good chance emotions will be able to be modeled through algorithms, and then, robots will be able to express emotion, probably even more reliably than humans.

CatP

So we’ll soon have computers that can recognize a joke, but will they actually find it funny?

Million dollar idea: Imagine a child typing a story. As the story is read by the computer, the story becomes animated. Characters and environments are created based on general guidance of the child and the creativity/randomness of the computer.