Image Tool Catches Fashion Industry Photo Alterations

A new image-analysis tool quantifies the changes made by digital airbrushers in the fashion and lifestyle industry, where photo alteration has become a psychologically destructive norm.

"Publishers have legitimate reasons to alter photographs to create fantasy and sell products, but they've gone a little too far," said image forensics specialist Hany Farid of Dartmouth College. "You can't ignore the body of literature showing negative consequences to being inundated with these images."

In a Nov. 28 Proceedings of the National Academy of Sciences study, Farid and doctoral student Eric Kee debut a computational model developed by analyzing 468 sets of original and retouched photographs. From these, Farid and Kee distilled a formal mathematical description of alterations made to models' shapes and features. Their model then scored each altered photograph on a scale of 1 to 5, with 5 signifying heavy retouching.
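The general shape of such a scoring pipeline can be sketched as follows. The feature names and the linear weighting below are illustrative assumptions only; the published model combines geometric and photometric distortion statistics with a mapping learned from human ratings, not fixed weights.

```python
# Illustrative sketch: combine per-image distortion statistics into a
# 1-to-5 retouching score. Feature names and weights are hypothetical;
# Kee and Farid learn the mapping from human ratings instead.

def retouching_score(features, weights):
    """Map distortion features (each normalized to [0, 1]) to a 1-5 score."""
    raw = sum(weights[name] * features[name] for name in weights)
    total = sum(weights.values())
    return 1.0 + 4.0 * raw / total  # rescale [0, 1] onto the 1-5 scale

# Hypothetical measurements for one before/after photo pair.
features = {
    "geometric_face": 0.2,   # warping of facial shape
    "geometric_body": 0.6,   # slimming of limbs and torso
    "photometric": 0.4,      # smoothing/sharpening of skin texture
}
weights = {"geometric_face": 2.0, "geometric_body": 1.0, "photometric": 1.0}

print(round(retouching_score(features, weights), 2))
```

A fixed weighted sum is the simplest stand-in; the appeal of the real approach is that the weighting is calibrated against what human observers actually report.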

To validate the scores, Farid and Kee then asked 50 people randomly picked through Amazon's Mechanical Turk task outsourcing service to evaluate the photographs. Computational and human scores matched closely. "Now what we have is a mathematical measure of photo retouching," said Farid. "We can predict what an average observer would say."
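The validation step amounts to checking agreement between the model's scores and the mean human ratings, for example with a correlation coefficient. The score values below are made up for illustration; only the comparison itself reflects the study's procedure.

```python
# Sketch of the validation step: correlate computational scores with
# mean human ratings. All score values here are hypothetical.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

human = [1.2, 2.8, 3.5, 4.6, 2.1]   # mean Mechanical Turk ratings (hypothetical)
model = [1.4, 2.5, 3.7, 4.4, 2.3]   # computational scores (hypothetical)

print(round(pearson_r(human, model), 3))
```

A correlation near 1 would indicate, as Farid describes, that the model can predict what an average observer would say.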

The researchers started developing their model after learning of the British government's plans to label photographic alterations in advertising. Psychologists have become vocally critical of such images: By employing an arsenal of retouching techniques, from unnaturally slimmed limbs to the old standby of cleaned-up skin, retouchers create unattainable standards of both beauty and normalcy, ultimately leading to self-destructive body image disorders.

"One criticism of the British legislation is that they were presenting a blunt instrument. Photographs would be labeled as retouched or not. Anybody knows that there's different types," said Farid. "It's an interesting scientific problem: How much is too much? That got us thinking about whether we could quantify this."

Farid and Kee's model doesn't pinpoint a numerical boundary between psychologically appropriate and inappropriate retouching; that's a judgment call for society to make, Farid said. But it does provide an objective metric for evaluating images and trends.

"You look at what photographs looked like in magazines 10 years ago, and there's a huge difference. And that is escalating," said Farid.

On the following pages are more images from the study.

Above:

Image Analysis

Color-coded overlays show the intensity of alteration at different points on a retouched photograph.

Untruth in Advertising

This Olay advertisement (left) featuring the model Twiggy was banned in the United Kingdom for being misleading to consumers. At right is an unaltered photograph of Twiggy at the same age.

Extreme Alteration

At left, an extremely altered advertisement featuring Filippa Hamilton; at right, a less altered image.

Representative Images

Examples of unaltered (top) and altered (bottom) photographs used to train Kee and Farid's computational model.

Computer versus Human Perception, Round 1

For each column of unaltered (top) and altered (bottom) photographs, numbers signify the alteration scores generated by people (left) and Kee and Farid's computational analysis (right).

This set contains examples of photos for which scores diverged. This occurred primarily because people attach extra significance to facial alterations, so that even small changes are immediately noticed, said Farid.

"This isn't a limitation of our model, but the training data," he said. "We didn't see a lot of examples of this, but we could train the model on more images."

Computer versus Human Perception, Round 2

Each column of unaltered (top) and altered (bottom) photographs again has numbers signifying the average human (left) and computer (right) alteration scores. Unlike the previous example, however, the scores match almost perfectly.