A New Deep Learning App Can Transform Your Photos

Photos are fun, and deep learning can make them even more fun. There’s a new app your photos might benefit from: given a content photo and a style photo, it can transfer the style of one photo to the other.

Photo-realistic “style transfer” algorithms aim to stylize a target photo with the style of a reference photo, under the constraint that the stylized photo remain photo-realistic. While several methods exist for this task, they tend to generate spatially inconsistent stylizations with noticeable artifacts.

In addition, these methods are computationally expensive, requiring several minutes to stylize a VGA photo. The arXiv paper presents a novel algorithm that addresses these limitations. The proposed algorithm consists of a stylization step and a smoothing step: the stylization step transfers the style of the reference photo to the content photo, while the smoothing step encourages spatially consistent stylizations.

Unlike existing algorithms that require iterative optimization, both steps in this algorithm have closed-form solutions. Experimental results show that, on average, human subjects prefer the photos stylized by this algorithm twice as often as those from competing methods. Moreover, the method runs 60 times faster than current state-of-the-art approaches.
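To give a feel for what “closed-form” means here, the smoothing step can be posed as a graph-regularized least-squares problem over pixel affinities, whose minimizer is a single linear solve rather than an iterative optimization. Below is a toy sketch of that idea in numpy: the 4-pixel affinity matrix `W`, the weight `alpha`, and the simple degree normalization are illustrative assumptions, not the paper’s exact matting affinity.

```python
import numpy as np

def closed_form_smooth(Y, W, alpha=0.6):
    """Toy closed-form smoothing: r* = (1 - alpha) * (I - alpha * S)^{-1} Y,
    where S is the degree-normalized affinity matrix of the pixel graph W.
    One linear solve; no iterative optimization."""
    d = W.sum(axis=1)                      # node degrees
    S = W / np.sqrt(np.outer(d, d))        # S = D^{-1/2} W D^{-1/2}
    n = W.shape[0]
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)

# Four "pixels" in a chain (self-loops keep degrees positive) and a
# noisy per-pixel stylization Y; smoothing pulls neighbors together.
W = np.array([[1., 1., 0., 0.],
              [1., 1., 1., 0.],
              [0., 1., 1., 1.],
              [0., 0., 1., 1.]])
Y = np.array([[0.9], [0.1], [0.8], [0.2]])
R = closed_form_smooth(Y, W)
```

Because the objective is quadratic, `R` is exact: it satisfies the linear system `(I - alpha * S) R = (1 - alpha) Y`, which is why the step is fast and deterministic.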

Here’s the Setup

The code was tested in the following environment.

OS: Ubuntu 16.04

CUDA: 9.1

Python 2 from Anaconda2

PyTorch: 0.3.0

Set up the environment variables. You might already have them set up properly.
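The exact variables depend on where things are installed on your machine. A minimal sketch, assuming Anaconda2 lives in `$HOME/anaconda2` and CUDA 9.1 in `/usr/local/cuda-9.1` (both paths are assumptions — adjust them to your system):

```shell
# Assumed install locations -- adjust to your system.
export ANACONDA_HOME="$HOME/anaconda2"
export CUDA_HOME=/usr/local/cuda-9.1
# Put Anaconda's python and the CUDA toolchain on the path.
export PATH="$ANACONDA_HOME/bin:$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"
```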

Example 2: Transfer the style of a style photo to a content photo with semantic label maps.

By default, the algorithm performs global stylization. To give users control over the content–style correspondences for better stylization effects, it also supports spatial control through manually drawn label maps.

Prepare label maps

Install the tool labelme, then start it by running the command: labelme

Start labeling regions (drawing polygons) in the content and style images. Corresponding regions (e.g., sky-to-sky) should have the same label.

The labeling result is saved in a “.json” file. By running the following command, you will get label.png under path/example_json, which is the label map used in the code. label.png is a 1-channel image (it usually looks completely black) consisting of consecutive labels starting from 0.
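Before feeding a generated label.png to the stylization code, it can help to sanity-check that it really is a single-channel map whose values are consecutive integers starting at 0. A minimal sketch with numpy — the helper `check_label_map` and the toy array are illustrative (loading the real file, e.g. with PIL, is omitted):

```python
import numpy as np

def check_label_map(labels):
    """Verify a label map: 2-D (1-channel) and made of consecutive
    integer labels starting at 0. Returns the number of regions."""
    assert labels.ndim == 2, "label map must be 1-channel (2-D)"
    uniq = np.unique(labels)
    assert uniq[0] == 0 and np.array_equal(uniq, np.arange(len(uniq))), \
        "labels must be consecutive integers starting at 0"
    return len(uniq)

# Toy 4x4 map with two regions (0 = sky, 1 = ground, say).
toy = np.array([[0, 0, 1, 1]] * 4)
n_regions = check_label_map(toy)
```

If the check fails with non-consecutive labels, re-export the map from the labelme JSON rather than editing pixel values by hand.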