SinGAN for Object Removal

The project’s goal is to extend the state-of-the-art work SinGAN (Rott Shaham, Dekel & Michaeli, 2019) to generate samples from a single real, natural image while ignoring a chosen object or area in the original image, so that it does not appear in the generated images.
SinGAN is an unconditional generative model that can be learned from a single image. Given a single natural image, SinGAN is trained to capture the internal distribution of patches inside the image, and can then generate diverse, high-quality images that carry the same visual content and preserve the same global structure as the original. In this work we add to SinGAN the capability to ignore specific objects or areas in the original image: the generated images still preserve the global structure of the original, but no longer contain the unwanted object.
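One way to realize this "ignore" behavior (a sketch of a plausible approach, not necessarily this project's exact implementation) is to weight SinGAN's patch-wise training losses with a binary mask, so that patches overlapping the unwanted object contribute nothing to the loss and the generator is never pushed to reproduce them. The names `loss_map` and `object_mask` below are illustrative assumptions:

```python
import numpy as np

# Hypothetical per-patch loss map from a patch discriminator (one value per patch).
loss_map = np.random.rand(4, 4)

# Binary mask: 1 on patches covering the undesired object, 0 elsewhere.
# These patches are excluded, so the model never learns their statistics.
object_mask = np.zeros((4, 4))
object_mask[1:3, 1:3] = 1.0

# Average the loss only over the kept (unmasked) patches.
keep = 1.0 - object_mask
masked_loss = (loss_map * keep).sum() / keep.sum()
```

With the object's patches carrying zero weight, the surrounding patch distribution dominates training, and sampled images tend to fill the masked region with texture consistent with the rest of the scene.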
Practically, consider a natural scene with an undesired figure somewhere in the background. Training the regular version of SinGAN on that image would, with high probability, reproduce the undesired figure in the generated images. With the new capability, one can generate new samples of the same scene without that figure.