Error-Tolerant Image Compositing

Abstract

Gradient-domain compositing is an essential tool in computer vision and its applications, e.g., seamless cloning, panorama stitching, shadow removal, scene completion and reshuffling. While easy to implement, these gradient-domain techniques often generate bleeding artifacts where the composited image regions do not match. One option is to modify the region boundary to minimize such mismatches. However, this option may not always be sufficient or applicable, e.g., the user or algorithm may not allow the selection to be altered. We propose a new approach to gradient-domain compositing that is robust to inaccuracies and prevents color bleeding without changing the boundary location. Our approach improves standard gradient-domain compositing in two ways. First, we define the boundary gradients such that the produced gradient field is nearly integrable. Second, we control the integration process to concentrate residuals where they are less conspicuous. We show that our approach can be formulated as a standard least-squares problem that can be solved with a sparse linear system akin to the classical Poisson equation. We demonstrate results on a variety of scenes. The visual quality and run-time complexity compare favorably to other approaches.
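To make the "sparse linear system akin to the classical Poisson equation" concrete, the sketch below implements standard 1D Poisson compositing, the baseline the paper improves on, not the authors' error-tolerant variant. Masked samples are solved so that their gradients match the source, while unmasked samples act as Dirichlet boundary conditions taken from the target. The function name and the 1D setting are illustrative simplifications.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_composite_1d(target, source, mask):
    """Standard (not error-tolerant) gradient-domain compositing in 1D.

    Inside the mask, solve the discrete Poisson equation so the result's
    gradients match `source`; outside, clamp to `target`. The mask must
    not touch the first or last sample (they serve as the boundary).
    """
    n = len(target)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for i in range(n):
        if mask[i]:
            # Discrete 1D Laplacian row: -x[i-1] + 2*x[i] - x[i+1]
            A[i, i - 1] = -1.0
            A[i, i] = 2.0
            A[i, i + 1] = -1.0
            # Right-hand side: Laplacian of the source (its gradient field)
            b[i] = 2.0 * source[i] - source[i - 1] - source[i + 1]
        else:
            # Dirichlet condition: keep the target value
            A[i, i] = 1.0
            b[i] = target[i]
    return spsolve(A.tocsr(), b)

# Example: a source that differs from the target by a constant offset has
# identical gradients, so seamless compositing recovers the target exactly.
target = np.arange(8, dtype=float)
source = target + 5.0
mask = np.zeros(8, dtype=bool)
mask[2:6] = True
result = poisson_composite_1d(target, source, mask)
```

In 2D the Laplacian stencil gains two more neighbors and the system grows to one row per pixel, but the structure is the same sparse least-squares problem; the bleeding artifacts the abstract describes arise precisely when the boundary values and the pasted gradient field are inconsistent.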

Keywords

Gradient-domain compositing · Visual masking

Electronic Supplementary Material

The online version of this article (doi:10.1007/s11263-012-0579-7) contains supplementary material, which is available to authorized users.

Notes

Acknowledgements

The authors thank Todor Georgiev for the link with the Poisson equation, Kavita Bala and George Drettakis for their discussion about visual masking, Aseem Agarwala and Bill Freeman for their help with the paper, Tim Cho and Biliana Kaneva for helping with the validation, Medhat H. Ibrahim for the image of the Egyptian pyramids, Adobe Systems, Inc. for supporting Micah K. Johnson’s research, and Ravi Ramamoorthi for supporting Michael Tao’s work. This material is based upon work supported by the National Science Foundation under Grant Nos. 0739255 and 0924968.