Over the past decades, numerous theories and studies have demonstrated that salient objects in different scenes often share common properties that make them visually stand out from their surroundings, allowing them to be processed in finer detail. In this paper, we propose a novel method for salient object detection that transfers annotations from an existing example image onto an input image. Based on the low-level saliency features of each pixel, our method estimates dense pixel-wise correspondences between the input and example images, and then integrates high-level concepts to produce an initial saliency map. Finally, we propose a coarse-to-fine optimization framework to generate uniformly highlighted salient objects. Qualitative and quantitative experiments on six popular benchmark datasets show that our approach substantially outperforms state-of-the-art algorithms and recently published methods.
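To make the transfer-then-refine pipeline concrete, the sketch below caricatures it in NumPy. The nearest-color pixel matching and box-filter re-thresholding are toy stand-ins chosen for illustration; the paper's actual dense correspondence estimation and coarse-to-fine optimization are far richer, and all function names here are hypothetical:

```python
import numpy as np

def transfer_saliency(input_img, example_img, example_mask):
    """Toy annotation transfer: match each input pixel to its nearest
    example pixel in color space (a crude stand-in for dense pixel-wise
    correspondence) and copy over that pixel's saliency label."""
    h, w = input_img.shape[:2]
    in_flat = input_img.reshape(-1, 3).astype(float)
    ex_flat = example_img.reshape(-1, 3).astype(float)
    # squared color distance between every (input, example) pixel pair
    dist = ((in_flat[:, None, :] - ex_flat[None, :, :]) ** 2).sum(-1)
    match = dist.argmin(axis=1)  # best-matching example pixel per input pixel
    return example_mask.reshape(-1)[match].reshape(h, w)

def refine(saliency, iters=2):
    """Stand-in for coarse-to-fine refinement: repeated 3x3 box smoothing
    followed by re-thresholding, which suppresses isolated labels and
    uniformly highlights the remaining salient region."""
    s = saliency.astype(float)
    h, w = s.shape
    for _ in range(iters):
        p = np.pad(s, 1, mode="edge")
        s = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
        s = (s > 0.5).astype(float)
    return s
```

With a synthetic example image whose red block is annotated as salient, the red block in a new input image inherits the salient label, and refinement trims the transferred mask toward its stable core.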