Abstract

Current image coding schemes with image fusion make it difficult to exploit external images for transform coding, even when highly correlated images are available in the cloud. To address this problem, we describe an approach to cloud-based image transform coding with an image fusion method that is distinct from existing image fusion methods. A fast and efficient image fusion technique is proposed for creating an informative fused image by merging multiple corresponding images. The proposed technique is based on a two-scale decomposition of an image into a base layer containing large-scale variations and a detail layer capturing small-scale details. A novel guided-filtering-based weighted average method is proposed to make full use of spatial consistency when merging the base and detail layers. Experimental results show that the proposed technique achieves state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

Literature Review

A large number of image fusion methods have been proposed in the literature (Wang, 2004; Mandic, 2009). Among these, multi-scale image fusion (Cruz, 2004) and data-driven image fusion (Mandic, 2009) have been particularly successful. They operate on different data representations, e.g., multi-scale coefficients (Crow, 1984; Rockinger, 1997) or data-driven decomposition coefficients (Mandic, 2009; Zeng, 2012), and apply different fusion rules to guide the combination of coefficients. The major advantage of these methods is that they preserve the details of the different source images well. However, they may introduce brightness and color distortions, since spatial consistency is not well considered in the fusion process. To make full use of spatial context, optimization-based image fusion approaches have been proposed, e.g., methods based on generalized random walks (Shen, 2011) and Markov random fields (Rockinger, 1997). These methods estimate spatially smooth and edge-aligned weights by solving an energy function, and then fuse the source images by a weighted average of pixel values. However, optimization-based methods share a common limitation, inefficiency, since they require multiple iterations to find the globally optimal solution. Moreover, global optimization may over-smooth the resulting weights, which degrades the fusion result (Varshney, 2011).
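The two-scale, guided-filtering-based weighted averaging outlined in the abstract can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the filter radii (`base_size`, `r_base`, `r_detail`), the regularization values (`eps_base`, `eps_detail`), and the Laplacian-based saliency measure are all assumptions chosen for demonstration.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter, laplace

def guided_filter(guide, src, radius, eps):
    """Edge-preserving smoothing of `src`, guided by `guide` (box-filter form)."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size, mode="reflect")
    mean_I, mean_p = box(guide), box(src)
    var_I = box(guide * guide) - mean_I ** 2          # local variance of the guide
    cov_Ip = box(guide * src) - mean_I * mean_p       # local covariance guide/src
    a = cov_Ip / (var_I + eps)                        # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)

def fuse_two_scale(images, base_size=31, r_base=45, eps_base=0.3,
                   r_detail=7, eps_detail=1e-6):
    """Fuse grayscale source images (2-D float arrays in [0, 1])."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    # Two-scale decomposition: base = mean-filtered image, detail = residual.
    bases = [uniform_filter(im, size=base_size, mode="reflect") for im in images]
    details = [im - b for im, b in zip(images, bases)]
    # Pixel saliency: smoothed magnitude of the Laplacian response.
    sal = np.stack([gaussian_filter(np.abs(laplace(im)), sigma=5) for im in images])
    # Binary weight maps: 1 where an image is the most salient at a pixel.
    P = (sal == sal.max(axis=0, keepdims=True)).astype(np.float64)
    # Refine the weight maps with the guided filter for spatial consistency:
    # large radius / strong smoothing for the base layer, small for the details.
    Wb = np.stack([guided_filter(im, p, r_base, eps_base)
                   for im, p in zip(images, P)])
    Wd = np.stack([guided_filter(im, p, r_detail, eps_detail)
                   for im, p in zip(images, P)])
    Wb = np.clip(Wb, 0.0, None)
    Wd = np.clip(Wd, 0.0, None)
    Wb /= Wb.sum(axis=0) + 1e-12                      # normalize across sources
    Wd /= Wd.sum(axis=0) + 1e-12
    fused = ((Wb * np.stack(bases)).sum(axis=0)
             + (Wd * np.stack(details)).sum(axis=0))
    return np.clip(fused, 0.0, 1.0)
```

Using a large regularization for the base-layer weights and a small one for the detail-layer weights reflects the idea in the text: base-layer weights should vary smoothly over large regions, while detail-layer weights should stay sharp near edges.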