Quality assessment of pansharpened images is traditionally carried out either at degraded spatial scale by checking the
synthesis property of Wald's protocol or at the full spatial scale by separately checking the spectral and spatial consistencies.
The spatial distortion of the QNR protocol and the spectral distortion of Khan’s protocol may be combined into a unique
quality index, referred to as hybrid QNR (HQNR), that is calculated at full scale. Alternatively, multiscale measurements
of indices requiring a reference, like SAM, ERGAS and Q4, may be extrapolated to yield a quality measurement at the
full scale of the fusion product, where a reference does not exist. Experiments on simulated Pléiades data, for which
reference originals at full scale are available, highlight that quadratic polynomials having three-point support, i.e. fitting
three measurements at as many progressively doubled scales, are adequate. Q4 is more suitable for extrapolation than
ERGAS and SAM. The Q4 value predicted from multiscale measurements and the Q4 value measured at full scale, thanks
to the reference original, differ by only a few percent for the six state-of-the-art methods that have been compared.
HQNR is substantially comparable to the extrapolated Q4.
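The multiscale extrapolation described above can be sketched as follows; the Q4 values and scale indices are illustrative stand-ins, not measurements from the paper:

```python
import numpy as np

# Hypothetical Q4 measurements at three progressively degraded scales
# (scale index 1, 2, 3 = resolution halved once, twice, three times).
scales = np.array([1.0, 2.0, 3.0])
q4 = np.array([0.912, 0.895, 0.871])  # illustrative values, not from the paper

# Fit a quadratic polynomial through the three (scale, Q4) points ...
coeffs = np.polyfit(scales, q4, deg=2)

# ... and evaluate it at scale index 0, i.e. the full scale of the
# fusion product, where no reference image exists.
q4_full = np.polyval(coeffs, 0.0)
print(round(q4_full, 4))
```

With exactly three measurements the quadratic fit is an exact interpolation, so the prediction at full scale is fully determined by the three degraded-scale values.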

Recent remote sensing applications require sensors that provide both high spatial and spectral resolution, but this is often not possible for economic and engineering reasons. The "fusion" of images at different spatial and spectral resolutions is a widely used method to solve this problem. Pan-sharpening techniques have been applied in this work to simulate PRISMA images. The work presented here is part of the Italian Space Agency project "ASI-AGI", which includes the study of a new platform, PRISMA, consisting of a hyperspectral sensor with a spatial resolution of 30 m and a panchromatic sensor with a spatial resolution of 5 m, for monitoring and understanding the Earth's surface. First, PRISMA images have been simulated using images from the MIVIS and Quickbird sensors. Then several existing fusion methods have been tested in order to identify the most suitable for the PRISMA platform in terms of spatial and spectral information preservation. Both standard and wavelet algorithms have been used: among the former are Principal Component Analysis and the Gram-Schmidt transform, and among the latter are the Discrete Wavelet Transform and the "à trous" wavelet transform. The Color Normalized Spectral Sharpening method has also been used. Numerous quality metrics have been used to evaluate the spatial and spectral distortions introduced by pan-sharpening algorithms. Various strategies can be adopted to provide a final ranking of alternative algorithms assessed by means of a battery of quality indexes. All implemented statistics have been standardized, and three different methodologies have then been used to produce a final score and thus a ranking of the pan-sharpening algorithms. Currently a new protocol is under development to evaluate the preservation of spatial and spectral information in fusion methods. This new protocol should overcome the limitations of existing alternative approaches and be robust to changes in the input dataset and user-defined parameters.
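The standardize-and-combine ranking step can be illustrated as follows; the scores and the simple averaged-z-score strategy are invented for the example, and are only one of the several strategies the abstract alludes to:

```python
import numpy as np

# Illustrative scores of four pan-sharpening algorithms on three quality
# indexes (rows = algorithms, columns = indexes); values are made up.
scores = np.array([
    [0.91, 0.88, 0.93],
    [0.87, 0.90, 0.89],
    [0.94, 0.85, 0.90],
    [0.89, 0.91, 0.92],
])

# Standardize each index (zero mean, unit variance across algorithms) so
# that indexes with different ranges can be combined.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# One simple strategy for a final score: average the standardized indexes
# and rank the algorithms by the result (higher is better here).
final = z.mean(axis=1)
ranking = np.argsort(-final)
print(ranking)
```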

There is an increasing demand in many applications for satellite remote sensing data with both high spatial and temporal resolution, but simultaneously improving spatial resolution and temporal frequency remains a challenge due to the technical limits of current satellite observation systems. To this end, years of R&D effort have led to successes in roughly two directions. On the one hand, super-resolution and pan-sharpening methods can effectively enhance spatial resolution and generate good visual effects, but they hardly preserve spectral signatures and thus offer limited analytical value. On the other hand, temporal interpolation is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high-resolution time series data by combining low-resolution time series data with only a very small number of high-resolution acquisitions. Our method starts with a pair of high- and low-resolution data sets, and performs a spatial registration by introducing an LDA model to map high- and low-resolution pixels to each other. Temporal change information is then captured through a comparison of the low-resolution time series, projected onto the high-resolution data plane, and assigned to each high-resolution pixel according to predefined temporal change patterns for each type of ground object. Finally, the simulated high-resolution data are generated. A preliminary experiment shows that our method can simulate high-resolution data with reasonable accuracy.
The contribution of our method is that it enables timely monitoring of temporal changes through analysis of a time sequence of low-resolution images only, so that the use of costly high-resolution data can be reduced as much as possible. It thus presents a highly effective way to build an economically operational monitoring solution for agriculture, forestry, land-use investigation, environmental applications, and more.

To improve the spatial resolution of a hyperspectral (HS) observation of a scene with the aid of an auxiliary multispectral (MS) observation, a new spectral-unmixing-based HS and MS image fusion approach is presented in this paper. In the proposed fusion approach, linear spectral unmixing with a sparsity constraint is employed, taking into consideration the impact of the linear observation model on the linear mixing model. Simulated experiments are used for verification and comparison. They illustrate that the proposed approach is more promising for practical use than some state-of-the-art approaches, owing to its good balance between fusion performance and computational cost.
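The abundance-estimation core of such unmixing-based fusion can be sketched with non-negative least squares; the endmember matrix and pixel below are synthetic, and plain NNLS stands in for the paper's sparsity-constrained unmixing:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic endmember signatures: 6 spectral bands, 3 materials.
E = np.array([
    [0.10, 0.80, 0.30],
    [0.15, 0.75, 0.35],
    [0.60, 0.20, 0.40],
    [0.65, 0.15, 0.45],
    [0.30, 0.40, 0.70],
    [0.25, 0.45, 0.75],
])

# A pixel mixed from the endmembers with known fractional abundances.
true_abund = np.array([0.5, 0.3, 0.2])
pixel = E @ true_abund

# Non-negative least squares recovers the abundances; in an HS/MS fusion
# scheme these fractions would then be combined with high-resolution
# endmember estimates to synthesize the fused image.
abund, residual = nnls(E, pixel)
print(np.round(abund, 3))
```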

In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM) unsupervised neural network to train our model. The segmentation process is achieved through the construction of a level set cost functional, in which the dynamic variable is the best matching unit (BMU) coming from the SOM map. In addition, we use Gaussian filtering to discipline the deviation of the level set functional from a signed distance function, which actually removes the need for the re-initialization step that is computationally expensive. By using the properties of the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the best sets of significant spectral bands. Then the modified geometric level-set-functional-based ACM is applied on the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is capable of forcing the level set functional to be close to a signed distance function, and therefore largely removes the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm on varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.
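The PCA-based band reduction step can be sketched as follows on a synthetic cube; the 99% explained-variance threshold is an illustrative choice, not the paper's criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hyperspectral cube: 32x32 pixels, 50 bands (stand-in data).
cube = rng.normal(size=(32, 32, 50))
pixels = cube.reshape(-1, 50)

# PCA via eigendecomposition of the band covariance matrix.
centered = pixels - pixels.mean(axis=0)
cov = centered.T @ centered / (centered.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the leading components that explain, say, 99% of the variance;
# the segmentation would then run on this reduced set of "bands".
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.99)) + 1
reduced = (centered @ eigvecs[:, :k]).reshape(32, 32, k)
print(reduced.shape)
```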

Remote sensing data are an important source of information for a variety of applications, such as coastal mapping, land-use monitoring, and wildlife habitat charting. One of the most important tasks in the analysis of these data is segmentation, i.e., the action of merging neighbouring pixels into segments (or regions) based on their homogeneity or heterogeneity parameters. Traditional image segmentation methods seek to delineate discrete image objects with sharp edges, which is not always possible, mainly because many geographic objects, both natural and man-made, may not appear clearly bounded in remotely sensed images. A fuzzy approach seems natural to capture the structure of objects in the image, taking into account the fuzziness of the real world and the ambiguity of remote sensing imagery. The main goal of this work is to define boundaries of objects in an image. The proposal aims to be faster than other segmentation approaches within the TerraLib tools by considering only the neighbourhood of a selected pixel. This work proposes the use of the image's tone and colour to select and define objects in remote scenes based on fuzzy rules. The fuzzy set is defined by an input tolerance level, which can be adjusted according to the desired granularity of the selection. The proposed methodology is not limited to the selection of only one object; that is, the mask can be composed of a set of objects with different features and tolerances. The algorithm also returns the objects' size and proportion. The quality of the individual segmentation results is evaluated on multi-spectral Landsat 5-TM, Landsat 7-ETM+ and CBERS data. This is done by visual comparison, supplemented by a detailed investigation using visually interpreted reference areas.
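A minimal sketch of a tolerance-driven fuzzy selection, assuming a triangular membership function around the selected pixel's value (the actual TerraLib fuzzy rules are not reproduced here):

```python
import numpy as np

# Toy single-band image (stand-in for tone/colour of a remote scene).
image = np.array([
    [10, 12, 50, 52],
    [11, 13, 51, 53],
    [30, 31, 90, 91],
    [32, 33, 92, 93],
], dtype=float)

def fuzzy_select(img, seed, tolerance):
    """Triangular membership around the seed pixel's value: membership is
    1 at the seed value and falls linearly to 0 at +/- tolerance."""
    seed_value = img[seed]
    return np.clip(1.0 - np.abs(img - seed_value) / tolerance, 0.0, 1.0)

# A tighter tolerance yields a smaller, more selective object mask.
mask_loose = fuzzy_select(image, (0, 0), tolerance=25.0) > 0.5
mask_tight = fuzzy_select(image, (0, 0), tolerance=5.0) > 0.5
print(mask_loose.sum(), mask_tight.sum())
```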

LiDAR is a remote sensing method which produces precise point clouds consisting of millions of geo-spatially located 3D data points. Because of the nature of LiDAR point clouds, it can often be difficult for analysts to accurately and efficiently recognize and categorize objects. The goal of this paper is automatic large-volume object region segmentation in LiDAR point clouds. This efficient segmentation technique is intended to be a pre-processing step for the eventual classification of objects within the point cloud. The data is initially segmented into local histogram bins. This local histogram bin representation allows for the efficient consolidation of the point cloud data into voxels without the loss of location information. Additionally, by binning the points, important feature information can be extracted, such as the distribution of points, the density of points and a local ground. From these local histograms, a 3D automatic seeded region growing technique is applied. This technique performs seed selection based on two criteria: similarity and Euclidean distance to nearest neighbors. The neighbors of selected seeds are then examined and assigned labels based on location and Euclidean distance to a region mean. After the initial segmentation step, region integration is performed to rejoin over-segmented regions. The large number of points in LiDAR data can make other segmentation techniques extremely time consuming. In addition to producing accurate object segmentation results, the proposed local histogram binning process allows for efficient segmentation, processing a point cloud of over 9,000 points in 10 seconds.
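A much simplified 2D sketch of seeded region growing over binned values, using a fixed similarity threshold to the running region mean (the paper's 3D seed-selection criteria and region integration are not reproduced):

```python
from collections import deque
import numpy as np

# Toy 2D "voxel" grid of mean heights (stand-in for local histogram bins).
grid = np.array([
    [1.0, 1.1, 1.2, 9.0],
    [1.1, 1.2, 1.3, 9.1],
    [5.0, 5.1, 9.2, 9.3],
])

def region_grow(values, seed, threshold):
    """Grow a region from a seed bin: a 4-connected neighbour joins if its
    value is within `threshold` of the running region mean."""
    labels = np.zeros(values.shape, dtype=bool)
    labels[seed] = True
    queue = deque([seed])
    total, count = values[seed], 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < values.shape[0] and 0 <= nc < values.shape[1] \
                    and not labels[nr, nc] \
                    and abs(values[nr, nc] - total / count) <= threshold:
                labels[nr, nc] = True
                total += values[nr, nc]
                count += 1
                queue.append((nr, nc))
    return labels

region = region_grow(grid, (0, 0), threshold=0.5)
print(region.sum())
```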

The automatic detection of geometric features, such as edges and creases, from objects represented by 3D point clouds (e.g., LiDAR measurements, tomographic SAR) is a very important issue in different application domains, including urban monitoring and building reconstruction. A limitation of many methods in the literature is that they rely on rasterization or interpolation of the original grid, with a consequent potential loss of detail. Recently, a second-order variational model for edge and crease detection and surface regularization has been presented in the literature and successfully applied to DSMs. In this paper we address the generalization of this model to unstructured grids. The model is based on the Blake-Zisserman energy and makes it possible to obtain a regularization of the original data (noise reduction) which does not affect crucial regions containing jumps and creases. Specifically, we focus on the detection of these features by means of two auxiliary functions that are computable by solving specific differential equations. Results obtained on LiDAR data by solving the equations via the Finite Element Method are presented.
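For reference, the Blake-Zisserman energy is commonly written in the literature in a form along the following lines (the exact weights, domains and notation used in the paper may differ):

```latex
F(K_0, K_1, u) = \mu \int_{\Omega} (u - g)^2 \, dx
  + \int_{\Omega \setminus (K_0 \cup K_1)} \lvert \nabla^2 u \rvert^2 \, dx
  + \alpha \, \mathcal{H}^{n-1}(K_0)
  + \beta \, \mathcal{H}^{n-1}(K_1 \setminus K_0)
```

where g is the observed data, u the regularized surface, K_0 the jump (edge) set, K_1 the crease set, \mathcal{H}^{n-1} the (n-1)-dimensional Hausdorff measure, and \alpha, \beta, \mu positive weights. The two auxiliary functions mentioned above arise in the elliptic (Ambrosio-Tortorelli-type) approximation of the sets K_0 and K_1.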

This paper presents a semiautomatic method for rectilinear building roof boundary extraction, based on the integration of
high-resolution aerial image and LiDAR (Light Detection and Ranging) data. The proposed method is formulated as an
optimization problem, in which a snakes-based objective function is developed to represent the building roof boundaries
in an object-space coordinate system. Three-dimensional polylines representing building roof boundaries are obtained by
optimizing the objective function using the dynamic programming optimization technique. The results of our
experiments showed that the proposed method satisfactorily performed the task of extracting different building roof
boundaries from aerial image and LiDAR data.
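The dynamic-programming optimization of a snakes-style objective can be illustrated on a toy problem; the data costs and the quadratic smoothness term below are invented stand-ins for the paper's objective function:

```python
import numpy as np

# Toy objective for a 5-vertex boundary polyline: data_cost[i, j] is a
# made-up image-derived cost of placing vertex i at candidate position j,
# and a quadratic term penalizes position jumps between neighbouring
# vertices (a stand-in for the snakes internal energy).
data_cost = np.array([
    [5.0, 1.0, 4.0, 9.0],
    [6.0, 2.0, 1.0, 8.0],
    [9.0, 3.0, 1.0, 7.0],
    [8.0, 1.0, 2.0, 9.0],
    [7.0, 2.0, 1.0, 6.0],
])
smooth_w = 0.5
n, m = data_cost.shape

# Forward pass of the dynamic programming (Viterbi-style) recursion.
pos = np.arange(m)
cost = data_cost[0].copy()
back = np.zeros((n, m), dtype=int)
for i in range(1, n):
    trans = cost[:, None] + smooth_w * (pos[:, None] - pos[None, :]) ** 2
    back[i] = np.argmin(trans, axis=0)
    cost = data_cost[i] + trans[back[i], pos]

# Backtrack the globally optimal polyline.
path = [int(np.argmin(cost))]
for i in range(n - 1, 0, -1):
    path.append(int(back[i, path[-1]]))
path.reverse()
print(path)
```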

Sentinel-2 is a multispectral, high-resolution, optical imaging mission, developed by the European Space Agency (ESA)
in the frame of the Copernicus program of the European Commission. In cooperation with ESA, the Centre National
d’Etudes Spatiales (CNES) is responsible for the image quality of the project, and will ensure the CAL/VAL
commissioning phase.
The Sentinel-2 mission is devoted to the operational monitoring of land and coastal areas, and will provide continuity of
SPOT- and Landsat-type data. Sentinel-2 will also deliver information for emergency services. With launches in 2015 and
2016, the constellation will consist of 2 satellites on a polar sun-synchronous orbit, systematically imaging terrestrial
surfaces with a revisit time of 5 days, in 13 spectral bands in the visible and shortwave infra-red. Therefore, multi-temporal
series of images, taken under the same viewing conditions, will be available.
This paper first briefly presents the Sentinel-2 system, the design, the level-1 products, and the main geometric image quality
requirements: geolocation with and without ground control points, multi-temporal and multi-spectral registration. Then,
it presents the methods foreseen during commissioning: the viewing frames orientation, the focal plane mapping, the
global reference image generation. Finally, it presents the Sentinel-2 image simulation tool, used to provide data for the
validation of these developments.

In this paper we propose a method to achieve fine registration of high-resolution multispectral images. The algorithm assumes that a coarse registration, based on ancillary information, has already been performed. It is known, in fact, that residual distortions remain, due to the combined effects of Earth rotation and curvature, viewing geometry, sensor operation, variations in platform velocity, and atmospheric and terrain effects.
The algorithm builds on the information-theoretic approach used to register volumetric medical images of different modalities. Registration is achieved by adjusting the relative position and orientation until the mutual information between the images is maximized. The idea is that the joint information is maximized when the two images are at their best registration. This approach works directly with image data, but in principle it can be applied in any transformed domain. While the original algorithm was designed for registration in a limited search space (i.e., translation and orientation), in the remote sensing framework the class of transformations is extended to allow scaling, shearing or a general polynomial model. The maximization of the target function is performed using both the stochastic gradient descent algorithm and simulated annealing, since the former is known to occasionally become trapped in local maxima.
We have applied the algorithm to a pair of SPOT-5 images, registering chips of size 256x256 pixels at a time. Accuracy has been assessed by comparing the results with the outcomes of a commercial software package that adopts a form of normalized cross-correlation. On 143 chips taken throughout the image, the final translation accuracy was well below 1 pixel and the rotation accuracy about 0.015 degrees.
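The mutual-information criterion at the heart of such registration can be sketched as follows, using a joint-histogram estimate on synthetic data:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally shaped images, estimated from
    their joint histogram. In intensity-based registration the transform
    parameters are adjusted until this quantity is maximal."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))

# An image is maximally informative about itself; a shifted (misregistered)
# copy shares less information, which is what the optimizer exploits.
mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, np.roll(img, 5, axis=1))
print(mi_aligned > mi_shifted)
```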

The Meteosat Third Generation (MTG) Programme is the next generation of European geostationary meteorological
systems. The first MTG satellite, which is scheduled for launch at the end of 2018/early 2019, will host two imaging
instruments: the Flexible Combined Imager (FCI) and the Lightning Imager. The FCI will continue the operation of the
SEVIRI imager on the current Meteosat Second Generation satellites (MSG), but with an improved spatial, temporal and
spectral resolution, not dissimilar to GOES-R (of NASA/NOAA).
The transition from spinner to 3-axis stabilised platform, a 2-axis tapered scan pattern with overlaps between adjacent
scan swaths, and the more stringent geometric, radiometric and timeliness requirements, make the rectification process
for MTG FCI more challenging than for MSG SEVIRI. The effect of non-uniform sampling in the image rectification
process was analysed in an earlier paper. The use of classical interpolation methods, such as truncated Shannon
interpolation or cubic convolution interpolation, was shown to cause significant errors when applied to non-uniform
samples. Moreover, cubic splines and Lagrange interpolation were selected as candidate resampling algorithms for the
FCI rectification that can cope with irregularities in the sampling acquisition process.
This paper extends the study to the two-dimensional case, focusing on practical 2D interpolation methods and their
feasibility for an operational implementation. Candidate kernels are described and assessed with respect to MTG
requirements. The operational constraints of the Level 1 processor have been considered to develop an early image
rectification prototype, including the impact of the potential curvature of the FCI scan swaths. The implementation
follows a swath-based approach, uses parallel processing to speed up computation time and allows the selection of a
number of resampling functions. Due to the tight time constraints of the FCI level 1 processing chain, focus is both on
accuracy and performance. The presentation will show the results of our prototype with simulated FCI L1b data.
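As a small illustration of one of the candidate schemes, Lagrange interpolation applies directly to non-uniform sample positions; the sample positions and values below are synthetic:

```python
import numpy as np

def lagrange_interp(x_samples, y_samples, x):
    """Evaluate the Lagrange interpolating polynomial through the given
    (possibly non-uniform) sample positions at point x. One of the
    candidate resampling schemes for irregular acquisition grids."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_samples, y_samples)):
        term = yi
        for j, xj in enumerate(x_samples):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Non-uniform sample positions, as produced by a tapered scan pattern.
xs = np.array([0.0, 0.9, 2.1, 3.05])
ys = xs ** 2 - xs          # samples of a quadratic; cubic Lagrange is exact

value = lagrange_interp(xs, ys, 1.5)
print(round(float(value), 6))
```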

The paper is devoted to Earth-surface image formation by means of multi-matrix scanning cameras. The formation of continuous and spatially combined images consists of consistent solutions for radiometric scan correction, stitching, and geo-referencing of multispectral images. A radiometric scan correction algorithm based on statistical analysis of the input images is described, along with an algorithm for sub-pixel stitching of scans into the single continuous image that would be formed by a virtual scanner. The paper also contains algorithms for geometrically combining multispectral images obtained at different moments, and examples illustrating the effectiveness of the suggested processing algorithms.

This paper deals with lossy compression of images corrupted by additive white Gaussian noise. For such images, compression can be characterized by the existence of an optimal operation point (OOP). At the OOP, the MSE or another metric computed between the compressed and noise-free images reaches an optimum, i.e., the maximal noise removal effect takes place. If an OOP exists, then it is reasonable to compress an image in its neighbourhood. If not, more conservative ("careful") compression is reasonable. In this paper, we demonstrate that the existence of an OOP can be predicted based on a very simple and fast analysis of discrete cosine transform (DCT) statistics in 8x8 blocks. Moreover, the OOP can be predicted not only for conventional metrics such as MSE or PSNR but also for visual quality metrics. Such prediction can be useful in automatic compression of multi- and hyperspectral remote sensing images.
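The kind of fast DCT-domain block statistic involved can be sketched as follows; the test image, the MAD-based scale estimate, and the numbers are illustrative, not the paper's actual OOP predictor:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)

# Noisy test image: smooth ramp plus additive white Gaussian noise.
x, y = np.meshgrid(np.arange(64), np.arange(64))
image = 0.5 * x + 0.3 * y + rng.normal(scale=5.0, size=(64, 64))

# Collect AC coefficients of the 2D DCT in 8x8 blocks; for AWGN their
# spread in mostly smooth blocks tracks the noise standard deviation,
# the kind of fast statistic usable to predict whether an OOP exists.
ac = []
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        coeffs = dctn(image[r:r+8, c:c+8], norm='ortho')
        coeffs[0, 0] = 0.0            # drop the DC term
        ac.append(coeffs.ravel())
ac = np.concatenate(ac)
noise_std_estimate = np.median(np.abs(ac)) / 0.6745   # robust scale estimate
print(round(float(noise_std_estimate), 2))
```

On this synthetic image the robust estimate lands close to the true noise standard deviation of 5.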

Detecting and recognizing artificial targets against complex ground clutter is a difficult problem in remote sensing of the Earth. By using the differences in polarization information between artificial objects and natural scenery, the ability to distinguish artificial targets from natural scenery can be promoted effectively. Because differences in polarization characteristics are an important factor in designing a target recognition method, this paper focuses on remote sensing and reconnaissance applications and presents detailed research on the long-wave infrared polarization characteristics of several typical metallic targets, such as aluminum and iron plates, as well as aluminum plates coated with black paint or yellow-green camouflage. The variation of the degree and angle of long-wave infrared polarization with measurement temperature is then analyzed. This work lays a theoretical foundation for the future design of remote sensing and detection systems based on infrared polarization information.
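The degree and angle of linear polarization can be computed from standard four-angle polarimetric measurements; the intensities below are an idealized example, not data from the paper:

```python
import numpy as np

def polarization_state(i0, i45, i90, i135):
    """Degree and angle of linear polarization from intensities measured
    behind a polarizer at 0, 45, 90 and 135 degrees (the standard Stokes
    recipe; the paper's measurement setup is not reproduced here)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))
    return dolp, aop

# Fully polarized light along the 0-degree axis: DoLP = 1, AoP = 0.
dolp, aop = polarization_state(1.0, 0.5, 0.0, 0.5)
print(round(float(dolp), 3), round(float(aop), 1))
```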

We are now in the era of massive automatic data collection, systematically obtaining many
measurements without knowing which data are appropriate for the problem at hand. In this paper, a feature selection approach
is discussed. The approach is based on the integration of a Genetic Algorithm and Particle Swarm Optimization. Support
Vector Machine classifier is used as fitness function and its corresponding overall accuracy on validation samples is used
as fitness value, in order to evaluate the efficiency of different groups of bands. The approach is carried out on the
well-known Salinas hyperspectral data set. Results confirm that the new approach is able to automatically select the most
informative features in terms of classification accuracy within an acceptable CPU processing time without requiring the
number of desired features to be set a priori by users.
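A toy genetic-algorithm band selection run can be sketched as follows; the fitness function is a synthetic stand-in for the SVM validation accuracy, and the PSO component of the hybrid approach is not included:

```python
import random

random.seed(0)

N_BANDS = 20
POP, GENS = 30, 40

# Toy stand-in for the SVM fitness: bands 3, 7 and 11 are "informative",
# and every extra band costs a little (a real system would train an SVM
# and use its validation accuracy here).
INFORMATIVE = {3, 7, 11}

def fitness(mask):
    hits = sum(1 for b in INFORMATIVE if mask[b])
    return hits - 0.05 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_BANDS)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

# Plain generational GA with tournament selection over band-subset masks.
pop = [[random.randint(0, 1) for _ in range(N_BANDS)] for _ in range(POP)]
for _ in range(GENS):
    new = []
    for _ in range(POP):
        a = max(random.sample(pop, 3), key=fitness)
        b = max(random.sample(pop, 3), key=fitness)
        new.append(mutate(crossover(a, b)))
    pop = new

best = max(pop, key=fitness)
selected = [i for i, bit in enumerate(best) if bit]
print(selected)
```

Note that the number of selected bands is an outcome of the search, not a user-supplied parameter, mirroring the property claimed above.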

In this paper, we propose a strategy for ocean slick classification in SAR images operating in a hybrid-polarimetric mode. The proposed scheme is successfully applied to classify mineral and plant oil slicks in SAR data covering oil spill experiments outside Norway and the Deepwater Horizon incident in the Gulf of Mexico. Using the elements of a hybrid-polarimetric coherency matrix as features, we construct a random forest classifier from training data obtained from a SAR image covering an oil-on-water exercise in the North Sea. The results show that we are able to distinguish mineral oil from plant oil and low-wind slicks; however, it is challenging to distinguish between the mineral oil types emulsion and crude oil. Due to the potential for wide swath widths, we conclude that hybrid-polarity is an attractive mode for future enhanced SAR-based oil spill monitoring.

Point pattern matching (PPM), including hard-assignment and soft-assignment approaches, has attracted much attention.
The typical probability-based method is the Coherent Point Drift (CPD) algorithm, which treats one point set (named the
model point set) as the centroids of a Gaussian mixture model and then fits it to the other (named the target point set). It
uses the expectation maximization (EM) framework, where the point correspondences and transformation parameters are
updated alternately. However, the anti-outlier performance of CPD is not robust enough, as outliers remain involved in the
computation until CPD converges. We therefore propose an automatic outlier suppression (AOS) mechanism to overcome
the shortcomings of CPD. First, inliers and outliers are judged by converting the matching probability matrix into a doubly
stochastic matrix. Then, transformation parameters are fitted using the accurate matching point sets. Finally, the model
point set is forced to move coherently to the target point set by this transformation model. The transformed model point
set is imported into the EM iteration again and the cycle repeats. The iteration finishes when the matching probability
matrix converges or the cardinality of the accurate matching point set reaches its maximum. In addition, the covariance is
updated with the newest position error before re-entering the EM algorithm. Experimental results based on both synthetic
and real data indicate that, compared with other algorithms, AOS-CPD is more robust and efficient. It offers good
practicability and accuracy in rigid PPM applications.
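The conversion of a matching probability matrix into a doubly stochastic one can be done with Sinkhorn-style alternating normalization, as sketched below; the matrix is illustrative and the sketch does not reproduce the full AOS inlier/outlier rule:

```python
import numpy as np

def sinkhorn(p, n_iter=200):
    """Alternately normalize rows and columns until the matrix is
    (approximately) doubly stochastic."""
    m = p.astype(float).copy()
    for _ in range(n_iter):
        m /= m.sum(axis=1, keepdims=True)   # rows sum to 1
        m /= m.sum(axis=0, keepdims=True)   # columns sum to 1
    return m

# Illustrative 3x3 matching probability matrix (strictly positive).
prob = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.60, 0.25],
    [0.10, 0.30, 0.60],
])

ds = sinkhorn(prob)
print(np.round(ds.sum(axis=0), 6), np.round(ds.sum(axis=1), 6))
```

For strictly positive matrices the iteration converges; points whose normalized matching probabilities remain spread out (no dominant correspondence) are natural outlier candidates.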

Using a dataset from the 2013 IEEE Data Fusion Contest, a basic study to classify urban land cover was carried out. The spectral reflectance characteristics of surface materials were investigated from airborne hyperspectral (HS) data acquired by the CASI-1500 imager over Houston, Texas, USA. The HS data include 144 spectral bands in the visible to near-infrared (380 nm to 1050 nm) region. A multispectral (MS) image acquired by the WorldView-2 satellite was also introduced in order to compare it with the HS image. A field measurement campaign in the Houston test site was carried out by the present authors using a handheld spectroradiometer. The reflectance of surface materials obtained by the measurement was compared with the pseudo-reflectance of the HS data, and they showed good agreement. Finally, a principal component analysis was conducted on the HS and MS data and the result was discussed.

Nowadays there is an increasing demand for detailed 3D modeling of buildings using elevation data such as those
acquired from LiDAR airborne scanners. The various techniques that have been developed for this purpose typically
perform segmentation into homogeneous regions followed by boundary extraction and are based on some combination of
LiDAR data, digital maps, satellite images and aerial orthophotographs. In the present work, our dataset includes an
aerial RGB orthophoto, a DSM and a DTM with spatial resolutions of 20 cm, 1 m and 2 m, respectively. A
normalized DSM (nDSM) is then generated and fused with the optical data in order to increase its resolution to 20 cm. The
proposed methodology can be described as a two-step approach. First, a nearest-neighbor interpolation is applied to the
low-resolution nDSM to obtain a low-quality, ragged elevation image. Next, a mean-shift-based discontinuity-preserving
smoothing is performed on the fused data. The outcome is, on the one hand, a more homogeneous RGB image with
smoothed terrace coloring that at the same time preserves the optical edges and, on the other hand, upsampled
elevation data with considerable improvement regarding region filling and the "straightness" of elevation discontinuities.
Besides the apparent visual assessment of the increased accuracy of building boundaries, the effectiveness of the
proposed method is demonstrated using the processed dataset as input to five supervised classification methods. The
performance of each method is evaluated using a subset of the test area as ground truth. Comparisons with classification
results obtained with the original data demonstrate that preprocessing the input dataset using the mean shift algorithm
improves significantly the performance of all tested classifiers for building block extraction.
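A minimal sketch of discontinuity-preserving smoothing in the spirit of mean shift, on a synthetic step edge; the window sizes and bandwidths are invented, and this is not the paper's exact filter:

```python
import numpy as np

def mean_shift_smooth(img, spatial_r=2, range_r=10.0, n_iter=5):
    """Each pixel value is iteratively replaced by the mean of neighbours
    (within a spatial window) whose values lie within the range bandwidth,
    so averaging never crosses large elevation/intensity discontinuities."""
    out = img.astype(float).copy()
    h, w = out.shape
    for _ in range(n_iter):
        new = out.copy()
        for r in range(h):
            for c in range(w):
                r0, r1 = max(0, r - spatial_r), min(h, r + spatial_r + 1)
                c0, c1 = max(0, c - spatial_r), min(w, c + spatial_r + 1)
                window = out[r0:r1, c0:c1]
                near = np.abs(window - out[r, c]) <= range_r
                new[r, c] = window[near].mean()
        out = new
    return out

# Step edge (two flat terraces) plus noise: smoothing flattens each side
# while keeping the discontinuity sharp.
rng = np.random.default_rng(3)
img = np.hstack([np.zeros((8, 8)), 100 * np.ones((8, 8))])
img += rng.normal(scale=1.0, size=img.shape)

smooth = mean_shift_smooth(img)
edge_jump = smooth[:, 8].mean() - smooth[:, 7].mean()
print(round(float(edge_jump)))
```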

In urban areas, shadows cast by buildings, trees along the road, abundant objects, and complex image texture make road extraction from very high resolution RGB aerial images very difficult and challenging. We propose a method of road extraction from RGB aerial images with the following steps: shadow removal, enhanced Sobel transform, keypoint extraction based on Maximally Stable Extremal Regions (MSER), feature extraction based on Speeded Up Robust Features (SURF), and road construction based on multi-resolution segmentation. The experimental results show that the proposed method achieves good results.
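The Sobel edge-response step can be illustrated with the standard 3x3 kernels on a toy image (the enhancements mentioned above are not reproduced):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the standard 3x3 Sobel kernels, the kind
    of edge response used as one cue for road construction."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r-1:r+2, c-1:c+2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[r, c] = np.hypot(gx, gy)
    return out

# Vertical step edge: the response concentrates on the edge columns.
img = np.hstack([np.zeros((6, 4)), np.ones((6, 4))])
mag = sobel_magnitude(img)
print(mag[3, 3], mag[3, 1])
```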

Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input – together with initial road database entries – for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. 
Depending on time constraints and the availability of a geo-database for buildings, the urban terrain reconstruction procedure outputs semantic models of buildings, trees, and ground. Buildings and ground are textured by means of the available images. This facilitates orientation in the model and the interactive verification of the road objects that were initially classified as unknown. The three main modules of the texturing algorithm are: pose estimation (if the videos are not geo-referenced), occlusion analysis, and texture synthesis.
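One simple way to map the two per-method distributions to the three states correct/incorrect/unknown is Dempster-Shafer-style discounting, as sketched below; the paper's exact combination rule is not reproduced:

```python
def fuse_road_evidence(p_correct, p_applicable):
    """Map the road-state distribution (correct/incorrect) and the
    model-state distribution (applicable/not applicable) to a single mass
    assignment over {correct, incorrect, unknown}. Simple Dempster-Shafer
    discounting: when the road model does not apply, all of its evidence
    is transferred to 'unknown'."""
    return {
        "correct": p_applicable * p_correct,
        "incorrect": p_applicable * (1.0 - p_correct),
        "unknown": 1.0 - p_applicable,
    }

# A road judged 90% correct by a model that is 80% applicable.
masses = fuse_road_evidence(p_correct=0.9, p_applicable=0.8)
print(masses)
```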

Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise-ratio model in their formulation, which
makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise; a very
comprehensive comparative study on the subject was done by Gao et al. [1]. In a nutshell, most techniques are based on
the idea of calculating the standard deviation from assumed-to-be homogeneous regions in the image. Some of these
algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in
order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but also on the estimation
of the noise itself.
A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The de-noising
algorithm is based on estimation of the end-members and their fractional abundances using the non-negative least
squares method. The end-members are extracted using the well-known simplex volume optimization technique called
N-FINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated
end-members and abundances. In fact, image de-noising and noise estimation are two sides of the same coin: once we
de-noise an image, we can estimate the noise by calculating the difference between the de-noised image and the original
noisy image.
In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best-performing method for noise estimation was spectral and spatial de-correlation (SSDC), originally proposed in [3], the proposed method is compared to SSDC. The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, it is shown that the proposed technique demonstrates robust behavior with respect to changes in its single parameter, namely the number of end-members.
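The residual-based estimate described above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the NNLS unmixing step is shown, while the N-FINDR end-member extraction is assumed to have been run beforehand):

```python
import numpy as np
from scipy.optimize import nnls

def estimate_noise(hsi, endmembers):
    """Estimate per-pixel noise as the residual of a non-negative
    least-squares reconstruction from a fixed end-member set.
    hsi: (n_pixels, n_bands); endmembers: (n_bands, n_members)."""
    recon = np.empty_like(hsi)
    for i, pixel in enumerate(hsi):
        abundances, _ = nnls(endmembers, pixel)  # fractional abundances >= 0
        recon[i] = endmembers @ abundances       # de-noised pixel
    return hsi - recon                           # noise = noisy - de-noised
```

On a purely synthetic mixture with no added noise, the residual is zero by construction, which is a quick sanity check of the reconstruction step.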

We present a hyperspectral image enhancement technique that utilizes spectral angle information to improve the local contrast of shadow regions and to increase the spatial resolution of the output color image produced by the enhancement process. The proposed visibility improvement technique follows a two-stage approach. The first stage of the algorithm improves the
contrast within the image, thus enhancing the textural details of the scene. To minimize the
effects of illumination variations on the visibility of objects in the scene, the spectral angle
mapper (SAM) is employed, which allows the local pixel information to be insensitive to
changes in illumination. A color restoration process is used to provide an enhanced color image
from the computed spectral angles between the reference spectrum and the unknown spectra. This step
enables us to colorize the output image along with the enhanced shadow regions. In the second
stage, the spatial resolution of the contrast-enhanced image is increased by applying a single-image super-resolution technique to the enhanced image. The super-resolution technique employs a nonlinear interpolation based on multi-level local Fourier phase features. The combination of the enhancement, color restoration, and super-resolution approaches provides better visibility of objects in the shadow regions. The effectiveness of the proposed technique is verified using real-world hyperspectral data.
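The spectral angle mapper at the core of the first stage reduces, per pixel, to the angle between two spectra (a minimal sketch; the function name and array layout are illustrative):

```python
import numpy as np

def spectral_angle(ref, pixels):
    """Spectral Angle Mapper (SAM): angle between a reference spectrum
    and each pixel spectrum. Scaling a spectrum leaves the angle
    unchanged, which makes the measure insensitive to illumination.
    ref: (n_bands,); pixels: (n_pixels, n_bands); returns radians."""
    cos = (pixels @ ref) / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

A pixel that is a scaled copy of the reference (same material, different illumination) has angle 0, while an orthogonal spectrum has angle π/2.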

Recent studies on global anomaly detection (AD) in hyperspectral images have focused on non-parametric approaches that seem particularly suitable for detecting anomalies in complex backgrounds without the need to assume any specific model for the background distribution. Among these, AD algorithms based on the kernel density estimator (KDE) benefit from the flexibility provided by KDE, which attempts to estimate the background probability density function (PDF) regardless of its specific form. The high computational burden associated with KDE requires that KDE-based AD algorithms be preceded by a suitable dimensionality reduction (DR) procedure aimed at identifying the subspace where most of the useful signal lies. In most cases, this may lead to a degradation of the detection performance due to the leakage of some anomalous target components into the subspace orthogonal to the one identified by the DR procedure. This work presents a novel subspace-based AD strategy that combines the use of KDE with a simple parametric detector performed on the orthogonal complement of the signal subspace, in order to benefit from the non-parametric nature of KDE and, at the same time, avoid the performance loss that may occur due to the DR procedure. Experimental results indicate that the proposed AD strategy is promising and deserves further investigation.
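A minimal sketch of the KDE-based scoring idea, using SciPy's Gaussian KDE as a stand-in for whatever estimator the cited algorithms employ:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_anomaly_scores(background, test_pixels):
    """Non-parametric anomaly score: negative log of a kernel density
    estimate of the background PDF; pixels falling in low-density
    regions receive high scores.
    background, test_pixels: (n_features, n_samples) arrays."""
    kde = gaussian_kde(background)        # bandwidth set by Scott's rule
    return -np.log(kde(test_pixels) + 1e-300)  # guard against log(0)
```

Because the density is estimated directly from the background samples, no parametric form (e.g., Gaussian) is imposed on the background PDF.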

Anomalous change detection (ACD) in HyperSpectral Images (HSIs) is a challenging task aimed at detecting the set of pixels that have undergone a relevant change with respect to a previous acquisition. Two main problems arise in ACD: a) the two multi-temporal HSIs are not radiometrically comparable, because they are usually collected under different atmospheric/illumination conditions; b) it is difficult to obtain a perfect alignment of the two images, especially when the sensor is mounted on airborne platforms. Several algorithms have been proposed in the past to deal with the radiometric differences in the multi-temporal image pair. Most of them assume spatial stationarity of the atmospheric/illumination conditions within each of the two images and do not account for the possible presence of shadows. We propose a new ACD scheme that is robust to space-variant acquisition conditions. The ACD task is performed on two feature images extracted individually from each HSI. The feature images are selected to guarantee robustness to the space-variant acquisition conditions in both HSIs; they are the decision statistics provided by the RX anomaly detection algorithm applied individually to each HSI. In the paper, the advantages and limits of the new ACD strategy are discussed, and the results obtained by comparing its performance with that of a state-of-the-art ACD algorithm on real data are presented.
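The RX decision statistic used here as a feature image is the squared Mahalanobis distance of each pixel from the scene mean; a global (non-local) sketch:

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly detector: squared Mahalanobis distance of each
    pixel spectrum from the scene mean, using the scene covariance.
    pixels: (n_pixels, n_bands); returns one score per pixel."""
    mu = pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(pixels, rowvar=False))
    diff = pixels - mu
    # diff_i^T  C^-1  diff_i for every pixel i, without explicit loops
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
```

Because the statistic is normalized by the scene mean and covariance, it is comparatively stable under space-variant illumination, which is what makes it usable as a common feature for the two acquisitions.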

This work aims at developing an approach to the detection of changes in multisensor multitemporal VHR optical images. The main steps of the proposed method are: i) multisensor data homogenization; and ii) change detection in multisensor multitemporal VHR optical images. The proposed approach takes advantage of the conversion to physical quantities suggested by Pacifici et al.,1 the framework for the design of systems for change detection in VHR images presented by Bruzzone and Bovolo,2 and the framework for unsupervised change detection presented by Bovolo and Bruzzone.3 Multisensor data homogenization is achieved during pre-processing by taking into account differences in both the radiometric and geometric dimensions. Change detection, in turn, is approached by extracting from the multisensor images proper features that are comparable (at a given level of abstraction) even when extracted from images acquired by different sensors. In order to illustrate the results, a data set made up of a QuickBird and a WorldView-2 image, acquired in 2006 and 2010 respectively over an area located in the Trentino region of Italy, was used. However, the proposed approach is expected to be exportable to multitemporal images coming from passive sensors other than the two mentioned above. The experimental results obtained on the QuickBird and WorldView-2 image pair are accurate, thus opening the way to further experiments on multitemporal images acquired by other sensors.

Automatic cloud masking is one of the first required processing steps, since the operational use of satellite image time series might be hampered by undetected clouds. The high temporal revisit of current and forthcoming missions allows us to consider cloud screening as an unsupervised change detection problem in the temporal domain. Therefore, we propose a cloud screening method based on detecting abrupt changes in the temporal domain. The main assumption is that image time series follow smooth variations over land (background), so abrupt changes in certain spectral and spatial features will be mainly due to the presence of clouds. The method estimates the background and common surface changes using the full information in the time series. In particular, we propose linear and nonlinear least-squares regression algorithms that minimize both the prediction and estimation errors simultaneously. Then, significant differences in the image of interest with respect to the estimated background are identified as clouds. The use of kernel methods allows the generalization of the algorithm to account for higher-order (nonlinear) feature relations. After cloud detection, cloud-free time series at high spatial resolution can be used to obtain better monitoring of land cover dynamics and to generate more elaborate products. The proposed method is tested on a dataset with 5-day-revisit time series from SPOT-4 at high resolution and on Landsat-8 time series.
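The background-estimation idea can be illustrated with ordinary linear least squares (the actual algorithms also cover the nonlinear, kernel case; the names and data layout below are illustrative):

```python
import numpy as np

def temporal_residuals(series, t):
    """Background-based cloud score sketch: predict the image at date t
    from the other dates of a co-registered series by linear least
    squares (pixels as samples, dates as predictors); large residuals
    flag abrupt changes such as clouds. series: (n_dates, n_pixels)."""
    X = np.delete(series, t, axis=0).T            # (n_pixels, n_dates-1)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # intercept column
    y = series[t]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(y - X @ coef)                   # per-pixel residual
```

Pixels whose temporal behavior is consistent with the rest of the series get near-zero residuals, while a sudden brightening (a cloud candidate) stands out.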

This paper presents a novel semisupervised learning (SSL) technique defined in the context of ε-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved in two consecutive steps. The first step injects additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that assigns higher weights to the samples located in the high-density regions of the feature space, while giving reduced weights to those that fall into the low-density regions. Then, in order to exploit the different weights of the training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step jointly exploits labeled and informative unlabeled samples to further improve the definition of the WSVR learning function. To this end, the most informative unlabeled samples, whose target values are expected to be accurate, are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated in the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in its learning phase and tunes their importance through different values of the regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
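The weighting idea can be sketched with scikit-learn's SVR, whose sample_weight argument rescales each sample's contribution to the loss; the density-based weighting below is a simple stand-in for the paper's strategy, not its actual definition:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def density_weights(X, k=5):
    """Heuristic stand-in for the paper's weighting strategy (assumption):
    inverse mean distance to the k nearest neighbours, so samples in
    dense regions of the feature space receive higher weights."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)  # skip self-distance

# Weighted epsilon-insensitive SVR: sample_weight rescales each training
# sample's contribution to the loss, mimicking the WSVR idea.
rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 2))
y = X[:, 0] + 0.1 * X[:, 1]
wsvr = SVR(epsilon=0.01).fit(X, y, sample_weight=density_weights(X))
```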

Particulate matter (PM), emitted by vehicles in urban traffic, can greatly affect ambient air quality and has direct implications for both human health and infrastructure integrity. The consequences for society are relevant and can also impact national health systems. Limits and thresholds for pollutants emitted by vehicles are typically regulated by government agencies. In the last few years, interest in PM emissions has grown substantially due to both air quality issues and global warming. Lidar-DIAL techniques are widely recognized as a cost-effective alternative for monitoring large regions of the atmosphere. To maximize the effectiveness of the
measurements and to guarantee reliable, automatic monitoring of large areas, new data analysis techniques are
required. In this paper, an original tool, the Universal Multi-Event Locator (UMEL), is applied to the problem of
automatically identifying the time location of peaks in Lidar measurements for the detection of particulate
matter emitted by anthropogenic sources like vehicles. The method developed is based on Support Vector
Regression and presents various advantages with respect to more traditional techniques. In particular, UMEL is
based on the morphological properties of the signals and therefore the method is insensitive to the details of the
noise present in the detection system. The approach is also fully general and purely software-based, and can therefore be
applied to a large variety of problems without any additional cost. The potential of the proposed technique is
exemplified with the help of data acquired during a field campaign in Rome.
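UMEL's implementation is not reproduced here; the following generic sketch only illustrates how an SVR fit can localize a peak from the signal's morphology rather than from the details of the noise:

```python
import numpy as np
from sklearn.svm import SVR

def locate_peak(t, signal):
    """Generic sketch (not the published UMEL implementation): fit an
    RBF support vector regression to the raw signal and locate the peak
    on the smooth fit; the epsilon-insensitive loss makes the fit
    follow the signal's shape rather than the noise."""
    fit = SVR(kernel='rbf', C=10.0).fit(t[:, None], signal)
    return t[np.argmax(fit.predict(t[:, None]))]
```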

The detection of impervious surfaces is an important issue in the study of urban and rural environments. Imperviousness refers to water’s inability to pass through a surface. Although impervious surfaces represent a small percentage of the Earth’s surface, knowledge of their locations is relevant to planning and managing human activities. Impervious structures are primarily manmade (e.g., roads and rooftops). Impervious surfaces are an environmental concern because many processes that modify the normal function of land, air, and water resources are initiated during their construction. This paper presents a novel method of identifying impervious surfaces using satellite images and light detection and ranging (LIDAR) data. The inputs for the procedure are SPOT images formed by four spectral bands (corresponding to red, green, near-infrared and mid-infrared wavelengths), a digital terrain model, and an .las file. The proposed method computes five decision indexes from the input data to classify the studied area into two categories: impervious (subdivided into buildings and roads) and non-impervious surfaces. The impervious class is divided into two subclasses because the elements forming this category (mainly roads and rooftops) have different spectral and height properties, and it is difficult to combine these elements into one group. The classification is conducted using a decision tree procedure. For every decision index, a threshold is set for which every surface is considered impervious or non-impervious. The proposed method has been applied to four different regions located in the north, center, and south of Spain, providing satisfactory results for every dataset.

Synthetic Aperture Radar (SAR) is the most widely used spaceborne sensor for ship detection, but optical sensors are increasingly used in addition to it. The combined use of these sensors in an operational framework is a major factor in the efficiency of current systems; it is also a source of their increased complexity. Optical and SAR signals of a maritime scene have many similarities, which allow us to define the common detection approach presented in this paper. Beyond the definition of a single algorithm for both types of data, this study aims to define an algorithm for the detection of vessels of any size in images of any resolution. After studying the signatures of vessels, this second goal led us to define a detection strategy based on multi-scale processing. It has been implemented in a processing chain with two major steps: first, targets that are potentially vessels are identified using a Discrete Wavelet Transform (DWT) and a Constant False Alarm Rate (CFAR) detector; second, among these targets, false alarms are rejected using multi-scale reasoning on the contours of the targets. The definition of this processing chain is made with respect to three constraints: the detection rate should be 100%, the false alarm rate should be as low as possible, and the processing time must be compatible with operations at sea. The method was developed and tested on a very large data set containing real images and associated detections. The obtained results validate this approach, but with limitations mainly related to the sea state.
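The CFAR stage of the chain can be illustrated with a textbook 1-D cell-averaging detector (a sketch under an exponential-clutter assumption, not the chain's actual implementation):

```python
import numpy as np

def ca_cfar(signal, guard=2, train=8, pfa=1e-3):
    """1-D cell-averaging CFAR sketch: a cell is declared a detection
    when it exceeds the local clutter level (mean of the training cells,
    guard cells excluded) times a factor set by the desired false-alarm
    rate, assuming exponentially distributed clutter power."""
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)  # threshold factor
    hits = np.zeros(len(signal), dtype=bool)
    for i in range(guard + train, len(signal) - guard - train):
        lead = signal[i - guard - train:i - guard]
        lag = signal[i + guard + 1:i + guard + train + 1]
        hits[i] = signal[i] > alpha * np.concatenate([lead, lag]).mean()
    return hits
```

Because the threshold adapts to the local clutter estimate, the false-alarm rate stays approximately constant as the sea state varies across the image.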

Atmospheric motion vectors (AMVs) in cloud-free regions cannot be obtained with current operational cloud-motion tracking and water-vapor channel algorithms. The motivation of this study is to introduce a supplementary algorithm to derive low-level AMVs in clear areas from FY-2E long-wave window (10.3~11.5, 11.6~12.8 μm) channel imagery. It has been shown that the weak signals indicating water vapor in “cloud-free regions” can be extracted from FY-2E long-wave infrared imagery and may be used as tracers for atmospheric motion vectors. The algorithm, named the Second-Order Difference method, has been developed to weaken the interference of the surface temperature with the weak water-vapor signals in “cloud-free regions” by means of split-window and temporal difference calculations. Results from theoretical analysis and case studies show that this method can fill in wind data in regions lacking cloud but rich in water vapor, and a comparison between the wind vectors from this method and the NCEP reanalysis data shows good consistency.

In this paper, we propose an unsupervised change detection method using the labeled co-occurrence matrix on multitemporal SAR images. In SAR images, each land cover (LC) class has a distinct reflectivity to radar signals and presents a specific backscattering value. Generally, the amplitude of SAR images can be roughly clustered into three classes according to the backscattering behaviors of the LC classes. The changes that occurred between the images can be considered as a backscattering variation from one backscattering class to another. Accordingly, we analyzed the possible cases of positive and negative backscattering variations and merged the initial three backscattering classes into two, with pixels in the medium backscattering class being assigned to the strong and the low backscattering classes, respectively, with a membership degree. Two pairs of fuzzy-label images are derived accordingly, where each pair of fuzzy-label images is computed from the multi-temporal SAR data. The labeled co-occurrence matrix is computed locally on each pair of fuzzy-label images by combining the membership values in a conjunctive operator, and the autocorrelation feature is extracted. The classifications are implemented by the Otsu N-thresholding algorithm on the two derived autocorrelation features. The final binary change detection map is achieved by combining the two classification results. Experiments were carried out on portions of multi-temporal Radarsat-1 SAR data, and the effectiveness of the proposed approach was confirmed.

This paper investigates the problem of detecting changes in multitemporal SAR imagery in an unsupervised way. A novel change indicator was developed to identify the temporal changes. It is computed as the ratio between the local average of the amplitude ratio and the exponentiation of the local average of the logarithm-transformed amplitude ratio. Compared with the classical ratio of local means, the novel operator is more effective in identifying changed pixels even when the local means are preserved. The classification is implemented by an automatic thresholding algorithm derived
from a new Riemannian metric defined in the differential geometry structure. The geodesic distance derived from the
new Riemannian metric provides a way to compare the distance between the probability distributions of the changed
class and the non-changed class. The probability density functions of the changed and non-changed classes are
characterized over the photometric variable. By maximizing the distance between the probability density distributions of
the two classes, the misclassification errors are minimized and the optimal threshold is achieved accordingly.
Experiments were carried out on portions of multi-temporal Radarsat-1 SAR data. The obtained accuracies confirm the
effectiveness of the proposed approach.
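One plausible reading of the indicator (an assumption, since the exact formula is not given in the abstract) is the local arithmetic-to-geometric mean ratio of the amplitude ratio:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def change_indicator(img1, img2, size=5, eps=1e-12):
    """Sketch of the indicator under one plausible reading: local
    arithmetic mean of the amplitude ratio divided by its local
    geometric mean (the exponentiated local mean log-ratio). It equals
    1 where the ratio is locally constant and grows with the local
    dispersion of the ratio, even when the local means are preserved."""
    ratio = (img1 + eps) / (img2 + eps)
    arith = uniform_filter(ratio, size)
    geom = np.exp(uniform_filter(np.log(ratio), size))
    return arith / geom  # AM/GM inequality guarantees values >= 1
```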

Polarization orientation angle (POA) correction to compensate for terrain effects on polarimetric SAR data has been investigated in the literature. POA rotation can be derived from digital elevation model (DEM) and/or from polarimetric SAR (PolSAR) data through covariance/coherency analysis. A robust analytic model connecting PolSAR data and products (e.g., POA) to target/scene terrain characteristics can serve two main objectives. First is to correct and calibrate PolSAR data acquired from different SARs when DEM is available. Second is to model terrain through inverse solution of POAs derived from PolSAR data analysis. This formalism has been developed and is presented here. Effectiveness of the technique in providing both forward (POAs from DEM) and inverse (DEM from POAs) solutions is explored through imagery product examples and simulations.

The exploitation of a multi-temporal stack of SAR intensity images seems to provide satisfactory results in flood detection problems when different spectral signatures in the presence of inundation are observed. Moreover, the use of interferometric coherence information can further help in the discrimination process. Besides the remote sensing data, additional information can be used to improve flood detection. We propose a data fusion approach, based on Bayesian Networks (BNs), to analyze an inundation event involving the Bradano river in the Basilicata region, Italy. Time series of COSMO-SkyMed stripmap SAR images are available over the area. The following random variables have been considered in the BN scheme: F, a discrete variable consisting of two states, flood and no flood; the n-dimensional variable i, obtained from the SAR intensity imagery; the m-dimensional variable γ, obtained from the InSAR coherence imagery; and the shortest distance d of each pixel from the river course. The proposed BN approach allows us to independently evaluate the conditional probabilities P(i|F), P(γ|F) and P(F|d), and then to combine them to infer the value P(F = flood|i, γ, d), obtaining probabilistic flood maps (PFMs). We evaluate these PFMs through comparisons with reference flood maps, obtaining overall accuracies higher than 90%.

An automatic SAR and optical image registration method based on improved SIFT is proposed in this paper. It follows a two-step, coarse-to-fine strategy. The geometric relation between the images is first established from the geographic information, and the images are arranged based on the elevation datum plane to eliminate rotation and resolution differences. Then SIFT features, extracted from the two images by the dominant-direction-improved SIFT, are matched using SSIM as the similarity measure, according to the structural information of the SIFT features. As the rotation difference is eliminated in images of flat areas after rough registration, the number of correct matches and the correct matching rate can be increased by altering the feature orientation assignment. Parallax and angle restrictions are then introduced to improve the matching performance through clustering analysis in the angle and parallax domains. The original matches are mapped, in sequence, to the parallax feature space and the rotation feature space, which are established by custom-defined parallax and rotation parameters, respectively. Cluster analysis is applied in the parallax and rotation feature spaces, and the relationship between the cluster parameters and the matching result is analysed. Owing to the clustering, correct matches are retained. Finally, the perspective transform parameters for the registration are obtained by the RANSAC algorithm, which simultaneously removes false matches. Experiments show that the algorithm proposed in this paper is effective in the registration of SAR and optical images with large differences.

In this paper we present the results of research carried out to assess the usefulness of wavelet-based measures of image texture for classification of panchromatic VHR satellite image content. The study is based on images obtained from the EROS-A satellite. Wavelet-based features are calculated according to two approaches. In the first, the wavelet energy is calculated for each component of every level of decomposition using the Haar wavelet. In the second, the variance and kurtosis are calculated as mean values of the detail components, with filters belonging to the D, LA, and MB groups of various lengths. The results indicate that both approaches are useful and complement one another. The most useful wavelet-based features include not only those calculated with short or long filters, but also those calculated with filters of intermediate length. The use of filters of different type and length, as well as of different statistical parameters (variance, kurtosis) calculated as means for each decomposition level, improved the discriminative properties of the feature vector, which initially consisted of the wavelet energies of each component.

Ground Control Points (GCPs) are widely used in geometric correction of remote sensing imagery, and their distribution is a key factor affecting the accuracy and quality of the image correction. In this paper, we propose a new sampling design method, called Smallest Singular Value-based Sampling (SSVS), to obtain the optimal distribution of the GCPs. When the geometric correction of remote sensing imagery is performed with a 2D or 3D polynomial function model, the estimation of the geometric correction model parameters can be interpreted as an estimation of regression coefficients in a Multiple Linear Regression (MLR) model whose design matrix depends on the coordinates of the GCPs. From the perspective of the regression model, the design matrix of the MLR should be optimized to obtain the most accurate regression coefficients. In this paper, it is proved that the Smallest Singular Value (SSV) of the design matrix is inversely proportional to the upper bound of the estimation errors. By choosing the optimal distribution of GCPs, the SSV of the design matrix can be maximized and the upper bound of the estimation errors minimized. Therefore, the SSV of the design matrix is used as a criterion, and the objective of SSVS is to find the sampling pattern with the largest SSV. In this paper, simulated annealing is employed to search for the optimal pattern. Two experiments were carried out to test SSVS. The results indicate that SSVS is an effective GCP sampling design method and can be applied to evaluate the upper bound of the estimation error.
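The SSV criterion itself is straightforward to compute; a sketch for a first-order 2D polynomial model (the simulated-annealing search over candidate patterns is omitted):

```python
import numpy as np

def ssv_criterion(gcps, order=1):
    """Sampling-design criterion: the smallest singular value of the
    polynomial design matrix built from candidate GCP coordinates.
    A larger SSV means a smaller upper bound on the estimation error of
    the regression coefficients, so the GCP pattern that maximizes the
    SSV is preferred. gcps: (n_points, 2) array of (x, y)."""
    x, y = gcps[:, 0], gcps[:, 1]
    cols = [x**i * y**j
            for i in range(order + 1) for j in range(order + 1 - i)]
    return np.linalg.svd(np.column_stack(cols), compute_uv=False).min()
```

Degenerate layouts (e.g., nearly collinear GCPs) drive the SSV toward zero, which is exactly the configuration the criterion penalizes.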

Research on target detection in hyperspectral imagery (HSI) has recently drawn much attention in many areas. Due to the limited spatial resolution of HSI sensors, a target of interest normally occupies only a few pixels, sometimes even appearing at subpixel level. This increases the difficulty of target detection. Moreover, in some cases, such as rescue and surveillance tasks, small targets carry the most significant information. It is therefore difficult but important to effectively detect small targets of interest. Modeling an HSI data cube as a three-dimensional tensor preserves as much as possible of the original spatial-spectral structure, which makes it possible to exploit the whole of the information for small target detection. This paper proposes a novel and effective algorithm for small target detection in HSI based on three-dimensional principal component analysis (3D-PCA). In the 3D-PCA decomposition, the significant components usually contain most of the information of the imagery, whereas the details of small targets reside in the insignificant components. So, after 3D-PCA is applied to the HSI, the significant components, which represent the background of the HSI, are removed, and the insignificant components are used to detect small targets. The algorithm stands out thanks to its tensor-based formulation, which processes the HSI directly and makes full use of the spatial and spectral information by employing multilinear algebra. Experiments with a real HSI show that the detection probability of small targets of interest is greatly improved compared to that of the classical RX detector.

The paper presents an accuracy comparison of subpixel classification based on medium-resolution Landsat images, performed using machine learning algorithms built on decision and regression trees (C5.0/Cubist and Random Forest) and artificial neural networks. The aim of the study was to obtain the pattern of percentage impervious surface coverage valid for the period 2009-2010. Imperviousness index map generation was a two-stage procedure. The first step was a classification that divided the study area into two categories: i) completely permeable areas (imperviousness index less than 1%) and ii) fully or partially impervious areas. For pixels classified as impervious, the percentage of impervious surface coverage within the pixel area was estimated. The root mean square errors (RMSE) of the estimated percentage of impervious surface within a single pixel were 11.0% for the C5.0/Cubist method, 11.3% for the Random Forest method and 12.6% for the artificial neural networks. The introduction of the initial hard classification into completely permeable areas (imperviousness index <1%) and impervious areas improved the accuracy of the imperviousness index estimation in poorly urbanized areas, which cover large parts of the Dobczyce Reservoir catchment. The effect is also visible on the final imperviousness index maps.
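The two-stage procedure can be sketched with scikit-learn's random forests (an illustrative reconstruction; the paper also uses C5.0/Cubist and neural networks):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def two_stage_imperviousness(X_train, frac_train, X_test):
    """Two-stage sketch: a hard classification into completely permeable
    (index < 1%) vs impervious pixels, followed by a regression of the
    impervious fraction on the pixels classified as impervious."""
    permeable = frac_train < 0.01
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train, permeable)
    reg = RandomForestRegressor(n_estimators=50, random_state=0)
    reg.fit(X_train[~permeable], frac_train[~permeable])
    pred = np.zeros(len(X_test))
    impervious = ~clf.predict(X_test)  # pixels predicted impervious
    if impervious.any():
        pred[impervious] = reg.predict(X_test[impervious])
    return pred  # permeable pixels keep an index of exactly 0
```

Pixels classified as permeable are fixed at zero, which is what suppresses the spurious low imperviousness values in poorly urbanized areas.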

In this paper, precise geometric correction of Landsat-8 images based on Kalman filtering with ground control points (GCPs) is described. The matching pixels, the GCPs, and the systematically corrected image are integrated to estimate the errors in the position, velocity, and attitude of the Landsat-8 satellite, and a Kalman filter is used to obtain the optimal solution. Experiments demonstrate that, for precision mapping, comparable accuracy can be reached by applying the Kalman filter with fewer GCPs than the least-squares iteration method requires.
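The measurement-update step at the heart of such a scheme is the standard Kalman correction (a generic sketch; the paper's actual state vector and measurement model are not reproduced):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Single Kalman measurement update: fuse a GCP-derived measurement
    z into the state estimate x (covariance P). H maps the state to the
    measurement space; R is the measurement noise covariance."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

Each GCP tightens the state estimate a little, which is why fewer control points can still reach the accuracy of a batch least-squares fit.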

In multi-channel SAR, the spectrum of the clutter is spatially-temporally coupled, and the echo of a moving target is a chirp signal. A novel method based on STAP and the FrFT is proposed in this paper for moving target detection and parameter estimation. Two steps are used for fast target detection: coarse detection at low range resolution, and parameter estimation on the specific data where the moving target appears. This paper first discusses the principle of frequency-domain STAP for clutter suppression, and then shows that the signal after clutter suppression is a chirp signal. The FrFT is then introduced to estimate the parameters of the output signal, which can in turn be used to estimate the parameters of the moving target. Finally, the overall process of the proposed method is described: a matching function is constructed to compensate for the phase deviation caused by the motion and to focus the moving target. The effectiveness of the proposed method is validated by simulation.

Scanning laser radar has been widely used in many military and civil areas. Usually there is relative movement between the target and the radar, so the modeling and simulation of moving-target images is an important research topic in the signal processing and system design of scan-imaging laser radar. In order to improve the simulation speed while maintaining the accuracy of the image simulation, a novel fast simulation algorithm is proposed in this paper. Firstly, for a moving target or a varying scene, an inequality that determines the intersection relations between a pixel and the target bins is obtained by deriving the projection of the target motion trajectories on the image plane. Then, by utilizing time subdivision and approximate treatments, the potential intersection relations between pixels and target bins are determined. Finally, the number of intersection operations is reduced by testing all the potential relations and finding which of them are real intersections. To test the method's performance, we performed computer simulations of both the newly proposed algorithm and an algorithm from the literature for six targets. The simulation results show that the two algorithms yield the same imaging result, whereas the number of intersection operations of the former is only about 1% of that of the latter, and the calculation efficiency increases a hundredfold. The proposed acceleration idea can be applied extensively in other, more complex application environments and provides an equal acceleration effect. It is particularly suitable for cases in which a very large number of laser radar images must be produced.

In this work, sub-terahertz imaging using Compressive Sensing (CS) techniques for targets placed behind a visibly opaque barrier is demonstrated both experimentally and theoretically. Using a multiplied Schottky-diode-based millimeter-wave source working at 118 GHz, metal cutout targets were illuminated in both reflection and transmission configurations, with and without barriers made out of drywall. In both modes the image is spatially discretized using laser-machined, 10 × 10 pixel metal apertures to demonstrate the technique of compressive sensing. The images were collected by modulating the source and measuring the transmitted flux through the apertures using a Golay cell. Experimental results were compared to simulations of the expected transmission through the metal apertures. Image quality decreases, as expected, when going from the non-obscured transmission case to the obscured transmission case and finally to the obscured reflection case. However, in all instances the image is recovered from measurements below the Nyquist rate, which demonstrates that this technique is a viable option for Through-the-Wall Reflection Imaging (TWRI) applications.
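The reconstruction step of such a CS experiment can be sketched with a standard sparse solver (orthogonal matching pursuit here; an illustrative stand-in, not the authors' reconstruction code):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def cs_recover(measurements, masks, sparsity):
    """Compressive-sensing sketch: each measurement is the total flux
    through one aperture mask; a sparse pixel image is recovered from
    fewer measurements than pixels via orthogonal matching pursuit
    (one of several standard CS solvers)."""
    A = masks.reshape(len(masks), -1)  # sensing matrix, one mask per row
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity,
                                    fit_intercept=False)
    omp.fit(A, measurements)
    return omp.coef_                   # recovered pixel values
```

With a sparse target and well-conditioned (e.g., random) masks, far fewer measurements than pixels suffice, which is the sense in which the image is acquired below the Nyquist rate.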

The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for
measuring Earth radiation budget. At present CERES models are operating aboard the Terra, Aqua and Suomi/NPP
spacecraft and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans
from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to
geometry so that the size of the smallest features which can be resolved from the data increases and spatial sampling
errors increase with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling
errors of the CERES instrument.
The analysis is performed in the Fourier domain. Spatial sampling errors arise from two effects: blurring, i.e. the
smoothing of features at and below the footprint size, and inadequate sampling, which causes aliasing errors. These
spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the
point response function, the spacing of data points and the spatial spectrum of the radiance field.
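
The error decomposition described above can be sketched numerically. The toy 1-D example below uses an assumed power-law radiance spectrum and a Gaussian point response function (not the actual CERES footprint response): blurring error is the spectral power attenuated by the transfer function, aliasing error is the power beyond the Nyquist frequency that folds back, and both grow as footprint and sample spacing widen off nadir.

```python
import numpy as np

# Toy 1-D model: power-law radiance spectrum, Gaussian point response function.
f = np.linspace(0.01, 5.0, 500)       # spatial frequency (cycles per nadir footprint)
df = f[1] - f[0]
S2 = f ** -3.0                        # assumed radiance power spectrum |S(f)|^2

def sampling_errors(footprint, spacing):
    """Blurring and aliasing error power from the system transfer function."""
    H = np.exp(-(np.pi * footprint * f) ** 2 / 2.0)             # transfer function
    f_nyq = 0.5 / spacing                                       # Nyquist frequency
    blur = np.sum(S2 * (1.0 - H) ** 2) * df                     # attenuated fine features
    alias = np.sum(np.where(f > f_nyq, S2 * H ** 2, 0.0)) * df  # power folded back
    return blur, alias

# The footprint and the sample spacing both grow with nadir angle
b0, a0 = sampling_errors(footprint=1.0, spacing=0.5)            # near nadir
b1, a1 = sampling_errors(footprint=2.0, spacing=1.0)            # toward the limb
```

With the wider limb footprint, both error components increase, matching the qualitative conclusion of the abstract.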

Development of a prototype 3-D through-wall synthetic aperture radar (SAR) system is currently underway at Defence Research and Development Canada. The intent is to map out building wall layouts and to detect targets of interest behind walls, such as humans, arms caches, and furniture, and to determine their locations. This situational awareness capability can be invaluable to the military working in an urban environment. Tools and algorithms are being developed to exploit the resulting 3-D imagery. Current work involves analyzing signatures of targets behind a wall and understanding the clutter and multipath signals in a room of interest. In this paper, a comprehensive study of 3-D human target signature metrics in free space is presented. The aim is to identify features for discriminating the human target from other targets. Targets used in this investigation include a standing human, a standing human with arms stretched out, a chair, a table, and a metallic plate. Several features were investigated as potential discriminants, and five identified as good candidates are presented in this paper. Based on this study, no single feature could be used to fully discriminate the human targets from all others; a combination of at least two different features is required.

An infrared (IR) radiation generation model of stars and planets in the celestial background is proposed in this paper. Cohen's spectral template [1] is modified for higher spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint and spectral band over 1.2 μm to 35 μm. In the current model, the initial locations of stars are calculated from the Midcourse Space Experiment (MSX) IR astronomical catalogue (MSX-IRAC) [2], while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.
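
The blackbody assumption for planets can be illustrated directly. The sketch below evaluates the Planck spectral radiance over the model's 1.2–35 μm band; the 288 K temperature is illustrative, not a value from the paper.

```python
import numpy as np

H_CONST = 6.62607015e-34   # Planck constant, J s
C_LIGHT = 2.99792458e8     # speed of light, m / s
K_BOLTZ = 1.380649e-23     # Boltzmann constant, J / K

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-3 sr^-1."""
    lam = wavelength_um * 1e-6
    return (2.0 * H_CONST * C_LIGHT ** 2 / lam ** 5
            / np.expm1(H_CONST * C_LIGHT / (lam * K_BOLTZ * temp_k)))

# Spectral radiance of an assumed 288 K planet over the model's 1.2-35 um range
lam = np.linspace(1.2, 35.0, 2000)
B = planck_radiance(lam, 288.0)
peak_um = lam[np.argmax(B)]           # Wien's law predicts about 2898 / 288 ~ 10.1 um
```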

Human observers often achieve striking recognition performance on remotely sensed data, unmatched by machine vision algorithms. This holds even for thermal infrared (IR) or synthetic aperture radar (SAR) images. Psychologists refer to these capabilities as Gestalt perceptive skills. Gestalt Algebra is a mathematical structure recently proposed for such laws of perceptual grouping. It provides operations for mirror symmetry, continuation in rows, and rotational symmetric patterns. Each of these operations forms an aggregate-Gestalt from a tuple of part-Gestalten. Each Gestalt is attributed with a position, an orientation, a rotational frequency, a scale, and an assessment. Any Gestalt can be combined with any other Gestalt using any of the three operations. Most often the assessment of the new aggregate-Gestalt will be close to zero; only if the part-Gestalten fit the desired pattern perfectly will the new aggregate-Gestalt be assessed with a value of one. The algebra is suitable in both directions: it may render an organized symmetric mandala using random numbers, or it may recognize deeply hidden visual relationships between meaningful parts of a picture. For the latter, primitives must be obtained from the image by some key-point detector and a threshold. Intelligent search strategies are required for this search in the combinatorial space of possible Gestalt Algebra terms. As examples, maximally assessed Gestalten found in selected aerial images as well as in IR and SAR images are presented.

In order to enhance the robustness of building recognition in forward-looking infrared (FLIR) images, an effective
method based on a big template is proposed. A big template is a set of small templates which contains a great amount of
information about surface features. Its information content cannot be matched by any single small template, and it has
advantages in overcoming noise interference or incompleteness and avoiding erroneous judgments. Firstly, a digital
surface model (DSM) was utilized to build the big template, a distance transformation was applied to the big template,
and the region of interest (ROI) was extracted by template matching between the big template and the contour of the
real-time image. Secondly, corners were detected in the big template, a response function was defined utilizing the
gradients and phases of the corners and their neighborhoods, and a similarity measure was designed based on the
response function and overlap ratio, so that the template and real-time image could be matched accurately. Finally, a
large amount of image data was used to test the performance of the algorithm, and an optimal parameter selection
criterion was designed. Test results indicate that the target matching ratio of the algorithm can reach 95%, and that it
effectively solves the problem of building recognition under noise disturbance, incompleteness, or partial occlusion of
the target.

The study of moving target detection has high research value and broad development prospects. Considering the real-time detection of typical moving ground targets, a novel algorithm is proposed, based on background estimation using a Gaussian mixture model and reference background frame updating. First, the image gray levels of the target and background are assumed to obey Gaussian distributions; then the whole image is modeled with three Gaussian distributions and estimated to form the reference image; finally, detection results are obtained by subtracting the reference image from the current frame. Meanwhile, the reference image is updated over time to preserve the adaptability of the background model. Experimental results show that the algorithm is effective for moving ground targets such as vehicles.
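
The estimate-subtract-update loop described above can be sketched as follows. This is a simplified single-Gaussian-per-pixel version of the three-component model in the abstract, and the learning rate and detection threshold are assumed illustrative values.

```python
import numpy as np

ALPHA = 0.05   # learning rate for updating the reference background (assumed)
K_SIG = 2.5    # detection threshold in standard deviations (assumed)

def detect(frame, mean, var):
    """Return a foreground mask plus the updated background mean / variance."""
    diff = frame - mean
    mask = np.abs(diff) > K_SIG * np.sqrt(var)
    upd = ~mask                       # update the reference only at background pixels
    mean = np.where(upd, (1 - ALPHA) * mean + ALPHA * frame, mean)
    var = np.where(upd, (1 - ALPHA) * var + ALPHA * diff ** 2, var)
    return mask, mean, var

rng = np.random.default_rng(1)
mean = np.full((32, 32), 100.0)       # reference background image
var = np.full((32, 32), 4.0)
frame = 100.0 + rng.normal(0.0, 2.0, (32, 32))
frame[10:14, 10:14] += 40.0           # a moving vehicle brightens a small patch
mask, mean, var = detect(frame, mean, var)
```

Freezing the update at detected pixels keeps the moving target from being absorbed into the reference image, which is the purpose of the reference-updating step in the abstract.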

The goal of this work is to evaluate the impact of utilizing shorter wavelet filters in the CCSDS standard for lossy and lossless image compression. Another constraint considered was the existence of symmetry in the filters; this was desired to maintain the symmetric-extension compatibility of the filter banks. Even though this strategy works well for float wavelets, it is not always the case for their integer approximations. The periodic extension was utilized whenever the symmetric extension was not applicable. Even though the latter outperforms the former, for a fair comparison the symmetric-extension-compatible integer-to-integer wavelet approximations were evaluated under both extensions.
The evaluation methods adopted were bit rate (bpp), PSNR and the number of operations required by each wavelet transform. All these results were compared against the ones obtained utilizing the standard CCSDS with 9/7 filter banks, for lossy and lossless compression.
The tests were performed over tiles (512×512) of raw remote sensing images from CBERS-2B (China-Brazil Earth Resources Satellites) captured by its high-resolution CCD camera. These images were kindly made available by INPE (National Institute for Space Research) in Brazil. For the CCSDS implementation, the source code developed by Hongqiang Wang from the Electrical Department at the University of Nebraska-Lincoln was utilized, applying the appropriate changes to the wavelet transform.
For lossy compression, the results have shown that the filter bank built from the Deslauriers-Dubuc scaling function, with respectively 2 and 4 vanishing moments on the synthesis and analysis banks, presented not only a reduction of 21% in the number of operations required, but also a performance on par with the 9/7 filter bank. In the lossless case, the biorthogonal Cohen-Daubechies-Feauveau with 2 vanishing moments presented a performance close to the 9/7 integer approximation of the CCSDS, with the number of operations reduced by 1/3.
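
The integer-to-integer biorthogonal CDF wavelet with 2 vanishing moments mentioned above has a well-known lifting implementation, the LeGall 5/3 transform used for lossless JPEG 2000 coding. The following is a textbook one-level sketch with periodic extension for an even-length signal, not code from the CCSDS implementation under test:

```python
import numpy as np

def cdf53_forward(x):
    """One level of the integer 5/3 lifting transform, periodic extension, even length."""
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= (s + np.roll(s, -1)) // 2        # predict step: detail coefficients
    s += (d + np.roll(d, 1) + 2) // 4     # update step: approximation coefficients
    return s, d

def cdf53_inverse(s, d):
    s = s - (d + np.roll(d, 1) + 2) // 4  # undo update
    d = d + (s + np.roll(s, -1)) // 2     # undo predict
    x = np.empty(s.size + d.size, dtype=s.dtype)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([10, 12, 11, 9, 8, 15, 14, 13])
s, d = cdf53_forward(x)
x_rec = cdf53_inverse(s, d)               # bit-exact reconstruction
```

Because forward and inverse apply identical integer floor operations in reverse order, the transform is exactly invertible, which is what makes this short filter usable for lossless compression at a reduced operation count.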

In this study, filtering-based pansharpening methods are examined. These methods use several 2D FIR filters in
the Fourier domain: the filters are applied after taking the 2D Discrete Fourier Transform of both the
multispectral and the panchromatic image, and after the pansharpening process in the Fourier domain the resulting
pansharpened image is obtained with an inverse 2D DFT. In addition, these methods are compared with
commonly used fusion methods, grouped into modulation-based and component-substitution-based
methods. The algorithms are applied to co-registered SPOT 6 image couples that were acquired simultaneously.
Couples were chosen for three different regions, a city image (Gebze/Turkey), a forest image
(Istanbul/Turkey) and an agricultural field image (Sanliurfa/Turkey), in order to analyse the methods under different
regional characteristics. The methods are compared using fusion quality assessments that are commonly
accepted in the community. The results of these quality assessments show that the filtering-based methods achieved
the best scores compared with the traditional methods.
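
The Fourier-domain fusion step can be sketched as follows, under assumptions not stated in the abstract: a single multispectral band already resampled to the panchromatic grid, and a Gaussian low-pass/high-pass filter pair standing in for the paper's specific 2D FIR designs.

```python
import numpy as np

def fourier_pansharpen(ms_band, pan, cutoff=0.1):
    """Fuse low frequencies of the MS band with high frequencies of the PAN image."""
    h, w = pan.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lp = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff ** 2))   # Gaussian low-pass
    fused = np.fft.fft2(ms_band) * lp + np.fft.fft2(pan) * (1.0 - lp)
    return np.real(np.fft.ifft2(fused))

rng = np.random.default_rng(2)
pan = rng.random((64, 64))
# Simulated low-resolution band: the PAN image blurred by the same low-pass filter
ms_band = fourier_pansharpen(pan, np.zeros_like(pan))
fused = fourier_pansharpen(ms_band, pan)                      # details restored from PAN
```

In this synthetic setup the fused image is strictly closer to the full-resolution reference than the blurred band is, since the missing high-frequency content is reinjected from the panchromatic spectrum.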

Supervised Change Detection Tool (SCDT) is an in-house developed tool in Emirates Institution for Advanced Science
and Technology (EIAST). The developed tool is based on Algebra Change Detection algorithm and multi-class Support
Vector Machine classifier, and is capable of highlighting the areas of change, describing them, and discarding any false
detections that result from shadow. Further, it can automatically collect the analysis results, which include the class
changes an area went through and the overall change percentage of each defined class, in a Microsoft Word document.
This paper evaluates the performance of the SCDT, which was initially developed for DubaiSat-1 multispectral images,
on DubaiSat-2 multispectral and pansharpened images. Moreover, it compares its performance against the Change
Detection Analysis (i.e. Post-Classification) workflow in ENVI.

Due to both natural and anthropogenic causes, coastal primary sand dunes continuously and dynamically change their
shape, position and extent over time. In this paper we use a case study to show how we monitor the
Portuguese coast over the period 2000 to 2014 using freely available multi-temporal Landsat imagery (ETM+ and
OLI sensors). First, all the multispectral images are pansharpened to match the 15 m spatial resolution of the
panchromatic images. Second, using the Modified Normalized Difference Water Index (MNDWI) and the k-means
clustering method, we extract the raster shoreline for each image acquisition time. Third, each raster shoreline is
smoothed and vectorized using a penalized least squares method. Fourth, using an image composed of five synthetic
bands and an unsupervised classification method, we extract the primary sand dunes. Finally, the visual comparison of the
thematic primary sand dune maps shows that an effective monitoring system can be implemented easily using freely
available remote sensing imagery and open source software (QGIS and the Orfeo Toolbox).
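
The second step, MNDWI computation followed by k-means clustering, can be sketched on simulated band values; the reflectances below and the two-cluster split are assumptions for illustration only.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index."""
    return (green - swir) / (green + swir + 1e-9)

def kmeans_1d(values, iters=20):
    """Two-cluster k-means on a flat array; returns a boolean water mask."""
    c = np.array([values.min(), values.max()], dtype=float)   # initial centroids
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return labels == int(np.argmax(c))       # the higher-MNDWI cluster is water

# Simulated transect: the first 40 pixels are water, the rest land
rng = np.random.default_rng(3)
water_px = np.arange(100) < 40
green = np.where(water_px, 0.30, 0.10) + rng.normal(0.0, 0.01, 100)
swir = np.where(water_px, 0.05, 0.25) + rng.normal(0.0, 0.01, 100)
water = kmeans_1d(mndwi(green, swir))
```

Water reflects in the green band and absorbs strongly in the SWIR band, so its MNDWI is clearly positive and the two-cluster split recovers the raster shoreline boundary.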

In many modern optical systems, the resolution is limited not only by diffraction due to the physical dimensions of the lens, but also by the CCD's nonzero pixel size. Especially under traditional incoherent illumination, the restriction imposed by the CCD pixels is greater than that of optical diffraction. Here we develop a novel approach to enhancing resolution beyond the limit set by the CCD's pixels, in which a two-dimensional, orthogonal encoding mask is attached before the imaging lens to modulate the frequency content of the input target spectrum. We focus on the design of a 4f optical imaging system, exploiting the ability of the Fourier transform to convert between the space and frequency domains. To prevent the loss of frequencies in the overlapping regions when sampled by a classical CCD, a certain proportion must hold between the spatial extent of the object plane and the corresponding frequency plane. Meanwhile, the wavefront aberration of the Fourier lens needs to be controlled to fulfill the mathematical properties of the Fourier transform. We improve and revise the theoretical design of the encoding mask based on the limits of optical-mechanical engineering, and we analyze the different orthogonal forms of encoding masks that can bring the diffracted spectra into the imaging area. According to the theoretical discussion, revision and algorithm simulation, results from a preliminary testing system show that the encoding mask can produce an enhancement of resolution by a factor of 2 in exchange for decreasing the field of view by the same factor.

The presence of haze reduces the accuracy of interpretation of optical data acquired from satellites. Medium and high spatial resolution multispectral data are often degraded by haze, and haze detection and removal remains a challenging and important task. An empirical and automatic method for inhomogeneous haze removal is presented in this work. The dark object subtraction method is further developed to calculate a spatially varying haze thickness map. Subtracting the haze thickness map from hazy images allows a spectrally consistent haze removal on both calibrated and uncalibrated satellite multispectral data. The spectral consistency is evaluated using hazy and haze-free remotely sensed medium resolution multispectral data.
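
The spatially varying dark-object subtraction can be sketched as follows. The tile-wise minimum as the local haze thickness estimate and the tile size are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np

def haze_thickness_map(band, tile=8):
    """Tile-wise dark object value as a (piecewise constant) haze thickness map."""
    htm = np.empty_like(band)
    for i in range(0, band.shape[0], tile):
        for j in range(0, band.shape[1], tile):
            # The darkest pixel in each tile approximates the additive haze there
            htm[i:i + tile, j:j + tile] = band[i:i + tile, j:j + tile].min()
    return htm

def dehaze(band, tile=8):
    """Subtract the haze thickness map, clipping negative reflectances at zero."""
    return np.clip(band - haze_thickness_map(band, tile), 0.0, None)

rng = np.random.default_rng(4)
clean = rng.random((32, 32)) * 0.5
haze = np.tile(np.linspace(0.0, 0.3, 32), (32, 1))   # haze thickens left to right
hazy = clean + haze
restored = dehaze(hazy)
```

Because the haze estimate varies per tile rather than being a single global dark-object value, the inhomogeneous left-to-right haze gradient is largely removed.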