Landsat is a joint USGS and NASA space program for Earth Observation (EO), representing the world’s longest-running series of moderate-resolution satellites. The European Space Agency (ESA) has acquired Landsat data over Europe, Northern Africa and the Middle East for the last 40 years.

A new ESA Landsat Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) processor has been developed. This enhanced processor aligns historical Landsat products to the highest quality standards that can be achieved with the current knowledge of the instruments. The updated processor is largely based on the USGS algorithm; however, it differs in several features that are detailed in this paper.

Current achievements include the processing and availability of approximately 860,000 new TM/ETM+ high-quality products acquired between 1983 and 2011 from the Kiruna (S), Maspalomas (E) and Matera (I) archives; the Matera archive includes data from the Fucino (I), Neustrelitz (D), O’Higgins (Antarctica), Malindi (Kenya), Libreville (Gabon) and Bishkek (Kyrgyzstan) ground stations.

The products are freely available to users for immediate download through a fast and simple dissemination service (at https://landsat-ds.eo.esa.int/app/) and through ESA’s browsing system, EOLI. The remaining MSS data, dating back more than 40 years, will gradually become available during 2015 and 2016.

The enhancements to the ESA Landsat processing algorithm are presented herein, together with the results of the ESA archive bulk processing in terms of production, quality control and data validation.

PLEIADES-HR is an Earth observing system developed by the French National Space Agency, CNES. It consists of two satellites, launched in December 2011 (PHR-1A) and December 2012 (PHR-1B). Each satellite is designed to provide optical images to civilian and defence users at 70 cm resolution in panchromatic mode and 2.80 m in colour.

During the commissioning period of these satellites, and thanks to their extreme agility, new calibration methods were tested based on the observation of celestial bodies, and stars in particular. It then became possible to perform MTF and defocus measurements (in order to refocus), geometrical bias computation, focal-plane assessment, absolute calibration, ghost-image localization, micro-vibration measurement, and more.

This article deals with the problem of satellite refocusing. Using images of stars, the problem can be treated as a phase-diversity inverse problem. The method developed during the commissioning period has since evolved significantly, improving its accuracy and reducing its operating constraints.

ScaRaB (SCAnner for RAdiation Budget) is the name of three radiometers, the first two flight models of which were launched in 1994 and 1997. These instruments were mounted on board the Russian satellites METEOR and RESURS. On October 12th, 2011, the last model was launched from the Indian site of Sriharikota. ScaRaB is a passenger of MEGHA-TROPIQUES, an Indo-French joint satellite mission for studying the water cycle and energy exchanges in the tropics. ScaRaB is composed of four parallel and independent channels, of which channel-2 and channel-3 are the main ones. Channel-1 is dedicated to measuring solar radiance (0.5 to 0.7 μm), while channel-4 (10 to 13 μm) is an infrared window. The absolute calibration of ScaRaB is assured by internal calibration sources (black bodies, and a lamp for channel-1). However, during the commissioning phase, the lamp used for the absolute calibration of channel-1 proved to be inaccurate. We propose here an alternative calibration method based on terrestrial targets. Given the spectral range of channel-1, only calibration over desert sites (temporal monitoring) and clouds (cross-band) is suitable.

Desert sites have been widely used for sensor calibration because their spectral response is stable over time. Because of their high reflectance, the atmospheric effect on the upward radiance is relatively small. In addition, they are spatially uniform. Their temporal instability without atmospheric correction has been determined to be less than 1-2% over a year. Very-high-altitude (10 km) bright clouds are good validation targets in the visible and near-infrared because of their high, spectrally consistent reflectance. If the clouds are very high, there is no need to correct for aerosol scattering and water-vapour absorption, as both aerosols and water vapour are distributed near the surface; only Rayleigh scattering and ozone absorption need to be considered. This method has been found to give a 4% uncertainty.

Radiometric cross-calibration of Earth observation sensors is crucial to guaranteeing and quantifying the consistency of measurements from different sensors. ScaRaB is compatible with the CERES mission. Two main spectral bands are measured by the radiometer: a short-wave channel (0.2 to 4 μm) dedicated to solar fluxes, and a Total channel (0.2 to 200 μm) for fluxes combining the infrared Earth radiance and the albedo. The Earth's long-wave radiance is isolated by subtracting the short-wave channel from the Total channel.
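The channel subtraction can be sketched in a couple of lines of NumPy (a minimal illustration with made-up variable names, not the operational ScaRaB processing):

```python
import numpy as np

def longwave_radiance(total, shortwave):
    """Isolate long-wave radiance: Total channel (0.2-200 um) minus
    short-wave channel (0.2-4 um), applied pixel-wise."""
    return np.asarray(total, dtype=float) - np.asarray(shortwave, dtype=float)
```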

Both Earth Radiation Budget missions (CERES and ScaRaB) share the same specification: an accuracy of ~1% in the measurement of short-wave and long-wave radiances, and an estimation of the short-wave and long-wave fluxes with an error below 10 W/m2. We use the CERES PAPS and Cross-Track SSF datasets for direct radiance and flux comparisons during two validation phases. The first took place from April 17th to June 8th, 2012 (51 days), and the second between March 22nd and May 31st, 2015. The first validation campaign was conducted with the CERES team using the Terra FM2 data. The CERES PAPS mode was used to align the swath scan in order to increase the number of collocated pixels between the two instruments. This campaign allowed us to validate the ScaRaB radiances and to refine the error budget. The second validation campaign aims to provide a temporal monitoring of the ScaRaB calibration.

The European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) are co-operating to develop the EarthCARE satellite mission with the fundamental objective of improving the understanding of the processes involving clouds, aerosols and radiation in the Earth’s atmosphere.

The EarthCARE Multispectral Imager (MSI) is relatively compact for a space-borne imager. As a consequence, the instrument's point-spread function (PSF) will be mainly determined by the diffraction caused by the relatively small optical aperture. In order to still achieve a high-contrast image, de-convolution processing is applied to remove the impact of diffraction on the PSF. A Lucy-Richardson algorithm has been chosen for this purpose.
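For readers unfamiliar with the algorithm, a minimal FFT-based sketch of the Lucy-Richardson iteration is given below; this is our illustration under simplified assumptions (known, shift-invariant PSF; no regularization), not the MSI flight implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Lucy-Richardson deconvolution: multiplicative updates that converge
    toward the maximum-likelihood image under Poisson noise."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]          # adjoint of convolution with the PSF
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

Deconvolving a diffraction-limited point source with this loop concentrates the blurred energy back toward the point; the iteration count trades sharpening against noise amplification.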

This paper will describe the system setup and the necessary data pre-processing and post-processing steps applied in order to compare the end-to-end image quality with the L1b performance required by the science community.

In the frame of the Copernicus program of the European Commission, Sentinel-2 will offer multispectral high-spatial-resolution optical images over global terrestrial surfaces. In cooperation with ESA, the Centre National d’Etudes Spatiales (CNES) is in charge of the image quality of the project, and will therefore conduct the CAL/VAL commissioning phase during the months following the launch.

Sentinel-2 is a constellation of two satellites on a polar sun-synchronous orbit with a revisit time of 5 days (with both satellites), a wide field of view (290 km), 13 spectral bands in the visible and shortwave infrared, and high spatial resolution (10 m, 20 m and 60 m). The Sentinel-2 mission offers global coverage over terrestrial surfaces. The satellites systematically acquire terrestrial surfaces under the same viewing conditions in order to build temporal image stacks. The first satellite was launched in June 2015. Following the launch, the CAL/VAL commissioning phase will last 6 months for geometrical calibration.

This paper first explains the geometric corrections applied to the delivered Sentinel-2 products. It then details the calibration sites and the methods used to calibrate the geometrical parameters, and presents the first associated results on the following topics: viewing-frame orientation assessment, focal-plane mapping for all spectral bands, first geolocation assessment results, and multispectral registration. Images are systematically recalibrated against a common reference, which will be a set of Sentinel-2 images produced during the 6 months of CAL/VAL. As it takes time to gather all the required images, the geolocation performance with ground control points and the multi-temporal performance are only first results and will be improved during the last phase of the CAL/VAL. This paper therefore mainly presents the system performances, the preliminary product performances and the way they are obtained.

For the production of Level-2A products during the Sentinel-2 commissioning at the Sentinel-2 Technical Expertise Centre at CNES, CESBIO proposed to adapt the Venμs Level-2 processor, taking advantage of the similarities between the two missions: high-frequency image acquisition (2 days for Venμs, 5 days with the two Sentinel-2 satellites), high resolution (5 m for Venμs; 10, 20 and 60 m for Sentinel-2), and image acquisition under constant viewing conditions. The Multi-Mission Atmospheric Correction and Cloud Screening (MACCS) tool was thus born: based on the CNES Orfeo Toolbox library, the Venμs processor, already able to process Formosat-2 and VENμS data, was adapted to process Sentinel-2 and Landsat-5/7 data. Since then, a great effort has been made to review the MACCS software architecture in order to ease the addition of new missions that also acquire images at high resolution, high revisit and under constant viewing angles, such as SPOT4/Take5 and Landsat-8. The recursive, multi-temporal algorithm is implemented in a core common to all sensors, which combines several processing steps: estimation of the cloud, cloud-shadow, water, snow and shadow masks, of the water vapour content and the aerosol optical thickness, and atmospheric correction. This core is accessed via a number of plug-ins in which the specificities of the sensor and of the user project are taken into account: product format, algorithmic processing chaining and parameters. After a presentation of the MACCS architecture and functionalities, the paper will give an overview of the production facilities integrating MACCS and their associated specificities: interest in this tool has grown worldwide, and MACCS will be used for extensive production within the THEIA land data centre and the Agri-S2 project. Finally, the paper will focus on the use of MACCS during the Sentinel-2 In-Orbit Test phase, showing the first Level-2A products.

Jointly with the European Commission, the European Space Agency (ESA) is developing the Sentinel-2 Earth observation optical mission. Relying on a constellation of satellites put in orbit starting mid-2015, Sentinel-2 will be devoted to the monitoring of land and coastal areas worldwide thanks to high-revisit (5 days with two satellites), high-resolution (10 m, 20 m and 60 m), large-swath (290 km), multi-spectral imagery (13 bands in the visible and shortwave infrared).

In this framework, the French Space Agency (CNES: Centre National d’Etudes Spatiales) supports ESA on the activities related to Image Quality, defining the image products and prototyping the processing techniques.

The scope of this paper is to present the Ground Prototype Processor (GPP) that will be in charge of Level-1 production during the Sentinel-2 In-Orbit Acceptance phase. The GPP has been developed by a European industrial consortium composed of Advanced Computer Systems (ACS), Magellium and DLR, on the basis of the CNES technical specification of Sentinel-2 data processing and under the joint management of ESA-ESTEC and CNES. It will ensure the generation of the products used for Calibration and Validation activities and will provide the reference data for Sentinel-2 Payload Data Ground Segment validation.

First, the definition of the Sentinel-2 end-user products is recalled, with the associated radiometric and geometric performances. Secondly, the methods implemented are presented, with an overview of the Ground Image Processing Parameters that need to be tuned during the In-Orbit Acceptance phase to assure the required performance of the products. Finally, the complexity of the processing having been shown, the challenges of the production in terms of data volume and processing time are highlighted. The first Sentinel-2 Level-1 products are shown.

In partnership with the European Commission and in the frame of the Copernicus program, the European Space Agency (ESA) has developed the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas.

The Sentinel-2 mission is based on a satellite constellation deployed in polar sun-synchronous orbits. Sentinel-2 will offer a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). The first satellite, Sentinel-2A, was launched on June 22nd, 2015, from Kourou, French Guiana.

In this context, the Centre National d’Etudes Spatiales (CNES) supports ESA to ensure the CAL/VAL commissioning phase for Image Quality aspects.

This paper first provides an overview of the Sentinel-2 system after the launch.

The article then focuses on the means implemented and activated at CNES to perform the In-Orbit Commissioning, and on the availability and performances of the different components of the ground segment: the GPP, in charge of producing the Level-1 files; the “radiometric unit”, which processes sensitivity parameters; the “geometric unit”, in charge of fitting the images onto a reference map; MACCS, which produces the Level-2A files (computing reflectances at the bottom of the atmosphere); and the TEC-S2, which coordinates all the previous software and drives a database gathering the incoming Level-0 files and the processed Level-1 files.

Sentinel-2 is a multispectral, high-resolution, optical imaging mission, developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. In cooperation with ESA, the Centre National d’Etudes Spatiales (CNES) is responsible for the image quality of the project, and will ensure the CAL/VAL commissioning phase. The Sentinel-2 mission is devoted to the operational monitoring of land and coastal areas, and will provide continuity of SPOT- and Landsat-type data. Sentinel-2 will also deliver information for emergency services. With launches in 2015 and 2016, a constellation of two satellites on a polar sun-synchronous orbit will systematically image terrestrial surfaces with a revisit time of 5 days, in 13 spectral bands in the visible and shortwave infrared. Therefore, multi-temporal series of images, taken under the same viewing conditions, will be available.

To ensure the multi-temporal registration of the products, specified to be better than 0.3 pixels at 2σ, a Global Reference Image (GRI) will be produced during the CAL/VAL period. This GRI is composed of a set of Sentinel-2 acquisitions whose geometry has been corrected by bundle block adjustment. During L1B processing, Ground Control Points are taken between this reference image and the Sentinel-2 acquisition being processed, and the geometric model of the image is corrected, so as to ensure good multi-temporal registration.

This paper first details the production of the reference during the CAL/VAL period, and then details the qualification and geolocation-performance assessment of the GRI. It finally presents its use in the Level-1 processing chain and gives a first assessment of the multi-temporal registration.

This paper presents a multi-scale framework for image destriping algorithms, which allows estimating image-normalization coefficients adapted to stripe artifacts covering a large range of spatial frequencies. The algorithm can address destriping of push- and whisk-broom satellite images, which often present residual striping patterns along the scanning direction. The proposed method is nevertheless generic and can be applied to any image containing unidirectional structured noise, e.g. vertical or horizontal. Only a single spectral image channel is required, and the extension to multi-channel imagery is straightforward. It is an unsupervised method, which is essential for processing any acquisition in an operational ground segment. This paper combines the proposed framework with a MAP-estimation-based, state-of-the-art destriping algorithm and presents applications to real satellite imagery.
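The normalization coefficients above generalize a classical baseline worth recalling: per-column moment matching. The sketch below is that simple baseline (our illustration, not the multi-scale MAP method of the paper), assuming stripes run along image columns:

```python
import numpy as np

def destripe_moment_matching(img, axis=0):
    """Equalize per-column (axis=0) or per-row (axis=1) mean and standard
    deviation, a classical normalization baseline for unidirectional striping."""
    mu = img.mean(axis=axis, keepdims=True)
    sd = img.std(axis=axis, keepdims=True)
    return (img - mu) / (sd + 1e-12) * sd.mean() + mu.mean()
```

After this normalization every column shares the same first two moments, so constant column offsets vanish; scene structure aligned with the stripes is also flattened, which is precisely the limitation multi-scale approaches address.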

We show the use of a simplified snapshot polarimetric camera along with adaptive image processing for optimal detection of a polarized light beacon through fog. The adaptive representation is derived using a theoretical noise analysis of the data at hand and is shown to be optimal in the maximum-likelihood sense. We report that the contrast-enhancing optimal representation, which depends on the background noise correlation, differs in general from standard representations such as the polarimetric difference image or the polarization-filtered image. Lastly, we discuss a detection strategy to reduce the false-positive counts.

The performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the Earth Observing-1 (EO-1) mission's hyperspectral imager (Hyperion) that have high input SNR are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities of coefficients being less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly. The universality of the prediction for different numbers of channels is proven.
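As an illustration of the denoising principle being predicted (not of the prediction procedure itself), a global 3-D DCT hard-thresholding step can be sketched as follows; the factor 2.7 mirrors the common beta*sigma threshold rule, and the block-wise processing of practical filters is omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct3d_hard_threshold(cube, sigma, beta=2.7):
    """Denoise an image cube corrupted by AWGN of known std `sigma`
    by zeroing small 3-D DCT coefficients (global, not block-wise)."""
    coef = dctn(cube, norm='ortho')           # orthonormal DCT keeps noise std = sigma
    coef[np.abs(coef) < beta * sigma] = 0.0   # hard thresholding
    return idctn(coef, norm='ortho')
```

The statistic the paper exploits, the probability of a coefficient falling below the threshold, directly controls how much of this cube survives the thresholding step.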

The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues in the deployment of LIDAR-based systems is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal-to-noise ratio is typically achieved by averaging a rather large number (of the order of hundreds) of successive laser pulses. This approach can be effective but presents significant limitations. First of all, it places great stress on the laser source, particularly in the case of systems for automatic monitoring of large areas over long periods. Secondly, this solution can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performance has been tested for various statistical distributions of the noise and also for other disturbances of the acquired signal, such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
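The conventional pulse-averaging baseline that this work seeks to replace is easy to demonstrate: averaging N pulses reduces the noise standard deviation by roughly sqrt(N) (a synthetic sketch with an idealized exponential return; the SVR method itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)
n_pulses, n_gates = 400, 256
truth = np.exp(-np.linspace(0.0, 5.0, n_gates))        # idealized range-decaying return
pulses = truth + rng.normal(0.0, 0.2, (n_pulses, n_gates))
averaged = pulses.mean(axis=0)                         # classical multi-pulse average

noise_single = np.std(pulses[0] - truth)
noise_avg = np.std(averaged - truth)
print(noise_single / noise_avg)                        # ≈ sqrt(400) = 20
```

The 400 shots needed for that factor-20 gain are exactly the laser stress and temporal-smearing cost the abstract describes.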

Registration of multi-modal remote sensing images is an essential and challenging task in remote sensing applications such as image fusion and multi-temporal change detection. Mutual Information (MI) has been shown to be a successful similarity measure for multi-modal image registration; however, it has some drawbacks: (1) the MI surface is highly non-convex, with many local maxima; (2) spatial information is completely lost in the calculation of the joint intensity probability distribution. In this paper, we present an improved MI similarity measure based on a new concept: integrating other image features as well as spatial information in the estimation of the joint intensity histogram, which is used as an estimate of the joint probability distribution. The proposed method is based on the idea that each pixel in the reference image is assigned a weight, and each bin in the joint histogram is then calculated as the sum of the weights of the pixels corresponding to that bin. The weight given to each pixel in the reference image is an exponential function of the corresponding pixel values in a distance image and a normalized gradient image, such that higher weights are given to points close to one or more selected key points as well as to points with high normalized gradient values. The proposed method is in essence a way of calculating the similarity measure using irregular sampling, where the sampling frequency is higher in areas close to key points or areas with higher gradients. We have compared the proposed method with the conventional MI and Normalized MI methods for the registration of pairs of multi-temporal, multi-modal remote sensing images. We observed that the proposed method produces a considerably better registration function, containing fewer erroneous maxima and leading to a higher success rate.
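The core of the weighted joint histogram is compact; the sketch below (our simplification, with per-pixel weights passed straight to the histogram) shows how MI is obtained from it:

```python
import numpy as np

def weighted_mutual_information(ref, flt, weights=None, bins=32):
    """MI computed from a joint histogram in which each pixel contributes
    its weight instead of a unit count."""
    w = None if weights is None else weights.ravel()
    h, _, _ = np.histogram2d(ref.ravel(), flt.ravel(), bins=bins, weights=w)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)    # marginal of the reference image
    py = p.sum(axis=0, keepdims=True)    # marginal of the floating image
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

With uniform weights this reduces to conventional MI; weights derived from distance-to-keypoint and gradient images bias the histogram toward the informative areas, as the abstract describes.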

Noise has to be taken into account in algorithms for classification, target detection and anomaly detection. Recent studies indicate that noise estimation is also crucial in subspace identification of hyperspectral images (HSI). Several techniques have been proposed for noise estimation, including multiple-linear-regression-based techniques and spectral unmixing and remixing. The noise in HSI is widely accepted to be a spatially stationary random process, but its variance varies from one wavelength to another. Two types of noise are considered: the first is circuitry noise (thermal noise), which is signal-independent; the second is photonic noise (shot noise), which is signal-dependent and considered to be the dominant one. A reliable way to accurately estimate the noise requires the identification of a large uniform region in the image. To this end, we propose a region-growing technique. At the end of this process, a certain number of regions with different sizes and uniformities are obtained. The next step consists of identifying the most uniform region having the largest area. Once this region is identified, an ideal low-pass filter is applied to it. This yields an estimate of the noise-free data, and hence of the noise itself, by calculating the difference. It is also possible to apply the well-known scatter-plot technique. Experiments suggest that the proposed scheme produces results comparable to its competitors. A major advantage of the technique is the automated identification of a homogeneous region.
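Once the most uniform region has been isolated, the last step reduces to a filter-and-subtract; below is a minimal sketch in which a mean filter stands in for the ideal low-pass filter (the region-growing stage is not shown, and the variance-correction factor is our addition):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_noise_std(region, size=5):
    """Estimate noise std in a homogeneous region: low-pass filter it,
    subtract to obtain the noise residual, and take its std."""
    smooth = uniform_filter(region, size=size)
    residual = region - smooth
    # the residual of pure white noise keeps ~(1 - 1/size**2) of its variance
    return residual.std() / np.sqrt(1.0 - 1.0 / size ** 2)
```

Applying this band by band yields the wavelength-dependent noise variance the abstract mentions.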

Small-object detection in the vast ocean plays an important role in rescues after accidents or disasters. One promising approach is a hyperspectral imaging system (HIS). However, due to the limited resolution of HIS sensors, a target of interest may occupy only a few pixels or less in the image, making small objects difficult to detect; sun glint on the sea surface makes it even more difficult. In this paper, we propose an image-analysis technique suitable for the computer-aided detection of small objects, especially humans, on the sea surface. We first separate objects from the background by adapting a previously proposed image-enhancement method, then apply a linear unmixing method to define the endmembers' spectra. Finally, we use the spectral angle mapping method to classify the detected objects and thus detect small objects. The proposed system provides the following outputs to support the detection of humans and other small objects on the sea surface: an image with spectral colour enhancement, alerts for various objects, and the human-detection results. This multilayered approach is expected to reduce oversights, i.e., false-negative errors. Results of the proposed technique have been compared with existing methods; our method successfully enhances the hyperspectral image and detects small objects on the sea surface with a high human-detection rate. The results are also less influenced by sun-glint effects. This study helps in recognizing small objects on the sea surface and leads to advances in rescue systems using aircraft equipped with HIS technology.
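Of the three stages, the spectral-angle classification is the most compact; a minimal sketch (illustrative only, with `pixels` an (N, bands) array and one endmember spectrum):

```python
import numpy as np

def spectral_angle(pixels, endmember):
    """Spectral angle (radians) between each pixel spectrum and an
    endmember spectrum; small angles indicate similar materials."""
    num = pixels @ endmember
    den = np.linalg.norm(pixels, axis=-1) * np.linalg.norm(endmember)
    return np.arccos(np.clip(num / (den + 1e-12), -1.0, 1.0))
```

Pixels are assigned to the endmember with the smallest angle; a threshold on the angle rejects background such as sea surface.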

Hyperspectral images (HSI) have high spectral and low spatial resolution, whereas multispectral images (MSI) usually have low spectral and high spatial resolution. Various applications require HSI with both high spectral and high spatial resolution. In this paper, a new method for the spatial-resolution enhancement of HSI using high-resolution MSI, based on sparse coding and linear spectral unmixing (SCLSU), is introduced. In the proposed method (SCLSU), the high-spectral-resolution features of the HSI and the high-spatial-resolution features of the MSI are fused. The sparse representation of the high-resolution MSI and a linear spectral unmixing (LSU) model of the HSI and MSI are used simultaneously in order to construct the high-resolution HSI (HRHSI). The fusion of HSI and MSI is formulated as an ill-posed inverse problem, solved by the Split Augmented Lagrangian Shrinkage Algorithm (SALSA) and an orthogonal matching pursuit (OMP) algorithm. Finally, the proposed algorithm is applied to the Hyperion and ALI datasets. Compared with other state-of-the-art algorithms such as Coupled Nonnegative Matrix Factorization (CNMF) and local spectral unmixing, SCLSU significantly increases the spatial resolution while the spectral content of the HSI is well maintained.

It is generally accepted that hyperspectral remote sensing is more effective and provides greater accuracy than multispectral remote sensing in many application fields. EO-1 Hyperion, a representative hyperspectral sensor, has many more spectral bands, while Landsat offers a much wider image scene and a longer continuous space-based record of Earth's land. This study aims to develop a new method, the Pseudo-Hyperspectral Image Synthesis Algorithm (PHISA), to transform Landsat imagery into pseudo-hyperspectral imagery using the correlation between Landsat and EO-1 Hyperion data. First, the Hyperion scene was precisely pre-processed and co-registered to the Landsat scene, and both datasets were corrected for atmospheric effects. The Bayesian model averaging (BMA) method was applied to select the best model from a class of several possible models. Subsequently, this best model was used to calculate pseudo-hyperspectral data in R. Based on the selection results of BMA, we transform Landsat imagery into 155 bands of pseudo-hyperspectral imagery. Most models have multiple R-squared values higher than 90%, which assures high model accuracy. There are no significant visual differences between the pseudo- and original data. Most bands have Pearson's coefficients > 0.95, and only a small fraction has coefficients < 0.93, appearing as outliers in the data sets. In a similar manner, most Root Mean Square Error values are considerably low, smaller than 0.014. These observations strongly support that the proposed PHISA is valid for transforming Landsat data into pseudo-hyperspectral data from a statistical standpoint.
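The band-wise regression at the heart of this kind of synthesis (fitting each Hyperion band as a combination of Landsat bands) can be sketched with ordinary least squares; the BMA predictor selection performed in R is not reproduced here:

```python
import numpy as np

def fit_pseudo_band(landsat_bands, hyperion_band):
    """Fit one pseudo-hyperspectral band as an affine combination of
    Landsat bands. landsat_bands: (n_pixels, n_bands). Returns coefficients."""
    X = np.column_stack([landsat_bands, np.ones(len(landsat_bands))])
    coeffs, *_ = np.linalg.lstsq(X, hyperion_band, rcond=None)
    return coeffs

def predict_pseudo_band(landsat_bands, coeffs):
    """Synthesize the pseudo-hyperspectral band from fitted coefficients."""
    X = np.column_stack([landsat_bands, np.ones(len(landsat_bands))])
    return X @ coeffs
```

Repeating the fit for each of the 155 selected Hyperion bands yields the full pseudo-hyperspectral cube.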

Striping noise is a phenomenon intrinsic to the process of image acquisition by means of scanning or pushbroom systems, caused by a poor radiometric calibration of the sensor. Although in-flight calibration has been performed, residual spatially and spectrally coherent noise may perturb the quantitative analysis of images and the extraction of physical parameters.

Destriping methods can be classified into three main groups: statistical-based methods, digital-filtering methods and radiometric-equalisation methods. Their performance depends both on the scene under investigation and on the type and intensity of the noise to be treated. The availability of simulated data at each step of the digital image-formation process, including the step before the introduction of the striping effect, is particularly useful, since it offers the opportunity to test and adjust a variety of image-processing and calibration algorithms.

This paper presents the performance of a statistical-based destriping method applied to a set of simulated images and to images acquired by the EO-1 Hyperion hyperspectral sensor. The set of simulated data, with different intensities of coherent and random noise, was generated using an image simulator implemented for the PRISMA mission.

The algorithm's performance was tested by evaluating the most commonly used quality indexes. For the same purpose, a statistical evaluation based on image correlation and on image differences between the corrected and ideal images was carried out. The results of the statistical analysis were compared with the outcome of the quality-index-based analysis.
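The two statistical measures used per corrected/ideal image pair can be sketched in a few lines (a generic illustration, not the exact indexes of the paper):

```python
import numpy as np

def image_agreement(corrected, ideal):
    """Pearson correlation and RMSE between a corrected and an ideal image."""
    c, i = corrected.ravel(), ideal.ravel()
    r = np.corrcoef(c, i)[0, 1]
    rmse = np.sqrt(np.mean((c - i) ** 2))
    return r, rmse
```

A perfect destriping result gives a correlation of 1 and an RMSE of 0 against the noise-free simulated image.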

Probabilistic graphical models have strong potential for use in hyperspectral image classification. One important class of probabilistic graphical models is the Conditional Random Field (CRF), which has distinct advantages over traditional Markov Random Fields (MRF), including: no independence assumption is made over the observations, and local and pairwise potential features can be defined flexibly. Conventional methods for hyperspectral image classification utilize all spectral bands and feed the corresponding raw intensity values into the feature functions of the CRF. These methods, however, require significant computational effort and yield an ambiguous summary of the data. To mitigate these problems, we propose a novel processing method for hyperspectral image classification that incorporates a lower-dimensional representation into the CRF. In this paper, we use representations based on three types of graph-based dimensionality-reduction algorithms: Laplacian Eigenmaps (LE), Spatial-Spectral Schroedinger Eigenmaps (SSSE), and Local Linear Embedding (LLE), and we investigate the impact of the choice of representation on the subsequent CRF-based classification.
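As an example of the first of these representations, a dense Laplacian Eigenmaps sketch is shown below (our minimal illustration; practical hyperspectral pipelines use sparse k-NN graphs and sparse eigensolvers, and SSSE adds a spatial-spectral potential to the same construction):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, k=8, sigma=1.0):
    """Low-dimensional embedding from the heat-kernel kNN graph Laplacian."""
    D2 = cdist(X, X, 'sqeuclidean')
    W = np.exp(-D2 / (2.0 * sigma ** 2))
    # keep only each point's k nearest neighbours (plus itself), then symmetrize
    far = np.argsort(D2, axis=1)[:, k + 1:]
    for i, cols in enumerate(far):
        W[i, cols] = 0.0
    W = np.maximum(W, W.T)
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W
    # generalized eigenproblem L v = lambda * D v; drop the constant eigenvector
    _, vecs = eigh(L, np.diag(deg))
    return vecs[:, 1:n_components + 1]
```

The resulting low-dimensional coordinates, rather than the raw band intensities, then feed the CRF feature functions.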

In this paper, a new unsupervised top-down hierarchical classification method to partition airborne hyperspectral images is proposed. The unsupervised approach is preferred because the difficulty of area access and the human and financial resources required to obtain ground-truth data constitute serious handicaps, especially over the large areas that airborne or satellite images can cover.

The developed classification approach allows (i) a successive partitioning of the data into several levels, or partitions, in which the main classes are identified first; (ii) an automatic estimation of the number of classes at each level without any end-user help; (iii) a non-systematic subdivision of the classes of a partition Pj to form partition Pj+1; and (iv) a stable partitioning result for the same data set from one run of the method to another.

The proposed approach was validated on synthetic and real hyperspectral images related to the identification of several marine algae species. In addition to highly accurate and consistent results (correct classification rate over 99%), this approach is completely unsupervised: it estimates, at each level, the optimal number of classes and the final partition without any end-user intervention.

A major drawback of most existing hyperspectral anomaly detection methods is the lack of an efficient background representation that can adapt to the varying complexity of hyperspectral images. In this paper, we propose a novel anomaly detection method that represents hyperspectral scenes of different complexity with a state-of-the-art representation learning method, namely the auto-encoder. The proposed method first encodes the spectral image into a sparse code, then decodes the coded image, and finally assesses the coding error at each pixel as a measure of anomaly. The Predictive Sparse Decomposition auto-encoder is utilized in the proposed method due to its efficient joint learning of the encoding and decoding functions. The performance of the proposed anomaly detection method is tested on both visible-near infrared (VNIR) and long wave infrared (LWIR) hyperspectral images and compared with a conventional anomaly detection method, namely the Reed-Xiaoli (RX) detector. The experiments verified the superiority of the proposed anomaly detection method in terms of receiver operating characteristic (ROC) performance.
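The per-pixel scoring step can be sketched as follows. The `encode`/`decode` functions below are hypothetical toy stand-ins, not the Predictive Sparse Decomposition auto-encoder itself; the sketch only illustrates using the coding error as an anomaly measure.

```python
def encode(pixel):
    # Toy "sparse" encoder: keep only the largest-magnitude component
    # (a crude stand-in for a learned sparse code).
    code = [0.0] * len(pixel)
    i = max(range(len(pixel)), key=lambda k: abs(pixel[k]))
    code[i] = pixel[i]
    return code

def decode(code):
    # Toy decoder: identity on the sparse code.
    return list(code)

def anomaly_score(pixel):
    """Coding error ||x - dec(enc(x))||^2, used as the anomaly measure."""
    rec = decode(encode(pixel))
    return sum((a - b) ** 2 for a, b in zip(pixel, rec))

# A background-like pixel with energy concentrated in one band reconstructs
# well; a pixel spreading energy over many bands scores higher.
background = [0.0, 1.0, 0.0, 0.0]
anomaly = [0.5, 0.5, 0.5, 0.5]
assert anomaly_score(background) < anomaly_score(anomaly)
```

A threshold on these scores then yields the detection map; sweeping the threshold produces the ROC curves used for comparison.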

Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed–Xiaoli (RX) detector and a morphological operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. First, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphological operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground-truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduces the false alarm rate of the RX-based detector.
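The two stages can be sketched minimally, assuming 2-band pixels for simplicity and a 1-D dilation as a stand-in for the image-domain morphological post-processing:

```python
def rx_scores(pixels):
    """RX score: Mahalanobis distance of each 2-band pixel from the
    background mean, under the sample covariance (2x2 case)."""
    n = len(pixels)
    mu = [sum(p[i] for p in pixels) / n for i in (0, 1)]
    c00 = sum((p[0] - mu[0]) ** 2 for p in pixels) / n
    c11 = sum((p[1] - mu[1]) ** 2 for p in pixels) / n
    c01 = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pixels) / n
    det = c00 * c11 - c01 * c01
    i00, i01, i11 = c11 / det, -c01 / det, c00 / det  # 2x2 inverse
    scores = []
    for p in pixels:
        d0, d1 = p[0] - mu[0], p[1] - mu[1]
        scores.append(d0 * (i00 * d0 + i01 * d1) + d1 * (i01 * d0 + i11 * d1))
    return scores

def dilate(mask):
    """1-D morphological dilation: grow detections to neighbouring pixels,
    mimicking the post-processing around high-scoring anomalies."""
    out = list(mask)
    for i, m in enumerate(mask):
        if m:
            if i > 0:
                out[i - 1] = True
            if i < len(mask) - 1:
                out[i + 1] = True
    return out

# The outlier pixel gets the highest RX score.
pixels = [[0, 0], [1, 0], [0, 1], [1, 1], [10, 10]]
scores = rx_scores(pixels)
assert scores.index(max(scores)) == 4
```

In the actual method the dilation operates on the 2-D detection mask, pulling in pixels adjacent to strong anomalies before the final decision.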

Hyperspectral images, thanks to their fine spectral resolution, are extensively used for classification, but their large number of bands requires higher bandwidth for data transmission, larger data storage capacity and more computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectra classification is based on a Hidden Markov Model (HMM) associated with each Endmember (EM) of a scene and on the conditional probabilities that each EM belongs to each other EM. Each EM conditional probability vector is transformed into an EM entropy vector, and those vectors are used as reference vectors for the classes in the scene. The conditional probability vector of a spectrum to be classified is likewise transformed into an entropy vector, which is assigned to a given class by the minimum Euclidean Distance (ED) between it and the EM entropy vectors. The methodology was tested, with good results, using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64 and 32 spectral bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209, without significant loss in the classification process.
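The final classification stage can be sketched as below. The entropy transform here is one plausible reading of the abstract (mapping each probability component to its Shannon entropy term), not necessarily the paper's exact formulation, and the probability vectors are illustrative:

```python
import math

def entropy_vector(prob):
    """Map a conditional-probability vector to per-component entropy terms
    -p*log2(p) (an assumed reading of the paper's entropy transform)."""
    return [-p * math.log2(p) if p > 0 else 0.0 for p in prob]

def classify(prob, references):
    """Assign the class whose reference entropy vector is closest
    in Euclidean distance to the entropy vector of `prob`."""
    v = entropy_vector(prob)
    def dist(r):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, r)))
    return min(range(len(references)), key=lambda i: dist(references[i]))

# Reference entropy vectors for two hypothetical EMs.
refs = [entropy_vector([0.9, 0.05, 0.05]),
        entropy_vector([0.1, 0.8, 0.1])]
# A spectrum whose conditional probabilities resemble the first EM.
assert classify([0.85, 0.1, 0.05], refs) == 0
```

In the paper, the reference vectors come from the HMM-derived EM conditional probabilities, and the same transform is applied to each spectrum before the minimum-distance assignment.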

Whereas single-label classification has been a highly active topic in optical remote sensing, much less effort has been devoted to the multi-label classification framework, where pixels are associated with more than one label, an approach closer to reality than single-label classification. Given the complexity of this problem, identifying representative features extracted from raw images is of paramount importance. In this work, we investigate feature learning as a feature extraction process in order to identify the underlying explanatory patterns hidden in low-level satellite data for the purpose of multi-label classification. Sparse auto-encoders composed of a single hidden layer, as well as auto-encoders stacked in a greedy layer-wise fashion, form the core of our approach. The results suggest that learning such sparse and abstract representations of the features can aid both remote sensing and multi-label problems. The results presented in the paper correspond to a novel real dataset of annotated spectral imagery that naturally leads to the multi-label formulation.

This paper presents a novel compressed histogram attribute profile (CHAP) for the classification of very high resolution remote sensing images. The CHAP characterizes the marginal local distribution of attribute filter responses to model the texture information of each sample with a small number of image features. This is achieved with a three-step algorithm. The first step is devoted to providing a complete characterization of the spatial properties of objects in a scene. To this end, the attribute profile (AP) is initially built by the sequential application of attribute filters to the considered image. Then, to capture the complete spatial characteristics of the structures in the scene, a local histogram is calculated for each sample of each image in the AP. The local histograms of the same pixel location can contain redundant information since: i) adjacent histogram bins can provide similar information; and ii) attributes obtained with similar attribute filter threshold values lead to redundant features. In the second step, to expose these redundancies, the local histograms of the same pixel locations in the AP are organized into a 2D matrix representation, where columns are associated with the local histograms and rows represent a specific bin in all histograms of the considered sequence of filtered attributes in the profile. This representation characterizes the texture information of each sample through a 2D texture descriptor. In the final step, a novel compression approach based on a uniform 2D quantization strategy is applied to remove the redundancy of the 2D texture descriptors. Finally, the CHAP is classified by a Support Vector Machine classifier with a histogram intersection kernel, which is very effective for high-dimensional histogram-based feature representations.
Experimental results confirm the effectiveness of the proposed CHAP in terms of computational complexity, storage requirements and classification accuracy when compared to other AP-based methods.
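The histogram intersection kernel used in the final classification step has a simple closed form; a minimal sketch follows (the bin counts are illustrative, not from the paper):

```python
def hist_intersection_kernel(h1, h2):
    """Histogram intersection kernel: K(h1, h2) = sum_i min(h1_i, h2_i).
    Large when the two histograms overlap heavily, small when disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two similar texture descriptors overlap more than two dissimilar ones.
a = [4, 3, 2, 1]
b = [3, 4, 1, 2]   # similar shape to a
c = [0, 0, 5, 5]   # different shape
assert hist_intersection_kernel(a, b) > hist_intersection_kernel(a, c)
```

In the paper the kernel is evaluated between compressed 2D texture descriptors (flattened to histogram-like vectors) inside an SVM.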

With the rapid development of various satellite sensors, automatic and advanced scene classification techniques are urgently needed to process the huge amount of satellite image data. Recently, a few research works have started to apply sparse coding for feature learning in aerial scene classification. However, these previous works use single-layer sparse coding, and their performance depends heavily on multiple low-level features, such as the scale-invariant feature transform (SIFT) and saliency. Motivated by the importance of feature learning through multiple layers, we propose a new unsupervised feature learning approach for scene classification on very high resolution satellite imagery. The proposed approach utilizes a multipath sparse coding architecture in order to capture multiple aspects of the discriminative structures within complex satellite scene images. In addition, dense low-level features are extracted from the raw satellite data by using image patches of varying size at different layers, so the approach is not limited to specially designed feature descriptors, in contrast with other related works. The proposed technique has been evaluated on two challenging high-resolution datasets: the UC Merced dataset, containing 21 different aerial scene categories at a 1-foot resolution, and the Singapore dataset, containing 5 land-use categories at a 0.5 m spatial resolution. Experimental results show that it outperforms the state of the art based on single-layer sparse coding.
The major contributions of the proposed technique include: (1) a new unsupervised feature learning approach to generate feature representations for very high-resolution satellite imagery; (2) the first use of multipath sparse coding for scene classification in very high-resolution satellite imagery; (3) a simple low-level feature descriptor instead of many specially designed low-level descriptors, such as SIFT descriptors and saliency; and (4) evaluation on two satellite image datasets that come from different sensor sources.

Slovenia is one of the most forested countries in Europe. Its forest management authorities need information about forest extent and state, as their responsibility lies in forest observation and preservation. Together with appropriate geographic information system mapping methods, remotely sensed data represent an essential tool for effective and sustainable forest management. Despite the large data availability, suitable mapping methods still present a big challenge in terms of speed, which is often limited by the huge amount of data. The speed of the classification method could be maximised if each of the steps in object-based classification were automated. However, automation is hard to achieve, since segmentation requires choosing optimum parameter values to obtain optimal classification results.

This paper focuses on the analysis of segmentation and classification performance, and of their correlation, over a range of segmentation parameter values applied in the segmentation step. In order to determine which spatial resolution is still suitable for forest classification, the forest classification accuracies obtained using four images with different spatial resolutions were compared.

The results of this study indicate that all high or very high spatial resolutions are suitable for optimal forest segmentation and classification, as long as appropriate scale and merge parameter combinations are used in the object-based classification. If the computation interval includes all segmentation parameter combinations, all segmentation-classification correlations are independent of spatial resolution and are generally high. If the computation interval includes only over-segmentation or optimal-segmentation parameter combinations, most segmentation-classification correlations depend on the spatial resolution.

In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images allows the development of algorithms for target detection, material mapping, and material identification, with applications in agriculture, security and defense, industry, etc. Therefore, from the computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of a HI can be classified using laboratory spectral signatures. Nevertheless, for many others, there is not enough prior information or spectral signatures available, making any analysis a difficult task. One of the most popular algorithms for HI analysis is N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. However, N-FINDR is computationally expensive and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. Accordingly, the Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ finds a larger simplex volume, and has lower computational time, than other popular algorithms on synthetic and real scenarios.
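The Gram-Schmidt idea behind the bottom-up search can be sketched as follows. This is an illustration of the general principle (grow the simplex by repeatedly taking the pixel with the largest component perpendicular to the span of the endmembers chosen so far), not the published SMV⊥ algorithm; the toy pixels are illustrative:

```python
def proj_residual(v, basis):
    """Gram-Schmidt step: remove from v its projections onto `basis`."""
    r = list(v)
    for b in basis:
        nb = sum(x * x for x in b)
        if nb == 0:
            continue
        c = sum(x * y for x, y in zip(r, b)) / nb
        r = [x - c * y for x, y in zip(r, b)]
    return r

def extract_endmembers(pixels, k):
    """Pick k endmembers bottom-up: at each pass, choose the pixel with the
    largest residual norm perpendicular to the span of those already chosen.
    Each pass is a single O(n) sweep over the pixels."""
    basis, endmembers = [], []
    for _ in range(k):
        best = max(pixels,
                   key=lambda p: sum(x * x for x in proj_residual(p, basis)))
        endmembers.append(best)
        basis.append(proj_residual(best, basis))
    return endmembers

# Pure pixels sit at the simplex corners; the mixture is never selected.
pixels = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.34, 0.33, 0.33]]
assert sorted(extract_endmembers(pixels, 3)) == [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
```

Maximizing the perpendicular residual at each step is what ties the construction to simplex volume: the volume of the grown simplex is proportional to the product of these perpendicular heights.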

The majority of pansharpening methods can be classified as spectral or spatial, depending on whether they are based on component substitution (CS) or on multiresolution analysis (MRA). So far, the suitability of one class of methods rather than the other has seldom been discussed. In this paper, through experiments on IKONOS and simulated Pléiades datasets, the authors demonstrate that the performance of spectral methods depends on the extent of spectral matching, measured by the coefficient of determination (CD) of the multivariate regression between the MS and P channels. For data with a simulated P channel, the CD is very close to one and all methods perform almost identically. For true IKONOS datasets, the CD is a few percent lower and spatial methods, once they have been optimized through knowledge of the modulation transfer function (MTF) of the imaging system, always outperform spectral methods. Since spatial methods are unaffected by spectral matching, they are preferable whenever such an issue is critical, e.g., for hyperspectral pansharpening.

This paper investigates the potential accuracy achievable for optical-to-radar image registration by an area-based approach. The analysis is carried out mainly on the basis of the Cramér–Rao Lower Bound (CRLB) on translation estimation accuracy previously proposed by the authors, called CRLBfBm. This bound is now modified to take into account radar image speckle noise properties: spatial correlation and signal dependency. The newly derived theoretical bound is fed with noise and texture parameters estimated for a co-registered pair of optical Landsat 8 and radar SIR-C images. It is found that the difficulty of optical-to-radar image registration stems more from the influence of speckle noise than from the dissimilarity of the considered kinds of images. At finer scales (with higher speckle noise levels), the probability of finding control fragments (CF) suitable for registration is low (1% or less), but the overall number of such fragments is high thanks to the image size. Conversely, at the coarse scale, where the speckle noise level is reduced, the probability of finding CFs suitable for registration can be as high as 40%, but the overall number of such CFs is lower. Thus, the study confirms and supports an area-based multiresolution approach to optical-to-radar registration, where coarse scales are used for a fast registration “lock” and finer scales for reaching higher registration accuracy. The CRLBfBm is found to be inaccurate for the main scale due to the intense influence of speckle noise. For the other scales, the validity of the CRLBfBm bound is confirmed by calculating the statistical efficiency of an area-based registration method based on the normalized correlation coefficient (NCC) measure, which reaches high values of about 25%.
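The NCC-based translation estimation at the core of such area-based methods can be sketched in 1-D (the signals and the integer-shift search are illustrative; real control fragments are 2-D image patches and the estimation is typically refined to sub-pixel accuracy):

```python
import math

def ncc(a, b):
    """Normalized correlation coefficient between two equal-length fragments."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    if da == 0 or db == 0:          # flat fragment: no usable texture
        return 0.0
    return num / (da * db)

def estimate_shift(ref, target, max_shift):
    """Integer translation estimate: the shift that maximizes the NCC
    between the reference fragment and the shifted target window."""
    lo, hi = max_shift, len(ref) - max_shift
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: ncc(ref[lo:hi], target[lo + s:hi + s]))

# A target signal delayed by two samples is recovered as shift = 2.
ref = [0, 1, 4, 1, 0, 0, 0, 0, 0, 0]
target = [0, 0, 0, 1, 4, 1, 0, 0, 0, 0]
assert estimate_shift(ref, target, 3) == 2
```

The paper's point carries over directly: speckle corrupts the NCC peak, so only fragments with enough texture (and, at fine scales, only a small fraction of them) yield a reliable maximum.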

Understanding the dynamics and processes of the ice sheets is crucial for predicting the behavior of climate change. A potential approach to achieving this is to use high resolution (HR) digital elevation models (DEMs) of the ice surface derived from remote sensing radar or laser altimeters. Unfortunately, at present, HR DEMs of large portions of the ice sheets are not available. To address this issue, in this paper we propose a multisensor data fusion technique for the generation of an HR DEM of the ice sheets, which fuses two types of data: radargrams acquired by radar sounder (RS) instruments and ice surface elevation data measured by altimeter (ALT) instruments. The aim of the technique is to generate a DEM of the ice surface at the best possible horizontal resolution by exploiting the complementary characteristics of the RS and ALT data. This is done by defining a novel processing scheme that involves image processing techniques based on data rescaling, geostatistical interpolation and multiresolution analysis (MRA). The method has been applied to a subset of RS and ALT data acquired over a portion of the Byrd Glacier in Antarctica. Experimental results confirm the effectiveness of the proposed method.

In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data for that zone. This variance is compared to a threshold value, and the adequate linear or linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix the hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyperspectral and multispectral images, respectively, are then recombined in the considered zone according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of literature linear/linear-quadratic approaches applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the considered literature methods.

Changes in vegetation cover, building construction, road networks and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increasing use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results, making predictions in a way that accounts for real-world uncertainty. In this paper, we attempt to identify man-made and natural objects in urban areas, including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from the RGB images. For classification, we use the Laplace approximation for GP binary classification on the new combined feature space. Multiclass classification is implemented using a one-vs-all binary classification strategy. The results of applying support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined instead of used separately. We also found that the GP approach handles the uncertainty in the classification result without compromising accuracy compared to the SVM, which is considered the state-of-the-art classification method.

This study presents a preliminary assessment of the potential of the COSMO-SkyMed® (CSK®) satellite constellation to accurately classify different crops. The experiment is focused on the main crops grown in the agricultural region of Marchfeld (Austria), namely carrot, corn, potato, soybean and sugar beet. A Support Vector Machine (SVM) classifier was fed with temporally dense series of backscattering coefficients extracted from a stack of CSK® GTC products. In particular, twenty-one CSK® dual-polarization (11 HH, 10 VH) images were acquired over the site during the growing season (early April to mid October) in Stripmap Himage mode, with a nominal incidence angle at scene center of 40°. A comparison of the classifications obtained at the two polarizations is reported, and the results are analyzed in terms of the achieved accuracies. The SVM method was able to classify all five crop types with an overall accuracy of 81.6% (Kappa 0.77) at VH polarization and 84.5% (Kappa 0.80) at HH polarization. Sugar beet, potato and carrot were accurately identified, with overall accuracy never less than 83% at both polarizations, whereas corn and soybean showed remarkable differences in terms of producer's and user's accuracies, probably due to the particular agricultural practices adopted for these two crop species. These first results show that the CSK® capability of acquiring temporally dense data sets allows several crop types to be accurately identified.

In this paper, we propose a change detection feature for an amplitude SAR image pair, based on both information theoretic (IT) assumptions and a CFAR criterion derived from the probabilistic model of the ratio image. In particular, the proposed method aims to introduce two main improvements with respect to previous IT-based approaches. The first goal is to find a strategy to adaptively quantize the 2-D scatterplot instead of applying clustering. This is carried out by performing a preliminary partition of the image pixels according to a constant false alarm rate criterion based on the probabilistic model of the ratio image. The second goal is to test the proposed method in order to assess whether reliable performance is achieved in the case of severe speckle noise and in the case of a small percentage of change within the scene. Therefore, experiments have been carried out with simulated changes applied to synthetically generated 1-look SAR images produced from an optical remote sensing image. True COSMO-SkyMed SAR images have also been considered in a damage assessment scenario.
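The ratio-image feature and CFAR thresholding can be sketched minimally. The empirical quantile below is a stand-in for the closed-form threshold that the paper derives from the probabilistic model of the ratio image, and the pixel values are illustrative:

```python
import math

def log_ratio(img1, img2, eps=1e-6):
    """Absolute log-ratio change feature for an amplitude SAR image pair
    (flattened to 1-D lists of pixel amplitudes)."""
    return [abs(math.log((a + eps) / (b + eps))) for a, b in zip(img1, img2)]

def cfar_threshold(no_change_values, pfa):
    """Empirical CFAR threshold: the (1 - pfa) quantile of the feature over
    pixels assumed unchanged, so the false alarm rate is held at ~pfa."""
    s = sorted(no_change_values)
    idx = min(len(s) - 1, int((1 - pfa) * len(s)))
    return s[idx]

# Nine stable pixels and one whose amplitude changed between acquisitions.
img1 = [1.0] * 10
img2 = [1.0] * 9 + [5.0]
feat = log_ratio(img1, img2)
thr = cfar_threshold(feat[:9], pfa=0.1)   # calibrated on the stable pixels
assert feat[9] > thr                       # the changed pixel is detected
```

The paper's preliminary CFAR partition plays the same role as the threshold here, but it feeds the resulting pixel groups into the adaptive quantization of the 2-D scatterplot rather than producing the final change map directly.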

Change detection in high resolution Synthetic Aperture Radar (SAR) imagery requires advanced denoising mechanisms to preserve details and minimize speckle. In this work, we propose a change detector based on a Morphological Component Analysis (MCA) of the scattering mechanisms provided by fully polarimetric data sets. With MCA, the power of each scattering mechanism is decomposed into diverse image features. By introducing a priori knowledge of the content of the scenes, and exploiting both the scattering mechanisms and their corresponding shapes, we can significantly improve performance, with fewer false alarms introduced by clutter, focusing errors, and inconsistent acquisition geometries.

When the covariance matrix representation is used for multi-look polarimetric synthetic aperture radar (SAR) data, the complex Wishart distribution applies. Based on this distribution, a likelihood ratio test statistic for the equality of two complex variance-covariance matrices and an associated p-value are given. In a case study, airborne EMISAR C- and L-band SAR images covering agricultural fields and wooded areas near Foulum, Denmark, are used in single- and bi-frequency, bi-temporal change detection with full- and dual-polarimetry data.
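For 2×2 (dual-polarimetry) Hermitian covariance matrices, the likelihood ratio statistic can be sketched as below, following the standard complex-Wishart result ln Q = n(2p ln 2 + ln|X| + ln|Y| − 2 ln|X+Y|); the matrices and look count are illustrative:

```python
import math

def det2(m):
    """Determinant of a 2x2 complex matrix; real-valued for Hermitian input."""
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return d.real

def lnq(x, y, n, p=2):
    """ln Q for testing equality of two complex covariance matrices, each
    accumulated over n looks (complex-Wishart likelihood ratio).
    Values near 0 indicate no change; large -2*lnQ indicates change."""
    s = [[x[i][j] + y[i][j] for j in range(p)] for i in range(p)]
    return n * (2 * p * math.log(2.0)
                + math.log(det2(x)) + math.log(det2(y))
                - 2 * math.log(det2(s)))

# Identical covariance matrices give lnQ = 0 (no evidence of change) ...
x = [[2 + 0j, 0.5 + 0.3j], [0.5 - 0.3j, 1 + 0j]]
assert abs(lnq(x, x, 16)) < 1e-9
# ... while differing matrices give lnQ < 0 (Q < 1), i.e. evidence of change.
y = [[4 + 0j, 0j], [0j, 1 + 0j]]
assert lnq(x, y, 16) < 0
```

In practice −2 ln Q is compared against a chi-square-based approximation of its no-change distribution to obtain the p-value mentioned in the abstract.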

In this paper, we propose a game-theoretic tree matching algorithm for object detection in high resolution (HR) remotely sensed images, where, given a scene image and an object image, the goal is to determine whether or not the object exists in the scene image. To that end, tree-based representations of the images are obtained using a hierarchical scale-space approach. The nodes of a tree denote regions in the image, and edges represent the relative containment between different regions. Once we have the tree representations of the images, the task of object detection is reformulated as a tree matching problem. We propose a game-theoretic technique to search for the node correspondences between a pair of trees. This method involves defining a non-cooperative matching game, where strategies denote the possible pairs of matching regions and payoffs encode the compatibilities between these strategies. Trees are matched by finding the evolutionarily stable states (ESS) of the game. To validate the effectiveness of the proposed algorithm, we perform experiments on both synthetic and HR remotely sensed images. Our results demonstrate the robustness of the tree representation with respect to different spatial variations of the images, as well as the effectiveness of the proposed game-theoretic tree matching algorithm.
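A standard way to seek an ESS of such a matching game is discrete replicator dynamics; a toy sketch with an illustrative payoff matrix follows (the payoffs and strategies are made up, not taken from the paper):

```python
def replicator(payoff, steps=200):
    """Discrete replicator dynamics on a matching game. Each strategy is a
    candidate node correspondence; payoff[i][j] is the compatibility of
    correspondences i and j. Starting from the uniform population, mass
    concentrates on mutually compatible strategies, whose support gives
    the selected matches at an evolutionarily stable state."""
    n = len(payoff)
    x = [1.0 / n] * n
    for _ in range(steps):
        fitness = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
        avg = sum(xi * fi for xi, fi in zip(x, fitness))
        if avg == 0:
            break
        x = [xi * fi / avg for xi, fi in zip(x, fitness)]
    return x

# Correspondences 0 and 1 support each other; correspondence 2 is
# incompatible with both and is driven extinct by the dynamics.
payoff = [[0, 1, 0],
          [1, 0, 0],
          [0, 0, 0]]
x = replicator(payoff)
assert x[2] == 0.0 and abs(x[0] - 0.5) < 1e-9
```

The surviving support {0, 1} is read off as the consistent set of node correspondences between the two trees.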

Robust detection of vehicles in airborne data is a challenging task, since high variation in the object signatures (depending on data resolution) and often small contrast between objects and background lead to high false classification rates and missed detections. Despite these facts, many applications require reliable results that can be obtained in a short time. In this paper, an object-based approach for vehicle detection in airborne laser scans (ALS) and photogrammetrically reconstructed 2.5D data is described. The focus of this paper lies on a robust object segmentation algorithm as well as on the identification of features for a reliable separation between vehicles and background (all non-vehicle objects) in different scenes. The described method is based on three consecutive steps, namely object segmentation, feature extraction and supervised classification. In the first step, the 2.5D data is segmented and possible targets are identified. The segmentation process is based on morphological top-hat filtering, which retains areas that are smaller than a given filter size and higher (brighter) than their surroundings. This approach is chosen due to the low computational effort of the filter, which allows fast computation even for large areas. The next step is feature extraction: based on the initial segmentation, features for every identified object are extracted. In addition to frequently used features like height above ground, object area, or point distribution, more complex features like object planarity, entropy in the intensity image, and lineness measures are used. The last step is the classification of each object. For this purpose, a random forest (RF) classifier using the normalized features extracted in the previous step is chosen. RFs are suitable for high-dimensional and nonlinear problems, and in contrast to other approaches (e.g. the maximum likelihood classifier), they achieve good results even with relatively small training samples.
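The top-hat segmentation step can be sketched in 1-D (the actual method filters 2.5D height rasters; the height profiles below are illustrative):

```python
def erode(signal, size):
    """Grayscale erosion: local minimum over a window of `size` samples."""
    r = size // 2
    return [min(signal[max(0, i - r): i + r + 1]) for i in range(len(signal))]

def dilate(signal, size):
    """Grayscale dilation: local maximum over a window of `size` samples."""
    r = size // 2
    return [max(signal[max(0, i - r): i + r + 1]) for i in range(len(signal))]

def top_hat(signal, size):
    """White top-hat = signal - opening(signal). It retains structures
    narrower than `size` that stand higher than their surroundings
    (vehicle candidates in a height profile) and suppresses wide ones."""
    opening = dilate(erode(signal, size), size)
    return [s - o for s, o in zip(signal, opening)]

# A narrow bump (car-like) survives the top-hat ...
car = top_hat([0, 0, 0, 1.5, 1.5, 0, 0, 0], 5)
assert car[3] == 1.5 and car[0] == 0
# ... while a structure wider than the filter (building-like) is removed.
assert all(v == 0 for v in top_hat([0, 2, 2, 2, 2, 2, 2, 0], 3))
```

The same logic in 2-D, with the window sized just above a vehicle footprint, is what keeps the candidate generation cheap even for large areas.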

A visualization toolbox with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of 3D city models. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.

Rooftop extraction from satellite/aerial imagery is an important geospatial problem with many practical applications. However, it remains challenging due to the diverse characteristics and appearances of buildings, as well as the quality of the satellite/aerial images. Many existing rooftop extraction methods use rooftop corners as a basic component. Nonetheless, existing rooftop corner detectors either suffer from high missed-detection rates or introduce high false-alarm rates. Based on the observation that rooftop corners are typically L-shaped, we propose an L-shaped corner detector for automatic rooftop extraction from high resolution satellite/aerial imagery. The proposed detector considers the information in a spatial circle around each pixel to construct a feature map that captures the probability of an L-shaped corner at every pixel. Our experimental results on a rooftop database of over 200 buildings demonstrate its effectiveness for detecting rooftop corners. Furthermore, the proposed detector is complementary to many existing rooftop extraction approaches that require reliable rooftop corners as their inputs. For instance, it can be used in quadrilateral footprint extraction methods or to drive level-set-based segmentation techniques.

Detection of region boundaries is a very challenging task, especially in the presence of noise or speckle, as in synthetic aperture radar images. In this work, we propose a user-interaction-based boundary detection technique that makes use of B-splines and well-known, powerful tools of information theory such as the Kullback-Leibler divergence (KLD) and the Bhattacharyya distance. The proposed architecture consists of four main steps: (1) the user selects points inside and outside of a region; (2) profiles that link these inside and outside points are extracted; (3) boundary points that lie on each profile are located; (4) finally, B-splines, which provide both elasticity and smoothness, are used to connect the boundary points together to obtain an accurate estimate of the actual boundary. Existing work related to this approach is extended along several axes. First, the use of multiple points both inside and outside of a region makes it possible to obtain several times more boundary points. A tracking stage is then proposed to put the boundary points in the right order and, at the same time, to eliminate points erroneously detected as boundary points. Experiments were conducted using simulated and real SAR images.
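Step (3) can be sketched as a divergence-maximizing split along a profile. The Gaussian model below is an assumption for illustration (SAR amplitude statistics would typically use e.g. Gamma models), and the profile values are made up:

```python
import math

def gauss_kld(m1, v1, m2, v2):
    """KLD between two 1-D Gaussians N(m1, v1) and N(m2, v2)."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def boundary_point(profile, margin=2):
    """Locate the boundary along an inside-to-outside profile as the split
    index maximizing the symmetric KLD between Gaussian fits of the two
    sides (a Gaussian stand-in for the paper's statistical models)."""
    def stats(seg):
        n = len(seg)
        m = sum(seg) / n
        v = sum((x - m) ** 2 for x in seg) / n + 1e-9  # guard against v = 0
        return m, v
    best, best_d = margin, -1.0
    for k in range(margin, len(profile) - margin):
        m1, v1 = stats(profile[:k])
        m2, v2 = stats(profile[k:])
        d = gauss_kld(m1, v1, m2, v2) + gauss_kld(m2, v2, m1, v1)
        if d > best_d:
            best, best_d = k, d
    return best

# Bright region samples followed by dark background: the split is found
# exactly at the transition.
assert boundary_point([5, 6, 5, 6, 5, 1, 0, 1, 0, 1]) == 5
```

Repeating this over many inside-outside profiles yields the cloud of boundary points that the tracking stage orders and the B-spline then interpolates.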

Performance assessment is carried out for a simple target delineation process based on thresholding and shape fitting. The method uses the information contained in Receiver Operating Characteristic curves together with basic observations regarding target sizes and shapes. Performance is gauged by considering the delineations that might result from having particular arrangements of detected pixels within the vicinity of a hypothesized target. In particular, the method considers the qualities of delineations generated when having various combinations of detected pixels at a number of locations around the inner and outer boundaries of the underlying object. Three distinct types of arrangement for pixels on the inner target boundary are considered. Each has the potential to lead to a good quality delineation in a thresholding and shape fitting scheme. The deleterious effect of false alarms within the surrounding local region is also taken into account. The resulting ensembles of detected pixels are treated using familiar rules for combination to form probabilities for the delineations as a whole. Example results are produced for simple target prototypes in cluttered SAR imagery.

In burst mode SAR imaging, echo intensity depends on the target's azimuth position in the antenna pattern. As a result, an amplitude modulation known as scalloping may appear, particularly in ScanSAR images of ocean areas. A denoising method, recently developed for multibeam bathymetry, can be used to reduce residual scalloping in ScanSAR images. The algorithm is analogous to a band-stop filter in the frequency domain. Here, the transform is the composition of an edge detection operator and a discrete Radon transform (DRT). The edge operator accentuates fine-scale intensity changes; the DRT focuses linear features, as each DRT component is the sum of pixel intensities along a linear graph. A descalloping filter is implemented in the DRT domain by suppressing the range direction. The restored image is obtained by applying the inverse composite transform. First, a rapidly converging iterative pseudo-inverse DRT is computed. The edge operator is a spatial filter based on a discrete approximation of the Laplace operator, but modified to make the operator invertible. The method was tested on ocean scene ScanSAR images from the Envisat Advanced Synthetic Aperture Radar. The scalloping effect was significantly reduced, with no apparent distortion or smoothing of physical features.

The aim of this work is to evaluate the compression performance of airborne SAR raw data for interferometric applications, collected by BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. Real-time capability is desirable in this framework, so that the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original data volume can therefore be significantly reduced. To obtain a sparse representation of the SAR signal, a curvelet transform was applied. Curvelets constitute a directional frame that allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, an X-band scene of 8192 x 2048 samples in range and azimuth, with 2 m resolution, was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband, and an iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometric applications require higher reconstruction accuracy, phase parameters such as the phase standard deviation (PSD) and the mean phase error (MPE) were also computed. Moreover, a single-look complex image was generated to evaluate the compression effects in the image domain. All results were analyzed in terms of sparsity to assess whether the compression efficiency and recovery quality are appropriate for InSAR applications, thereby establishing the feasibility of compressive sensing for this purpose.
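The IST recovery step described above can be sketched in a few lines of NumPy. This is a toy illustration with a random Gaussian measurement matrix and a signal sparse in the canonical basis, standing in for the curvelet-subband setup of the paper; all sizes and parameters are hypothetical:

```python
import numpy as np

def soft(x, t):
    """Soft-threshold (shrinkage) operator used by IST."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist_recover(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax-y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(1)
n, m, k = 100, 50, 5                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                                # compressed measurements

x_hat = ist_recover(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the paper's setting the unknown would be the curvelet coefficient vector of a subband and the measurement matrix would act on that representation; the thresholding iteration is the same.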

In recent years, new satellite SAR data with very high spatial resolution have become available for scientific studies. In the urban scenario these data are of high interest, because they allow the detection of changes at fine resolution, such as those affecting buildings; they therefore represent precious information for rescue activities. Here, we study and design a geometrical model representing possible kinds of damage to buildings. Among the different kinds of damage, we focus on those affecting the façades visible from the SAR sensor. According to the model, and by using a ray-tracing method (i.e., approximating electromagnetic propagation with optical rays), we develop an analytical model for the backscattering of partially damaged buildings and investigate their behavior in multi-temporal VHR SAR images. Both surface and multiple-bounce contributions are considered and analyzed by varying the geometrical parameters. The resulting single-date and multi-temporal patterns are validated on COSMO-SkyMed data acquired over L'Aquila before and after the seismic event that hit the city in April 2009.

Multiple-input multiple-output (MIMO) radar has found more and more applications over the last decade. In near-field imaging using a linear MIMO array, the azimuth sampling is non-uniform, resulting in a spatially variant point spread function (PSF) over a large imaging zone. In this work, an azimuth sidelobe suppression technique is proposed in which apodization, or complex amplitude weighting, is applied to the multi-channel data prior to image reconstruction. For the best sidelobe suppression, the optimal channel weights w_opt are obtained through mathematical optimization. The overall process includes three main steps. First, the expression of the azimuth PSF is obtained from the azimuth focusing process. Second, based on the fact that for an ideal PSF the maximum of the mainlobe should be one and the sidelobes should be zero, the problem of finding w_opt is formulated as an optimization problem. Last, by setting a proper mainlobe width and sidelobe level, the optimal weights can be found with a convex optimization algorithm. Simulations of a MIMO radar system with channel amplitude-phase errors and antenna element position deviations are presented, and the performance of the proposed technique is studied.
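The weight design step above admits a simple closed-form analogue: minimize the sidelobe energy subject to unit mainlobe gain. The sketch below does this for a hypothetical uniform linear array rather than the paper's non-uniform MIMO geometry, and uses constrained least squares in place of a general convex solver:

```python
import numpy as np

N = 16                 # array elements (hypothetical uniform linear array)
d = 0.5                # element spacing in wavelengths
n = np.arange(N)

def steer(u):
    """Array response toward direction u = sin(theta)."""
    return np.exp(2j * np.pi * d * n * u)

# Sidelobe region: everything outside the chosen mainlobe |u| <= 0.2
u_side = np.r_[np.linspace(-1, -0.2, 200), np.linspace(0.2, 1, 200)]
S = np.stack([steer(u) for u in u_side])      # sidelobe steering matrix

# Minimize ||S w||^2 subject to unit gain at broadside:
# w = R^{-1} a0 / (a0^H R^{-1} a0), with R = S^H S plus small loading
a0 = steer(0.0)
R = S.conj().T @ S + 1e-6 * np.eye(N)
w = np.linalg.solve(R, a0)
w = w / (a0.conj() @ w)

sidelobe_energy_opt = np.linalg.norm(S @ w)
sidelobe_energy_uni = np.linalg.norm(S @ (np.ones(N) / N))
```

Adding explicit peak-sidelobe-level constraints, as in the abstract, turns this into a general convex program, but the energy-minimizing solution already illustrates the mechanism.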

COSMO-SkyMed (CSK) satellites provide images with resolution in the meter regime using the sliding spotlight mode (SL), an imaging mode that obtains better azimuth resolution than stripmap mode at the expense of the azimuth imaged area. Spotlight SAR data processing is an established topic; efficient and accurate frequency-domain solutions have been proposed over the last years. However, the assumptions of these algorithms start to break down when applied to high-resolution spotlight SAR data acquired in spaceborne low Earth orbit (LEO) configurations. The assumption of a hyperbolic range history is no longer accurate at sub-metric spatial resolutions because of the satellite's curved orbit. Since the velocity of a spaceborne platform is quite uniform, a simple focusing scheme has been designed to handle the non-straight trajectory, using both approximated and accurate ω-k focusing kernels. Moreover, when approaching decimeter resolution (at X-band) several other effects appear; in particular, the motion of the satellite during the transmission and reception of the chirp signal deteriorates the impulse response function (IRF) if not properly considered (the so-called stop-and-go approximation). This paper shows that CSK SL SAR data, with a resolution close to 1 meter, are also not immune to disturbance effects when the stop-and-go approximation is assumed. The ω-k algorithm with curved-orbit handling is used to focus CSK spotlight data, and the stop-and-go correction is included in the data processing chain. Experimental results with CSK spotlight data show the quality enhancement over standard focused SAR products.

The use of images from Earth observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, these images are captured by onboard imaging devices and must be transmitted to Earth through a communication system. Even though a high-resolution image can provide better quality of service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. It is therefore very important to design efficient communication systems. From communication theory it is well known that a source encoder is crucial to an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data that must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, has established a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel the transmitted signal is corrupted by noise, interference signals, etc., so the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002 the Luby Transform code (LT code) was introduced and shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, many efforts have been made over the last decade to develop LT codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the image compression algorithm recommended by the CCSDS. In fact, to design an LT code with unequal error protection, the bit stream produced by the CCSDS-recommended algorithm must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT code produces M different failure probabilities, p1, ..., pM, one for each set of bits, leading to a total failure probability p that is an average of p1, ..., pM. In general, the parameters of an LT code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS-recommended algorithm, this work establishes a closed form for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion to select the parameters p1, ..., pM to optimize the performance of image transmission.

In this paper a stable and unsupervised Linde-Buzo-Gray (LBG) algorithm, named LBGO, is presented. The originality of the proposed algorithm relies on: i) the use of an adaptive incremental technique to initialize the class centres, removing the dependence on intermediate initializations; this makes the algorithm stable and deterministic, so the classification results do not vary from one run to another; and ii) unsupervised evaluation criteria applied to the intermediate classification results to estimate the optimal number of classes, which makes the algorithm unsupervised.

The efficiency of this optimized version of LBG is shown through experimental results on synthetic and real aerial hyperspectral data. More precisely, we tested the proposed classification approach in three respects: first its stability, second its correct classification rate, and third the correct estimation of the number of classes.

In this paper, a new lossy compression method for hyperspectral images (HSI) is introduced. An HSI is treated as a 3D dataset with two spatial dimensions and one spectral dimension. In the proposed method, a 3D multidirectional anisotropic shearlet transform is first applied to the HSI, because, unlike traditional wavelets, shearlets are theoretically optimal in representing images with edges and other geometrical features. Second, soft thresholding is applied to the shearlet coefficients, and finally the modified coefficients are encoded using the Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK) algorithm. Our simulation results show that the proposed method provides a higher signal-to-noise ratio (SNR) at any given compression ratio (CR) than well-known approaches such as 3D-SPECK (using a 3D wavelet) and the combined PCA and JPEG2000 algorithm, and that its superiority becomes more pronounced as the CR grows. In addition, the effect of the proposed method on spectral unmixing analysis is also evaluated.
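The transform-then-threshold step can be sketched with a 3D DCT standing in for the shearlet transform (shearlet implementations are not generally available in the standard scientific Python stack; the cube and threshold below are hypothetical toys):

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft(c, t):
    """Soft-threshold the transform coefficients before encoding."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
# Toy "hyperspectral" cube: smooth ramps plus noise (rows x cols x bands)
x, y, b = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32),
                      np.linspace(0, 1, 16), indexing="ij")
cube = x + 0.5 * y + 0.25 * b + 0.01 * rng.standard_normal(x.shape)

coeffs = dctn(cube, norm="ortho")     # stand-in for the 3D shearlet transform
kept = soft(coeffs, 0.05)             # soft thresholding
recon = idctn(kept, norm="ortho")

sparsity = np.mean(kept == 0)         # fraction of zeroed coefficients
snr = 10 * np.log10(np.sum(cube**2) / np.sum((cube - recon)**2))
```

In the full method the thresholded coefficients would then be entropy-coded by 3D-SPECK; the point here is that thresholding zeroes the vast majority of coefficients while keeping the reconstruction SNR high.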

An image matching method based on closed edges combined with vertex angles is proposed in this paper. Starting from edge detection results produced by the Edison operator, invariant moments of closed edges and the angles between the two branches at edge vertices are used as matching entities to determine conjugate feature candidates. The transformation between images is approximated by a similarity transformation model, and a set of transformation parameters can be determined from each pair of conjugate features obtained by pairing the conjugate feature candidates. Furthermore, since the transformation parameters calculated from true conjugate features differ only slightly, a k-d tree and k-means spatial clustering are used in succession to eliminate pairs that contain mismatched features. Conjugate features can thus be obtained from the similarity transformation parameters. Experimental results show that the method performs stably and produces satisfactory matching results.

IRST (Infrared Search and Track) has been applied in many military and civil fields, such as precision guidance, aerospace and early warning. As a key technique, small target detection in infrared images plays an important role. However, infrared targets have characteristics of their own, such as variation in target size, that make detection quite difficult. In practical applications the target size may vary for many reasons, such as the optical angle of the sensor, the imaging distance and the environment. Conventional detection methods struggle with such size-varying targets, especially when the background contains strong clutter. This paper presents a novel method for detecting size-varying infrared targets against a cluttered background. The target region is salient in infrared images: it is discontinuous with its neighboring regions and concentrated in a relatively small area, so it can be considered a homogeneous compact region, while the background is consistent with its neighboring regions. Motivated by this saliency feature and by gradient features, we introduce the minimum target intensity (MTI) to measure dissimilarity across scales, and use the mean gradient to restrict the target scale to a reasonable range; together these form a multiscale MTI filter, on which the proposed detection method is based. First, the salient region in which a potential target may exist is obtained by morphological low-pass filtering. Second, candidate target regions are extracted by the multiscale MTI filter, which effectively yields the optimal target size. Finally, the signal-to-clutter ratio (SCR), computed at the optimal scale of each candidate target, is used to segment the targets. The experimental results indicate that the proposed method achieves both higher detection precision and robustness against complex backgrounds.
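The morphological filtering idea behind the first step can be illustrated with a standard white top-hat filter, which removes structures larger than the structuring element and leaves small bright targets regardless of slowly varying clutter (this is a generic sketch, not the paper's multiscale MTI filter; the scene and target are synthetic):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Cluttered background: smooth large-scale clutter plus sensor noise
bg = ndimage.gaussian_filter(rng.standard_normal((128, 128)), sigma=8) * 5
img = bg + 0.2 * rng.standard_normal((128, 128))
img[60:63, 80:83] += 5.0              # plant a small 3x3 bright target

# White top-hat: image minus its grey opening; suppresses structures
# larger than the 7x7 structuring element, keeping small bright blobs
filtered = ndimage.white_tophat(img, size=7)
peak = np.unravel_index(np.argmax(filtered), filtered.shape)
```

A multiscale scheme like the paper's would repeat this with several element sizes and pick the scale that maximizes a dissimilarity score per candidate.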

In recent years, earthquakes and heavy rain have triggered more and more landslides, causing serious economic losses. Timely detection of the disaster area and assessment of the hazard are necessary first steps for disaster mitigation and relief. As high-resolution satellite and aerial images have been widely used in environmental monitoring and disaster management, damage assessment based on satellite and aerial image processing has become a research hot spot. The rapid assessment of building damage caused by landslides using high-resolution satellite or aerial images is the focus of this article. After analyzing the morphological characteristics of landslide disasters, we propose a set of criteria for rating building damage and design a semi-automatic evaluation system, which is applied to satellite and aerial image processing. The experiments demonstrate the effectiveness of our system.

To address the insufficient number of classes and low classification accuracy obtained with traditional discrete LiDAR, the waveform features of full-waveform LiDAR were analyzed and corrected for use in land-cover classification. First, the waveforms were processed, including preprocessing, waveform decomposition and feature extraction; the extracted features were distance, amplitude, waveform width and backscattering cross-section. To reduce the feature differences within a land-cover type and further improve the effectiveness of the features for classification, the extracted features were comprehensively corrected. Features of waveforms acquired over Zhangye were extracted and corrected; the variance of the corrected features was reduced by about 20% compared with the original features. The classification ability of the corrected features was then analyzed using measured waveform data with different characteristics. To further verify whether the corrected features improve classification accuracy, typical land covers were classified using both the original and the corrected features. Since the features follow independent Gaussian distributions, a Gaussian mixture density model (GMDM) was adopted as the classification model to classify the targets into road, trees, buildings and farmland. The classification results for these four land-cover types were assessed against ground truth derived from CCD image data of the target region, showing that classification accuracy improved by about 8% when the corrected features were used.
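Under the independent-Gaussian feature assumption stated above, classification reduces to maximum likelihood under per-class Gaussian densities. The abstract does not specify the GMDM in detail, so the sketch below uses the simplest class-conditional variant with hypothetical two-dimensional features (e.g., amplitude and width):

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Per-class mean/variance under an independent-Gaussian feature model."""
    classes = np.unique(y)
    return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9) for c in classes}

def classify(X, params):
    """Assign each sample to the class maximizing the Gaussian log-likelihood."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, var = params[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
# Hypothetical (amplitude, width) features for two land-cover classes
X0 = rng.normal([1.0, 2.0], 0.2, size=(200, 2))
X1 = rng.normal([2.0, 1.0], 0.2, size=(200, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200, int), np.ones(200, int)]

params = fit_gaussian_classes(X, y)
acc = (classify(X, params) == y).mean()
```

The feature correction described in the abstract would act before this step, shrinking the within-class variances and thereby sharpening the likelihoods.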

The Multi-Order Solar Extreme Ultraviolet Spectrograph (MOSES) is a rocket-borne slitless imaging spectrometer, designed to observe He II (30.4 nm) emission in the solar transition region. This instrument forms three simultaneous images at spectral orders m=−1, 0, +1 over an extended field of view (FOV). A multi-layer coating on the grating and thin film filters in front of the detectors defines the instrument passband. Each image contains a unique combination of spectral and spatial information. Our overarching goal in analyzing these data is to estimate a spectral line profile at every point in the FOV.

Each spectral order has different image geometry, and therefore different aberrations. Since the point spread function (PSF) differs between any two images, systematic errors are introduced when we use all three images together to invert for spectral line profiles. To combat this source of systematic error, we have developed a PSF equalization scheme.

Determination of the image PSFs is impractical for several reasons, including changes that may occur due to vibration during both launch and recovery operations. We have therefore developed a strategy using only the solar images obtained during flight to generate digital filters that modify each image so that they have the same effective PSF. Generation of the PSF equalization filters does not require that the PSFs themselves be known. Our approach begins with the assumption that there are only two things that cause the power spectra of our images to differ:

(1) aberrations; and

(2) the FOV average spectral line profile, which is known in principle from an abundance of historical data.

To validate our technique, we generate three synthetic images with three different PSFs. We compare PSF equalizations performed without knowledge of the PSF to corrections performed with that knowledge. Finally, we apply PSF equalization to solar images obtained in the 2006 MOSES flight and demonstrate the removal of artifacts.

Automatic image registration is a vital yet challenging task, particularly for images with non-rigid deformations, which are complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanning images affected by platform flutter. Traditional non-rigid registration methods rely on correctly matched corresponding landmarks, which usually require artificial markers, and accurately locating these points to obtain reliable corresponding point sets is itself a challenging task. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. First, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy, extracting points that are uniformly distributed along image edges. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed with the k-nearest-neighbor algorithm. Finally, based on the resulting accurate corresponding point sets, the two images are registered using a TPS (thin plate spline) model. Our method is demonstrated in three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively; the last is conducted on non-rigidly deformed remote sensing images. The three experimental results demonstrate the accuracy, robustness and efficiency of the proposed algorithm compared with traditional methods.
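The final TPS warping step can be sketched with SciPy's thin-plate-spline radial basis interpolator, fitted on a set of hypothetical matched landmarks (the synthetic deformation below is illustrative only):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Hypothetical matched landmarks: source points and their non-rigidly
# deformed positions in the target image
src = rng.uniform(0, 100, size=(30, 2))
dst = src + 5 * np.sin(src / 20)        # smooth synthetic deformation

# Thin-plate-spline mapping fitted on the correspondences
# (smoothing=0 by default, so the control points are interpolated exactly)
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

resid = np.abs(tps(src) - dst).max()    # exact at the control points
```

In a full registration pipeline this mapping would be evaluated on a dense pixel grid to resample one image into the geometry of the other; the quality of the result rests entirely on the correctness of the matched point sets produced by the earlier steps.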

The polarimetric active radar calibrator (PARC) is one of the most important calibrators with a high radar cross-section (RCS) for polarimetric measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, consisting of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) that achieve lower cross-polarization in transmission and reception. With two antennas rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through rotation combinations of the receiving and transmitting polarizations, which are useful for polarimetric calibration in different applications. In addition, a calibration processing technique based on Fourier analysis is proposed. Numerical simulation results demonstrate the superior performance of the proposed DPARC and processing technique.

The paper addresses landmark matching for images from Geosynchronous Earth Orbit (GEO) satellites. In general, satellite imagery is matched against a predefined base image. When the satellite imagery is rotated, the accuracy of many landmark matching algorithms deteriorates. To overcome this problem, the generalised Hough transform (GHT) is employed for landmark matching. First, an improved GHT algorithm is proposed to enhance rotational invariance. Second, a global coastline is processed to generate both the test image, serving as the satellite image, and the base image. The test image is then matched against the base image using the proposed algorithm. The matching results show that the proposed algorithm is rotation-invariant and works well for landmark matching.

Anomaly detection (AD) is increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace exploits only spectral information; the spatial correlation of the background clutter is neglected, which makes the detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the added spatial information, the proposed 3D-LOSP algorithm greatly improves the robustness of the anomaly detection result. Notably, the proposed algorithm is an extension of LOSP, and the same idea can inspire many other spectrum-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
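The basic orthogonal-subspace-projection score underlying LOSP (projection of each pixel onto the complement of an estimated background subspace) can be sketched along a single direction; 3D-LOSP would compute such scores along all three directions of the local cube and combine them. Everything below is synthetic and illustrative:

```python
import numpy as np

def osp_scores(pixels, n_bg=3):
    """Orthogonal subspace projection: score = ||(I - U U^T) x||, with U the
    leading background singular vectors estimated from the data itself."""
    X = pixels - pixels.mean(0)
    U = np.linalg.svd(X.T, full_matrices=False)[0][:, :n_bg]
    P = np.eye(X.shape[1]) - U @ U.T       # projector onto the complement
    return np.linalg.norm(X @ P.T, axis=1)

rng = np.random.default_rng(0)
bands = 20
# Background pixels drawn from a 3-dimensional subspace plus noise
basis = rng.standard_normal((3, bands))
bg = rng.standard_normal((500, 3)) @ basis + 0.05 * rng.standard_normal((500, bands))
anomaly = rng.standard_normal(bands) * 2.0   # spectrally distinct pixel
pixels = np.vstack([bg, anomaly])

scores = osp_scores(pixels)                  # anomaly scores, one per pixel
```

Background pixels lie almost entirely inside the estimated subspace and score near zero, while the anomalous pixel retains a large residual.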

The Multi-Order Solar EUV Spectrograph (MOSES) is a sounding rocket instrument that utilizes a concave spherical diffraction grating to form simultaneous solar images in the diffraction orders m = 0, +1, and −1. The large 2D field of view allows a single exposure to capture spatial and spectral information for large, complex solar features in their entirety.

Most of the solar emission within the instrument passband comes from a single bright emission line. The m = 0 image is simply an intensity as a function of position, integrated over the passband of the instrument. Dispersion in the images at m = ±1 leads to a field-dependent displacement that is proportional to Doppler shift. Our goal is to estimate the Doppler shift as a function of position for every exposure. However, the interpretation of the data is not straightforward. Imaging an extended object such as the Sun without an entrance slit results in the overlapping of spectral and spatial information in the two dispersed images.

We demonstrate the use of local correlation tracking as a means to quantify the differences between the m = 0 image and either one of the dispersed images. The result is a vector displacement field that may be interpreted as a measurement of the Doppler shift. Since two dispersed images are available, we can generate two independent Doppler maps from the same exposure. We compare these to produce an error estimate.
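A minimal displacement-estimation sketch in the same spirit uses phase correlation to recover the shift between an image and a displaced copy; the actual local correlation tracking in the paper estimates a spatially varying vector field rather than a single global shift:

```python
import numpy as np

def phase_corr_shift(a, b):
    """Estimate the integer (dy, dx) translating image b onto image a
    via the phase correlation peak."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices back to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))  # plant a known displacement

dy, dx = phase_corr_shift(shifted, img)
```

Applied window by window between the m = 0 image and a dispersed image, such displacement estimates yield the vector field that is interpreted as Doppler shift.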

Hyperspectral remote sensing technology provides detailed spectral information for every pixel in an image. Because of the low spatial resolution of hyperspectral image sensors and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Endmember extraction is therefore used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require acquisition of the complete hyperspectral data cube before unmixing. It has been shown that colored coded apertures improve the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme that estimates the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired using colored coded apertures in a compressive spectral imaging system; a numerical procedure then estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2D wavelet basis, used to exploit spatial information, and a known endmember spectral library. The colored coded apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains results comparable to unmixing the full data cube while using fewer measurements.
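The supervised unmixing step itself, with a known endmember library and a fully sampled pixel, reduces to nonnegative least squares. The sketch below omits the coded-aperture acquisition and sparse 3D-dictionary machinery of the paper and uses a hypothetical random library:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# Hypothetical endmember library: columns are pure spectra (bands x endmembers)
bands, n_end = 50, 4
E = np.abs(rng.standard_normal((bands, n_end))) + 0.1

# Mixed pixel: nonnegative abundances summing to one, plus slight noise
a_true = np.array([0.5, 0.3, 0.2, 0.0])
pixel = E @ a_true + 0.001 * rng.standard_normal(bands)

# Supervised unmixing: nonnegative least squares against the library
a_hat, _ = nnls(E, pixel)
```

In the compressive scheme the same abundances would instead be recovered from far fewer coded measurements by solving the constrained sparse optimization problem described above.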

Small target detection is an important part of infrared technology. Properties of the human visual system can improve the signal-to-noise ratio, the detection rate and other metrics. In this paper, a small target detection algorithm based on the human visual system and exploiting distance information is proposed. First, each surrounding region is summarized by a weighted sum of its pixels, with weights determined by the distance between each surrounding pixel and the center pixel. Then, the contrast between the center pixel block and the surrounding regions is calculated. Finally, the contrast value is applied as a weight to the center pixel to obtain a saliency map. Experiments show that the proposed method performs well in improving the signal-to-noise ratio and detection rate.
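The three steps above can be sketched directly in NumPy; the inverse-distance weighting and contrast formula below are plausible stand-ins, since the abstract does not give the exact forms:

```python
import numpy as np

def distance_weighted_contrast(img, radius=4):
    """Saliency map: each pixel weighted by its contrast against a
    distance-weighted mean of its surroundings."""
    h, w = img.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist = np.hypot(yy, xx)
    wgt = np.where(dist > 0, 1.0 / dist, 0.0)   # nearer pixels weigh more
    wgt /= wgt.sum()
    sal = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            surround = (patch * wgt).sum()          # step 1: weighted surround
            contrast = img[y, x] - surround         # step 2: center contrast
            sal[y, x] = max(contrast, 0.0) * img[y, x]  # step 3: weight center
    return sal

rng = np.random.default_rng(0)
img = 0.3 * rng.random((40, 40))
img[20, 25] = 1.0                     # small bright target
sal = distance_weighted_contrast(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

The saliency map suppresses the background while amplifying the isolated bright pixel, so simple thresholding of `sal` would complete the detection.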

This paper proposes an efficient methodology for combining multiple remotely sensed images in order to increase the classification accuracy in complex forest species mapping tasks. The proposed scheme follows a decision fusion approach, whereby each image is first classified separately by means of a pixel-wise fuzzy-output support vector machine (FO-SVM) classifier. Subsequently, the multiple results are fused with the so-called multiple spectral–spatial classifier using the minimum spanning forest (MSSC-MSF) approach, an effective post-regularization procedure for enhancing the result of a single pixel-based classification. For this purpose, the original MSSC-MSF has been extended to handle multiple classifications: the fuzzy outputs of the pixel-based classifiers are stacked and used to grow the MSF, and the markers are also determined considering both classifications. The proposed methodology has been tested on a challenging forest species mapping task in northern Greece, considering a multispectral (GeoEye) and a hyperspectral (CASI) image. The pixel-wise classifications resulted in overall accuracies (OA) of 68.71% for the GeoEye and 77.95% for the CASI image, both characterized by high levels of speckle noise. Applying the proposed multi-source MSSC-MSF fusion raises the OA to 90.86%, which is attributed both to the ability of MSSC-MSF to tackle the salt-and-pepper effect and to the fact that the fusion approach exploits the relative advantages of both information sources.

This paper presents a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) using stars detected in the sensor's background field. A space-based IR system locates targets through the sensor line of sight (LOS); LOS determination and calibration is a key precondition for accurate location and tracking of targets, and LOS calibration of a scanning sensor is one of the main difficulties. Subsequent changes of the sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of a scanning sensor, a theoretical model for estimating the bias angles from star observations is proposed: a process model for the bias angles and an observation model for the stars are established, an extended Kalman filter (EKF) is used to estimate the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method determines and calibrates the sensor LOS with high precision and smooth performance, meeting the timeliness and precision requirements of the target tracking process in a space-based IR tracking system.
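The estimation loop can be sketched with a simplified linear model in which the star residual directly observes a constant two-axis bias plus measurement noise; in this linear case the EKF reduces to a standard Kalman filter (all magnitudes below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
bias_true = np.array([0.8e-3, -0.5e-3])   # rad, hypothetical constant bias angles

# State: the two bias angles, modeled as near-constant (tiny process noise)
x = np.zeros(2)                # state estimate
P = np.eye(2) * 1e-4           # state covariance
Q = np.eye(2) * 1e-12          # process noise covariance
R = np.eye(2) * (1e-4) ** 2    # star measurement noise (rad^2)
H = np.eye(2)                  # residual observes the bias directly

for _ in range(200):           # one star residual measurement per step
    z = bias_true + 1e-4 * rng.standard_normal(2)
    P = P + Q                                           # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x = x + K @ (z - H @ x)                             # update state
    P = (np.eye(2) - K @ H) @ P                         # update covariance

err = np.abs(x - bias_true).max()
```

In the paper's setting the observation model is nonlinear in the scanning-sensor geometry, so the gain and innovation are computed from Jacobians of that model rather than a fixed H.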

The aim of our research was to evaluate the applicability of textural measures for sub-pixel impervious surface estimation from Landsat TM images using machine learning algorithms. We focused in particular on determining the usefulness of five groups of textural features at the pixel and sub-pixel levels. A two-stage approach to impervious surface coverage estimation was also tested. We compared the accuracy of impervious surface estimation using spectral bands only with the results of imperviousness index estimation based on extended classification feature sets (spectral band values supplemented with measures derived from the various groups of textural characteristics).

Impervious surface coverage was estimated using decision and regression trees based on the C5.0 and Cubist algorithms. At the classification stage, the research area was divided into two categories: i) completely permeable areas (imperviousness index less than 1%) and ii) fully or partially impervious areas. At the sub-pixel stage, the percentage impervious surface coverage within a single pixel was estimated. Based on cross-validation results, we selected the approaches guaranteeing the lowest mean errors on the training set. The accuracy of the imperviousness index estimation was checked on a validation data set. The average error of hard classification was 6.5% using spectral features only and about 4.4% using spectral features combined with absolute gradient-based characteristics. The root mean square error (RMSE) of the percentage impervious surface coverage within a single pixel was 9.46% for the best of the tested classification feature sets. The two-stage procedure was applied both to the baseline approach using spectral bands as the classification feature set and to the approach giving the best accuracy at the classification and regression stages.

The results have shown that including textural measures among the classification features can improve the estimation of imperviousness based on Landsat imagery. However, in our study this seems to be mainly due to the higher accuracy of the hard classification used for masking out the completely permeable pixels.
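Textural measures of the kind used above are classically derived from the grey-level co-occurrence matrix (GLCM); the sketch below computes one such feature, contrast, for a single pixel offset. It is purely illustrative and does not reproduce the study's five feature groups.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (minimal sketch)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

flat = np.zeros((8, 8), dtype=int)            # homogeneous patch
noisy = np.indices((8, 8)).sum(0) % 2 * 7     # checkerboard of levels 0 and 7

print(contrast(glcm(flat)), contrast(glcm(noisy)))   # → 0.0 49.0
```

A homogeneous patch has zero contrast, while the checkerboard pairs levels 0 and 7 at every offset, giving the maximal value (7 − 0)² = 49.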

In this paper, the noncentral chi-squared distribution is applied to Constant False Alarm Rate (CFAR) detection in projected hyperspectral images to distinguish anomaly points from the background. The processing in hyperspectral anomaly detectors can usually be considered a linear projection: the operators are linear transforms whose outputs are quadratic forms of the spectral vectors. In general, the chi-squared distribution would be the natural choice to describe the statistics of such a projected image. However, because of the strong correlation among the bands, the standard central chi-squared distribution often cannot fit the stochastic characteristics of the projected images precisely. In this paper, we use a noncentral chi-squared distribution to approximate the projected image of subspace-based anomaly detectors. First, the statistical model of the projected multivariate data is analysed and a noncentral chi-squared distribution is deduced. Then, the approach for calculating its parameters is introduced. Finally, aerial hyperspectral images are used to verify the effectiveness of the proposed method in tightly modeling the statistical distribution of the projected image.
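The fit-and-threshold step can be sketched with SciPy. The method-of-moments estimator below (mean = k + λ, variance = 2(k + 2λ) for a noncentral chi-squared with k degrees of freedom and noncentrality λ) is one plausible way to obtain the parameters; the paper's actual parameter calculation may differ, and the background samples here are synthetic rather than detector outputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for a quadratic-form detector output on correlated
# Gaussian bands, drawn directly from a noncentral chi-squared law.
df, nc = 4.0, 2.5
background = stats.ncx2.rvs(df, nc, size=100_000, random_state=rng)

# Method-of-moments fit: mean = df + nc, variance = 2 * (df + 2 * nc)
m, v = background.mean(), background.var()
nc_hat = v / 2 - m
df_hat = m - nc_hat

# CFAR threshold for a desired false-alarm probability
pfa = 1e-2
tau = stats.ncx2.ppf(1 - pfa, df_hat, nc_hat)
empirical_pfa = (background > tau).mean()
print(df_hat, nc_hat, empirical_pfa)
```

Pixels whose detector output exceeds `tau` would be flagged as anomalies; the empirical false-alarm rate on the background matches the design value closely.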

The paper presents an accuracy comparison of sub-pixel classification based on medium-resolution Landsat data and high-resolution RapidEye satellite images, performed using machine learning algorithms built on decision and regression trees (C5.0 and Cubist). The research was conducted in southern Poland in the catchment of the Dobczyce Reservoir. The aim of the study was to obtain a map of percentage impervious surface coverage and to assess which data set is more applicable for impervious surface coverage estimation.

Imperviousness index map generation was a two-stage procedure. The first step was classification, which divided the study area into two categories: a) completely permeable (imperviousness index less than 1%) and b) fully or partially impervious areas. For pixels classified as impervious, the percentage of impervious surface coverage within a single pixel was then estimated. Decision and regression tree models were constructed from a training data set derived from Landsat TM pixels as well as from the fragments of RapidEye images corresponding to the same Landsat TM training pixels. In order to obtain imperviousness index maps with the minimum possible error, model accuracy was estimated using cross-validation, and the approaches guaranteeing the lowest mean errors on the training set with the C5.0 and Cubist algorithms were selected for the Landsat and RapidEye images.

The accuracy of the final imperviousness index maps was checked against validation data sets. The root mean square error of the percentage of impervious surfaces within a single Landsat pixel was 9.9% for the C5.0/Cubist method, whereas the root mean square error for the RapidEye test data was 7.2%. The study has shown that the two-stage imperviousness index map estimation yields better results with RapidEye satellite images.
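The classify-then-regress pipeline itself can be sketched on synthetic data; here a simple threshold plays the role of the C5.0 classifier and ordinary least squares stands in for the Cubist regression trees, purely to illustrate the two-stage structure. All data and numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: one feature x drives imperviousness (in percent).
x = rng.uniform(0, 1, 500)
true_imperv = np.clip(100 * (x - 0.3) / 0.7, 0, 100)
observed = np.clip(true_imperv + rng.normal(0, 3, x.size), 0, 100)

# Stage 1: hard classification, permeable (<1%) vs. impervious.
is_imperv = observed >= 1.0

# Stage 2: regression fitted only on pixels classified as impervious.
A = np.c_[x[is_imperv], np.ones(is_imperv.sum())]
coef, *_ = np.linalg.lstsq(A, observed[is_imperv], rcond=None)

pred = np.zeros_like(observed)        # permeable pixels stay at 0%
pred[is_imperv] = A @ coef
rmse = float(np.sqrt(np.mean((pred - true_imperv) ** 2)))
print(rmse)
```

Masking out permeable pixels first keeps the regression from being dominated by the many near-zero samples, which mirrors the motivation for the two-stage procedure.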

Hyperspectral sensors acquire images of the Earth's surface with several hundred spectral bands. Such abundant spectral data should improve the ability to classify land use/cover types. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not well suited to it; the common remedy is dimensionality reduction through feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification, and in such methods the selection of the kernel function plays an important role. A wavelet kernel built from multidimensional wavelet functions can find an optimal approximation of the data in the feature space for classification, and SVMs with wavelet kernels (WSVM) have also been applied to hyperspectral data, improving classification accuracy. In this study, a wavelet kernel method combining a multiple kernel learning algorithm with wavelet kernels is proposed for hyperspectral image classification. After the appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed into the wavelet feature space, which should have an optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods on a real hyperspectral data set. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
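A commonly used translation-invariant wavelet kernel is the Morlet-type kernel; the sketch below combines it with an RBF kernel using fixed hand-picked weights, whereas MKL would learn the weights from data. The paper's exact kernels and learning procedure are not specified here, so this is only an illustration of the linear kernel combination.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-type wavelet kernel: prod_d cos(1.75 u) exp(-u^2 / 2), u = (x_d - y_d) / a."""
    U = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-U ** 2 / 2), axis=-1)

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))          # 20 pixels, 5 (reduced) bands

# MKL uses a convex combination of base kernels; weights fixed here by hand.
w = [0.6, 0.4]
K = w[0] * wavelet_kernel(X, X, a=2.0) + w[1] * rbf_kernel(X, X, gamma=0.5)
print(K.shape)
```

The combined Gram matrix `K` is symmetric with unit diagonal (both base kernels evaluate to 1 at zero lag and the weights sum to 1), so it can be passed to any kernel classifier, such as an SVM with a precomputed kernel.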

This paper proposes a novel inshore ship detection method based on approximating the harbour area with piecewise linear line segments. The method depends heavily on a very fine sea-land segmentation, which is realized here in two steps. First, an initial mask is generated by thresholding the normalized difference water index (NDWI), using the zero level of the available global elevation data. In the second step, the border of the segmentation result is further refined via a graph-cut algorithm, since the spectral characteristics of the sea close to the sea-land border may differ from those of its deeper parts. The resulting borderline is used to find line segments that are assumed to represent man-made harbours. After being merged and pruned appropriately, these line segments are used to extract the harbour area so that the remaining connected components of the binary mask can be tested for being ships according to their shapes. Test results show that the proposed method is capable of detecting different kinds of ships in a variety of sea states.
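The first segmentation step can be sketched directly; the elevation-data masking and the graph-cut border refinement are omitted, and the toy reflectances are hypothetical.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index; positive values indicate water."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + 1e-9)

def sea_mask(green, nir, threshold=0.0):
    """Initial sea-land segmentation by thresholding NDWI (the pipeline's
    first step only; graph-cut refinement is not included in this sketch)."""
    return ndwi(green, nir) > threshold

# Toy 1x4 scene: two water pixels (high green, low NIR), two land pixels.
green = np.array([[0.30, 0.28, 0.10, 0.12]])
nir = np.array([[0.05, 0.06, 0.35, 0.40]])
print(sea_mask(green, nir))   # → [[ True  True False False]]
```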

Ground Control Points (GCPs) are an important source of fundamental data for the geometric correction of remote sensing imagery. The quantity, accuracy and distribution of the GCPs are three factors that affect the accuracy of geometric correction. It is generally required that the distribution of GCPs be uniform, so that they can fully control the accuracy over the mapped region. In this paper, we establish an objective standard for evaluating the uniformity of a GCP distribution based on regional statistical information (RSI), and obtain an optimal distribution of GCPs; the sampling method is called RSIS for short. The image is equally partitioned into regions in several manners, and the numbers of GCPs falling in the regions are counted to form a vector, called the RSI vector in this work. The uniformity of the GCP distribution is evaluated by a scalar quantity computed from the RSI vector, and an optimal distribution of GCPs is obtained by searching for the configuration with the minimum value of this quantity; simulated annealing is employed to perform the search. Experiments are carried out to test the proposed method against simple random sampling and universal-kriging model-based sampling, and they indicate that the method is highly recommendable as a new GCP sampling design for the geometric correction of remotely sensed imagery.
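The search can be sketched as simulated annealing over a grid-count RSI vector. The paper's exact uniformity quantity is not specified here, so the variance of the cell counts is used as a plausible stand-in, and all sizes (grid, pool, number of GCPs, schedule) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def rsi_uniformity(points, grid=4):
    """Variance of GCP counts over a grid partition; lower = more uniform.
    (Stand-in for the paper's scalar quantity computed from the RSI vector.)"""
    cells = (points * grid).astype(int).clip(0, grid - 1)
    counts = np.zeros(grid * grid)
    for cx, cy in cells:
        counts[cx * grid + cy] += 1
    return counts.var()

candidates = rng.uniform(0, 1, size=(200, 2))    # candidate GCP locations
chosen = rng.choice(200, size=32, replace=False)  # initial random selection
cost = rsi_uniformity(candidates[chosen])
start_cost = cost

T = 1.0
for step in range(2000):                          # simulated annealing loop
    trial = chosen.copy()
    swap = rng.integers(200)
    if swap in trial:
        continue
    trial[rng.integers(32)] = swap                # replace one chosen GCP
    c = rsi_uniformity(candidates[trial])
    # Accept improvements always, worse moves with temperature-dependent odds.
    if c < cost or rng.random() < np.exp((cost - c) / max(T, 1e-9)):
        chosen, cost = trial, c
    T *= 0.995

print(start_cost, cost)
```

As the temperature decays the search becomes effectively greedy, and the final selection spreads the GCPs far more evenly over the grid cells than the initial random draw.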

Object-based image classification consists in assigning objects that share similar attributes to object categories. To perform such a task the remote sensing expert uses personal knowledge, which is rarely formalized. Ontologies have been proposed as a solution for representing domain knowledge agreed upon by domain experts in a formal, machine-readable language. Classical ontology languages, however, are not appropriate for dealing with imprecision or vagueness in knowledge. Fortunately, Description Logics for the Semantic Web have been enhanced by various approaches to handle such knowledge. This paper presents an extension of traditional ontology-based interpretation with a fuzzy ontology of the main land-cover classes (vegetation, built-up areas, water bodies, shadow, clouds, forests) for objects in Landsat 8 OLI scenes. A good classification of image objects was obtained, and the results highlight the potential of the method to be replicated over time and space, in the perspective of the transferability of the procedure.
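Fuzzy ontologies ground vague concepts in membership functions, and the trapezoidal function is the usual building block in fuzzy Description Logics. The NDVI-based "Vegetation" and "Water" concepts below, including their breakpoints, are hypothetical illustrations rather than the paper's actual class definitions.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, rising to 1 on [b, c],
    falling back to 0 at d. Returns degrees of truth in [0, 1]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0, 1)
    fall = np.clip((d - x) / (d - c), 0, 1)
    return np.minimum(rise, fall)

# Hypothetical fuzzy concepts defined on NDVI: objects get degrees of truth
# instead of a crisp threshold; the class with maximum membership wins.
ndvi = np.array([0.05, 0.35, 0.70])
veg = trapezoid(ndvi, 0.2, 0.5, 1.0, 1.1)
water = trapezoid(ndvi, -1.1, -1.0, -0.1, 0.1)
print(veg, water)
```

An object with NDVI 0.35 is vegetation "to degree 0.5", which a crisp ontology would be forced to round to a yes/no answer.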

At present there is great interest in the development of texture-based image classification methods in many different areas. This study presents the results of research carried out to assess the usefulness of selected textural features for the detection of asbestos-cement roofs in orthophotomap classification.

Two orthophotomaps of southern Poland (with ground resolutions of 5 cm and 25 cm) were used. On both orthoimages, representative samples were selected for two classes: asbestos-cement roofing sheets and other roofing materials. The usefulness of texture analysis was estimated using machine learning methods based on decision trees (the C5.0 algorithm). For this purpose, various sets of texture parameters were calculated in the MaZda software, and decision trees were built with different numbers of texture parameter groups. In order to obtain the best settings for the decision tree models, cross-validation was performed and the models with the lowest mean classification error were selected.

The accuracy of the classification was assessed on validation data sets that were not used for training. For the 5 cm ground resolution samples, the lowest mean classification error was 15.6%; for the 25 cm ground resolution it was 20.0%. The obtained results confirm the potential usefulness of texture-parameter image processing for the detection of asbestos-cement roofing sheets. To improve the accuracy, an extended study should be considered in which additional textural features as well as spectral characteristics are analyzed.

Object detection has gained considerable interest in the remote sensing community, with a broad range of applications including the remote monitoring of building development in rural areas. Many earlier studies on this task performed their analysis using either multispectral satellite imagery or color images obtained from an aerial vehicle. In recent years, hyperspectral imaging (HSI) has emerged as an alternative technique for the remote monitoring of building development. Unlike other imaging techniques, HSI provides a continuous spectral signature of the objects in the field of view (FOV), which facilitates the separation of different objects. In general, spectral-signature similarity between objects often causes a significant false alarm (FA) rate that adversely affects the overall accuracy of these systems. In order to reduce the high FA rate of pixel-wise classification, we propose a novel rural building detection method that utilizes both the spatial information and the spectral signatures of the pixels. The proposed technique consists of three parts: a spectral signature classifier, a watershed-based superpixel map, and an object detector based on oriented-gradient filters. In our analysis, we evaluated the performance of the proposed approach using hyperspectral image data sets obtained at two elevations, 500 meters and 3000 meters. A NEO HySpex VNIR-1800 camera was used for 182-band hyperspectral data acquisition; only the first 155 bands were used, due to atmospheric effects on the last bands. A performance comparison between the proposed technique and the pixel-wise spectral classifier indicates a reduction in sensitivity but a notable increase in specificity and overall accuracy: the proposed method yields sensitivity, specificity and accuracy of 0.690, 0.997 and 0.992, respectively, whereas pixel-wise classification yields 0.758, 0.983 and 0.977. The sensitivity reduction is due to the sparseness of buildings in rural areas; the increase in overall accuracy, however, is considered more important in our study.

In this study, a supporting method for the afforestation planning of partially forested areas using hyperspectral remote sensing imagery is proposed. The algorithm has been tested on a scene covering the METU campus area acquired by a high-resolution hyperspectral push-broom sensor operating in the visible and NIR range of the electromagnetic spectrum. The main contribution of this study is the segmentation of partially forested regions with a semi-supervised classification of specific tree species based on chlorophyll content quantified from the hyperspectral scenes. In addition, the proposed method makes use of various hyperspectral image processing algorithms to improve the identification accuracy of the image regions to be planted.
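Chlorophyll content is commonly quantified from hyperspectral reflectance with a red-edge chlorophyll index. The paper's exact index is not specified, so the sketch below uses CI = R_NIR / R_red-edge − 1 as an illustrative stand-in, with toy reflectance values.

```python
import numpy as np

def chlorophyll_index_red_edge(nir, red_edge):
    """Red-edge chlorophyll index CI = R_nir / R_red_edge - 1, one common
    proxy for canopy chlorophyll content (an illustrative stand-in here)."""
    return np.asarray(nir, dtype=float) / np.asarray(red_edge, dtype=float) - 1.0

# Toy reflectances: a healthy tree crown vs. bare soil.
crown = chlorophyll_index_red_edge(nir=0.55, red_edge=0.20)
soil = chlorophyll_index_red_edge(nir=0.30, red_edge=0.28)
print(crown, soil)
```

Vegetated pixels show a strong NIR/red-edge contrast and hence a high index, which is the property a chlorophyll-driven segmentation of tree species would exploit.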

Hyperspectral imagery (HSI) is a special imaging modality characterized by high spectral resolution, with up to hundreds of very narrow, contiguous bands ranging from the visible to the infrared region. Since HSI contains far more distinctive features than conventional images, its processing cost is very high, and dimensionality reduction therefore becomes important for classification performance. In this study, dimensionality reduction is achieved via a VNS-based band selection method, which relies on a systematic change of the neighbourhood used in the search space. To improve the band selection performance, we apply a clustering technique based on mutual information (MI) before the VNS; the combined technique is called MI-VNS. A Support Vector Machine (SVM) is used as the classifier to evaluate the performance of the proposed band selection technique. The experimental results show that the MI-VNS approach increases the classification performance and decreases the computational time compared to both no band selection and conventional VNS.
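The MI-based grouping of redundant bands can be sketched with a histogram estimator of mutual information: bands that share high MI are near-duplicates, so a clustering step can group them and let the subsequent search keep one representative per cluster. The VNS search itself is omitted, and the bands below are synthetic.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information between two image bands (in nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(1, keepdims=True)
    py = p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
base = rng.normal(size=10_000)
band_a = base + 0.05 * rng.normal(size=10_000)   # nearly a copy of `base`
band_b = rng.normal(size=10_000)                 # independent band

# Redundant bands share high MI; independent bands have MI near zero.
print(mutual_information(base, band_a), mutual_information(base, band_b))
```

Thresholding or clustering this pairwise MI matrix before the VNS search shrinks the search space, which is consistent with the reported reduction in computational time.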