Abstract

Single- and dual-polarimetric synthetic aperture radar (SAR) images offer only limited capability for interpreting physical radar signatures. For generality and simplicity, we collectively refer to single-polarimetric, dual-polarimetric, and fully polarimetric SAR (PolSAR) images as flexible PolSAR images. To extract physical scattering signatures from such data and to explore the potential of different polarization modes for this task, this paper proposes a contrastive-regulated convolutional neural network (CNN) in the complex domain, aiming to learn a physically interpretable deep model directly from the original backscattered data. To obtain a deep model with physically interpretable parameters, the objective cost is selected by comparing several commonly used loss functions in their complex-valued forms. The required ground-truth labels are generated automatically according to Cloude and Pottier's H-alpha division plane, which greatly reduces labor-intensive annotation and turns the method into an unsupervised learning scheme. The boundaries between different scattering signatures, however, are sometimes separated erroneously. To pull intra-class instances together and push inter-class instances apart, a complex-valued contrastive regularization term is therefore derived and added to the objective cost through a tradeoff factor. Moreover, data augmentation is applied to mitigate the side effects of data imbalance. Finally, experiments are conducted on the German Aerospace Center's (DLR) L-band, high-resolution (HR), airborne F-SAR data. Our results demonstrate the feasibility of extracting physical scattering signatures from flexible PolSAR images. The physically interpretable potential of SAR images under different polarization modes is analyzed, and we conclude with physical signature identification.
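The automatic label generation mentioned above relies on the Cloude-Pottier H-alpha decomposition. As a minimal sketch of that step (not the paper's exact implementation), the following function computes entropy H and mean alpha angle from a pixel's 3x3 Hermitian coherency matrix and maps them to a zone of the H-alpha plane; the function name and the specific zone thresholds (H at 0.5 and 0.9; alpha at 40, 42.5, 47.5, 50, and 55 degrees) follow the commonly used convention and are assumptions here, since the abstract does not state them.

```python
import numpy as np

def h_alpha_pseudolabel(T):
    """Pseudo-label one pixel from its 3x3 Hermitian coherency matrix T.

    Returns (H, alpha_deg, zone), with zone numbering following the
    common Cloude-Pottier convention (1 = high-entropy multiple
    scattering ... 9 = low-entropy surface scattering).
    """
    # Eigendecomposition of the coherency matrix (eigh: Hermitian input,
    # real eigenvalues in ascending order, eigenvectors as columns).
    eigvals, eigvecs = np.linalg.eigh(T)
    lam = np.clip(eigvals, 0.0, None)
    p = lam / max(lam.sum(), 1e-12)          # pseudo-probabilities

    # Polarimetric entropy, normalized by log(3) so H lies in [0, 1].
    nz = p > 0
    H = -np.sum(p[nz] * np.log(p[nz])) / np.log(3.0)

    # Alpha angle of each eigenvector is arccos of the magnitude of its
    # first (Pauli-basis) component; the mean alpha is p-weighted.
    alphas = np.degrees(np.arccos(np.clip(np.abs(eigvecs[0, :]), 0.0, 1.0)))
    alpha = float(np.sum(p * alphas))

    # Zone assignment on the H-alpha plane (threshold values assumed).
    if H <= 0.5:
        zone = 9 if alpha < 42.5 else (8 if alpha < 47.5 else 7)
    elif H <= 0.9:
        zone = 6 if alpha < 40.0 else (5 if alpha < 50.0 else 4)
    else:
        zone = 3 if alpha < 40.0 else (2 if alpha < 55.0 else 1)
    return float(H), alpha, zone
```

For example, a coherency matrix dominated by its first Pauli component (single-bounce surface scattering) yields low entropy and low alpha, landing in zone 9; these zone indices would then serve as the automatically generated pseudo-labels for training.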