(Author) China's Chang'e-5 mission is planned for launch in 2019 to a landing area near Mons Rümker in Oceanus Procellarum. To generate a high-resolution, high-quality digital orthophoto map (DOM) of the planned landing area in support of the mission and various scientific analyses, this study developed a systematic and effective method for large-area seamless DOM production. The mapping results for the Chang'e-5 landing area, derived from over 700 Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images, are presented. The resultant seamless DOM has a resolution of 1.5 m, covers a large area spanning 20° of longitude and 4° of latitude, and is tied to SLDEM2015. The results demonstrate that the proposed method reduces the geometric inconsistencies among the LROC NAC images to the subpixel level and the positional errors with respect to the reference digital elevation model to about one grid cell.
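Reducing geometric inconsistencies among overlapping images rests on adjusting matched tie points. As a minimal sketch (not the authors' actual pipeline, which involves full photogrammetric adjustment), the offset between two overlapping orthoimages can be estimated by least squares from hypothetical tie-point pairs, and the residual RMS compared against the 1.5 m pixel size:

```python
import numpy as np

def estimate_shift(pts_a, pts_b):
    """Least-squares 2D translation aligning tie points of image A onto
    image B (the mean offset minimizes the sum of squared residuals).
    pts_a, pts_b: (N, 2) arrays of matched tie-point coordinates in metres."""
    return np.mean(np.asarray(pts_b) - np.asarray(pts_a), axis=0)

# Hypothetical tie points: image A is offset by ~(2.0, -1.5) m from B,
# with 0.3 m matching noise.
rng = np.random.default_rng(0)
truth = np.array([2.0, -1.5])
pts_a = rng.uniform(0, 100, size=(50, 2))
pts_b = pts_a + truth + rng.normal(0, 0.3, size=(50, 2))

shift = estimate_shift(pts_a, pts_b)
residuals = pts_b - (pts_a + shift)
rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
print(shift, rms)  # residual RMS well below the 1.5 m pixel size
```

After the shift is applied, the remaining tie-point residuals are dominated by matching noise, i.e. they are subpixel at a 1.5 m grid size.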

(Author) Combining photogrammetric and Structure from Motion techniques with Unmanned Aerial Vehicles (UAVs) is a commonly used approach for the documentation and analysis of archaeological sites. From the dense 3D point clouds generated by these techniques, two main photogrammetric products are created: orthophotos and Digital Surface Models (DSMs). Depending on the UAV technology, the flight parameters, and the topography and land cover of the surveyed area, DSMs and orthophotos are delivered with varying positional accuracies and output scales. In this paper, the positional accuracy and maximum allowable scale of these products, generated through complete automation of the flight mode and processing workflow, are assessed. Moreover, three well-known International Mapping Standards (IMS) are validated using independent checkpoints, obtained with geodetic Global Navigation Satellite System receivers, in two Spanish study areas. The results show that accurate photogrammetric products conforming to the IMS can be obtained through automation of the photogrammetric workflow.
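The abstract does not name the three standards, but checkpoint-based validation generally reduces to computing an RMSE of the product coordinates against GNSS truth and scaling it to a confidence statistic. A sketch under the assumption that one of the standards is the NSSDA, whose horizontal accuracy at 95% confidence is 1.7308 times the radial RMSE:

```python
import math

def horizontal_rmse(truth, measured):
    """Radial RMSE of horizontal positional error from independent
    checkpoints. truth, measured: lists of (E, N) coordinates in metres
    (GNSS checkpoint vs. coordinate read from the orthophoto/DSM)."""
    n = len(truth)
    se = sum((e1 - e2) ** 2 + (n1 - n2) ** 2
             for (e1, n1), (e2, n2) in zip(truth, measured))
    return math.sqrt(se / n)

# Hypothetical checkpoints (metres); errors of a few centimetres.
truth = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
meas = [(0.05, -0.03), (10.02, 0.04), (-0.04, 10.01), (10.03, 9.97)]

rmse = horizontal_rmse(truth, meas)
acc95 = 1.7308 * rmse  # NSSDA horizontal accuracy at 95 % confidence
print(round(rmse, 3), round(acc95, 3))
```

The maximum allowable map scale then follows from comparing `acc95` against the planimetric tolerance the chosen standard assigns to each scale denominator.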

(Author) In the last two decades, the integration of a terrestrial laser scanner (TLS) with digital photogrammetry, among other sensor combinations, has received considerable attention for deformation monitoring of natural and man-made structures. Typically, a TLS is used for area-based deformation analysis. A high-resolution digital camera may be mounted on top of the TLS to increase the accuracy and completeness of the analysis by optimally combining point or line features extracted from both the three-dimensional (3D) point clouds and the images captured at different epochs. For this purpose, the external calibration parameters between the TLS and the digital camera need to be determined precisely. The camera calibration and the internal TLS calibration are commonly carried out in advance in a laboratory environment. The focus of this research is to estimate the external calibration parameters between the fused sensors accurately and robustly using signalised target points. The observables are the image measurements, the 3D point clouds, and the horizontal angle readings of the TLS. In addition, laser tracker observations are used for validation. The functional models are based on space resection in photogrammetry using the collinearity condition equations, the 3D Helmert transformation, and a constraint equation, which are solved in a rigorous bundle adjustment procedure. Three adjustment procedures are developed and implemented: (1) an expectation maximization (EM) algorithm that solves a Gauss-Helmert model (GHM) with grouped t-distributed random deviations, (2) a novel EM algorithm that solves a corresponding quasi-Gauss-Markov model (qGMM) with t-distributed pseudo-misclosures, and (3) a classical least-squares procedure that solves the GHM with variance components and outlier removal.
A comparison of the results demonstrates that the second and third procedures, in particular, estimate the parameters precisely, reliably, accurately, and robustly compared with the first. In addition, the results show that the second procedure is computationally more efficient than the other two.
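One building block of the functional model, the 3D Helmert (seven-parameter similarity) transformation between sensor frames, has a closed-form least-squares solution for matched target points. The following sketch uses the SVD-based Procrustes solution on synthetic, noise-free data; it illustrates the transformation itself, not the authors' rigorous bundle adjustment with t-distributed errors:

```python
import numpy as np

def helmert_3d(src, dst):
    """Estimate a 3D Helmert transform dst = s * R @ src + t from matched
    points via the SVD-based Procrustes solution (least squares).
    src, dst: (N, 3) arrays of corresponding signalised target points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                # centred coordinates
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # guard against reflection
    R = U @ D @ Vt                               # rotation matrix
    s = (S * np.diag(D)).sum() / (A ** 2).sum()  # scale factor
    t = mu_d - s * R @ mu_s                      # translation vector
    return s, R, t

# Hypothetical targets: rotate 10 deg about z, scale 1.0002, translate.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
rng = np.random.default_rng(1)
src = rng.uniform(-5, 5, (20, 3))
dst = 1.0002 * src @ R_true.T + np.array([0.5, -0.2, 1.0])

s, R, t = helmert_3d(src, dst)
print(s, t)  # recovered scale and translation
```

In the paper's setting the same transformation appears as one set of condition equations inside the bundle adjustment, estimated jointly with the collinearity and constraint equations rather than in isolation.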

(Author) The detection of vehicles in aerial images is widely used in many applications. Compared with object detection in ground-view images, vehicle detection in aerial images remains a challenging problem because of the small size of vehicles and the complex background. In this paper, we propose a novel double focal loss convolutional neural network (DFL-CNN) framework. In the proposed framework, skip connections are used in the CNN structure to enhance feature learning. In addition, the focal loss function replaces the conventional cross-entropy loss function in both the region proposal network (RPN) and the final classifier. We further introduce ITCVD, the first large-scale vehicle detection dataset with ground-truth annotations for all vehicles in each scene. We demonstrate the performance of our model on the existing benchmark German Aerospace Center (DLR) 3K dataset as well as on the ITCVD dataset. The experimental results show that our DFL-CNN outperforms the baselines on vehicle detection.
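The motivation for swapping cross-entropy for focal loss is that small vehicles are rare against a vast, easy background: focal loss down-weights well-classified examples so training is not dominated by them. A minimal scalar sketch of the binary focal loss of Lin et al. (the standard formulation, not the paper's exact configuration):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    p: predicted probability of the positive class; y: label in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

def cross_entropy(p, y):
    """Conventional binary cross-entropy loss for comparison."""
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

# Ratio of focal loss to cross-entropy: an easy positive (p = 0.9) is
# down-weighted far more strongly than a hard one (p = 0.1).
easy = focal_loss(0.9, 1) / cross_entropy(0.9, 1)
hard = focal_loss(0.1, 1) / cross_entropy(0.1, 1)
print(easy, hard)
```

With `gamma = 2` and `alpha = 0.25`, the easy example retains only 0.25% of its cross-entropy loss while the hard one retains about 20%, which is the imbalance-correcting effect exploited in both the RPN and the final classifier.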

(Author) Land cover classification from airborne data is considered a challenging task in remote sensing. Even when elevation data are available, shadows and strong intra-class variations in appearance are abundant in urban terrain. In this paper, we propose an approach for supervised land cover classification with three main contributions. First, for the cumbersome task of training data sampling, we propose an algorithm that combines freely available OpenStreetMap data with the actual sensor data and requires only minimal user interaction. The key idea of this algorithm is to rasterize the vector data using a fast segmentation result. Second, pixel-wise classification can be slow and quite sensitive to the resolution and quality of the input data. Therefore, superpixel decomposition of the images, supported by a general framework for operations on superpixels, guarantees fast grouping of pixel-wise features and their assignment to one of four important classes (building, tree, grass, and road). In particular, for the extraction of street canyons lying in shadowed regions, high-level features based on stripes are introduced. Finally, the output of a probabilistic learning algorithm can be postprocessed by a non-local optimization module operating on Markov Random Fields, allowing noisy results to be corrected using a smoothness prior. Extensive tests on three datasets of quite different nature were performed with two probabilistic learners: the well-known Random Forest and the far less known Import Vector Machine. This work thus provides insights into promising feature sets for both classifiers. The quantitative results for the ISPRS benchmark dataset Vaihingen are promising, achieving up to 94.5% and 87.1% accuracy at the superpixel and pixel level, respectively, even though only around 10% of the available labeled data were used.
At the same time, the results for two additional datasets, validated with the autonomously acquired training data, yielded a significantly lower number of misclassified superpixels. This confirms that the proposed training data extraction algorithm works well in reducing errors of the second kind. However, it tends to extract predominantly large and easy-to-classify areas, whereas in complicated, ambiguous regions errors of the first kind often occur. For these and other shortcomings of the algorithm, directions for future research are outlined.
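The speed advantage of classifying superpixels instead of pixels comes from grouping pixel-wise features once per segment. A toy sketch of that grouping step (an assumed mean-pooling variant, not the paper's full framework of superpixel operations):

```python
import numpy as np

def superpixel_means(features, labels):
    """Aggregate pixel-wise features into one mean vector per superpixel.
    features: (H, W, F) feature image; labels: (H, W) integer superpixel
    ids in 0..K-1. Returns a (K, F) matrix of per-superpixel means, so a
    classifier sees K samples instead of H*W pixels."""
    F = features.shape[-1]
    flat_l = labels.ravel()
    flat_f = features.reshape(-1, F).astype(float)
    K = int(flat_l.max()) + 1
    sums = np.zeros((K, F))
    np.add.at(sums, flat_l, flat_f)  # scatter-add features per superpixel
    counts = np.bincount(flat_l, minlength=K).astype(float)
    return sums / counts[:, None]

# Hypothetical toy image: two superpixels, one feature channel.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
features = np.where(labels[..., None] == 0, 1.0, 3.0)  # shape (2, 4, 1)
means = superpixel_means(features, labels)
print(means)  # one row of mean features per superpixel
```

The probabilistic learner (Random Forest or Import Vector Machine) is then trained and evaluated on these per-superpixel vectors, and the MRF smoothing operates on the resulting superpixel labels.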