(Author) Key Message: This study showed that digital terrestrial photogrammetry can produce accurate estimates of stem volume and diameter across a range of species and tree sizes, with strong correspondence to traditional inventory techniques. The paper demonstrates the utility of the technology for characterizing trees in complex habitats such as boreal mixedwood forests.
Context: Accurate knowledge of tree stem taper and volume is a key component of the forest inventories used to manage and study forest resources. Recent developments have seen the increasing use of ground-based point clouds, including those from digital terrestrial photogrammetry (DTP), to provide accurate estimates of these key forest attributes.
Aims: In this study, we evaluated the utility of DTP based on a small set of photos (12 per tree) for estimating stem volume and taper on a set of 15 trees from 6 different species (Populus tremuloides, Picea glauca, Pinus contorta latifolia, Betula papyrifera, Picea mariana, Abies balsamea) in a boreal mixedwood forest in Alberta, Canada.
Methods: We constructed accurate photogrammetric point clouds and derived taper and volume from three point cloud–based methods, which were then compared with estimates from conventional, field-based measurements. All methods were evaluated for their accuracy based on field-measured taper and volume of felled trees.
Results: Of the methods tested, we found that the point cloud–derived diameters in a taper curve matching approach performed best at estimating diameters on the lowest parts of the stem (below 50% of total height). Using the field-measured DBH and height as inputs to calculate stem volume yielded the most accurate predictions; however, these were not significantly different from the best point cloud–based estimates.
Conclusion: The methodology confirmed that using a small set of photographs provided accurate estimates of individual tree DBH, taper, and volume across a range of species and size gradients (10.8–40.4 cm DBH).
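As a hedged illustration of how stem volume can be derived from diameters sampled along a stem (the study's exact taper and volume equations are not given here), the sketch below sums sectional volumes with Smalian's formula; the function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def stem_volume_smalian(heights_m, diameters_cm):
    """Approximate stem volume (m^3) from diameters sampled along the stem.

    Each section between consecutive measurement heights is integrated with
    Smalian's formula: V = L * (A_lower + A_upper) / 2. Illustrative only;
    the cited study's volume equations may differ.
    """
    heights = np.asarray(heights_m, dtype=float)
    # Diameter in cm -> radius in m, then cross-sectional area.
    areas = np.pi * (np.asarray(diameters_cm, dtype=float) / 200.0) ** 2
    section_lengths = np.diff(heights)
    return float(np.sum(section_lengths * (areas[:-1] + areas[1:]) / 2.0))

# Example: diameters taken every metre up the stem of a hypothetical tree.
print(stem_volume_smalian([0.3, 1.3, 2.3, 3.3], [24.0, 20.0, 17.5, 15.0]))
```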

(Author) Along with improvements in spatial resolution, multi-view stereo satellite imagery has become a valuable data source for digital surface model generation. In 2016, a public multi-view stereo benchmark of commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. Motivated by this well-organized benchmark, we propose a pipeline to process multi-view satellite imagery into digital surface models. Input images are selected based on view angles and capture dates. We apply the relative bias-compensated model for orientation and then generate the epipolar image pairs. The images are matched by the modified tube-based Semi-Global Matching method (tSGM). In the triangulation step, very dense point clouds are produced, which are fused by a median filter to generate the Digital Surface Model (DSM). A comparison with the reference data shows that the fused DSM generated by our pipeline is accurate and robust.
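A minimal sketch of the fusion idea described above, a per-cell median over a stack of co-registered single-pair DSMs; the array layout and names are assumptions, and the paper's full pipeline (orientation, tSGM matching, triangulation) is not reproduced here.

```python
import numpy as np

def fuse_dsms_median(dsm_stack, nodata=np.nan):
    """Fuse co-registered single-pair DSMs into one DSM by a per-cell median.

    dsm_stack: array of shape (n_pairs, rows, cols) on a common grid, with
    missing cells set to NaN. The median is robust to outlier heights from
    individual stereo pairs.
    """
    stack = np.asarray(dsm_stack, dtype=float)
    fused = np.nanmedian(stack, axis=0)           # per-cell median, ignoring NaNs
    all_empty = np.all(np.isnan(stack), axis=0)   # cells with no observation at all
    fused[all_empty] = nodata
    return fused
```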

(Author) The main purpose of this article is to show that photogrammetric bundle-adjustment computations can be sequentially organized into modules. Furthermore, the chain rule can be used to simplify the computation of the analytical Jacobians needed for the adjustment. Novel projection models can be evaluated flexibly by inserting, modifying, or swapping the order of selected modules. As a proof of concept, two variants of the pinhole projection model with Brown lens distortion were implemented in the open-source Damped Bundle Adjustment Toolbox and applied to simulated and calibration data for a nonconventional lens system. The results show a significant difference for the simulated, error-free data but not for the real calibration data. The current flexible implementation incurs a performance loss. However, in cases where flexibility is more important, the modular formulation should be a useful tool to investigate novel sensors, data-processing techniques, and refractive models.
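To illustrate the chain-rule idea in general terms (this is not the Damped Bundle Adjustment Toolbox API), a projection can be written as a composition of modules whose Jacobians multiply together; the module names below are hypothetical and a real model would add further modules such as lens distortion.

```python
import numpy as np

# Each hypothetical module returns its output and the Jacobian w.r.t. its input.
def world_to_camera(X, R, t):
    Y = R @ X + t
    return Y, R                               # dY/dX = R

def pinhole_projection(Y):
    x = Y[:2] / Y[2]
    J = np.array([[1.0 / Y[2], 0.0, -Y[0] / Y[2] ** 2],
                  [0.0, 1.0 / Y[2], -Y[1] / Y[2] ** 2]])
    return x, J                               # dx/dY

def project(X, R, t):
    Y, J1 = world_to_camera(X, R, t)
    x, J2 = pinhole_projection(Y)
    return x, J2 @ J1                         # chain rule: dx/dX = (dx/dY)(dY/dX)
```

Swapping in a different module (for example a distortion step between the two above) only changes which Jacobians appear in the product, which is the flexibility the abstract refers to.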

(Author) In this article, we present two new approaches for image orientation, an incremental one and a global one, both focused on robustness and both starting from relative orientations of available image pairs, and we compare their performance. For the incremental approach, we first choose a suitable initial image pair and then iteratively extend the image cluster by adding new images. The rotations of these newly added images are estimated from relative rotations by single rotation averaging. In the next step, a linear equation system is set up for each new image to solve for its translation parameters using triangulated tie points visible in that image, followed by a resection for refinement. Finally, we refine the orientation parameters of the images by a local bundle adjustment. The global method consists of two parts: global rotation averaging, followed by a large linear equation system that solves for all image translation parameters simultaneously; a final bundle adjustment refines the results. We compare the two methods on different benchmark sets, including ordered and unordered image data sets from the Internet as well as two further challenging data sets. We conclude that while the incremental method typically yields results of higher accuracy and performs better on the challenging data sets, our global method runs significantly faster.
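As a hedged sketch of the translation step for one newly added image (conventions and names are assumptions, not the authors' implementation): once the rotation R is fixed by rotation averaging, each triangulated tie point X with normalized homogeneous observation x gives linear equations in the translation t via the cross product [x]_x (R X + t) = 0, which can be stacked and solved by least squares.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix so that skew(v) @ w equals np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_translation(R, points_3d, obs_normalized):
    """Least-squares translation of one new image with known rotation R.

    points_3d: (n, 3) triangulated tie points visible in the new image.
    obs_normalized: (n, 3) homogeneous normalized image observations.
    Each point contributes [x]_x t = -[x]_x (R X), i.e. equations linear in t.
    """
    A_blocks, b_blocks = [], []
    for X, x in zip(points_3d, obs_normalized):
        S = skew(x)
        A_blocks.append(S)                  # coefficient matrix of t
        b_blocks.append(-S @ (R @ X))       # known part moved to the right-hand side
    A = np.vstack(A_blocks)
    b = np.concatenate(b_blocks)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```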

(Author) To improve the accuracy of sensor orientation using calibrated aerial images, this paper proposes an automatic sensor orientation method that exploits horizontal and vertical constraints on human-engineered structures, addressing the limitations that arise when only a sub-optimal number of Ground Control Points (GCPs) is available within a scene. Related state-of-the-art methods rely on structured building edges and require manual identification of end points. Our method makes use of line segments but does not require matched end points, thereby avoiding inefficient manual intervention.
To achieve this, a 3D line in object space is represented as the intersection of two planes passing through two camera centers. The normal vector of each plane can be written as a function of a pair of azimuth and elevation angles. The direction vector of the 3D line can then be expressed as the cross product of these two planes' normal vectors. We then create observation functions for the horizontal and vertical line constraints based on the zero-vector cross product and the dot product of the 3D line direction vectors. These observation functions are introduced into a hybrid Bundle Adjustment (BA) method as constraints, alongside observed image points and observed line segment projections. Finally, simulated and real data are used to assess the feasibility and effectiveness of the proposed method. The results demonstrate that, in cases with only 3 GCPs, the accuracy of the proposed method, which uses automatically extracted line features, is increased by 50% compared to a BA using only point constraints.
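To make the geometric construction concrete, the sketch below follows my reading of the abstract with illustrative names (it is not the authors' code): each plane normal is parameterized by an azimuth/elevation pair, the 3D line direction is the cross product of the two plane normals, and the vertical and horizontal constraints are expressed as residuals via cross and dot products with the plumb direction.

```python
import numpy as np

def normal_from_angles(azimuth, elevation):
    """Unit normal of a plane through a camera center, parameterized by
    an azimuth/elevation angle pair (radians)."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

def line_direction(az1, el1, az2, el2):
    """Direction of the 3D line: cross product of the two plane normals."""
    return np.cross(normal_from_angles(az1, el1), normal_from_angles(az2, el2))

VERTICAL = np.array([0.0, 0.0, 1.0])

def vertical_residual(d):
    # A vertical line is parallel to the plumb direction: d x e_z should vanish.
    return np.cross(d, VERTICAL)

def horizontal_residual(d):
    # A horizontal line is perpendicular to the plumb direction: d . e_z should vanish.
    return np.dot(d, VERTICAL)
```

In a hybrid BA these residuals would be added to the cost function alongside the image point and line segment reprojection terms, with the azimuth/elevation angles as additional unknowns.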