To improve the accuracy of sensor orientation with calibrated aerial images, this paper proposes an automatic sensor orientation method that exploits horizontal and vertical constraints on human-engineered structures, addressing the limitations that arise when a scene contains a sub-optimal number of Ground Control Points (GCPs). Related state-of-the-art methods rely on structured building edges and require manual identification of matched end points. Our method also uses line segments but does not require matched end points, thereby removing the need for inefficient manual intervention.
To achieve this, a 3D line in object space is represented as the intersection of two planes passing through two camera centers. The normal vector of each plane can be written as a function of a pair of azimuth and elevation angles, and the direction vector of the 3D line is the cross product of the two planes' normal vectors. We then formulate observation functions for the horizontal and vertical line constraints: for a vertical line, the cross product of its direction vector with the vertical direction must be the zero vector; for a horizontal line, the dot product with the vertical direction must vanish. These observation functions are introduced as constraints into a hybrid Bundle Adjustment (BA) that combines observed image points with observed line-segment projections. Finally, to assess the feasibility and effectiveness of the proposed method, both simulated and real data are tested. The results demonstrate that, with only 3 GCPs, the proposed method, using automatically extracted line features, improves accuracy by 50% compared to a BA using only point constraints.
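The geometry behind these constraints can be sketched as follows. This is an illustrative NumPy fragment, not the authors' implementation; the angle parameterisation and function names are assumptions made for clarity.

```python
import numpy as np

def normal_from_angles(azimuth, elevation):
    """Unit normal of a plane, parameterised by azimuth/elevation angles
    (illustrative parameterisation; the paper's exact convention may differ)."""
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def line_direction(n1, n2):
    """Direction of the 3D line at the intersection of two planes,
    given as the cross product of the two plane normals."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

VERTICAL = np.array([0.0, 0.0, 1.0])  # object-space vertical axis

def horizontal_residual(d):
    # Horizontal-line constraint: the line direction is perpendicular
    # to the vertical axis, so the dot product must be zero.
    return np.dot(d, VERTICAL)

def vertical_residual(d):
    # Vertical-line constraint: the line direction is parallel to the
    # vertical axis, so the cross product must be the zero vector.
    return np.cross(d, VERTICAL)

# Example: the planes y = 0 and z = 0 intersect in the x-axis,
# which is a horizontal line.
n1 = np.array([0.0, 1.0, 0.0])
n2 = np.array([0.0, 0.0, 1.0])
d = line_direction(n1, n2)
```

In the hybrid BA, residuals of this kind would be appended to the point-reprojection residuals and minimised jointly; the sketch only shows the constraint evaluation itself.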

Photogrammetric methods for dense 3D surface reconstruction are increasingly available to both professional and amateur users, whose requirements span a wide variety of applications. A key concern in choosing an appropriate method is understanding the achievable accuracy and how choices made within the workflow alter that outcome. In this paper we consider accuracy in two components: the ability to generate a correctly scaled 3D model, and the ability to automatically deliver a high-quality data set in good agreement with a reference surface. The determination of scale is particularly important, since a network of images usually provides only angle measurements and thus yields unscaled geometry. A common solution is to introduce known distances in object space, such as baselines between camera stations or distances between control points. To avoid relying on known object distances, the method presented in this paper exploits a calibrated stereo camera, using the calibrated baseline of the camera pair as an observation-based geometric constraint. By orbiting the object, the method distributes distance information throughout the object volume. To test the performance of this approach, four current surface matching methods were investigated to determine their ability to produce accurate, dense point clouds: two versions of Semi-Global Matching, MicMac, and Patch-based Multi-View Stereo (PMVS). These methods were applied to stereo images of four carefully selected objects captured with (1) an off-the-shelf, low-cost 3D camera and (2) a pair of Nikon D700 DSLR cameras rigidly mounted in close proximity to each other. Inter-comparisons reveal the subtle differences between these configurations. The point clouds are also compared to a dataset obtained with a Nikon MMD laser scanner.
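The role of the calibrated baseline can be illustrated with a minimal sketch. In the paper the baseline enters the bundle adjustment directly as a geometric constraint; the fragment below instead shows the simpler post-hoc idea it rests on, namely that a known metric baseline between two rigidly mounted cameras fixes the scale of an otherwise unscaled reconstruction. All names and values are illustrative assumptions.

```python
import numpy as np

def scale_from_baseline(c_left, c_right, baseline_m):
    """Scale factor mapping arbitrary model units to metres, from one
    stereo pair whose metric baseline is known from calibration."""
    model_baseline = np.linalg.norm(np.asarray(c_right) - np.asarray(c_left))
    return baseline_m / model_baseline

def scale_from_orbit(centres_left, centres_right, baseline_m):
    """Average the per-pair scale estimates over all stereo stations of
    the orbit, spreading the distance information through the volume."""
    scales = [scale_from_baseline(l, r, baseline_m)
              for l, r in zip(centres_left, centres_right)]
    return float(np.mean(scales))

# Example: a reconstructed pair 2.0 model units apart, with a 0.24 m
# calibrated baseline, gives a scale of 0.12 m per model unit.
s = scale_from_baseline(np.zeros(3), np.array([2.0, 0.0, 0.0]), 0.24)
```

Embedding the constraint inside the adjustment, as the paper does, is stronger than this post-hoc scaling because the baseline observation also tightens the estimated camera geometry itself.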
Finally, the established process of deriving accurate point clouds from images and known object-space distances is compared with the presented strategies. The matching results demonstrate that, given a good imaging network, a stereo camera combined with bundle adjustment under geometric constraints can effectively resolve the scale. Among the strategies for dense 3D reconstruction, combining the presented scale-recovery method with PMVS on the images captured by the two DSLR cameras produced a dense point cloud as accurate as the Nikon laser scanner dataset.