(Author) Existing virtual globes, including both standalone platforms and associated visualization applications, often present geospatial information in a single-view mode that restricts the user to one dataset at a time. Because they lack the functionality and user interface for coordinating multiple virtual-globe views, it is difficult or impossible to explore several geospatial datasets simultaneously using existing virtual globes alone, especially when the datasets come from different sources, at different spatial resolutions, or on different temporal scales. Here we present a general visualization framework that supports the exploration and comparison of diverse datasets with multiple coordinated views in a web-based virtual globe environment. The framework not only comprehensively models the dynamic master/slave relationship between multiple virtual globes, but also effectively handles the coordination mechanism by which the diverse views respond to user manipulations. We also implement a prototype application (termed MultiGlobe) and demonstrate its effectiveness in three typical application scenarios. The first case addresses the comparison of imagery layers from different providers. The second examines multiple digital maps for a specific region or theme, such as time-varying LUCC datasets. As a final example, we compare and evaluate the accuracy of multiple DEMs generated from diverse data sources at different resolutions. Our informal evaluation with experts in exploratory visualization and spatial analysis confirms that the multiple-view-enhanced virtual globe brings many benefits, including improved spatial awareness, reduced cognitive effort, coordinated interaction strategies, faster browsing, and enhanced comparison capabilities.
Therefore, it can be incorporated into a variety of geospatial visualizations to replace or supplement the fixed single-view interfaces of traditional virtual globe applications, empowering users to explore and compare multiple datasets over the same geographic area synchronously.
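The master/slave coordination the abstract describes can be pictured as a broadcast of camera state from one globe view to its linked views. The following is a minimal, hypothetical sketch: the `CameraState` fields, class names, and `GlobeView` interface are illustrative assumptions, not MultiGlobe's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraState:
    lon: float      # degrees
    lat: float      # degrees
    height: float   # metres above the ellipsoid

class GlobeView:
    """One virtual-globe view; reduced here to just its camera state."""
    def __init__(self, name):
        self.name = name
        self.camera = CameraState(0.0, 0.0, 1.0e7)

class ViewCoordinator:
    """Broadcasts the master view's camera to every slave view,
    so all globes show the same geographic extent."""
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = list(slaves)

    def on_master_moved(self, state):
        self.master.camera = state
        for view in self.slaves:
            view.camera = state

# Hypothetical usage: one master globe drives two linked globes.
master = GlobeView("imagery-A")
s1, s2 = GlobeView("imagery-B"), GlobeView("dem")
coord = ViewCoordinator(master, [s1, s2])
coord.on_master_moved(CameraState(120.1, 30.2, 5000.0))
```

A dynamic master/slave relationship, as in the paper, would additionally allow any view to take the master role when the user interacts with it; that only requires rebinding which view's events reach `on_master_moved`.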

(Author) Image-based modelling systems create 3D models of objects from a set of overlapping photographs. Several applications are available that require neither expert users nor expensive equipment. In this paper, four free systems were applied in two case studies: ReMake, freemium software; CMP Web Service and Arc 3D, free web services; and Visual SfM, free software. The purpose of this study is to evaluate whether these applications support topographical measurements and to assess their potential for accurate modelling. The results show that these systems can serve as an auxiliary technique for surveyors and can provide an advantage in some cases.

(Author) Various methods have been developed to investigate the geospatial information, temporal component, and message content in disaster-related social media data to enrich human-centric information for situational awareness. However, few studies have analyzed these three dimensions (i.e. space, time, and content) simultaneously. In an attempt to bring a space–time perspective into situational awareness, this study develops a novel approach that integrates the space, time, and content dimensions of social media data and enables a space–time analysis of detailed social responses to a natural disaster. Using a Markov transition probability matrix and location quotients, we analyzed Hurricane Sandy tweets in New York City and explored how people’s conversational topics changed across space and over time. Our approach offers the potential to facilitate efficient policy- and decision-making and rapid response in mitigating damage caused by natural disasters.
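The two measures named above are straightforward to compute. The sketch below (with hypothetical topic labels, not the paper's code) estimates a Markov transition probability matrix from an ordered sequence of per-period topic labels, and a location quotient indicating whether a topic is over-represented in one area relative to the whole study region:

```python
from collections import Counter, defaultdict

def transition_matrix(topic_sequence, topics):
    """Estimate P(next topic | current topic) from an ordered
    sequence of topic labels (one label per time window)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(topic_sequence, topic_sequence[1:]):
        counts[cur][nxt] += 1
    matrix = {}
    for a in topics:
        total = sum(counts[a].values())
        matrix[a] = {b: counts[a][b] / total if total else 0.0
                     for b in topics}
    return matrix

def location_quotient(local_topic, local_total, global_topic, global_total):
    """LQ > 1 means the topic is over-represented in the local
    area relative to the study region as a whole."""
    return (local_topic / local_total) / (global_topic / global_total)

# Hypothetical dominant topics over successive time windows in one area
seq = ["power", "power", "flood", "power", "flood", "flood"]
probs = transition_matrix(seq, ["power", "flood"])
```

Here `probs["power"]["flood"]` is the estimated probability that an area dominated by "power" talk shifts to "flood" talk in the next window; a high off-diagonal entry signals rapidly changing concerns.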

(Author) Voxelization is an efficient and frequently used process applied to terrestrial laser scanning (TLS) data to facilitate data management and reduce storage size. In this study, an innovative equiangular sectorial voxelization method is presented, based on the distinctive point-distribution characteristics of single-scan TLS. It ensures that the same number of laser beams passes through each voxel, yielding metrics that can be used to delineate forest conditions. To verify the effectiveness of the new voxelization method and to illustrate its application, 48 plots and 1098 individual trees with different degrees of defoliation were scanned using single-scan TLS. Their defoliation could be linearly regressed using only point-density metrics derived from this new voxel shape. An R2 of 0.89 and an RMSE of 12 (% of defoliation) were obtained for individual-tree-scale estimation, and an R2 of 0.83 and an RMSE of 12 (% of defoliation) were obtained for plot-scale estimation. We conclude that the new voxelization method is effective, and that the point density thus calculated is an efficient feature for revealing forest attributes.
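The key idea of equiangular sectorial voxelization is to bin returns by equal angular increments in azimuth and zenith around the scanner (plus a range bin), so every angular sector is crossed by the same number of emitted beams. A minimal sketch of the binning step, assuming scanner-centered Cartesian coordinates and illustrative bin sizes (not the paper's parameters):

```python
import math

def sector_voxel_index(x, y, z, d_az_deg=1.0, d_zen_deg=1.0, d_range=0.5):
    """Map a TLS return (scanner-centered coordinates, metres) to an
    equiangular sectorial voxel index: (azimuth bin, zenith bin,
    range bin). Equal angular bins mean each (az, zen) sector is
    swept by the same number of laser beams."""
    r = math.sqrt(x * x + y * y + z * z)
    az = math.degrees(math.atan2(y, x)) % 360.0           # 0..360 deg
    zen = math.degrees(math.acos(z / r)) if r > 0 else 0.0  # 0..180 deg
    return (int(az // d_az_deg), int(zen // d_zen_deg), int(r // d_range))
```

Counting returns per voxel index then gives the point-density metric directly, since the number of beams entering each sector is constant by construction.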

(Author) To ensure complete coverage when measuring a large-scale urban area, pairwise registration between point clouds acquired via terrestrial laser scanning or stereo image matching is usually necessary when there is insufficient georeferencing information from additional GNSS and INS sensors. In this paper, we propose a semi-automatic, target-less method for coarse registration of point clouds using the geometric constraints of voxel-based 4-plane congruent sets (V4PCS). Planar patches are first extracted from the voxelized point clouds. Then, transformation-invariant 4-plane congruent sets are constructed from the extracted planar surfaces in each point cloud. Initial transformation parameters between the point clouds are estimated from the corresponding congruent sets with the highest registration scores in a RANSAC process. Finally, a closed-form solution refines the transformation parameters by finding all corresponding planar patches under the initial transformation. Experimental results reveal that the proposed method effectively registers point clouds acquired from various scenes: a success rate above 80% was achieved, with average rotation errors of about 0.5 degrees and average translation errors below approximately 0.6 m. In addition, the proposed method is more efficient than other baseline methods under the same hardware and software configuration.
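The "transformation-invariant" property of the congruent sets rests on quantities unchanged by rotation and translation, such as the pairwise angles between plane normals. A minimal sketch of such a congruence test, used to pair candidate 4-plane sets before RANSAC scoring (the representation and tolerance are illustrative assumptions, not the authors' implementation):

```python
import math

def normal_angle(n1, n2):
    """Angle in degrees between two unit plane normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def congruent(planes_a, planes_b, tol_deg=2.0):
    """Two 4-plane sets are candidate correspondences if all six
    pairwise normal angles match within tolerance; these angles
    are invariant under any rigid transformation."""
    for i in range(4):
        for j in range(i + 1, 4):
            if abs(normal_angle(planes_a[i], planes_a[j])
                   - normal_angle(planes_b[i], planes_b[j])) > tol_deg:
                return False
    return True
```

A full pipeline would also compare rigid-invariant distances between the planes and, for surviving pairs, estimate and score a transformation; the angle check alone already prunes most non-matching sets cheaply.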