
RAPID CREATION OF PHOTOREALISTIC LARGE-SCALE URBAN CITY
MODELS
by
Charalambos Poullis
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
May 2009
Copyright 2009 Charalambos Poullis

In recent years there has been an increasing demand for applications which employ miniature representations of the real world to recreate realistic and immersive virtual environments. Many applications, ranging from computer graphics and virtual reality to Geographical Information Systems, have already successfully used real-world representations derived from the combination of multi-sensory data captured by aerial or satellite imagery and LiDAR (Light Detection and Ranging) scanners. However, despite their widespread and successful application, the creation of such realistic 3D content remains a complex, time-consuming, expensive and labor-intensive task. In fact, the creation of models is still widely viewed as a specialized art, requiring personnel with extensive training and experience to produce useful models.

In this thesis, we focus on historically difficult problems in creating large-scale (city-size) scene models from sensor data, including the rapid extraction and modeling of geometry, the reproduction of high-quality scene textures, and the fusion and completion of the geometry and texture data to produce photorealistic 3D scene models. We address the current problems and limitations of state-of-the-art techniques and present our solutions, including a fully automatic technique for the extraction of polygonal 3D models from LiDAR data, and a flexible texture-blending technique for the generation of photorealistic textures from multiple optical sensors. The result is a unified (multi-sensory), comprehensive (structure and appearance) and immersive representation of large-scale areas of the real world.

In the first part of this thesis, we address the problem of rapidly creating realistic geometry representations of the real world entirely from remote sensory data captured by an airborne LiDAR scanner, and present two technologies which share the same framework.
Firstly, we present a primitive-based technique for the reconstruction of buildings with linear and non-linear roof types using a minimal set of three primitives. We leverage the symmetry constraints found in man-made structures and introduce an extensible parameterization of geometric primitives for the automatic identification and reconstruction of buildings with common linear roof types. The parameterization reformulates the reconstruction as a non-linear optimization problem of reduced dimensionality, thereby considerably reducing the computational time required to reconstruct the models. Additionally, we introduce a linear and a non-linear primitive for the reconstruction of buildings with complex linear and non-linear roof types, such as churches, domes and stadiums.

The primitive-based technique provides a robust and efficient way of reconstructing the geometry of any building containing linear or non-linear surfaces, but requires some user interaction. Secondly, we present a complete, automatic probabilistic modeling technique for the reconstruction of buildings with linear roof types. We introduce a robust clustering method based on the analysis of the geometric properties of the data, which makes no particular assumptions about the input data and thus has no data dependencies. The boundaries extracted automatically from the clustered data are then refined and used to reconstruct the geometry of the scene.

In the second part of this thesis, we present a texturing pipeline for the composition of photorealistic textures from multiple imagery data. Information captured by multiple optical sensors is combined to create an amalgamated texture atlas for the scene models. Thus, by integrating multiple resources, missing or occluded information can be successfully recovered.

Finally, we have integrated the developed techniques into a complete modeling system, HAL-10K, and have extensively tested the system on several city-size datasets, including the USC campus, downtown Baltimore, downtown Denver and the city of Atlanta. We present and evaluate our experimental results.
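To illustrate the idea behind the symmetry-constrained parameterization described above, the following is a minimal, hypothetical sketch (not the system's actual primitives): for a symmetric gable-roof cross-section z = h - s·|x - c|, the symmetry constraint fixes the ridge position c at the footprint midpoint, reducing a three-parameter non-linear fit to a two-parameter linear least-squares problem. The function name and roof model are our own simplification for illustration.

```python
def fit_gable(points):
    """Fit a symmetric gable-roof primitive z = h - s*|x - c| to (x, z) samples.

    The symmetry constraint fixes the ridge position c at the footprint
    midpoint, so only the ridge height h and roof slope s remain, and the
    fit reduces to ordinary (closed-form) linear least squares on d = |x - c|.
    """
    xs = [p[0] for p in points]
    zs = [p[1] for p in points]
    c = (min(xs) + max(xs)) / 2.0            # ridge position from symmetry
    d = [abs(x - c) for x in xs]             # horizontal distance from ridge
    n = len(points)
    dm, zm = sum(d) / n, sum(zs) / n
    # Linear regression z = a + b*d; the roof slope is s = -b, ridge height h = a.
    b = sum((di - dm) * (zi - zm) for di, zi in zip(d, zs)) \
        / sum((di - dm) ** 2 for di in d)
    s = -b
    h = zm + s * dm                          # intercept at d = 0 (the ridge)
    return c, h, s

# Synthetic noiseless cross-section: ridge at x = 5, height 12, slope 0.8.
pts = [(x / 10.0, 12.0 - 0.8 * abs(x / 10.0 - 5.0)) for x in range(101)]
c, h, s = fit_gable(pts)
```

On noiseless samples the recovered parameters match the generating roof exactly; the point of the sketch is that exploiting symmetry removes one unknown entirely and turns the remainder into a closed-form fit, which is the dimensionality reduction the parameterization exploits at scale.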
