The ASC "Smart Surfaces" project plays host to a
number of research efforts to improve the handling and display of extremely large
surfaces. We have made great strides in expanding the potential information content
of display devices. These display devices, and the rendering engines to which
they are attached, place an upper bound on the data output requirements of the
interactive visualization problem. An interactive system need only generate geometric
data commensurate with the target display, both in terms of the physical number
of available pixels and the capacity of the rendering engine. At extreme dataset
sizes, these output limitations generally dominate the system. In this sense, the entire
visualization process can be considered a data reduction filter, whose goal is
to efficiently and dynamically manage the processes of data selection, reduction
and transformation, given a specific target display and specified user criteria.

By designing computational and rendering algorithms that are
predominantly output sensitive, we can greatly reduce the computational requirements
of scalable visualization systems. Hierarchical data structures are key not only
to the development of output-sensitive, view-dependent algorithms, which allow
the accuracy of the final result to be traded for an increased rate of interaction,
but also to the design of progressive algorithms, which can be used to improve
interactivity. The "Smart Surfaces" effort includes a number of projects
that share this goal.

One key approach to large surface handling is simplification. We are currently
investigating a number of simplification approaches, but they are all characterized
by the need to scale to the largest surfaces possible and, where feasible, to run
on restrictive hardware targets.

Out-of-Core Simplification of Large Polygonal Models

We present an algorithm for out-of-core simplification of large
polygonal datasets that are too complex to fit in main memory. The algorithm
extends the vertex clustering scheme of Rossignac and Borrel [RB93] by using
error quadric information for the placement of each cluster's representative
vertex, which better preserves fine details and results in a low mean geometric
error. The use of quadrics instead of the vertex grading approach in [RB93] has
the additional benefits of requiring less disk space and only a single pass over
the model rather than two. The resulting linear-time algorithm allows simplification
of datasets of arbitrary complexity.
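
To make the clustering step concrete, the following is a minimal sketch assuming a uniform grid and one plane quadric per triangle; the type names (Vec3, Quadric, cellIndex) are illustrative, not the paper's code.

```cpp
// Sketch of quadric-based vertex clustering: each triangle's plane quadric is
// accumulated into the grid cells of its vertices; the representative vertex
// of a cell later minimizes the accumulated quadric. Assumes non-negative
// coordinates for the illustrative cell index.
#include <cmath>
#include <cstdint>
#include <initializer_list>
#include <unordered_map>

struct Vec3 { double x, y, z; };

struct Quadric {                 // plane quadric, stored as quadratic part A, linear part b, constant c
    double A[3][3] = {};
    double b[3]    = {};
    double c       = 0.0;

    void addPlane(const Vec3& n, double d, double w) {   // add w * (n . x + d)^2
        const double nv[3] = { n.x, n.y, n.z };
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A[i][j] += w * nv[i] * nv[j];
            b[i] += w * d * nv[i];
        }
        c += w * d * d;
    }
};

// Map a point to its cluster cell on a uniform grid (illustrative indexing).
uint64_t cellIndex(const Vec3& p, double cellSize) {
    auto q = [&](double v) { return (uint64_t)(int64_t)std::floor(v / cellSize) & 0x1FFFFF; };
    return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
}

// Accumulate one triangle's area-weighted plane quadric into its vertices' cells.
void accumulateTriangle(std::unordered_map<uint64_t, Quadric>& cells,
                        const Vec3& a, const Vec3& b, const Vec3& c,
                        double cellSize) {
    Vec3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n { e1.y * e2.z - e1.z * e2.y,
             e1.z * e2.x - e1.x * e2.z,
             e1.x * e2.y - e1.y * e2.x };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len == 0.0) return;                               // skip degenerate triangles
    n = Vec3{ n.x / len, n.y / len, n.z / len };
    double d = -(n.x * a.x + n.y * a.y + n.z * a.z);
    for (const Vec3* v : { &a, &b, &c })
        cells[cellIndex(*v, cellSize)].addPlane(n, d, len);
}
```

The representative vertex of a cell then minimizes its accumulated quadric, i.e., it solves A x = -b; this is where the degenerate cases discussed next arise.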

To handle degenerate quadrics associated with (near) flat regions
and regions with zero Gaussian curvature, we present a robust method for solving
the corresponding underconstrained least-squares problem. The algorithm is able
to detect these degeneracies and handle them gracefully. Key features of the
simplification method include a bounded Hausdorff error, low mean geometric error,
high simplification speed (a reduction rate of up to 100,000 triangles per second), output
(but not input) sensitive memory requirements, no disk space overhead, and a
running time that is independent of the order in which vertices and triangles
occur in the input.
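
One common way to realize such a robust solve is a truncated symmetric eigendecomposition (a pseudo-inverse) anchored at a point inside the cell. The sketch below, which uses the Eigen library, illustrates the idea but is not necessarily the paper's exact formulation.

```cpp
// Robust solve for the representative vertex when the cell quadric is (near)
// singular. The quadratic part A is symmetric positive semi-definite; only
// eigenvalues that are significant relative to the largest one are inverted,
// and the solution is anchored at a point x0 inside the cell (e.g., its
// centroid), so unconstrained directions do not drift.
#include <Eigen/Dense>

Eigen::Vector3d solveQuadric(const Eigen::Matrix3d& A,
                             const Eigen::Vector3d& b,     // minimize x^T A x + 2 b^T x + c
                             const Eigen::Vector3d& x0,    // anchor point inside the cell
                             double relTol = 1e-3) {
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> eig(A);
    const Eigen::Vector3d& lambda = eig.eigenvalues();     // ascending order
    const Eigen::Matrix3d& V      = eig.eigenvectors();

    // Truncated pseudo-inverse: directions with tiny eigenvalues correspond to
    // the (near) flat / zero-curvature degeneracies and are left unconstrained.
    double cutoff = relTol * lambda.cwiseAbs().maxCoeff();
    Eigen::Vector3d invLambda;
    for (int i = 0; i < 3; ++i)
        invLambda[i] = (lambda[i] > cutoff) ? 1.0 / lambda[i] : 0.0;

    // Minimum-norm correction about x0: x = x0 - A^+ (A x0 + b).
    Eigen::Vector3d r = A * x0 + b;   // gradient of the quadric at x0 (up to a factor of 2)
    return x0 - V * (invLambda.asDiagonal() * (V.transpose() * r));
}
```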

In this paper we propose three simple but significant improvements
to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [Lin00], which
increase the quality of approximations and extend the applicability of the algorithm
to an even larger class of compute systems.

The original OoCS algorithm has memory complexity that depends
on the size of the output mesh, but no dependency on the size of the input mesh.
That is, it can be used to simplify meshes of arbitrarily large size, but the
complexity of the output mesh is limited by the amount of memory available. Our
first contribution is a version of OoCS that removes the requirement of having
enough memory to hold even the simplified mesh. With our new algorithm, the
whole process is made essentially independent of the available memory on the
host computer. Our new technique uses disk instead of main memory, but it is
carefully designed to avoid costly random accesses.
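
The disk-based pattern can be summarized as emit, sort, combine: quadric contributions are written sequentially, sorted out of core by cell id (for instance with a multiway merge sort), and then folded in a single sequential scan. The record layout and function names below are illustrative only, not the paper's implementation.

```cpp
// Sketch of the disk-based variant: instead of a hash table of cell quadrics
// in memory, each per-triangle quadric contribution is a fixed-size record on
// disk, keyed by grid cell; after an out-of-core sort by cell id, one
// sequential scan accumulates each cell's quadric.
#include <cstdint>
#include <cstdio>

struct Record {              // one quadric contribution, keyed by grid cell
    uint64_t cell;           // sort key: quantized grid cell id
    double   A[6];           // symmetric 3x3 quadratic part (upper triangle)
    double   b[3];           // linear part
    double   c;              // constant part
};

// Pass 1: emit one record per triangle corner (sequential writes only).
void emitContribution(std::FILE* out, const Record& r) {
    std::fwrite(&r, sizeof(Record), 1, out);
}

// Pass 3: after sorting by cell id, a single sequential read accumulates each
// cell's quadric; the representative vertex is then placed per cell.
void combineSortedRecords(std::FILE* in) {
    Record r, acc{};
    bool have = false;
    while (std::fread(&r, sizeof(Record), 1, in) == 1) {
        if (have && r.cell == acc.cell) {
            for (int i = 0; i < 6; ++i) acc.A[i] += r.A[i];
            for (int i = 0; i < 3; ++i) acc.b[i] += r.b[i];
            acc.c += r.c;
        } else {
            if (have) {
                // acc holds the complete quadric for the previous cell:
                // solve it (see the robust solve above) and write the vertex out.
            }
            acc = r;
            have = true;
        }
    }
    // the final cell's quadric in acc would be flushed here as well
}
```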

Our two other contributions improve the quality of the approximations
generated by OoCS. We propose a scheme for preserving surface boundaries which
does not use connectivity information, and a scheme for constraining the position
of the "representative vertex" of a grid cell to an optimal position inside
the cell.
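
As a rough illustration of the cell constraint only (the paper's constrained optimization is more careful than this), the simplest stand-in is to clamp the unconstrained quadric minimizer to the cell's bounding box:

```cpp
// Simplest possible stand-in for keeping the representative vertex inside its
// grid cell: clamp the unconstrained minimizer to the cell's bounding box.
#include <algorithm>
#include <Eigen/Dense>

Eigen::Vector3d clampToCell(const Eigen::Vector3d& x,
                            const Eigen::Vector3d& cellMin,
                            const Eigen::Vector3d& cellMax) {
    Eigen::Vector3d r;
    for (int i = 0; i < 3; ++i)
        r[i] = std::min(std::max(x[i], cellMin[i]), cellMax[i]);
    return r;
}
```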

We present a method for end-to-end out-of-core simplification
and view-dependent visualization of large surfaces. The method consists of three
phases: (1) memory insensitive simplification; (2) memory insensitive construction
of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent
rendering and navigation of the mesh. The first two off-line phases are performed
entirely on disk, and use only a small, constant amount of memory, whereas the
run-time system pages in only the rendered parts of the mesh in a cache coherent
manner. As a result, we are able to process and visualize arbitrarily large meshes
given a sufficient amount of disk space: a constant multiple of the size of the
input mesh.

Similar to recent work on out-of-core simplification, our memory
insensitive method uses vertex clustering on a rectilinear octree grid to coarsen
and create a hierarchy for the mesh, and a quadric error metric to choose vertex
positions at all levels of resolution. We show how the quadric information can
be used to concisely represent vertex position, surface normal, error, and curvature
information for anisotropic view-dependent coarsening and silhouette preservation.
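
As a hedged illustration of why a quadric can double as a compact attribute store: for a nearly flat neighborhood the quadratic part is close to a rank-one matrix built from the surface normal, so its eigenstructure yields an approximate normal and a curvature/anisotropy measure. The sketch below captures this idea; the exact quantities used by the system may differ.

```cpp
// Deriving approximate attributes from a cell quadric's quadratic part A:
// the dominant eigenvector approximates the average surface normal, and the
// relative size of the remaining eigenvalues indicates curvature/anisotropy.
#include <Eigen/Dense>

struct QuadricAttributes {
    Eigen::Vector3d normal;   // approximate average surface normal
    double anisotropy;        // 0 = perfectly flat, grows with curvature
};

QuadricAttributes attributesFromQuadric(const Eigen::Matrix3d& A) {
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> eig(A);   // eigenvalues ascending
    QuadricAttributes out;
    out.normal     = eig.eigenvectors().col(2);              // dominant direction
    double lmax    = eig.eigenvalues()[2];
    out.anisotropy = (lmax > 0.0)
        ? (eig.eigenvalues()[0] + eig.eigenvalues()[1]) / lmax
        : 0.0;
    return out;
}
```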

The run-time component of our system uses asynchronous rendering
and view-dependent refinement driven by screen-space error and visibility. The
system exploits frame-to-frame coherence and has been designed to allow preemptive
refinement at the granularity of individual vertices to support refinement on
a time budget.
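
A screen-space error test of the kind that drives such refinement can be sketched as follows; the perspective model and parameter names are assumptions for illustration, not the system's actual interface.

```cpp
// A vertex (or bounding sphere) is refined when its object-space error,
// projected onto the screen, exceeds a pixel tolerance.
#include <cmath>

struct Viewpoint {
    double eye[3];          // camera position
    double viewportHeight;  // in pixels
    double fovY;            // vertical field of view, radians
};

bool needsRefinement(const double center[3],   // vertex or bounding-sphere center
                     double objectError,       // object-space error bound
                     double pixelTolerance,    // allowed screen-space error in pixels
                     const Viewpoint& vp) {
    double dx = center[0] - vp.eye[0];
    double dy = center[1] - vp.eye[1];
    double dz = center[2] - vp.eye[2];
    double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist <= 0.0) return true;              // at the eye: always refine
    // Pixels spanned by objectError at distance dist under a pinhole projection.
    double pixelsPerUnit = vp.viewportHeight / (2.0 * dist * std::tan(0.5 * vp.fovY));
    return objectError * pixelsPerUnit > pixelTolerance;
}
```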

Our results indicate a significant improvement in processing
speed over previous methods for out-of-core multiresolution surface construction.
Meanwhile, all phases of the method are disk and memory efficient, and are fairly
straightforward to implement.

In this paper we show how out-of-core mesh processing techniques
can be adapted to perform their computations based on the new processing sequence
paradigm [Isenburg and Gumhold 2003; Isenburg et al. 2003], using mesh simplification
as an example. We believe that this processing concept will also prove useful
for other tasks, such as parameterization, remeshing, or smoothing, for which
currently only in-core solutions exist.

A processing sequence represents a mesh as a particular interleaved
ordering of indexed triangles and vertices. This representation allows streaming
very large meshes through main memory while maintaining information about the
visitation status of edges and vertices. At any time, only a small portion of
the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh
access is restricted to a fixed traversal order, but full connectivity and geometry
information is available for the active elements of the traversal. This provides
seamless and highly efficient out-of-core access to very large meshes for algorithms
that can adapt their computations to this fixed ordering.
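
The following sketch shows the kind of client-side access pattern this enables, using an illustrative event-based interface rather than the actual processing sequence API: only the active vertices are resident in core, and a finalization event signals when a vertex can be evicted or written out.

```cpp
// Streaming traversal of a very large mesh: triangles and vertices arrive in a
// fixed interleaved order; the in-core working set holds only active vertices.
#include <cstddef>
#include <unordered_map>

struct StreamVertex   { double pos[3]; };
struct StreamTriangle { std::size_t v[3]; };

struct StreamEvent {
    enum Kind { Vertex, Triangle, Finalize } kind;
    std::size_t    index;     // vertex index for Vertex/Finalize events
    StreamVertex   vertex;    // valid for Vertex events
    StreamTriangle triangle;  // valid for Triangle events
};

// A client (e.g., a simplifier or smoother) adapted to the fixed ordering.
template <class Source, class Processor>
void streamMesh(Source& source, Processor& proc) {
    std::unordered_map<std::size_t, StreamVertex> active;   // in-core working set
    StreamEvent e;
    while (source.next(e)) {                                 // sequential disk reads only
        switch (e.kind) {
        case StreamEvent::Vertex:   active[e.index] = e.vertex;          break;
        case StreamEvent::Triangle: proc.onTriangle(e.triangle, active); break;
        case StreamEvent::Finalize: proc.onFinalize(e.index, active);
                                    active.erase(e.index);               break;
        }
    }
}
```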

The two abstractions that are naturally supported by this representation
are boundary-based and buffer-based processing. We illustrate both abstractions
by adapting two different simplification methods to perform their computation
using a prototype of our mesh processing sequence API. Both algorithms benefit
from using processing sequences in terms of improved quality, more efficient
execution, and smaller memory footprints.

Several scalable terrain visualization projects are currently under way in
the group.

Visualization of Large Terrains Made Easy


We present an elegant and simple-to-implement framework for performing
out-of-core visualization and view-dependent refinement of large terrain surfaces.
Contrary to the recent trend of increasingly elaborate algorithms for large-scale
terrain visualization, our algorithms and data structures have been designed
with the primary goal of simplicity and efficiency of implementation. Our approach
to managing large terrain data also departs from more conventional strategies
based on data tiling. Rather than emphasizing how to segment and efficiently
bring data in and out of memory, we focus on the manner in which the data is
laid out to achieve good memory coherency for data accesses made in a top-down
(coarse-to-fine) refinement of the terrain. We present and compare the results
of using several different data indexing schemes, and propose a simple-to-compute
index that yields substantial improvements in locality and speed over more commonly
used data layouts.
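
As one simple instance of such a layout (not necessarily the proposed index), a Z-order (Morton) curve interleaves the bits of a sample's row and column so that samples visited together during coarse-to-fine refinement tend to be stored near each other on disk:

```cpp
// Z-order (Morton) index for a regular grid: interleave the bits of row and
// column so spatially nearby samples get nearby one-dimensional addresses.
#include <cstdint>

uint64_t mortonIndex(uint32_t row, uint32_t col) {
    auto spread = [](uint64_t v) {          // insert a 0 bit between each bit of v
        v &= 0xFFFFFFFFull;
        v = (v | (v << 16)) & 0x0000FFFF0000FFFFull;
        v = (v | (v << 8))  & 0x00FF00FF00FF00FFull;
        v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0Full;
        v = (v | (v << 2))  & 0x3333333333333333ull;
        v = (v | (v << 1))  & 0x5555555555555555ull;
        return v;
    };
    return (spread(row) << 1) | spread(col);
}
```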

Our second contribution is a new and simple, yet easy-to-generalize, method for
view-dependent refinement. Similar to several published methods in
this area, we use longest-edge bisection in a top-down traversal of the mesh
hierarchy to produce a continuous surface with subdivision connectivity. In tandem
with the refinement, we perform view frustum culling and triangle stripping.
These three components are done together in a single pass over the mesh. We show
how this framework supports virtually any error metric, while still being highly
memory and compute efficient.
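
The core of the traversal can be sketched as a short recursion over right triangles, splitting at the midpoint of the hypotenuse whenever the error test asks for more detail; triangle stripping and frustum culling, which the framework folds into the same pass, are omitted here, and all names are illustrative.

```cpp
// Top-down refinement by longest-edge bisection over a regular grid.
#include <cstdint>
#include <functional>

struct GridVertex { uint32_t row, col; };

using RefineTest   = std::function<bool(const GridVertex&, int level)>;
using EmitTriangle = std::function<void(const GridVertex&, const GridVertex&, const GridVertex&)>;

// apex is the right-angle vertex; (left, right) span the hypotenuse.
void refine(const GridVertex& apex, const GridVertex& left, const GridVertex& right,
            int level, const RefineTest& shouldSplit, const EmitTriangle& emit) {
    GridVertex mid{ (left.row + right.row) / 2, (left.col + right.col) / 2 };
    if (level > 0 && shouldSplit(mid, level)) {
        refine(mid, left, apex, level - 1, shouldSplit, emit);   // left child
        refine(mid, apex, right, level - 1, shouldSplit, emit);  // right child
    } else {
        emit(apex, left, right);                                 // leaf triangle
    }
}
```

In the actual system, shouldSplit would combine a screen-space error test (as sketched earlier) with view frustum culling, and emit would append to a triangle strip.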

Terrain Simplification Simplified: A General
Framework for View-Dependent Out-of-Core Visualization


This paper describes a general framework for out-of-core rendering and management
of massive terrain surfaces. The two key components of this framework are: view-dependent
refinement of the terrain mesh; and a simple scheme for organizing the terrain
data to improve coherence and reduce the number of paging events from external
storage to main memory. Similar to several previously proposed methods for view-dependent
refinement, we recursively subdivide a triangle mesh defined over regularly gridded
data using longest-edge bisection. As part of this single, per-frame refinement
pass, we perform triangle stripping, view frustum culling, and smooth blending
of geometry using geomorphing. Meanwhile, our refinement framework supports a
large class of error metrics, is highly competitive in terms of rendering performance,
and is surprisingly simple to implement.
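
The geomorphing step can be illustrated as a per-vertex blend between the elevation implied by the coarser mesh and the vertex's true elevation; the blend factor and names below are assumptions made for the sake of the sketch.

```cpp
// Geomorphing: instead of snapping a vertex's elevation when it appears or
// disappears during refinement, blend between the elevation implied by the
// coarser mesh (the midpoint of the bisected edge) and its true value.
float geomorphElevation(float trueZ, float parentZ0, float parentZ1, float blend) {
    float coarseZ = 0.5f * (parentZ0 + parentZ1);   // elevation in the coarser mesh
    // blend in [0,1]: 0 = just coarse enough to appear, 1 = fully refined
    return coarseZ + blend * (trueZ - coarseZ);
}
```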

Independent of our refinement algorithm, we also describe several data layout
techniques for providing coherent access to the terrain data. By reordering the
data in a manner that is more consistent with our recursive access pattern, we
show that visualization of gigabyte-size data sets can be realized even on low-end,
commodity PCs without the need for complicated and explicit data paging techniques.
Rather, by virtue of dramatic improvements in multilevel cache coherence, we
rely on the built-in paging mechanisms of the operating system to perform this
task. The end result is a straightforward, simple-to-implement, pointerless indexing
scheme that dramatically improves the data locality and paging performance over
conventional matrix-based layouts.
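
In this spirit, a minimal sketch of letting the operating system do the paging is simply to memory map the reordered file (shown here with POSIX mmap; the file name handling and element type are assumptions) and index into it using the coherent layout's index:

```cpp
// Memory map a reordered terrain file and let the OS page it on demand; the
// refinement code simply indexes into the returned array.
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

const float* mapElevations(const char* path, size_t& countOut) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                                   // the mapping stays valid after close
    if (base == MAP_FAILED) return nullptr;
    countOut = st.st_size / sizeof(float);
    return static_cast<const float*>(base);      // index with the coherent layout's index
}
```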