Photography and motion pictures play an important role in our society as information carriers, an artistic medium, and historical documents, representing cultural values that have to be preserved. Emerging electronic imaging techniques help in developing new methods to accomplish this goal. The dyes of common three-color photographic materials are chemically rather unstable: both their thermodynamic and their photochemical stability are low. As a result, millions of photographs and thousands of films deteriorate if not preserved and stored under optimal conditions. It is of great interest to curators of museums that house photographic or cinematographic collections to simulate and visualize the fading process. A multimedia production including images and further information offers a direct and convincing way to demonstrate the effects of various storage alternatives on dye loss. This project is an example of an interdisciplinary approach that includes photography, conservation, and computer science. The simulation program used to create the faded images is based on algorithms developed for the reconstruction of faded color photographic materials.
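The fading model itself is not specified in this abstract; as a minimal sketch, assuming first-order (exponential) dark fading with hypothetical per-dye rate constants, dye loss over storage time could be simulated like this:

```python
import numpy as np

def fade_dyes(densities, rates, years):
    """Simulate dark fading of cyan, magenta, and yellow dye layers.

    densities: array (..., 3) of C, M, Y dye densities
    rates: hypothetical per-dye first-order fading constants (1/year)
    years: storage time in years
    """
    densities = np.asarray(densities, dtype=float)
    rates = np.asarray(rates, dtype=float)
    # First-order kinetics, applied independently per dye layer:
    # D(t) = D0 * exp(-k * t)
    return densities * np.exp(-rates * years)

# A neutral mid-gray patch; the magenta rate is set highest here to mimic
# the common observation that magenta fades fastest in dark storage.
patch = np.array([0.8, 0.8, 0.8])
faded = fade_dyes(patch, rates=[0.005, 0.02, 0.008], years=30)
```

Rendering `faded` back through the film's characteristic curves would then yield the simulated aged image for the multimedia presentation.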

In the petroleum industry, huge simulations are analyzed with powerful graphics tools. Interactivity undoubtedly increases the efficiency of these tools; is immersion, popularized by today's VR systems, also a key factor? Interactivity can be enhanced by various means, such as dedicated hardware, efficient polygonal extraction algorithms, and geometry simplification. Special visualization techniques must be used for refined and multiblock structured grids. There are various input devices and navigational methods for manipulating 3D datasets. Two-dimensional input devices can be used for direct 3D manipulations that give the illusion of handling a real object. Many 6-DOF input devices are available. We have tested a mechanical joystick designed by V. Hayward at McGill University. It avoids some of the disadvantages of electromagnetic sensors (lag, high noise, low accuracy) and is naturally suited to rate-control motion, which allows precise displacements in large virtual spaces. Conversely, the ability of electromagnetic sensors to track varied movements in large physical workspaces is quite useful for immersive visualization, where objects can be selected with a flying stylus. These new navigational tools, combining stereoscopy, head-tracking, and 3D manipulation, will probably prevail over traditional tools, which will nevertheless survive for some time because they have achieved a kind of perfection of their own.

The NASA EOS Mission to Planet Earth (MTPE) will be both an opportunity and a challenge to visualization and analysis tool developers. The EOS Data and Information System (EOSDIS) will gather, process, and archive a terabyte (TB) of earth science data per day. Because of the size of the EOSDIS program and the size and diversity of the global change community, technical decisions and standards adopted by EOSDIS should have a significant impact on the earth science, geographic information systems (GIS), and visualization development communities as a whole. NASA is working to encourage and assist commercial and other outside tool developers to create or refine visual analysis tools to meet the demands of the large global change research community by providing the necessary libraries and application program interfaces (APIs). In order to simplify data ingest, EOSDIS has adopted the hierarchical data format (HDF) as a standard for most, if not all, EOS data. EOSDIS will provide developers with standard APIs for accessing and understanding the wide variety of data and metadata within EOSDIS data sets. Many existing visualization tools have inadequate capabilities for locating data within geographic and temporal domains. Standards are being established and libraries developed which will allow developers to provide capabilities for navigating, geolocating, and projecting both low-level orbital-swath data and projected/gridded data within their tools.

Three-dimensional tomographic images obtained from different modalities, or from the same modality at different times, provide complementary information. For example, while PET shows brain function, MRI images identify anatomical structures. In this paper, we investigate the problem of displaying the available structural and functional information together. Several steps are described to achieve this goal: segmentation of the data, registration, resampling, and display. Segmentation is used to distinguish brain tissue from surrounding tissues, especially in the MRI data. Registration aligns the different modalities as closely as possible. Resampling arises from the registration, since the two data sets do not usually share a grid, and rendering is easiest when the data lie on the grid used for display. We combine several techniques to display the data. The MRI data is reconstructed from 2D slices into 3D structures from which isosurfaces are extracted and represented by approximating polygonalizations; these are then displayed using standard graphics pipelines, including shaded and transparent images. PET data measures the quantitative rates of cerebral glucose utilization or oxygen consumption, and a PET image is best displayed as a volume of luminous particles. The combination of both display methods allows the viewer to compare the functional information contained in the PET data with the anatomically more precise MRI data.
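The resampling step can be sketched as follows. This assumes a rigid registration has already produced a rotation R and translation t (the values below are hypothetical), and uses SciPy's `affine_transform` rather than the authors' own implementation:

```python
import numpy as np
from scipy.ndimage import affine_transform

# Hypothetical rigid registration result: rotation R and offset t mapping
# MRI grid coordinates into PET grid coordinates.
theta = np.deg2rad(5.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 2.0, -1.0])

pet = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in PET volume

# Resample the PET volume onto the MRI grid: for each output voxel x,
# sample the PET volume at R @ x + t (trilinear interpolation, order=1).
pet_on_mri_grid = affine_transform(pet, R, offset=t, order=1,
                                   output_shape=(64, 64, 64))
```

Once both volumes share the display grid, the isosurface and particle renderings can be composited directly.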

Various tomographic imaging systems start data processing with a series of 2D images acquired from the target objects. To describe the geometry of complex structures, such as those in biomechanical and orthopaedic investigations, we need methods to reconstruct the 3D surface boundaries of the objects from 2D contours in the images. Of primary importance is an effective and efficient data structure for representing these free-form surfaces. This paper presents a representation scheme for free-form surface patches based on a 3D curvilinear lattice. Coupled with the Marching Cubes algorithm, the new data structure provides a uniform representation for different types of surfaces, and its output can be visualized using a standard graphics renderer. The method considerably reduces the memory required for boundary generation compared to the conventional approach in which iso-surfaces are evaluated from a full volume. It also speeds up the surface rendering procedure by reducing the number of surface triangles. We illustrate its abilities in handling surfaces with different topological structures.
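The abstract does not give the lattice mapping explicitly; one plausible reading, sketched below under the assumption that Marching Cubes vertices are produced in the lattice's index space, is to place each vertex in world space by trilinearly interpolating the curvilinear lattice's node positions:

```python
import numpy as np

def warp_to_lattice(points, lattice):
    """Map points from the index space of a curvilinear lattice to world
    space by trilinear interpolation of the lattice node positions.

    points: (N, 3) fractional (i, j, k) coords, e.g. Marching Cubes output
    lattice: (ni, nj, nk, 3) world-space positions of the lattice nodes
    """
    out = np.empty_like(points, dtype=float)
    for n, (i, j, k) in enumerate(points):
        i0, j0, k0 = int(i), int(j), int(k)
        fi, fj, fk = i - i0, j - j0, k - k0
        cell = lattice[i0:i0+2, j0:j0+2, k0:k0+2]  # the 8 surrounding nodes
        # Trilinear weights along each axis, combined by outer product.
        wi = np.array([1 - fi, fi])
        wj = np.array([1 - fj, fj])
        wk = np.array([1 - fk, fk])
        w = wi[:, None, None] * wj[None, :, None] * wk[None, None, :]
        out[n] = (w[..., None] * cell).sum(axis=(0, 1, 2))
    return out

# Identity lattice: node (i, j, k) sits at world position (i, j, k),
# so the warp should return the input points unchanged.
ni = nj = nk = 4
lattice = np.stack(np.meshgrid(np.arange(ni), np.arange(nj), np.arange(nk),
                               indexing='ij'), axis=-1).astype(float)
pts = np.array([[1.5, 2.25, 0.75]])
world = warp_to_lattice(pts, lattice)
```

A curved lattice (nodes displaced from the identity) would bend the extracted surface accordingly without re-evaluating any volume data.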

We discuss how vector quantization, a technique well known for data compression, can be applied to exploratory data visualization. This technique is especially useful for multivariate imagery, because it reduces the data to a manageable size, without stripping important features. Previous visualization methods are able to combine up to three variables per pixel into an integrated display. Our vector quantization technique allows us to integrate essentially any number of variables per pixel. Furthermore, the cluster analysis inherent in vector quantization has the property of identifying relationships within the data, based on similarity of textural and sample features. We use straightforward techniques to visualize these relationships interactively. The result is a tool that applies to a wide variety of imagery visualization problems. Our prototype uses contrast enhancement, color scales, and highlighting for interactive feature extraction. We show examples from panchromatic and multispectral earth observation satellites and medical imagery.
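As an illustration of the core idea (not the authors' specific codebook design), a plain Lloyd's-algorithm vector quantizer can reduce a hypothetical seven-variable image to a single cluster index per pixel, which can then be mapped to color or highlighted interactively:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multivariate image: 32x32 pixels, 7 variables per pixel
# (e.g. spectral bands) -- more than the three a color display can show.
img = rng.normal(size=(32, 32, 7))
X = img.reshape(-1, 7)

def vector_quantize(X, k=4, iters=20):
    """Plain Lloyd's algorithm: cluster pixel vectors into a small codebook."""
    codebook = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel vector to its nearest codeword.
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each codeword to the centroid of its cluster.
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = X[labels == c].mean(0)
    return codebook, labels

codebook, labels = vector_quantize(X, k=4)
index_image = labels.reshape(32, 32)  # one cluster index per pixel for display
```

The cluster indices form the "integrated display"; pixels sharing an index are similar across all seven variables at once, which is what exposes the relationships the abstract describes.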

This paper concerns data structures for planning to combine engineering research areas regarded as communication modes: images, outline sketches, and speech. In image work, images are enhanced, compressed, and transmitted; in graphics, solid display is central; in speech, recognition and identification dominate. Outside computing, graphics uses sketches, outline drawings, or schematics to summarize other data such as photographic images. Practical image processing involves comparisons, feature and edge detection, shape, and segmentation, using both transforms and other global analyses. Most speech work involves domain restriction. This limit can be removed by focusing on data structures: they can link the word and picture domains, and allow for captioning and for indexing and highlighting domains for users. This shows that data structures enable the implementation of useful functions and support information handling with synergistic benefits, which is the paper's theme. Data structuring is also the theme of recent research literature on alternative means for the visual presentation of multiple-measure numerical data; this paper briefly surveys these materials. We show how research from the data structures field enables new methods for addressing visualization issues, improves the handling of large data records, and encourages greater use of visual and numerical records. (This expands on a talk presented 8 July 1994 at Argonne National Laboratory.)

We present alternative ways of looking at vector fields that complement existing flow visualization methods. These techniques are based on bump mapping and are simple, easy to compute, and fast to render. For example, in one of the techniques a surface is extracted from the vector field and the directions of the vector values on the surface are normalized and used as surface normals. The shading of the surface then provides directional information. By manipulating the position and color of light sources, regions with particular vector directions can be highlighted or hidden, allowing direction-based selection. Since the technique simply uses the vector directions as surface normals to bump the surface, it can be applied to irregularly sampled flow fields as well as to visualizing flow directions on curved surfaces. The magnitudes of the vector values can optionally be mapped to color.
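A minimal sketch of the shading step, assuming simple Lambertian lighting (the paper's exact shading model is not specified here): each sample's normalized vector direction stands in for the surface normal, so brightness directly encodes flow direction, and moving the light highlights or darkens particular directions.

```python
import numpy as np

def direction_shading(vectors, light_dir):
    """Lambertian shading in which each sample's *vector direction* is used
    as the surface normal, so brightness encodes flow direction."""
    v = np.asarray(vectors, dtype=float)
    n = v / np.linalg.norm(v, axis=-1, keepdims=True)  # normalize directions
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Clamp to [0, 1]: directions facing away from the light go dark, which
    # is what lets a well-placed light highlight or hide chosen directions.
    return np.clip(n @ l, 0.0, 1.0)

vecs = np.array([[0.0, 0.0, 1.0],    # aligned with the light: bright
                 [1.0, 0.0, 0.0],    # perpendicular: dark
                 [0.0, 0.0, -1.0]])  # opposed: black
shade = direction_shading(vecs, light_dir=[0, 0, 1])
```

Coloring the light sources extends the same selection idea to multiple direction ranges at once.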

Environmental data have inherent uncertainty which is often ignored in visualization. For example, meteorological stations measure wind with good accuracy, but winds are often averaged over minutes or hours. As another example, Doppler radars (wind profilers and ocean current radars) take thousands of samples and average the possibly spurious returns. Other sources, including time series data, carry a wealth of uncertainty information that traditional vector visualization methods, such as wind barbs and arrow glyphs, simply ignore. We have developed new vector glyphs to visualize uncertain winds and ocean currents. Our approach is to include uncertainty in direction and magnitude, as well as the mean direction and length, in vector glyph plots. Our glyphs show the variation in uncertainty, and provide fair comparisons of data from instruments, models, and time averages of varying certainty. We use both qualitative and quantitative methods to compare our glyphs to traditional ones. Subjective comparison tests with experts (meteorologists and oceanographers) are provided, as well as objective tests (data-ink maximization) in which the information density of our new glyphs and traditional glyphs is compared. We have shown that visualizing data together with their uncertainty information enhances the understanding of the continuous range of data quality in environmental vector fields.
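As an illustration of the idea (not the authors' exact glyph design), the geometry of a fan-style uncertainty glyph can be computed from the mean direction and magnitude together with their standard deviations:

```python
import numpy as np

def uncertainty_glyph(mean_angle, mean_mag, angle_std, mag_std, n=16):
    """Vertices of a fan/wedge glyph: an arc spanning +/- one standard
    deviation in direction, bounded by radii mean_mag +/- mag_std.
    A sketch of the concept, not the paper's exact glyph."""
    a = np.linspace(mean_angle - angle_std, mean_angle + angle_std, n)
    inner = (mean_mag - mag_std) * np.stack([np.cos(a), np.sin(a)], axis=1)
    outer = (mean_mag + mag_std) * np.stack([np.cos(a), np.sin(a)], axis=1)
    # Closed polygon: outer arc forward, then inner arc backward.
    return np.vstack([outer, inner[::-1]])

# A wind estimate: mean direction 45 degrees, speed 10 +/- 2, direction +/- 15 degrees.
poly = uncertainty_glyph(np.deg2rad(45), 10.0, np.deg2rad(15), 2.0)
```

A narrow, thin wedge then reads as a well-constrained measurement and a wide, thick one as an uncertain average, which is what makes comparisons across instruments and time averages fair.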

Through the use of digital image processing and advanced shock tube technology, a more stringent comparison of experimental data with computational predictions can be achieved. Holographic interferometry is used to visualize density distributions of shock wave flows experimentally. Experiment-like images are created from numerical data using the optical parameters of the experiment; in the case of axisymmetric flow, a ray-tracing algorithm is incorporated to produce the correct images. Where many dynamic processes, such as multiple shock reflections, occur, a single image is not sufficient. For these cases a specially designed shock tube is used in combination with a computer-controlled trigger to yield images obtained at constant experimental conditions but at different moments in the development of the flow. These data are combined into animation sequences. This presentation of results provides better insight into the flow physics and CFD code behavior. The combination of a physical interpretation of numerical data and a dynamic comparison of images has been found to greatly enhance the quality of CFD code validation against experiment.

We present the DEVise (data exploration via visualization environment) toolkit designed for visual exploration of stream data. Data of this type are collected continuously from sources such as remote sensors, program traces, and the stock market. A typical application involves looking for correlations, which may not be precisely defined, by experimenting with graphical representations. This includes selectively comparing data from multiple sources, selective viewing by zooming and scrolling at various resolutions, and querying the underlying data from the graphics. DEVise is designed to provide greater support than packages such as AVS or Khoros for this type of application. First, by abandoning the network flow model of AVS and Khoros in favor of a database query model, we are able to incorporate many performance improvements for visualizing large amounts of data. To our knowledge, this is the first attempt to eliminate data size limitations in a visualization package. Second, by structuring the stand-alone graphics module of most existing tools into user accessible components, users can quickly create, destroy, or interconnect the components to generate new visualizations. This flexibility greatly increases the ease with which users can browse their data. Finally, through limited programming, users can query the underlying data through the graphical representation for more information about the records used to generate the graphical representation.

Computational fluid dynamics simulations result in large multivariate data sets, with information such as pressure, temperature, and velocity available at grid points for a sequence of time steps. Velocity data is typically visualized by displaying a particle animation or streamlines. We present an efficient method for calculating particle paths based on velocity data from curvilinear grids. In order to compute a path, a velocity must be determined at arbitrary points inside the grid. We use a tetrahedral decomposition of the curvilinear grid: each voxel, formed by eight points, is divided into five tetrahedra. The point of intersection of a particle's path with the boundary of a tetrahedron is calculated, and look-up tables are used to determine which tetrahedron the particle enters next. The new velocity is computed by interpolating the velocities at the four vertices of the tetrahedron. Tracing through the tetrahedra eliminates the need for searching through the curvilinear grid and avoids the additional sampling error caused by imposing a regular grid. Using our method, the time to update the position of a particle for a single time step is essentially constant [O(1)].
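The per-tetrahedron step can be sketched as follows: the velocity at a point is the barycentric blend of the four vertex velocities, and a negative barycentric coordinate identifies the face (and hence, via the look-up table, the neighboring tetrahedron) through which the particle exits. The geometry and velocities below are illustrative values, not data from the paper.

```python
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric coordinates of point p in tetrahedron tet (4x3 vertices)."""
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    b = np.linalg.solve(T, p - tet[0])
    return np.array([1 - b.sum(), *b])

def interpolate_velocity(p, tet, vel):
    """Velocity at p as the barycentric blend of the four vertex velocities.
    A negative coordinate would flag the exit face for the tet-to-tet walk."""
    w = barycentric_coords(p, tet)
    return w @ vel

# Unit tetrahedron with illustrative vertex velocities.
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
vel = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
v = interpolate_velocity(np.array([0.25, 0.25, 0.25]), tet, vel)
```

Because the walk only ever tests the current tetrahedron's four faces, the cost per particle update stays constant regardless of grid size.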

Nonlinear deterministic dynamical systems often exhibit complex and chaotic behavior which is difficult to comprehend. Visualizing the characteristics of such systems is therefore essential for an understanding of the underlying dynamics. In this paper, concepts for the interactive graphical exploration of analytically defined dynamical systems are discussed. Emphasis is put on interactivity, which facilitates the investigation and exploration of such systems. The following topics are treated in more detail: interactive specification, simple and fast graphical representation, and interactive modification of dynamical systems. The paper concentrates on 2D and 3D orthographic projections of higher-dimensional phase spaces and on the display of bifurcation diagrams. A prototype software system which incorporates the previously presented ideas is briefly discussed. It is intended to offer quick insight into the dynamics of a system and to enable fast investigation of variations of a system.
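As a concrete example of the bifurcation diagrams mentioned above, the standard recipe for a 1D map (here the logistic map, chosen purely for illustration) is to iterate past the transient at each parameter value and record the settled orbit:

```python
import numpy as np

def bifurcation(r_values, n_transient=500, n_keep=100, x0=0.5):
    """Sample the attractor of the logistic map x -> r*x*(1-x) for each r.
    Returns (r, x) pairs suitable for a scatter-style bifurcation diagram."""
    pts = []
    for r in r_values:
        x = x0
        for _ in range(n_transient):   # discard the transient
            x = r * x * (1 - x)
        for _ in range(n_keep):        # record the settled orbit
            x = r * x * (1 - x)
            pts.append((r, x))
    return np.array(pts)

pts = bifurcation(np.linspace(2.5, 4.0, 50))
```

For r < 3 all recorded points collapse onto the fixed point 1 - 1/r; beyond it the period-doubling cascade appears, which is exactly the structure such diagrams make visible at a glance.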

Dataflow visualization systems such as AVS, IRIS Explorer, and IBM Data Explorer have been widely used, but inherent problems such as excessive memory usage and low run-time efficiency limit their use with large, time-dependent data sets. For interactive visualization, efficiency is a necessity rather than a luxury. 4d2 is designed as a high-performance interactive visualization tool for time-dependent 3D CFD data on rectangular grids containing multiple scalar and vector fields, as well as for time-dependent 3D particle data. The visualization functions include contour plots, vector plots, slicing, iso-surface extraction, streamline integration, particle display, and volume rendering, together with a graphical user interface and animation control for time-dependent data sets. Several features distinguish 4d2 from other visualization tools. First, it provides a rich set of visualization functions and allows mixing of volume rendering with iso-surfaces, streamlines, particles, and so on. Second, many techniques are used to achieve extremely memory- and time-efficient run-time performance: no data duplication is needed, hardware alpha-buffering is used for volume rendering, immediate graphics mode is used, and parallel algorithms are used when multiprocessors are available. This GL-based software runs on SGI workstations with both 8-bit and 24-bit graphics hardware, and is available in the public domain. 4d2 has been successfully used in atmospheric science and astrophysics at the National Center for Supercomputing Applications and a number of other research labs.

The exploration of visual data and the use of visual information during the design process can be greatly enhanced by working within a virtual environment where the user is closely coupled to the data by means of immersive technologies and natural user interfaces. Current technology enables us to construct a virtual environment utilizing 3D graphics projection, object-generated stereo sound, tactile feedback, and voice command input. Advances in software architectures and user interfaces enable us to focus on enhancing the design process within the virtual environment. These explorations at MITRE have evolved into an application which focuses on the ability to create, manipulate, and explore photo- and audio-realistic 3D models of work spaces, office complexes, and entire communities in real time. This application, the Virtual Interactive Planning System, is a component of the MITRE virtual model shop, a suite of applications which permits the user to design and manipulate computer graphics models within the virtual environment.

An example of visualization in high energy physics is the visualization of photon trajectories in scintillating detectors and of electrons in proportional chambers and in new calorimeter detectors based on secondary electron emission. The main disadvantage of the latter is that the time interval for electron collection from the detector's volume is too long. The visualization method presented here provides a designer with a powerful yet simple tool for optimizing the detector parameters. In the present work we use two parameters for modelling the electron dynamics: the initial energy and the angle of emission. A beam of non-interacting electrons is used as a model; the beam consists of particles in which one of the parameters is varied. Thus, emission angles are varied while the initial kinetic energy of the particles is held constant, and vice versa. The beam motion is visualized as a colored surface, where each color corresponds to a definite value of the varied parameter. This approach allows us to investigate the behavior of electrons in electromagnetic fields of different configurations as a function of the field parameters. All possible values of initial energy and emission angle were also taken into account.
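A sketch of this kind of beam modelling, assuming non-relativistic electrons in a uniform magnetic field and a simple explicit integrator (the paper's field configurations and integration scheme are not specified):

```python
import numpy as np

Q_M = -1.758820e11  # electron charge-to-mass ratio q/m, C/kg

def trajectory(energy_eV, angle, B=0.01, dt=1e-12, steps=200):
    """Electron path in a uniform magnetic field B along z, for one member
    of a non-interacting beam parameterized by emission angle and energy."""
    # Non-relativistic: (q/m)*V = v^2/2, so v = sqrt(2*(q/m)*V).
    speed = np.sqrt(2 * abs(Q_M) * energy_eV)
    v = speed * np.array([np.sin(angle), 0.0, np.cos(angle)])
    x = np.zeros(3)
    path = [x.copy()]
    for _ in range(steps):
        # Lorentz force only: dv/dt = (q/m) * v x B
        v = v + dt * Q_M * np.cross(v, [0.0, 0.0, B])
        x = x + dt * v
        path.append(x.copy())
    return np.array(path)

# Sweep the emission angle at fixed kinetic energy: one "surface" of the beam.
paths = [trajectory(100.0, a) for a in np.linspace(0, np.pi / 2, 5)]
```

Sweeping one parameter while holding the other fixed yields the family of trajectories that, swept out together, forms the colored surface the abstract describes.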

Assembly planning is an important component of automation in manufacturing. It can help reduce production cost by avoiding unstable subassemblies and eliminating unnecessary tool changes within the assembly cell. The assembly plan generation process begins with the exploration of the precedence relations due to geometrical and mechanical constraints. After the precedence relations are derived, all feasible assembly sequences are generated. A diamond-shaped graph is commonly used to visualize all possible assembly sequences. A dual representation of all assembly sequences is also provided to facilitate the assembly sequence comparison task. Each possible sequence is transformed into a nodal representation and assumes a spatial location in a three-dimensional space. The proximity among all assembly sequence nodes in the dual space is designed to reflect the similarity among the sequences. The user can therefore navigate in the space of all feasible assembly sequences and compare similar assembly sequences that are clustered closely in the dual space. All three visualizations, namely the precedence relation, the diamond graph, and the dual graph, are coupled together so that interactions on one visualization are reflected on the other two.

Expert interpretation of raster-based data is needed when, for example, automatic reconstruction of sparsely sampled data cannot produce accurate models; it requires a means of interaction through which the expert's knowledge can be incorporated into the model to improve accuracy. If such expert interpretation is to be viable, the interaction must be intuitive, direct, and flexible. We present a novel approach to the design of such interaction: the use of the discrete thin-plate spline permits interactive manipulation of the stiffness and tension parameters of the plate to control its behavior between control points, while an object-based approach allows raster-based objects to be manipulated intuitively in the context of a visual representation of the objects. The editor adopts a problem-driven approach which allows specialized editing tools to be developed for a specific application domain. A prototype implementation of the editor is presented which provides insights into the advantages and limitations of the approach.
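A rough analogue using SciPy's thin-plate-spline interpolator (not the paper's discrete thin-plate formulation; the control points are invented for illustration, and the `smoothing` parameter merely plays a role loosely analogous to the stiffness/tension controls described above):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points an expert has placed on a sparsely sampled
# raster surface, with heights asserted to be correct.
ctrl_xy = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
ctrl_z = np.array([0., 0., 0., 0., 1.])

# smoothing=0 forces exact interpolation of the control points; increasing
# it relaxes the fit, loosely analogous to stiffening the plate.
spline = RBFInterpolator(ctrl_xy, ctrl_z, kernel='thin_plate_spline',
                         smoothing=0.0)

# Evaluate the edited surface on a display grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
surface = spline(np.column_stack([gx.ravel(), gy.ravel()])).reshape(21, 21)
```

Dragging a control point and re-evaluating gives the immediate visual feedback loop that makes this style of raster editing direct rather than parameter-driven.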

Armstrong Laboratory is focusing on software which will allow a graphics workstation to completely create and fly through a visual database anywhere in the world in under one hour, with no human intervention. This is being accomplished by utilizing the strengths of graphics workstations, rather than emulating traditional image generator methodology. Current software development allows a graphics workstation to automatically generate a flyable, two degree latitude by two degree longitude, visual database in less than five minutes from the Defense Mapping Agency's DTED (digital terrain elevation data). Sources of cultural, color, and satellite digital data which could be used to improve the coloration and texture of the terrain skin are being sought. Good feature data would then allow the development of an automated cultural population (e.g., buildings, overpasses, and rivers) of the database. These databases possess a much higher density of terrain data (approximately 450 times the usual number of terrain polygons) than traditional flight simulation databases processed from DTED source. This higher resolution provides the opportunity to research acrobatic maneuvering strategies during combat situations. Armstrong Laboratory's rapid database generation (RDG) project also incorporates a software-controlled high resolution inlay inside a lower resolution database within a single display channel.

We consider the task of passive navigation, where a stereo visual sensor system moves around an unknown scene. To guide autonomous navigation, it is important to build a visual map recording the location and shape of the objects in the scene and their world coordinates. The extended global visual map is an integration of local maps. The approach described in this paper integrates the processes of motion estimation, stereo matching, temporal tracking, and Delaunay triangulation interpolation. Through stereo matching, each frame (a stereo image pair) produces a set of 3D points of the current scene. The global structure of these 3D points is obtained using the results of the motion estimation. Delaunay tetrahedralization interpolates the three-dimensional data points with a simplicial polyhedral surface. The experiment includes 151 frames of stereo images acquired from a moving mobile robot.
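The interpolation stage can be sketched as follows, under the simplifying assumption that the recovered 3D points are treated as scattered heights over a ground plane (an illustration only; the paper works with fully 3D point sets via Delaunay tetrahedralization):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)

# Stand-in for 3D points recovered by stereo matching on one frame:
# scattered (x, y) ground positions with measured heights z.
xy = rng.random((200, 2))
z = np.sin(3 * xy[:, 0]) + xy[:, 1]

tri = Delaunay(xy)                     # simplicial (triangle) mesh over the points
height = LinearNDInterpolator(tri, z)  # piecewise-linear surface on that mesh

h = height([[0.5, 0.5]])[0]            # query the interpolated local map
```

Each frame's interpolated patch is a local map; transforming the patches into world coordinates with the estimated motion and merging them yields the extended global visual map.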

During the design of the COVISE (collaborative visualization environment) distributed and collaborative visualization system the efficient use of high performance computers and networks has been taken into account as far as possible. The efficiency of this system under varying conditions has been measured and the results of these measurements and their relation to the theoretically achievable values are presented. The results and the usability of these scenarios are discussed and possibilities to improve the overall performance are proposed.

The principles of scalable computing have been used in an investigation of the application of high speed data networks and remote computer resources in providing visualization tools for research and development activities. The architecture of a distributed visualization system that can utilize either shared memory or message passing paradigms is described. The three components of the system can be physically separated if network communication is provided. A flexible data cache server is used to accommodate newly computed data or data from an earlier experiment or computation. An image specification toolset, implemented for parallel/distributed architectures using PVM, includes methods of calculating common visualization forms such as vector fields, surfaces or streamlines from cache data. An image generation library, implemented for workstations and high performance PCs, receives the data objects and provides investigators with flexibility in image display. The system has been operated with several combinations of distributed and parallel processor machines connected by networks of different bandwidths and capacities. Observations on the performance and flexibility of different system architectures are given.

The complexity of parallel programs makes them more difficult to analyze for correctness and efficiency, in part because of the interactions between multiple processors and the volume of data that can be generated. Visualization often helps the programmer in these tasks. This paper focuses on the development of a new technique for constructing, evaluating, and modifying sophisticated, application-specific visualizations of parallel programs and performance data. While most existing tools offer predetermined sets of simple, two-dimensional graphical displays, this environment gives users a high degree of control over visualization development and use, including access to three-dimensional graphics, which remains relatively unexplored in this context. We have developed an environment that uses the IBM Visualization Data Explorer system to allow new visualizations to be prototyped rapidly, often taking only a few hours to construct totally new views of parallel performance trace data. Yet access to a robust library of sophisticated graphical techniques is preserved. The burdensome task of explicitly programming the visualizations is completely avoided, and the iterative design, evaluation, and modification of new displays is greatly facilitated.

Vision characteristics are covered by image transfer theory, but up to now it has dealt mainly with the observation of Lambertian (i.e., diffusely reflecting) objects on a Lambertian background. This model of reflection is quite reasonable for describing the vision quality of many natural and artificial objects. This paper presents a mathematical description of vision criteria for another class of objects, retroreflectors, so that their angular patterns of reflection can be dealt with under unfavorable observation conditions, such as viewing through a light-scattering medium like fog. The small-angle diffusion approximation is used to calculate the light characteristics under illumination by a source of an active vision system. By way of example, two questions are considered: (1) the visual perception of large-area objects, where some parts of a retroreflector can be seen as dark and others as bright; this may be important when analyzing and exploring visual information read out from a retroreflective panel; and (2) the interesting effect of enhanced contrast of a retroreflector image with increasing optical thickness of the scattering medium, which is related to the increasing 'effective' albedo of the 'equivalent' Lambertian object by which the retroreflector can be replaced. The results on the vision characteristics of retroreflective objects are compared with those for the observation of Lambertian objects, and the corresponding differences are discussed.

We discuss the integration of visualization and supercomputing in a low-cost environment. Computational requirements continue to increase dramatically as computational capabilities do, yet most architectures still separate the two processes: the computation is done on one system and the visualization on another. We describe an innovative architecture, developed by the Supercomputing Research Center of the Institute for Defense Analyses, within which the integration of visualization and supercomputation is realized. The immediate gains are obvious: program visualization, real-time computational steering, and rapid porting of current applications. We describe the issues we encountered in porting our experimental visualization software, the limitations and advantages of the hardware/software coupling, and a proposed extension of the architecture.

Compared with other medical imaging modalities, ultrasonic imaging has the advantages of safety, low cost, and, very importantly, real-time performance in data acquisition. To match the real-time data acquisition, interactivity is considered one of the crucial characteristics of 3D ultrasonic imaging systems. This paper proposes an incremental refinement method in a multi-process system, with an eye toward ultimately running real-time 3D ultrasonic imaging applications. We use deformable geometric models as templates of the target biomedical objects. With a matching and refining procedure, the geometric data are incrementally refined on the arrival of each boundary extracted from a 2D ultrasonic slice. All data processing tasks are distributed over a set of processes connected in a graphical environment. Each process incrementally reads the data received from upstream and produces new data for downstream. By running the processes simultaneously, the system gradually transforms the 3D templates to match the detected ultrasonic images.

Accurate 3-dimensional segmentation and volume reconstruction of scanned data sets are the subjects of a great deal of ongoing computer science and medical research. The current trend is away from simple surface reconstruction and towards reconstructing both the internal and external structures of soft tissue, such as the grey and white matter of the brain. The ability to accurately threshold and segment such structures has the additional benefit that surface area and volume measurements can be obtained during reconstruction. Such measures enable the tracking of physical properties, for example tumor size, over the course of the disease. The primitive for representing the interior of these objects is the tetrahedron. Tetrahedral geometries are readily convertible into finite element (FE) or computational fluid dynamics (CFD) meshes. The methodology and software modules are not proprietary, and with a moderate amount of experience satisfactory results can be obtained. The key concepts are accurate multi-resolution thresholding, image editing capability, grid simplification for import into the alpha-shape modeler, and accurate boundary classification.

Model-based object recognition must solve three-dimensional geometric problems involving the registration of multiple sensors and the spatial relationship of a three-dimensional model to the sensors. Observation and verification of the registration and recognition processes requires display of these geometric relationships. We have developed a prototype software system which allows a user to interact with the sensor data and model matching system in a three-dimensional environment. This visualization environment combines range imagery, color imagery, thermal (infrared) imagery, and CAD models of objects to be recognized. We are currently using imagery of vehicles travelling off-road (a challenging environment for the object recognizer). Range imagery is used to create a partial three-dimensional representation of a scene. Optical imagery is mapped onto this partial 3D representation. Visualization allows monitoring of the recognizer as it solves for the type and position of the object. The object is rendered from its associated CAD model. In addition to its usefulness in development of the object recognizer, we foresee eventual use of this technology in a fielded system for operator verification of automatic target recognition results.

A three-stage method is presented to interactively sonify spatially located data sets, such that both individual point sources and overviews of larger areas can be explored. First, a region of interest in the data set is defined. Delimiting the data points by area and other means corresponds to search with multiple keys in a geographic data base. Second, the selected values are mapped to parameters of simulated sound sources at corresponding locations, which are recorded by visually controllable microphone probes. The probes are shown on the visualized data landscape with their cut-off areas and data selections. At the last stage the sounds are spatialized according to the distance and location of the sources relative to the probes, adding echoes and reverberation from a virtual acoustic environment. We are experimenting with a listening room with loudspeakers and a headphone setting with real-time computation of the head-related transfer function (HRTF). Combined with interactive visualization, this kind of sonification serves as a navigational aid for locating interesting spots in the data. The system allows arbitrary mapping of data channels to visual and sonic parameters, which facilitates flexible exploration of multivariate data.
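
A minimal sketch of the probe idea: mix point sources into one probe signal with distance-dependent attenuation. The function name and the inverse-distance gain law are illustrative assumptions, not the paper's actual spatialization model (which adds HRTF processing, echoes, and reverberation):

```python
import math

def probe_mix(sources, probe, rolloff=1.0):
    """Mix point sources (x, y, value) into a single probe signal.

    Each source's contribution is attenuated by inverse distance to
    the probe position -- a crude stand-in for a virtual microphone.
    """
    px, py = probe
    total = 0.0
    for x, y, value in sources:
        d = math.hypot(x - px, y - py)
        total += value / (1.0 + rolloff * d)   # inverse-distance gain
    return total
```

Moving the probe over the data landscape then changes the relative loudness of nearby versus distant sources, which is what makes the sound usable as a navigational cue.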

The color icon harnesses color and texture perception to create integrated displays of two-dimensional multiparameter distributions. We have redesigned the color icon. The new design increases the number of parameters that can be integrated. It also extends the icon from two dimensions to three, which increases the number of integrable parameters even further. To facilitate comparison with other iconographic approaches, we have incorporated the color icon alongside several other icons within our NewExvis environment. Finally, we have implemented a parallel version on a proprietary multiprocessor architecture. We describe the new design; the main issues, considerations, and features of the parallel implementation; and a few application examples.

There is a growing need for systematic control over, and interpretation of, sound to support visual and other sensory mechanisms. A major drawback of many multi-sensory systems is that achieving sensory alignment is difficult if the perceived response to adjustments is not intuitively predictable. Perceptual spaces attempt to provide predictable perceived response to the adjustment of parameters, both in terms of identification of major attributes of change, and in terms of uniformity of perceived change with degree of movement or adjustment. Such spaces have been used to advantage for color representations of data under static viewing conditions. It is possible to construct a perceptual sound space, using the same principles used to construct perceptual color spaces, by drawing on studies identifying sound attributes and their degree of perceptibility. Such a space can be used as a basis for encoding data characteristics by sound attributes, although the temporal nature of sound perception poses 'gamut' interpretation distinctions from the analogous color spaces. This paper describes a perceptual sound space based on a pitch-brightness-timbre orthogonalization and linearization against perceived stimuli. The suitability of sound, controlled within this perceptual sound space, for data representation and navigation of complex information spaces is investigated.

Space-variant filtering is generally expensive and difficult to implement in a generic manner. As a result, conventional image filtering is largely space-invariant. Much imagery, such as sensed or modeled data that is geometrically distorted, requires space-variant filtering if data sampling integrity is to be preserved. Space-variant filtering under interactive control can better exploit the expertise of an application specialist because filter kernel characteristics, and the result of applying the filters, can be visualized simultaneously as parameters are adjusted. This paper shows how space-variant filters can be generated, modified, and applied to real filtering problems interactively using visualization of filter kernel images and the effects of their application. Massively parallel processing is exploited to provide scalable realizations of the filtering, in which space-variant filters of varying type and bandwidth are embedded within parallel tool-kits. Control of filter characteristics is achieved using image masks derived from interaction, from data properties, from modeling parameters, and from data format information. Application examples show the space-variant filtering requirements of surface modeling, where smoothing must be avoided in regions of high spatial frequency yet permitted in regions of low spatial frequency, together with geometrically and parametrically derived filtering.
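
The mask-controlled idea can be illustrated with a toy filter whose kernel size varies per pixel. This is a sketch under simplifying assumptions (a box kernel, a radius mask) rather than the paper's parallel tool-kit implementation:

```python
def space_variant_blur(image, radii):
    """Box blur whose half-width varies per pixel.

    `image` and `radii` are equal-sized 2-D lists; `radii[i][j]` is
    the blur radius at pixel (i, j), with 0 leaving it untouched --
    the mask plays the role of the interactively derived image masks.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            r = radii[i][j]
            acc, n = 0.0, 0
            for ii in range(max(0, i - r), min(h, i + r + 1)):
                for jj in range(max(0, j - r), min(w, j + r + 1)):
                    acc += image[ii][jj]
                    n += 1
            out[i][j] = acc / n
    return out
```

Setting the radius mask to zero over high-spatial-frequency regions and to larger values elsewhere reproduces, in miniature, the surface-modeling requirement described above.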

Vision characteristics are covered by image transfer theory, which until now has dealt mainly with the observation of Lambertian (i.e., diffuse-reflecting) objects on a Lambertian background. This reflection model describes vision quality reasonably well for many natural and artificial objects. This paper presents a mathematical description of vision criteria for another class of objects, retroreflectors, so that their angular patterns of reflection can be treated under unfavorable observation conditions through a light-scattering medium, such as fog. The small-angle diffusion approximation is used to calculate the light characteristics under illumination by a source of an active vision system. By way of example, two questions are considered: (1) the visual perception of large-area objects, where some parts of a retroreflector can be seen as dark and others as bright; this may be important when analyzing and exploring visual information read out from a retroreflective panel; and (2) the interesting effect of enhanced retroreflector image contrast with increasing optical thickness of the scattering medium, which is related to the increasing 'effective' albedo of an 'equivalent' Lambertian object by which the retroreflector can be replaced. The results on the vision characteristics of retroreflective objects are compared with those for the observation of Lambertian ones, and the corresponding differences are discussed.

This paper discusses scientific visualization of scalar and vector fields, particularly relating to clouds and climate modeling. One cloud rendering method applies a 3-D texture to cloudiness contour surfaces, to simulate a view from outer space. The texture is advected by the wind flow, so that it follows the cloud motion. Another technique simulates multiple scattering of incident light from the sun and sky. This paper also presents a simulation of the microscopic cross-bridge motion which powers muscle contraction. It was rendered by ray-tracing contour surfaces of summed Gaussian ellipsoids approximating the actin and myosin protein shapes.

An exciting use of visualization technology is to provide an environment for exploring physical phenomena in a way that cannot be duplicated experimentally. Image synthesis is the method used to analyze a typical flow visualization study of a modern aircraft configuration. In particular, tuft and liquid crystal flow visualization studies are used to illustrate the synthesis scheme. This paper details a data environment suitable for image synthesis using real-world and synthetic components. Computer vision techniques are used to extract features from aerodynamic image data. These features, or primitives, can be combined in a single scene to view multiple data sets simultaneously. The methods for layering or merging these diverse primitives are detailed. In addition to detailing the steps of the analysis-to-synthesis pathway, a comparison of the effectiveness of segmentation techniques at achieving the required feature extraction is discussed.

The art of optical motion capture combines computational devices with optical detectors. The desired spatial and temporal resolutions require millions of discrete detection locations, or pixels, operating at frame rates ranging from several hertz to several thousand hertz. Processing frames from multiple detectors in real time requires several billion operations per second, and the load grows as the frame rate or resolution increases. One technique suggested here is the use of orthogonal one-dimensional (1-D) CCD or linear array detectors instead of the more common two-dimensional (2-D) CCD or area array detectors. A brief description of the advantages of linear arrays is provided, based on experience with an actual system, validating empirical supposition with proven results. Extrapolations into future developments are suggested, combining the best of both designs.

By using the wavelet transform, in our previous work we developed a hierarchical planar curve descriptor which decomposes a curve into components of different scales, so that the coarsest scale components carry the global approximation information while the finer scale components contain the local detailed information. In this research, we extend that work to a multiscale description of cartoon characters and propose a framework for cartoon animation and morphing. We perform wavelet transforms on the curves that describe cartoon shapes and use the multiscale coefficients as control points for shape manipulation. To facilitate animation, we model the motion of a cartoon character with the Lagrangian dynamic equation, where the multiscale curve is driven by internal and external forces. The spatial and frequency localization property of the multiscale curve model results in sparse and diagonally dominant representations of the mass and stiffness matrices of the Lagrangian equation, so that the computation can be greatly simplified. To further simplify this model, we also consider an approximating model consisting of a decoupled system of ODEs. The motion parameters can be extracted from a given sequence of real motion. This set of parameters, which contains the kinematic information of the control points, is then used to generate a similar type of motion for cartoon characters. Experiments with the proposed morphing and motion algorithms are conducted to demonstrate their performance.
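
The multiscale decomposition can be illustrated with one level of the Haar transform on a curve's sampled coordinates: the averages carry the coarse shape, the differences carry the local detail. This is a generic sketch of the principle; the paper's own wavelet basis and curve parameterization are not specified here:

```python
def haar_step(samples):
    """One level of the Haar transform on an even-length sequence.

    Returns (coarse, detail): pairwise averages approximate the
    global shape; pairwise differences hold the local detail that
    finer scales would refine.
    """
    coarse = [(samples[2 * i] + samples[2 * i + 1]) / 2
              for i in range(len(samples) // 2)]
    detail = [(samples[2 * i] - samples[2 * i + 1]) / 2
              for i in range(len(samples) // 2)]
    return coarse, detail

def haar_reconstruct(coarse, detail):
    """Invert haar_step exactly: each (c, d) pair restores two samples."""
    out = []
    for c, d in zip(coarse, detail):
        out += [c + d, c - d]
    return out
```

Manipulating a coarse coefficient moves a whole region of the curve, while a detail coefficient moves only a local feature, which is what makes the coefficients usable as hierarchical control points.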

The interactive graphics system presented in this paper is designed and implemented for constructing hierarchical interactive graphics and animations. The system is based on an interactive, object-oriented data model which we call the SSM model (shape view, structural view, messages). The model states that each object appearing in graphics and animation sequences should have two views, a structural view and a shape view, and that the user can model and manipulate objects by sending messages. The interactive graphics system allows the user to create the two views of an object without programming. In addition, the system allows two-view objects to be manipulated at different hierarchical levels and object behaviors to be described at the interactive level, task-command level, or user-scripting level. The structural view, once created using a structure editor, serves as a versatile tool that assists the user in creating interactive graphics and animations. The viewer can interactively view the information defined in each node of the structural view.

A method is given for synthesizing a texture by using the interface of a conventional drawing tool. The majority of conventional texture generation methods are based on the procedural approach and can generate a variety of textures adequate for producing a realistic image. However, it is hard for a user to imagine what kind of texture will be generated simply by looking at its parameters. Furthermore, it is difficult to design a new texture freely without a knowledge of all the procedures for texture generation. Our method offers a solution to these problems, and has the following four merits: First, a variety of textures can be obtained by combining a set of feature lines and attribute functions. Second, data definitions are flexible. Third, the user can preview a texture together with its feature lines. Fourth, people can design their own textures interactively and freely by using the interface of a conventional drawing tool. For users who want to build this texture generation method into their own programs, we also give the language specifications for generating a texture. This method can interactively provide a variety of textures, and can also be used for typographic design.

This paper describes the parallel implementation of the Z-buffer algorithm on different kinds of distributed-memory machines. In the computer graphics domain, the Z-buffer is one of the most popular and fastest techniques for generating a surface representation of a scene consisting of objects in a 3-dimensional world. To improve this method, we develop a parallel algorithm which uses a hypercube topology, load-balancing techniques, and portable global communication phases.
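
The sequential kernel that the paper parallelizes is simple to state: for every fragment, keep the nearest depth seen so far at its pixel. A minimal sketch (fragment tuples and names are illustrative):

```python
def zbuffer(fragments, width, height):
    """Resolve hidden surfaces by keeping the nearest fragment per pixel.

    `fragments` is an iterable of (x, y, depth, color); smaller depth
    means closer to the viewer. Returns the final color grid.
    """
    depth = [[float('inf')] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:        # closer than what is stored: overwrite
            depth[y][x] = z
            color[y][x] = c
    return color
```

Because each pixel's test is independent, the image can be partitioned across processors, which is what makes the hypercube distribution and load-balancing strategies described above applicable.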

We propose a new method for terrain texture synthesis by using a generalized two-dimensional fractional Brownian motion (fBm) model called the extended self-similar (ESS) process. The utility of 2-D fBm for terrain texture modeling has been examined by some researchers. Although the fBm may provide a good model for landscapes at some scales, it will not capture the behavior of the terrain at all scales. We introduce the ESS process to model terrains at all scales, where the parameters of the ESS model provide a multiscale roughness representation of the landscape. Specifically, we define a generalized Hurst parameter which changes with respect to scale. To validate the usefulness of the new model, we show how to estimate the generalized Hurst parameters from 2-D data and how to synthesize an ESS process. The generation method is based on Fourier synthesis of the stationary ESS increments, and the algorithm has a complexity of O[N² log(N)] for an image of size N × N. Then, we demonstrate the relation between the generalized Hurst parameter and visual roughness through examples of synthesized images. Finally, we examine the ability of the ESS process to render real terrain data.
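
Fourier synthesis of plain fBm, the model the ESS process generalizes, can be sketched in one dimension: draw spectral amplitudes falling off as f^-(H + 1/2) with random phases, then invert the transform. This is a scalar illustration under stated assumptions, not the paper's 2-D ESS algorithm; a real implementation would use an FFT to reach the quoted O[N² log(N)] cost:

```python
import math, random

def fbm_1d(n, hurst, seed=0):
    """1-D fBm by Fourier synthesis: amplitude ~ k^-(H + 1/2),
    random phases, Hermitian symmetry so the signal is real.
    Uses a naive O(n^2) inverse DFT, which is fine for a sketch."""
    rng = random.Random(seed)
    spectrum = [0j] * n
    for k in range(1, n // 2):          # leave DC and Nyquist at zero
        amp = k ** -(hurst + 0.5)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spectrum[k] = amp * complex(math.cos(phase), math.sin(phase))
        spectrum[n - k] = spectrum[k].conjugate()   # keep the signal real
    return [sum(spectrum[k] * complex(math.cos(2 * math.pi * k * t / n),
                                      math.sin(2 * math.pi * k * t / n))
                for k in range(n)).real / n
            for t in range(n)]
```

Larger Hurst exponents put more energy in low frequencies and yield smoother traces, which is the visual-roughness relationship the generalized Hurst parameter extends across scales.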

In this parallel surface rendering algorithm, based on dividing cubes and implemented on the SIMD machine MasPar MP-1, we address the problems of load balancing and image composition. We divide a 3D array of Nx × Ny × Nz volume data into Nx × Ny columns, each Nz deep. Each processor in the mesh receives a subvolume of such data columns. All processors synchronously traverse their subvolumes to determine the voxels intersecting the isosurface; such intersecting voxels are called isovoxels. Partial load balancing distributes the isovoxels contained in a row of processors evenly among the processors in that row, to reduce network traffic and the complexity of the rendering phase. Each isovoxel is subdivided into point primitives using the dividing cubes algorithm. The rendering algorithm transforms the surface points and their normals and projects them onto the view plane.
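
The isovoxel test each processor performs can be sketched sequentially: a cell intersects the isosurface when its corner values bracket the iso value. A minimal illustration (indexing convention assumed, not taken from the paper):

```python
def isovoxels(volume, iso):
    """Return (i, j, k) indices of cells straddled by the isosurface.

    `volume` is a nested list indexed volume[k][j][i]; a cell is an
    isovoxel when its eight corner values bracket `iso`, making it a
    candidate for dividing-cubes subdivision.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    hits = []
    for k in range(nz - 1):
        for j in range(ny - 1):
            for i in range(nx - 1):
                corners = [volume[k + dk][j + dj][i + di]
                           for dk in (0, 1) for dj in (0, 1) for di in (0, 1)]
                if min(corners) < iso <= max(corners):
                    hits.append((i, j, k))
    return hits
```

Since isovoxels cluster wherever the surface passes, their counts per processor are uneven, which is why the row-wise redistribution step above is needed.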

Devices such as videocameras, VCRs, microphones, and fax machines have had a great impact on society because they allow various aspects of the physical world to be acquired, stored, transmitted, and reinstantiated. An important physical form that has received little attention is that of 3D shape. Emerging technologies allowing the acquisition and reinstantiation of 3D shape (3D laser scanners and solid freeform fabrication) are motivating numerous applications, including reverse engineering, traditional non-CAD design, and 3D faxing. However, for 3D scanning to realize its full potential, software must be developed to allow the reconstruction of useful 3D geometric models from the raw data that scanners produce. This work addresses surface reconstruction: the recovery of concise, accurate piecewise smooth surface models from scanned 3D points. We present a surface reconstruction method that is significantly more general than previous ones. Neither the topological type of the surface, its geometry, nor the location of its sharp features are known in advance -- all are inferred from the data points. A key ingredient is the introduction of a new class of subdivision surfaces allowing the representation of sharp features such as creases and corners. Finally, we demonstrate the effectiveness of the method using both simulated and real data.

This paper presents a feasibility study of a methodology for the acquisition and generation of 3-D shapes, from MRI images, based on specific requests. The MRI image regions are extracted and represented as 2-D shapes associated with their bit-map form. The 3-D shape of a desired region in a brain can be produced by the synthesis of the 2-D shapes (bit-maps) of the particular MRI regions. The acquisition of the 2-D regions belonging to the same 3-D brain area (i.e. tumor, hematoma) is based on a search of successive MRI images to detect the same undesirable value in a location relatively close to the one detected in the previous MRI region. Experimental results are also provided to illustrate the potential of the proposed method.

Full-field surface data of cylindrically shaped objects, such as a human head, can be acquired by rotating a triangulated laser and imaging system about the subject. The method of acquisition is imperfect and requires post-processing of the data obtained. To eliminate rough or irregular surface data, a two-dimensional convolution can be used. To eliminate spikes or impulse noise in the data, morphological filters are available. Proper use of both operations requires knowing and understanding the operational parameters. This paper presents an overview of these methods and discusses the optimal parameter settings as determined by experimentation.
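
The spike-removal role of the morphological step can be illustrated with a rank-order cousin, the median filter, applied to one scan line. This is a simplified stand-in, not the paper's actual morphological operators, and the radius parameter is an illustrative assumption:

```python
def median_filter(signal, radius=1):
    """1-D median filter over a sliding window.

    Replaces each sample with the median of its neighborhood, which
    removes isolated spikes (impulse noise) while preserving edges --
    the property that makes rank/morphological filters suited to
    scanned range data.
    """
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - radius):i + radius + 1])
        out.append(window[len(window) // 2])
    return out
```

A linear smoothing convolution would instead spread each spike into its neighbors, which is why the two operations are applied for different defects in the scanned data.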

Scenes that contain every-day man-made objects often possess sets of parallel lines and orthogonal planes, the projective features of which carry enough structural information to constrain possible scene element geometries as well as a camera's intrinsic and extrinsic parameters. In particular, in a scene with three mutually orthogonal sets of parallel lines, detection of the corresponding three vanishing points of the imaged lines allows us to determine the camera's image-relative principal point and effective focal length. In this paper we introduce a new technique to solve for radial and decentering lens distortion directly from the results of vanishing point estimation, thus precluding the need for special calibration templates. This is accomplished by using an iterative method to solve for the parameters that minimize vanishing point dispersion. Dispersion here is measured as the covariance of vanishing point estimation error projected on the Gaussian sphere whose origin is the estimated center of projection. Having found a complete model for each camera's intrinsic parameters, corresponding points are used in the relative orientation technique to determine the camera's extrinsic parameters as well as point-wise structure. Surfaces inherit planar geometry and extent from manually identified coplanar lines and points. View-independent textures are created for each surface by finding the 2-D homographic texture transformation which corrects for planar perspective foreshortening. We utilize the local Jacobian of this transformation in two important ways: to prevent aliasing in the plane's texture space and to correctly merge texture data arising from varying sampling resolutions in multiple views.

The fractal approach is a very useful method for simulating irregular and complicated images. It is especially suitable for representing many natural scenes, e.g. mountains, terrain, trees, clouds, and other phenomena. In this paper, a fractal modeling algorithm simulating several kinds of plants is described. After a study of the spatial structure of natural plants, a parametric model is defined. Many restriction functions and control parameters are introduced, and, in order to obtain varied and more realistic images, random factors are also added to the model. By modifying these parameters, bamboo, pine, and some other plants are obtained.
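
A recursive branching sketch shows the flavor of such a parametric plant model. The `spread` and `decay` parameters and the random jitter are illustrative stand-ins for the paper's restriction functions, control parameters, and random factors, not its actual model:

```python
import math, random

def grow(x, y, angle, length, depth, rng, spread=0.5, decay=0.7):
    """Grow a random 2-D plant skeleton as (x0, y0, x1, y1) segments.

    Each branch spawns two children rotated by +/- spread (plus a
    small random jitter) with length scaled by decay; varying these
    control parameters changes the plant's overall character.
    """
    if depth == 0:
        return []
    x1 = x + length * math.cos(angle)
    y1 = y + length * math.sin(angle)
    segs = [(x, y, x1, y1)]
    for sign in (-1, 1):
        jitter = rng.uniform(-0.1, 0.1)   # random factor for realism
        segs += grow(x1, y1, angle + sign * spread + jitter,
                     length * decay, depth - 1, rng)
    return segs
```

Tuning the branching angle, decay rate, and recursion depth toward different regimes is how such a model is steered toward bamboo-like versus pine-like forms.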
