Notable Videos

Wind-plant operators would like to better control wind turbines to mitigate the wake effects between turbines, as the wakes that form behind upstream wind turbines can have significant impacts on the performance of downstream turbines. However, in order to make sound adjustments, operators need data charting the relationship between the degree of the adjustment and the resulting wake deflection. Light Detection and Ranging (LiDAR) technology, which can be programmed to measure atmospheric velocity, may be able to generate enough data to make such adjustments feasible in real time. However, LiDAR provides only a measurement of low spatial and temporal fidelity, and how accurately that measurement represents a turbine wake has not been established. To better understand the efficacy of using LiDAR to measure the wake trailing a wind turbine, we have used high-fidelity computational fluid dynamics (CFD) to simulate a wind turbine in turbulent flow and then simulated LiDAR measurements within that flow. A visual analysis of the LiDAR measurement in the context of the high-fidelity wake clearly illustrates the limitations of the LiDAR resolution and contributes to the overall comprehension of LiDAR operation.
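As a rough illustration of what "simulating LiDAR measurements" in a CFD field can look like, the sketch below samples a line-of-sight velocity along a beam through a gridded velocity field and averages over a range gate. The beam geometry, top-hat gate weighting, and function names are assumptions for illustration, not the weighting used in the work described above.

```python
# Sketch: sample a simulated LiDAR line-of-sight velocity from a gridded CFD velocity field.
# Assumes a uniform rectilinear grid; the range-gate averaging here is a crude top-hat.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def lidar_los_velocity(x, y, z, u, v, w, origin, direction, ranges,
                       gate_half_width=15.0, n_sub=7):
    """Return one line-of-sight velocity per range gate along a single beam."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    interp = [RegularGridInterpolator((x, y, z), c, bounds_error=False, fill_value=np.nan)
              for c in (u, v, w)]
    los = []
    for r in ranges:
        # Average several sub-samples across the range gate.
        subs = np.linspace(r - gate_half_width, r + gate_half_width, n_sub)
        pts = origin + np.outer(subs, direction)
        vel = np.stack([f(pts) for f in interp], axis=-1)   # (n_sub, 3) velocity samples
        los.append(np.nanmean(vel @ direction))             # project onto the beam direction
    return np.array(los)
```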

As the United States moves toward utilizing more of
its wind and water resources for electrical power generation,
computational modeling will play an increasingly important role in
improving the performance, decreasing costs, and accelerating deployment
of wind and water power technologies. We are developing computational
models to better understand the wake effects of wind and marine
hydrokinetic turbines, which operate on the same principles. Large wind
plants are consistently found to perform below expectations. Inadequate
accounting of various turbulent-wake effects is believed to be partly
responsible for this underperformance.

Fluidized bed reactors are a promising technology for
the thermo-chemical conversion of biomass in biofuel production.
However, the current understanding of the behavior of the materials in a
fluidized bed is limited. We are using high-fidelity simulations to
better understand the mechanics of the conversion processes. This video
visualizes a simulation of a periodic bed of sand fluidized by a gas
stream injected at the bottom. A Lagrangian approach describes the solid
phase, in which particles with a soft-sphere collision model are
individually tracked along the reactor. A large-scale point-particle
direct numerical simulation involving 12 million particles and 4
million mesh points has been performed, requiring 512 cores for 4 days.
The onset of fluidization is characterized by the formation of several
large bubbles that rise and burst at the surface and is followed by a
pseudo-steady turbulent motion showing intense bubble activity.
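For readers unfamiliar with the soft-sphere collision model mentioned above, the sketch below shows a standard linear spring-dashpot normal force between two overlapping particles. The stiffness and damping constants are purely illustrative and are not the values used in this simulation.

```python
# Sketch: linear spring-dashpot normal force for a soft-sphere collision model.
import numpy as np

def soft_sphere_normal_force(x1, x2, v1, v2, r1, r2, k=1.0e4, eta=5.0):
    """Repulsive normal force on particle 1 due to overlap with particle 2."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:
        return np.zeros(3)                      # particles are not in contact
    n = d / dist                                # unit normal, pointing toward particle 1
    vn = np.dot(v1 - v2, n)                     # normal component of relative velocity
    return (k * overlap - eta * vn) * n         # spring (repulsion) + dashpot (damping)
```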

This video shows a detailed numerical simulation of a turbulent liquid
jet. Atomization of liquid fuel is the process by which a coherent
liquid flow disintegrates into droplets; understanding atomization will
have far-reaching repercussions for many aspects of the combustion
process. This was a large-scale scaling study run on Red Mesa,
involving 1.36 billion cells, computed on 12,228 cores and rendered on 1,248 cores.

Publications

We describe a method for visualizing data flows on large networks. We transform data flow on fixed networks into a vector field, which can be directly visualized using scientific flow visualization techniques. We evaluated the method on power flowing through two transmission power networks: a small, regional, IEEE test system (RTS-96) and a large national-scale system (the Eastern Interconnection). For the larger and more complex transmission system, the method illustrates features of the power flow that are not accessible when visualizing the power transmission with traditional network visualization techniques.
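One plausible way to carry out the transform described above is to treat each edge's signed flow as a vector at the edge midpoint and splat those vectors onto a regular grid; the resulting gridded vector field can then be fed to standard flow-visualization techniques. The node positions, flows, and Gaussian splat radius below are placeholders, and the paper's actual construction may differ.

```python
# Sketch: convert per-edge power flows on a fixed network into a gridded 2-D vector field.
import numpy as np

def flow_to_vector_field(node_xy, edges, flows, grid_x, grid_y, sigma=0.05):
    """edges: list of (i, j) node-index pairs; flows: signed flow from node i to node j."""
    X, Y = np.meshgrid(grid_x, grid_y, indexing="xy")
    U = np.zeros_like(X); V = np.zeros_like(X); W = np.zeros_like(X)
    for (i, j), f in zip(edges, flows):
        p, q = node_xy[i], node_xy[j]
        d = q - p
        L = np.linalg.norm(d)
        if L == 0:
            continue
        u = d / L * f                              # edge direction scaled by flow magnitude
        mid = 0.5 * (p + q)
        w = np.exp(-((X - mid[0])**2 + (Y - mid[1])**2) / (2 * sigma**2))
        U += w * u[0]; V += w * u[1]; W += w
    np.divide(U, W, out=U, where=W > 0)            # weighted average where any edge contributes
    np.divide(V, W, out=V, where=W > 0)
    return U, V
```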

In this paper, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices is shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering at oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control that includes the effects of these large-scale flow structures; such a model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies based on new control-oriented models with this physics, targeting total flow control rather than wake redirection alone, may differ from, and perhaps be more effective than, current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in improving the performance of wind farms.

Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94%, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
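A minimal sketch of the block-by-block idea, using PyWavelets: each block is transformed and only a saliency-dependent fraction of its largest-magnitude coefficients is retained. The block handling, wavelet choice, and retention fractions are assumptions for illustration and are not the configuration used in the storage model described above.

```python
# Sketch: per-block wavelet compression with saliency-dependent coefficient retention.
import numpy as np
import pywt

def compress_block(block, keep_fraction, wavelet="bior4.4", level=2):
    coeffs = pywt.wavedecn(block, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    if keep_fraction >= 1.0:
        return arr, slices                                      # retain all coefficients
    flat = np.abs(arr).ravel()
    k = max(1, int(keep_fraction * flat.size))
    thresh = np.partition(flat, flat.size - k)[flat.size - k]   # k-th largest magnitude
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)             # zero out small coefficients
    return arr, slices

def decompress_block(arr, slices, wavelet="bior4.4"):
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)

# Usage idea: keep everything in wake blocks, ~5% of coefficients in background blocks.
# arr, slices = compress_block(block, 1.0 if block_is_in_wake else 0.05)
```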

We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning-based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space and to spawn new simulations to “fill in” input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance engagement with energy simulations and allow researchers to better allocate computational resources to capture informative relationships within the system, providing a low-cost method for validating and quality-checking large-scale modeling efforts.
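The reduced-form idea can be sketched as fitting a fast regressor over the ensemble's input parameters and using it as a stand-in for the slow simulation. The random-forest learner, synthetic inputs, and output below are placeholders; the actual framework's surrogates and dataflow API are not shown.

```python
# Sketch: fit a reduced-form surrogate over an ensemble of simulation runs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# X: ensemble input parameters (n_runs x n_params); y: a simulation output of interest.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = 10 * X[:, 0] + np.sin(np.pi * X[:, 1]) + rng.normal(0, 0.1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out R^2:", surrogate.score(X_test, y_test))

# The surrogate answers interactive queries over new input regions; regions where it is
# least trustworthy are natural candidates for spawning real simulations.
```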

Prediction and characterization of application power use in a high-performance computing environment

Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Finally, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
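A toy version of predicting power from a priori job characteristics might look like the regression below. The feature names, sample values, and model choice are assumptions for illustration; they are not the characteristics or methods evaluated in the paper.

```python
# Sketch: predict a job's mean per-node power from submission-time features.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

jobs = pd.DataFrame({
    "application": ["vasp", "wrf", "lammps", "vasp", "wrf"],
    "nodes":       [16, 64, 8, 32, 128],
    "walltime_h":  [4.0, 12.0, 2.0, 6.0, 24.0],
    "mean_power_w_per_node": [310.0, 280.0, 250.0, 305.0, 290.0],
})
X = jobs[["application", "nodes", "walltime_h"]]
y = jobs["mean_power_w_per_node"]

model = make_pipeline(
    ColumnTransformer([("app", OneHotEncoder(handle_unknown="ignore"), ["application"])],
                      remainder="passthrough"),
    Ridge(alpha=1.0),
).fit(X, y)

new_job = pd.DataFrame({"application": ["vasp"], "nodes": [64], "walltime_h": [8.0]})
print("predicted W/node:", model.predict(new_job)[0])
```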

Ab Initio Surface Phase Diagrams for Coadsorption of Aromatics and Hydrogen on the Pt(111) Surface

Supported metal catalysts are commonly used for the hydrogenation and deoxygenation of biomass-derived aromatic compounds in catalytic fast pyrolysis. To date, the substrate–adsorbate interactions under reaction conditions crucial to these processes remain poorly understood, yet such understanding is critical to constructing detailed mechanistic models of the reactions important to catalytic fast pyrolysis. Density functional theory (DFT) has been used to identify mechanistic details, but many of these works assume surface models that are not representative of realistic reaction conditions, under which, for example, the surface is covered with some concentration of hydrogen and aromatic compounds. In this study, we investigate hydrogen-guaiacol coadsorption on Pt(111) using van der Waals-corrected DFT and ab initio thermodynamics over a range of temperatures and pressures relevant to bio-oil upgrading. We find that the relative coverage of hydrogen and guaiacol is strongly dependent on the temperature and pressure of the system. Under conditions relevant to ex situ catalytic fast pyrolysis (CFP; 620–730 K, 1–10 bar), guaiacol and hydrogen chemisorb to the surface with submonolayer hydrogen coverage (∼0.44 ML H), while under conditions relevant to hydrotreating (470–580 K, 10–200 bar), the surface exhibits full-monolayer hydrogen coverage with guaiacol physisorbed to the surface. These results correlate with experimentally observed selectivities, which show ring saturation to methoxycyclohexanol at hydrotreating conditions and deoxygenation to phenol at CFP-relevant conditions. Additionally, the vibrational energy of the adsorbates on the surface contributes significantly to the surface energy at higher coverage. Ignoring this contribution results in a not only quantitatively but also qualitatively incorrect interpretation of coadsorption, shifting the phase boundaries by more than 200 K and ∼10–20 bar and predicting no guaiacol adsorption under CFP and hydrotreating conditions. The implications of this work are discussed in the context of modeling hydrogenation and deoxygenation reactions on Pt(111), and we find that only models representative of the equilibrium surface coverage can capture the hydrogenation kinetics correctly. Last, as a major outcome of this work, we introduce a freely available web-based tool, dubbed the Surface Phase Explorer (SPE), which allows researchers to conveniently determine surface composition for any one- or two-component system at thermodynamic equilibrium over a wide range of temperatures and pressures on any crystalline surface using standard DFT output.
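The core bookkeeping behind an ab initio thermodynamics phase diagram can be sketched as below: each candidate coverage is scored by its surface free energy, with the hydrogen chemical potential referenced to gas-phase H2 and shifted with temperature and pressure. All numerical inputs are placeholders rather than the paper's DFT values, and the tabulated temperature dependence of the H2 chemical potential is assumed to come from standard thermochemical tables.

```python
# Sketch: ab initio thermodynamics scoring of coadsorption phases.
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def mu_hydrogen(T, p_bar, E_H2, dmu_H2_T):
    """Atomic-H chemical potential, 0.5 * (E_H2 + Delta mu_H2(T, 1 bar) + kT ln p)."""
    return 0.5 * (E_H2 + dmu_H2_T + KB * T * np.log(p_bar))

def surface_free_energy(E_slab, E_clean, n_H, n_G, mu_H, mu_G, area):
    """Free energy per unit area of a slab holding n_H hydrogens and n_G guaiacol molecules."""
    return (E_slab - E_clean - n_H * mu_H - n_G * mu_G) / area

# The phase diagram follows by evaluating every candidate coverage on a (T, p) grid and
# keeping, at each point, the coverage with the lowest surface free energy.
```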

The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Québec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI could balance the variability and uncertainty of high penetrations of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.

Interpretation of Simultaneous Mechanical-Electrical-Thermal Failure in a Lithium-Ion Battery Module

Lithium-ion batteries are currently the state-of-the-art power sources for electric vehicles, and their safety behavior when subjected to abuse, such as a mechanical impact, is of critical concern. A coupled mechanical-electrical-thermal model for simulating the behavior of a lithium-ion battery under a mechanical crush has been developed. We present a series of production-quality visualizations to illustrate the complex mechanical and electrical interactions in this model.

Duke Energy, Alstom Grid (now GE Grid Solutions), and the National Renewable Energy Laboratory (NREL) collaborated to better understand advanced inverter and distribution management system (DMS) control options for large (1–5 MW) distributed solar photovoltaic (PV) systems and their impacts on distribution system operations. The specific goal of the project was to compare the operational impacts, specifically on voltage regulation, of three methods of managing voltage variations resulting from such PV systems: active power, local autonomous inverter control, and integrated volt/VAR control (IVVC). The project found that all tested configurations of DMS-controlled IVVC improved performance and provided operational cost savings compared to the baseline and local control modes. Specifically, IVVC combined with PV operating at 0.95 power factor proved the most technically effective voltage management scheme for the system studied. This configuration substantially reduced both utility regulation equipment operations and observed voltage challenges.

Assessing the impact of energy efficiency technologies at a city scale is of great interest to city planners, utility companies, and policy makers. This paper describes a flexible framework that can be used to create and run city-scale building energy simulations. The framework is built around the new OpenStudio City Database (CityDB). Building footprints, building height, building type, and other data can be imported into the database from public records or other sources. The OpenStudio City User Interface (CityUI) can be used to inspect and edit data in the CityDB. Unknown data can be inferred or assigned from a statistical sampling of other datasets such as the Commercial Buildings Energy Consumption Survey (CBECS) or the Residential Energy Consumption Survey (RECS). Once all required data are available, OpenStudio measures are used to create starting-point energy models for each building in the dataset and to model particular energy efficiency measures for each building. Together, this framework allows a user to pose scenarios such as “what if 30% of the commercial retail buildings added rooftop solar” or “what if all elementary schools converted to ground source heat pumps” and then visualize the impacts at a city scale. This paper focuses on modeling the existing building stock using public records; however, the framework is capable of supporting the evaluation of new construction and the use of proprietary data sources.
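The "what if 30% of retail buildings added rooftop solar" style of scenario can be sketched as sampling a slice of the building stock and applying a stand-in savings estimate to it. The column names, sample data, and the 25% PV offset below are placeholders for the CityDB contents and OpenStudio measures that would actually be used.

```python
# Sketch: pose a city-scale scenario over a toy building dataset.
import pandas as pd

buildings = pd.DataFrame({
    "building_id":   [101, 102, 103, 104, 105, 106],
    "building_type": ["Retail", "Retail", "School", "Retail", "Office", "Retail"],
    "annual_kwh":    [450_000, 620_000, 380_000, 510_000, 900_000, 275_000],
})

retail = buildings[buildings["building_type"] == "Retail"]
chosen = retail.sample(frac=0.30, random_state=0)        # 30% of the retail stock

# Stand-in for running a rooftop-PV measure on each chosen building.
pv_offset_kwh = 0.25 * chosen["annual_kwh"]
print("buildings modified:", len(chosen))
print(f"estimated citywide offset: {pv_offset_kwh.sum():,.0f} kWh/yr")
```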

The U.S. Department of Energy commissioned the National Renewable Energy Laboratory (NREL) to answer a question: What conditions might system operators face if the Eastern Interconnection (EI), a system designed to operate reliably with fossil fueled, nuclear, and hydro generation, was transformed to one that relied on wind and solar photovoltaics (PV) to meet 30% of annual electricity demand?
In the resulting study, the Eastern Renewable Generation Integration Study (ERGIS), NREL answers that question and, in doing so, gives insights into the likely operational impacts of higher percentages, up to 30% on an annual energy basis with instantaneous penetrations over 50%, of combined wind and PV generation in the EI. We evaluate potential power system futures where significant portions of the existing generation fleet are retired and replaced by different portfolios of transmission, wind, PV, and natural gas generation. We explore how variable and uncertain conditions caused by wind and solar forecast errors, seasonal and diurnal patterns, weather, and system operating constraints impact certain aspects of reliability and economic efficiency. Specifically, we model how the system could meet electricity demand at a 5-minute time interval by scheduling resources for known ramping events while maintaining adequate reserves to meet random variation in supply and demand, and contingency events.

We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive “parallel-planes” visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be “brushed” to highlight and select observations of interest, and a “slider” control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users’ realization of insights about the simulation and its output.
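The pairing of dimensions onto parallel planes can be sketched as a simple coordinate mapping: each observation's dimensions are normalized, grouped into pairs, and each pair becomes one vertex on its plane, with the planes stacked along a third axis. The plane spacing and normalization are illustrative; the immersive rendering and interaction are not shown.

```python
# Sketch: map a multivariate observation to a polyline across parallel planes.
import numpy as np

def parallel_planes_polyline(obs, mins, maxs, plane_spacing=1.0):
    """obs: 1-D array with an even number of dimensions; returns (n_planes, 3) vertices."""
    norm = (obs - mins) / (maxs - mins)          # scale each dimension to [0, 1]
    pairs = norm.reshape(-1, 2)                  # (d0, d1), (d2, d3), ...
    verts = [(x, y, k * plane_spacing) for k, (x, y) in enumerate(pairs)]
    return np.array(verts)                       # one vertex per plane, joined as a polyline

obs = np.array([3.0, 0.2, 150.0, 7.5])
print(parallel_planes_polyline(obs, mins=np.array([0, 0, 0, 0]),
                               maxs=np.array([10, 1, 200, 10])))
```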

An Analysis of Application Power and Schedule Composition in a High-Performance Computing Environment

As the capacity of high-performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power-use data to evaluate the energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprint, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between the methods chosen for a given job.
Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that (1) reduces the facility’s peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage these data to understand the practical limits on predicting key power-use metrics at the time of submission.
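A toy version of a power-aware reordering is sketched below: jobs with predicted power draws are grouped greedily into launch waves so the aggregate draw stays under a cap. The job records, cap, and greedy grouping are illustrative only, not the scheduling policy studied in the paper.

```python
# Sketch: greedily group jobs into launch waves under a facility power cap.
def reorder_schedule(jobs, power_cap_kw):
    """jobs: list of dicts with 'name' and predicted 'power_kw'; returns launch waves."""
    waves, current, load = [], [], 0.0
    for job in sorted(jobs, key=lambda j: j["power_kw"], reverse=True):
        if load + job["power_kw"] > power_cap_kw and current:
            waves.append(current)                # close the wave when the cap would be exceeded
            current, load = [], 0.0
        current.append(job)
        load += job["power_kw"]
    if current:
        waves.append(current)
    return waves

waves = reorder_schedule(
    [{"name": "cfd", "power_kw": 900}, {"name": "dft", "power_kw": 400},
     {"name": "post", "power_kw": 150}, {"name": "ml", "power_kw": 600}],
    power_cap_kw=1200)
print([[j["name"] for j in w] for w in waves])
```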

Substrate accessibility to catalysts has been a dominant theme in theories of biomass deconstruction. However, current methods of quantifying accessibility do not elucidate mechanisms for increased accessibility due to changes in microstructure following pretreatment. We introduce methods for characterization of surface accessibility based on fine-scale microstructure of the plant cell wall as revealed by 3D electron tomography. These methods comprise a general framework, enabling analysis of image-based cell wall architecture using a flexible model of accessibility. We analyze corn stover cell walls, both native and after undergoing dilute acid pretreatment with and without a steam explosion process, as well as AFEX pretreatment. Image-based measures provide useful information about how much pretreatments are able to increase biomass surface accessibility to a wide range of catalyst sizes. We find a strong dependence on probe size when measuring surface accessibility, with a substantial decrease in biomass surface accessibility to probe sizes above 5 nm radius compared to smaller probes.
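One common way to make a probe-size-dependent accessibility measurement concrete is with morphological operations on the segmented volume: erode the pore space by a probe-sized ball to find where the probe center can sit, dilate back out to find the pore space the probe can occupy, and count solid voxels bordering that space. This is a sketch under those assumptions, not necessarily the paper's exact measure, and the voxel segmentation is a placeholder.

```python
# Sketch: accessible-surface voxels of a segmented cell-wall volume for a spherical probe.
import numpy as np
from scipy import ndimage

def accessible_surface_voxels(solid, probe_radius_vox):
    """solid: 3-D boolean array (True = cell-wall material); radius in voxels."""
    pore = ~solid
    r = int(probe_radius_vox)
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (zz**2 + yy**2 + xx**2) <= r**2
    # Pore voxels whose entire probe-sized neighborhood is pore: valid probe-center positions.
    reachable_centers = ndimage.binary_erosion(pore, structure=ball)
    # Pore voxels the probe can actually occupy.
    reachable = ndimage.binary_dilation(reachable_centers, structure=ball)
    # Solid voxels adjacent to probe-reachable pore space count as accessible surface.
    surface = solid & ndimage.binary_dilation(reachable)
    return int(surface.sum())
```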

Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts who need to make tradeoffs between accuracy, storage cost, and execution time.
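The shape of such a configuration study can be sketched as a sweep over wavelet kernels and decomposition levels at a fixed coefficient budget, comparing reconstruction error for each. The synthetic test field, the 5% budget, and the RMSE metric below are illustrative; they are not the data set, configurations, or analysis routines evaluated in the paper.

```python
# Sketch: sweep wavelet configurations at a fixed coefficient budget and compare error.
import numpy as np
import pywt

field = np.random.rand(64, 64, 64)              # stand-in for a turbulent-flow variable
budget = 0.05                                    # keep 5% of coefficients

for wavelet in ("haar", "db2", "bior4.4"):
    for level in (1, 2, 3):
        coeffs = pywt.wavedecn(field, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        k = max(1, int(budget * arr.size))
        thresh = np.partition(np.abs(arr).ravel(), arr.size - k)[arr.size - k]
        arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
        recon = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
                              wavelet)
        recon = recon[:64, :64, :64]             # guard against padding on reconstruction
        rmse = np.sqrt(np.mean((field - recon) ** 2))
        print(f"{wavelet:8s} level={level}  rmse={rmse:.4f}")
```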

Segmentation and Visualization of Multivariate Features using Feature-Local Distributions

We introduce an iterative feature-based transfer function design that extracts and systematically incorporates multivariate feature-local statistics into a texture-based volume rendering process. We argue that an interactive multivariate feature-local approach is advantageous when investigating ill-defined features, because it provides a physically meaningful, quantitatively rich environment within which to examine the sensitivity of the structure properties to the identification parameters. We demonstrate the efficacy of this approach by applying it to vortical structures in Taylor-Green turbulence. Our approach identified the existence of two distinct structure populations in these data, which cannot be isolated or distinguished via traditional transfer functions based on global distributions.
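One ingredient of a feature-local approach, computing per-feature statistics of a secondary variable after segmenting candidate structures, can be sketched as below. The global-threshold segmentation and the statistics chosen are assumptions for illustration; the iterative transfer-function design and the volume rendering itself are not shown.

```python
# Sketch: segment candidate features and compute feature-local statistics.
import numpy as np
from scipy import ndimage

def feature_local_stats(indicator, secondary, cutoff):
    """indicator: field used to seed features (e.g., vorticity magnitude);
    secondary: variable whose per-feature distribution is wanted."""
    labels, n = ndimage.label(indicator > cutoff)    # connected components above the cutoff
    stats = []
    for i in range(1, n + 1):
        inside = secondary[labels == i]
        stats.append({"feature": i, "voxels": int(inside.size),
                      "mean": float(inside.mean()), "std": float(inside.std())})
    return labels, stats
```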

As the US moves toward 20% wind power by 2030, computational modeling will play an increasingly important role in determining wind-plant siting, designing more efficient and reliable wind turbines, and understanding the interaction between large wind plants and regional weather. From a computing perspective, however, adequately resolving the relevant scales of wind-energy production is a petascale problem verging on exascale. In this paper we discuss the challenges associated with computational simulation of the multiscale wind-plant system, which includes turbine-scale turbulence, atmospheric-boundary-layer turbulence, and regional-weather variation. An overview of computational modeling approaches is presented, and our particular modeling strategy is described, which involves modification and coupling of three open-source codes: FAST, OpenFOAM, and WRF, for structural aeroelasticity, local fluid dynamics, and mesoscale fluid dynamics, respectively.

Simulation, Characterization, and Optimization of Metabolic Models with the High-Performance Systems Biology Toolkit

The High-Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization
components for metabolic modeling and the means to assemble them into large parallel processing hierarchies suiting a particular simulation and optimization need. The components come in a variety of different categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation and characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on ∼1000 processors and accomplished billions of stiff ordinary differential equation simulations.

People in the locality of earthquakes are publishing anecdotal information about the shaking within seconds of its occurrence via social network technologies such as Twitter. In contrast, depending on the size and location of the earthquake, scientific alerts can take between two and twenty minutes to publish. We describe TED (Twitter Earthquake Detector), a system that adopts social network technologies to augment earthquake response products and the delivery of hazard information. The TED system analyzes data from these social networks for multiple purposes: (1) to integrate citizen reports of earthquakes with corresponding scientific reports; (2) to infer the public level of interest in an earthquake for tailoring outputs disseminated via social network technologies; and (3) to explore the possibility of rapid detection of a probable earthquake, within seconds of its occurrence, helping to fill the gap between the earthquake origin time and the availability of quantitative scientific data.
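The rapid-detection idea can be sketched as a rate-spike test: flag a probable event when the short-term rate of keyword tweets jumps well above its recent baseline. The window lengths, threshold factor, and class below are illustrative and are not TED's actual detection algorithm.

```python
# Sketch: flag a probable earthquake from a spike in keyword-tweet rate.
from collections import deque

class RateSpikeDetector:
    def __init__(self, short_s=60, long_s=3600, factor=10.0):
        self.short_s, self.long_s, self.factor = short_s, long_s, factor
        self.times = deque()

    def add_tweet(self, t):
        """t: tweet timestamp in seconds; returns True if a spike is detected."""
        self.times.append(t)
        while self.times and t - self.times[0] > self.long_s:
            self.times.popleft()                         # keep only the long baseline window
        short = sum(1 for s in self.times if t - s <= self.short_s)
        baseline = max(1e-6, (len(self.times) - short) / (self.long_s - self.short_s))
        return short / self.short_s > self.factor * baseline
```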

In this paper we discuss recent developments in the capabilities of
VAPOR (open source, available at http://www.vapor.ucar.edu): a desktop application
that leverages today’s powerful CPUs and GPUs to enable visualization
and analysis of terascale data sets using only a commodity PC or laptop. We
review VAPOR's current capabilities, highlighting support for Adaptive Mesh
Refinement (AMR) grids, and present new developments in interactive feature-based
visualization and statistical analysis.

One of the barriers to visualization-enabled scientific discovery is the difficulty
in clearly and quantitatively articulating the meaning of a visualization, particularly
in the exploration of relationships between multiple variables in large-scale data sets.
This issue becomes more complicated in the visualization of three-dimensional turbulence,
since geometry, topology, and statistics play complicated, intertwined roles in the
definitions of the features of interest, making them difficult or impossible to precisely
describe.
This dissertation develops and evaluates a novel interactive multivariate volume
visualization framework that allows features to be progressively isolated and defined
using a combination of global and feature-local properties. I argue that a progressive
and interactive multivariate feature-local approach is advantageous when investigating
ill-defined features because it provides a physically meaningful, quantitatively rich environment
within which to examine the sensitivity of the structure properties to the
identification parameters. The efficacy of this approach is demonstrated in the analysis
of vortical structures in Taylor-Green turbulence. Through this analysis, two distinct
structure populations have been discovered in these data: structures with minimal and
maximal local absolute helicity distributions. These populations cannot be distinguished
via global distributions; however, they were readily identified by this approach, since
their feature-local statistics are distinctive.
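For reference, the local helicity density that separates the two structure populations is the standard quantity h = u · (∇ × u); a minimal sketch of computing it, and its normalized (relative) form, on a uniform grid follows. The grid is assumed uniform and isotropic, and the axis-to-coordinate mapping is an assumption of the sketch.

```python
# Sketch: local helicity density and relative helicity on a uniform grid.
import numpy as np

def local_helicity(u, v, w, dx=1.0):
    """u, v, w: 3-D velocity components on a uniform grid (axis 0=x, 1=y, 2=z)."""
    dudy, dudz = np.gradient(u, dx, axis=1), np.gradient(u, dx, axis=2)
    dvdx, dvdz = np.gradient(v, dx, axis=0), np.gradient(v, dx, axis=2)
    dwdx, dwdy = np.gradient(w, dx, axis=0), np.gradient(w, dx, axis=1)
    wx, wy, wz = dwdy - dvdz, dudz - dwdx, dvdx - dudy     # vorticity components
    h = u * wx + v * wy + w * wz                            # helicity density u . omega
    speed = np.sqrt(u**2 + v**2 + w**2)
    vort = np.sqrt(wx**2 + wy**2 + wz**2)
    h_rel = h / np.maximum(speed * vort, 1e-12)             # relative helicity in [-1, 1]
    return h, h_rel
```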

Knowledge extraction from data volumes of ever-increasing size
requires ever more flexible tools to facilitate interactive query.
Interactivity enables real-time hypothesis testing and scientific
discovery, but can generally not be achieved without some level of
data reduction. The approach described in this paper combines
multi-resolution access, region-of-interest extraction, and structure
identification in order to provide interactive spatial and statistical
analysis of a terascale data volume. Unique aspects of our
approach include the incorporation of both local and global statistics of
the flow structures, and iterative refinement facilities,
which combine geometry, topology, and
statistics to allow the user to effectively tailor the analysis and
visualization to the science. Working together, these facilities
allow a user to focus the spatial scale and domain of the analysis and
perform an appropriately tailored multivariate visualization of the
corresponding data. All of these ideas and algorithms are
instantiated in a deployed visualization and analysis tool called
VAPOR, which is in routine use by scientists internationally. In data
from a 1024x1024x1024 simulation of a forced turbulent flow, VAPOR allowed
us to perform a visual data exploration of the flow properties at
interactive speeds, leading to the discovery of novel scientific
properties of the flow. This kind of intelligent,
focused analysis/refinement approach will become even more important
as computational science moves towards petascale applications.

We studied the added value of using immersive visualization as a molecular
research tool. We present our results in the context of “embodied cognition”,
as a way to understand situations in which immersive virtual visualization may
be particularly useful. PyMOL,
a non-immersive application used by biochemistry
researchers, was ported to an immersive virtual environment (IVE) to run on a
four-PC cluster. Three research groups were invited to extend their current
research on a molecule of interest to include an investigation of that molecule
inside the IVE. The groups each had a similar experience of visualizing a
feature of their molecule they had not previously appreciated from workstation
viewing; large-scale spatial features, such as pockets and ridges, were readily
identified when walking around the molecule displayed at human scale. We
suggest that this added value arises because an IVE affords the opportunity to
visualize the molecule using normal, everyday-world perceptual abilities that
have been tuned and practiced from birth. This work also suggests that short
sessions of IVE viewing can valuably augment extensive, non-IVE-based
visualizations.

Immersive virtual environments are becoming increasingly common, driving the need to develop new
software or adapt existing software to these environments. We discuss some of the issues and limitations of
porting an existing molecular graphics system, PyMOL, into an immersive virtual environment. Presenting
macromolecules inside an interactive immersive virtual environment may provide unique insights into
molecular structure and improve the rational design of drugs that target a specific molecule. PyMOL was
successfully extended to render molecular structures immersively; however, elements of the legacy
interactive design did not scale well into three dimensions. Achieving an interactive frame rate for large
macromolecules was also an issue. The immersive system was developed and evaluated on both a shared-memory
parallel machine and a commodity cluster.

In this paper, we describe an immersive prototype application, AtmosV,
developed to interactively visualize the large multivariate atmospheric
dataset provided by the IEEE Visualization 2004 Contest committee. The
visualization approach is a combination of volume and polygonal rendering.
The immersive application was developed and evaluated on both a shared-memory
parallel machine and a commodity cluster. Using the cluster we were able to
visualize multiple variables at interactive frame rates.

The benefits of immersive visualization are primarily anecdotal; there
have been few controlled user studies that have attempted to quantify
the added value of immersion for problems requiring the manipulation of
virtual objects. This research quantifies the added value of immersion
for a real-world industrial problem: oil well-path planning. An
experiment was designed to compare human performance between an
immersive virtual environment (IVE) and a desktop workstation. This
work presents the results of sixteen participants who planned the paths
of four oil wells. Each participant planned two well-paths on a desktop
workstation with a stereoscopic display and two well-paths in a
CAVE-like IVE. Fifteen of the participants completed well-path editing
tasks faster in the IVE than in the desktop environment. The increased
speed was complemented by a statistically significant increase in
correct solutions in the IVE. The results suggest that an IVE can allow
for faster and more accurate problem solving in a complex
three-dimensional domain.

The benefits of immersive visualization are primarily anecdotal; there
have been few controlled user studies that have attempted to quantify the added
value of immersion for problems requiring the manipulation of virtual objects. This
research quantifies the added value of immersion for a real-world industrial problem:
oil well path planning. An experiment was designed to compare human performance
between an immersive virtual environment (IVE) and a desktop workstation with
stereoscopic display. This work consisted of building a cross-environment
application, capable of visualizing and editing a planned well path within an existing
oilfield, and conducting a user study on that application. This work presents the
results of sixteen participants who planned the paths of four oil wells. Each
participant planned two well paths on a desktop workstation with a stereoscopic
display and two well paths in a CAVE-like IVE. Fifteen of the participants
completed well path editing tasks faster in the IVE than in the desktop environment,
which is statistically significant (p < 0.001). The increased speed in the IVE was
complemented by an increase in correct solutions. There was a statistically significant
(p < 0.05) increase in correct solutions in the IVE. The results suggest that an IVE
allows for faster and more accurate problem solving in a complex interactive three-dimensional domain.