This book contains papers presented at the Workshop on the Analysis of Large-scale, High-Dimensional, and Multi-Variate Data Using Topology and Statistics, held in Le Barp, France, June 2013. It features the work of some of the most prominent and recognized leaders in the field, who examine challenges and detail solutions to the analysis of extreme-scale data. The book presents new methods that leverage the mutual strengths of both topological and statistical techniques to support the management, analysis, and visualization of complex data. It covers both theory and application and provides readers with an overview of important key concepts and the latest research trends. Coverage includes multi-variate and/or high-dimensional analysis techniques, feature-based statistical methods, combinatorial algorithms, scalable statistics algorithms, scalar and vector field topology, and multi-scale representations. In addition, the book details algorithms that are broadly applicable and can be used by application scientists to glean insight from a wide range of complex data sets.

This report provides in-depth information and analysis to help create a technical road map for developing next-generation programming models and runtime systems that support Advanced Simulation and Computing (ASC) workload requirements. The focus herein is on asynchronous many-task (AMT) models and runtime systems, which are of great interest in the context of "exascale" computing, as they hold the promise to address key issues associated with future extreme-scale computer architectures. This report includes a thorough qualitative and quantitative examination of three best-of-class AMT runtime systems: Charm++, Legion, and Uintah, all of which are in use as part of the ASC Predictive Science Academic Alliance Program II (PSAAP-II) Centers. The studies focus on each runtime's programmability, performance, and mutability. Through the experiments and analysis presented, several overarching findings emerge. From a performance perspective, AMT runtimes show tremendous potential for addressing extreme-scale challenges. Empirical studies show that an AMT runtime can mitigate performance heterogeneity inherent to the machine itself and that Message Passing Interface (MPI) and AMT runtimes perform comparably under balanced conditions. From a programmability and mutability perspective, however, none of the runtimes in this study are currently ready for use in developing production-ready Sandia ASC applications. The report concludes by recommending a co-design path forward, wherein application, programming model, and runtime system developers work together to define requirements and solutions. Such a requirements-driven co-design approach benefits the high-performance computing (HPC) community as a whole, with widespread community engagement mitigating risk for both application developers and runtime system developers.
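The task-based execution style these runtimes share can be illustrated with nothing more than the Python standard library. The sketch below is illustrative only: real AMT runtimes such as Charm++, Legion, and Uintah schedule tasks dynamically across distributed-memory machines, which is not modeled here, and the stencil task and chunking scheme are invented for the example.

```python
# Minimal sketch of the asynchronous many-task (AMT) idea: decompose work
# into many small tasks and let a scheduler run them as workers free up.
from concurrent.futures import ThreadPoolExecutor

def stencil_task(chunk):
    # A hypothetical unit of work: average each value with its neighbors.
    n = len(chunk)
    return [(chunk[max(i - 1, 0)] + chunk[i] + chunk[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def run_amt_style(data, n_chunks=4):
    # Overdecompose the domain into independent tasks; the executor runs
    # each task as soon as a worker is free, not in bulk-synchronous steps.
    size = len(data) // n_chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(stencil_task, c) for c in chunks]
        return [x for f in futures for x in f.result()]

result = run_amt_style([1.0] * 16)
```

Overdecomposition into more tasks than workers is what lets such a runtime hide load imbalance and machine heterogeneity: a slow worker simply ends up executing fewer tasks.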

The relation between two Morse functions defined on a smooth, compact, and orientable 2-manifold can be studied in terms of their Jacobi set. The Jacobi set contains the points in the domain where the gradients of the two functions are aligned. Both the Jacobi set itself and the segmentation of the domain it induces have been shown to be useful in various applications. In practice, unfortunately, functions often contain noise and discretization artifacts, causing their Jacobi set to become unmanageably large and complex. Although techniques exist to simplify Jacobi sets, they are unsuitable for most applications, as they lack fine-grained control over the process and heavily restrict the types of simplification possible.

This paper introduces the theoretical foundations of a new simplification framework for Jacobi sets. We present a new interpretation of Jacobi set simplification based on the perspective of domain segmentation. Generalizing the cancellation of critical points from scalar functions to Jacobi sets, we focus on simplifications that can be realized by smooth approximations of the corresponding functions, and show how these cancellations imply simultaneous simplification of contiguous subsets of the Jacobi set. Using these extended cancellations as atomic operations, we introduce an algorithm to successively cancel subsets of the Jacobi set with minimal modifications with respect to some user-defined metric. We show that for simply connected domains, our algorithm reduces a given Jacobi set to its minimal configuration, that is, one with no birth-death points (a birth-death point is a specific type of singularity within the Jacobi set where the level sets of the two functions and the Jacobi set have a common normal direction).
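On discretized data, the alignment condition that defines the Jacobi set can be tested directly. The sketch below is a minimal illustration, not taken from the paper: the functions f and g and the tolerance are invented. It marks grid points where the determinant of the two gradients nearly vanishes, i.e. where the gradients are close to parallel.

```python
import numpy as np

def jacobi_set_mask(f, g, xs, ys, tol=1e-2):
    # Gradients with respect to the actual grid coordinates (axis 0 is y).
    fy, fx = np.gradient(f, ys, xs)
    gy, gx = np.gradient(g, ys, xs)
    # The gradients are aligned where det(grad f, grad g) = 0.
    det = fx * gy - fy * gx
    return np.abs(det) < tol

xs = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
f = X**2 + Y**2     # illustrative radial function
g = X               # illustrative height function
mask = jacobi_set_mask(f, g, xs, xs)
```

For these functions the gradients are (2x, 2y) and (1, 0), so the determinant is -2y and the detected set is the line y = 0, as expected. On noisy data this mask is exactly the unmanageably large structure that motivates simplification.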

One of the biggest challenges in high-energy physics is to analyze a complex mix of experimental and simulation data to gain new insights into the underlying physics. Currently, this analysis relies primarily on the intuition of trained experts, often using nothing more sophisticated than default scatter plots. Many advanced analysis techniques are not easily accessible to scientists and are not flexible enough to explore potentially interesting hypotheses in an intuitive manner. Furthermore, results from individual techniques are often difficult to integrate, leading to a confusing patchwork of analysis snippets too cumbersome for data exploration. This paper presents a case study on how a combination of techniques from statistics, machine learning, topology, and visualization can have a significant impact in the field of inertial confinement fusion. We present ND2AV, the N-Dimensional Data Analysis and Visualization framework, a user-friendly tool aimed at exploiting the intuition and current workflow of the target users. The system integrates traditional analysis approaches such as dimension reduction and clustering with state-of-the-art techniques such as neighborhood graphs and topological analysis, and custom capabilities such as defining combined metrics on the fly. All components are linked into an interactive environment that enables an intuitive exploration of a wide variety of hypotheses while relating the results to concepts familiar to the users, such as scatter plots. ND2AV uses a modular design, providing easy extensibility and customization for different applications. It is being actively used in the National Ignition Campaign and has already led to a number of unexpected discoveries.

BACKGROUND: Medial forebrain bundle (MFB) deep brain stimulation (DBS) is currently being investigated in patients with treatment-resistant depression. Striking features of this therapy are the large number of patients who respond to treatment and the rapid nature of the antidepressant response.

METHODS: Antidepressant-like effects of MFB stimulation at 100 μA, 90 μs and either 130 Hz or 20 Hz were characterized in the forced swim test (FST). Changes in the expression of the immediate early gene (IEG) zif268 were measured with in situ hybridization and used as an index of regional brain activity. Microdialysis was used to measure DBS-induced dopamine and serotonin release in the nucleus accumbens.

RESULTS: Stimulation at parameters that approximated those used in clinical practice, but not at lower frequencies, induced a significant antidepressant-like response in the FST. In animals receiving MFB DBS at high frequency, increases in zif268 expression were observed in the piriform cortex, prelimbic cortex, nucleus accumbens shell, anterior regions of the caudate/putamen and the ventral tegmental area. These structures are involved in the neurocircuitry of reward and are also connected to other brain areas via the MFB. At settings used during behavioral tests, stimulation did not induce either dopamine or serotonin release in the nucleus accumbens.

CONCLUSIONS: These results suggest that MFB DBS induces an antidepressant-like effect in rats and recruits structures involved in the neurocircuitry of reward without affecting dopamine release in the nucleus accumbens.

Scientific visualization has many effective methods for examining and exploring scalar and vector fields, but rather fewer for multi-variate fields. We report the first general-purpose approach for the interactive extraction of geometric separating surfaces in bivariate fields. This method is based on fiber surfaces: surfaces constructed from sets of fibers, the multivariate analogues of isolines. We show simple methods for fiber surface definition and extraction. In particular, we show a simple and efficient fiber surface extraction algorithm based on Marching Cubes. We also show how to construct fiber surfaces interactively with geometric primitives in the range of the function. We then extend this to build user interfaces that generate parameterized families of fiber surfaces with respect to arbitrary polylines and polygons. In the special case of isovalue-gradient plots, fiber surfaces geometrically capture, for quantitative analysis, features that have previously only been analysed visually and qualitatively using multi-dimensional transfer functions in volume rendering. We also demonstrate fiber surface extraction on a variety of bivariate data.
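The core reduction can be sketched as follows: measure, at every sample of the domain, the distance in the range from (f, g) to a control primitive, so that the fiber surface becomes a level set of an ordinary scalar field that any Marching Cubes implementation could triangulate. The fields, control point, and radius below are assumptions for illustration only.

```python
import numpy as np

def range_distance_field(f, g, fg0, radius):
    # Distance in the *range* of the bivariate field to a circle of the
    # given radius around the control point (f0, g0); the zero level set
    # of the returned field is a fiber surface.
    f0, g0 = fg0
    return np.hypot(f - f0, g - g0) - radius

n = 32
zs = np.linspace(-1, 1, n)
Z, Y, X = np.meshgrid(zs, zs, zs, indexing="ij")   # Z varies along axis 0
f = X**2 + Y**2 + Z**2      # first field: squared radius (illustrative)
g = Z                       # second field: height (illustrative)
dist = range_distance_field(f, g, fg0=(0.5, 0.0), radius=0.25)

# Cells where `dist` changes sign are crossed by the fiber surface and
# would be triangulated by Marching Cubes at isovalue 0.
crossed = (np.sign(dist[:-1]) != np.sign(dist[1:])).sum()
```

Replacing the control point with a distance to a polyline or polygon in the range gives the parameterized families of fiber surfaces described above.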

PURPOSE: To evaluate the performance of an edge-based registration technique in correcting for respiratory motion artifacts in magnetic resonance renographic (MRR) data and to examine the efficiency of a semiautomatic software package in processing renographic data from a cohort of clinical patients.

MATERIALS AND METHODS: The developed software incorporates an image-registration algorithm based on the generalized Hough transform of edge maps. It was used to estimate glomerular filtration rate (GFR), renal plasma flow (RPF), and mean transit time (MTT) from 36 patients who underwent free-breathing MRR at 3T using saturation-recovery turbo-FLASH. The processing time required for each patient was recorded. Renal parameter estimates and model-fitting residues from the software were compared to those from a previously reported technique. Interreader variability in the software was quantified by the standard deviation of parameter estimates among three readers. GFR estimates from our software were also compared to a reference standard from nuclear medicine.

This book constitutes the thoroughly refereed post-conference proceedings of the Third International Workshop on Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data, STIA 2014, held in conjunction with MICCAI 2014 in Boston, MA, USA, in September 2014.

The 7 papers presented in this volume were carefully reviewed and selected from 15 submissions. They are organized in topical sections named: longitudinal registration and shape modeling, longitudinal modeling, reconstruction from longitudinal data, and 4D image processing.

Generalized Voronoi Diagrams (GVDs) have far-reaching applications in robotics, visualization, graphics, and simulation. However, while the ordinary Voronoi diagram has mature and efficient algorithms for its computation, the GVD is difficult to compute in general; in fact, only approximation algorithms exist for anything but the simplest of datasets. Our work focuses on developing algorithms to compute the GVD efficiently and with bounded error on the most difficult of datasets -- those with objects that are extremely close to each other.
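A brute-force baseline makes the object of study concrete: label every grid sample with its nearest object, so that the GVD is the set of places where the label changes. This sketch is only a baseline with invented objects and resolution; its cost, proportional to grid size times total object complexity, is exactly what efficient GVD algorithms must avoid.

```python
import numpy as np

def gvd_labels(points_per_object, grid):
    # Each "object" is an arbitrary point set standing in for a general
    # site (curve, polygon, ...). For every grid sample, compute the
    # distance to each object and keep the index of the nearest one.
    dists = np.stack([
        np.min(np.linalg.norm(grid[:, None, :] - obj[None, :, :], axis=2), axis=1)
        for obj in points_per_object
    ])
    return np.argmin(dists, axis=0)

xs = np.linspace(0.0, 1.0, 21)
grid = np.array([(x, y) for x in xs for y in xs])
obj_a = np.array([[0.2, 0.5]])                     # illustrative object
obj_b = np.array([[0.8, 0.5], [0.9, 0.5]])         # illustrative object
labels = gvd_labels([obj_a, obj_b], grid)
```

The GVD boundary lies along grid edges whose endpoints carry different labels; when objects are extremely close together, resolving that boundary accurately on a fixed grid is precisely the hard case described above.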

In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical-system perspective on multiatlas segmentation, inspired by the following fact: the transformation that aligns the current atlas to the novel image can be not only computed by direct registration but also inferred from the transformation that aligns the previous atlas to the image, together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which determines position both by inquiring of the satellite and by employing the previous location and velocity, with neither answer in isolation being perfect. A dynamical-system scheme is crucial for combining the two pieces of information; here, a Kalman filtering scheme is used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical-system perspective on standard independent multiatlas registrations, which is solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy.
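The fusion step can be sketched in its simplest form by reducing the transformation to a single scalar parameter: the prediction chains the previous atlas-to-image estimate through the known atlas-to-atlas transform, the measurement is the direct registration, and a Kalman update blends them. All numbers and variances below are hypothetical.

```python
def kalman_fuse(predicted, pred_var, measured, meas_var):
    # Standard scalar Kalman update: weight the correction by how much
    # we trust the prediction relative to the measurement.
    gain = pred_var / (pred_var + meas_var)
    estimate = predicted + gain * (measured - predicted)
    variance = (1.0 - gain) * pred_var
    return estimate, variance

# Previous atlas aligned at parameter 1.0; chaining through an
# atlas-to-atlas offset of 0.2 predicts 1.2. The direct registration
# measured 1.3 but with larger (hypothetical) variance.
est, var = kalman_fuse(predicted=1.0 + 0.2, pred_var=0.01,
                       measured=1.3, meas_var=0.04)
```

Because the prediction variance is smaller than the measurement variance here, the fused estimate stays closer to the chained prediction, which is how the filter stabilizes a noisy direct registration; in the actual method this update acts on the parameters of the global/affine transform rather than a single scalar.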

Registering and combining anatomical components from different image modalities, such as MRI and CT, which have different tissue contrast, could result in patient-specific models that more closely represent the underlying anatomical structures.

In this study, we combined a pair of CT and MRI scans of a pig thorax to build a tetrahedral mesh and compared different registration techniques, including rigid, affine, thin-plate spline morphing (TPSM), and iterative closest point (ICP), for superimposing the bones segmented from the CT scan on the soft tissues segmented from the MRI. The TPSM- and affine-registered bones remained close to, but did not overlap, important soft tissue.

Simulation models, including an ECG forward model and a defibrillation model, were computed on generated multi-modality meshes after TPSM and affine registration and compared to those based on the original torso mesh.

OBJECTIVE: Transcranial magnetic stimulation (TMS) is an effective intervention in noninvasive neuromodulation used to treat a number of neurophysiological disorders. Predicting the spatial extent to which neural tissue is affected by TMS remains a challenge. The goal of this study was to develop a computational model to predict specific locations of neural tissue that are activated during TMS. Using this approach, we assessed the effects of changing TMS coil orientation and waveform.

MATERIALS AND METHODS: We integrated novel techniques to develop a subject-specific computational model, which contains three main components: 1) a figure-8 coil (Magstim, Magstim Company Limited, Carmarthenshire, UK); 2) an electromagnetic, time-dependent, nonhomogeneous, finite element model of the whole head; and 3) an adaptation of a previously published pyramidal cell neuron model. We then used our modeling approach to quantify the spatial extent of affected neural tissue for changes in TMS coil rotation and waveform.

RESULTS: We found that our model shows more detailed predictions than previously published models, which underestimate the spatial extent of neural activation. Our results suggest that fortuitous sites of neural activation occur for all tested coil orientations. Additionally, our model predictions show that excitability of individual neural elements changes with a coil rotation of ±15°.

CONCLUSIONS: Our results indicate that the extent of neuromodulation is more widespread than previously published models suggest. Additionally, both the specific locations and the extent of stimulation in cortex depend on coil orientation, to within ±15° at a minimum. Lastly, through computational means, we are able to provide insight into the effects of TMS at a cellular level, which is currently unachievable by imaging modalities.

While particle-in-cell (PIC) type methods, such as the material point method (MPM), have been very successful in providing solutions to many challenging problems, some important issues remain to be resolved with regard to their analysis. One such challenge relates to the difference in dimensionality between the particles and the grid points to which they are mapped. There exists a non-trivial null space of the linear operator that maps particle values onto nodal values; in other words, there are non-zero particle values that, when mapped to the nodes, are zero there. Given positive mapping weights, such null-space values are oscillatory in nature. The null space may be viewed as a more general form of the ringing instability identified by Brackbill for PIC methods. It will be shown that it is possible to remove these null-space values from the solution, and so to improve the accuracy of PIC methods, using a matrix SVD approach. Since the expense of doing this is prohibitive for real problems, a local method is developed for the same purpose.
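The global SVD idea can be sketched on a small one-dimensional example. The sizes, particle positions, and linear (tent-function) mapping weights below are illustrative, not a particular discretization from the text: with more particles than nodes the mapping matrix M has a non-trivial null space, and projecting the particle values onto the row space of M discards exactly the null-space component.

```python
import numpy as np

n_particles, n_nodes = 8, 3
x_p = np.linspace(0.05, 0.95, n_particles)   # particle positions in [0, 1]
x_n = np.linspace(0.0, 1.0, n_nodes)         # node positions
h = x_n[1] - x_n[0]

# Positive tent-function weights; M is (nodes x particles), so it maps
# particle values to nodal values and has a 5-dimensional null space here.
M = np.maximum(0.0, 1.0 - np.abs(x_p[:, None] - x_n[None, :]) / h).T

def remove_null_space(M, particle_values):
    # Rows of Vt with non-negligible singular values span the row space
    # of M; the projector Vr.T @ Vr removes the null-space component.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    Vr = Vt[s > 1e-12 * s.max()]
    return Vr.T @ (Vr @ particle_values)

values = np.sin(2 * np.pi * x_p)     # illustrative particle field
cleaned = remove_null_space(M, values)
```

Mapping `cleaned` to the nodes gives the same nodal values as mapping `values`, confirming that only null-space (oscillatory) content was removed; the local method mentioned above aims to achieve this effect without forming a global SVD.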