Abstract
With the increasing ubiquity of artificial intelligence and machine learning applications, systems are emerging that require non-ML experts to interact with machine learning at the training step, not just with the final system. These users may not have the skills, time, or inclination to familiarize themselves with the way machine learning works, so training systems must be developed that can communicate the necessary information and facilitate effortless collaboration with the user. We consider how techniques from qualitative coding, a human-centered approach to manual classification, can be used to build a better user experience for ML training.

Student Johanne Christensen's short paper Structuring human-ML interaction with an immersive interface based on qualitative coding was accepted as a poster at the Workshop on Immersive Analytics at the IEEE Visualization conference in Phoenix, AZ!

Structuring human-ML interaction with an immersive interface based on qualitative coding
Johanne Christensen and Benjamin Watson

Abstract
With ever-increasing bodies of data, much of it unlabeled and from complex, dynamic and weakly structured domains, machine learning (ML) is more necessary than ever. Yet even domain experts have difficulty understanding most ML algorithms, and so cannot easily retrain them as new data arrives. This limits ML's use in many fields that sorely need it, such as law, where users must have confidence in ML results. Interactive machine learning techniques have been proposed to take advantage of humanity's ability to categorize in these complex domains, but little attention has been paid to building interfaces for non-ML experts to provide input, and in particular to creating a user experience that engenders trust. Qualitative coding (QC), the decades-old practice of manual classification, provides a proven methodology that can be adapted to structure interaction between domain experts and ML algorithms. Qualitative coders often use physical props such as notecards to help sort through and understand datasets. Here we explore how an immersive system can be built to leverage QC's intuitive techniques and grow a trusting partnership between human and ML classifiers.
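As a concrete, entirely hypothetical illustration of the kind of human-ML partnership the abstract describes, the sketch below shows a minimal interactive-labeling loop: a coder's confirmed codes incrementally retrain a classifier, which then suggests codes for new items. The feature vectors, code names, and nearest-centroid classifier are all assumptions chosen for brevity; this is not the paper's system.

```python
# Illustrative sketch of an interactive-labeling loop (NOT the paper's
# system): a coder's confirmed codes incrementally retrain a simple
# nearest-centroid classifier, which then suggests codes for new items.
from collections import defaultdict
import math

class NearestCentroidCoder:
    """Keeps one running centroid per qualitative code."""
    def __init__(self):
        self.sums = defaultdict(lambda: None)
        self.counts = defaultdict(int)

    def add_labeled(self, features, code):
        # Fold the newly coded item into that code's running centroid.
        if self.sums[code] is None:
            self.sums[code] = list(features)
        else:
            self.sums[code] = [s + f for s, f in zip(self.sums[code], features)]
        self.counts[code] += 1

    def suggest(self, features):
        # Suggest the code whose centroid lies closest to the item.
        best_code, best_dist = None, math.inf
        for code, total in self.sums.items():
            centroid = [t / self.counts[code] for t in total]
            dist = math.dist(features, centroid)
            if dist < best_dist:
                best_code, best_dist = code, dist
        return best_code

coder = NearestCentroidCoder()
# The human coder labels a few documents by hand (the 2-D "embeddings"
# and code names here are purely hypothetical).
coder.add_labeled([0.1, 0.9], "grievance")
coder.add_labeled([0.2, 0.8], "grievance")
coder.add_labeled([0.9, 0.1], "praise")
# The system proposes a code for the next unlabeled item; the coder
# confirms or corrects it, and that decision becomes new training data.
suggestion = coder.suggest([0.15, 0.85])
```

The design point this sketch makes is the interaction structure, not the classifier: every suggestion the human reviews doubles as a new training example, so the model improves as the coding session proceeds.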

Yesterday, with many friends and family in attendance, the new Dr. Adam Marrs successfully defended his dissertation. His committee included professors and co-advisors Benjamin Watson and Chris Healey, as well as professors Turner Whitted and Rob St. Amant, and NVIDIA VP of Graphics Research Dr. David Luebke. Dr. Marrs will be joining NVIDIA in RTP after his graduation. Congratulations, Adam!

Real-Time GPU Accelerated Multi-View Point-Based Rendering
Adam Marrs

Doctoral dissertation
NC State University Computer Science

Abstract
Research in the field of computer graphics has focused on producing realistic images by accurately simulating surface materials and the behavior of light. Since achieving photorealism requires significant computational power, visual realism and interactivity are typically adversarial goals. Dedicated graphics co-processors (GPUs) are now synonymous with innovation in real-time rendering and have fueled further advances in the simulation of light within real-time constraints. Important rendering effects that accurately model light transport often require evaluating costly multi-dimensional integrals. These integrals are approximated by dense spatial sampling, typically implemented on GPUs as multiple rasterizations of a scene from differing viewpoints. Producing multiple renders of complex geometry reveals a critical limitation in the design of the graphics processor: the throughput optimizations that make GPUs capable of processing millions of polygons in only milliseconds also prevent them from leveraging data coherence when synthesizing multiple views. Unlike their parallel processing of vertices and post-rasterization fragments, existing GPU architectures must render views serially and thus parallelize view rendering poorly. The full potential of GPU-accelerated rendering algorithms is not realized by the existing single-view design.

In this dissertation, we introduce an algorithmic solution to this problem that improves the efficiency of sample generation, increases the number of available samples, and enhances the performance-to-quality relationship of real-time multi-view effects. Unlike traditional polygonal rasterization, our novel multi-view rendering design achieves parallel execution in all stages of the rendering process. We accomplish this by: (1) transforming the multi-view rendering primitive from polygons to points dynamically at run-time, (2) performing geometric sampling tailored to multiple views, and (3) reorganizing the structure of computation to parallelize view rendering. We demonstrate the effectiveness of our approach by implementing and evaluating novel multi-view soft shadowing algorithms based on our design. These new algorithms tackle a complex visual effect that cannot be produced accurately in real time using existing methods. We also introduce View Independent Rasterization (VIR): a fast and flexible method to transform complex polygonal meshes into point representations suitable for rendering many views from arbitrary viewpoints. VIR is an important tool to achieve multi-view point-based rendering, as well as a useful general approach to real-time view-agnostic polygonal sampling. Although we focus on algorithmic solutions to the classic rendering problem of soft shadows, we also provide suggestions to evolve future GPU architectures to better accelerate point-based rendering, multi-view rendering, and complex visual effects that are still out of reach.
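The polygon-to-point transformation in step (1) can be sketched, very loosely, as area-proportional sampling of each triangle: every triangle contributes a number of surface points proportional to its area, yielding a view-agnostic point set that any view can reconstruct from. The sketch below is an illustrative approximation of that idea, not VIR itself, and the density parameter is an assumption.

```python
# Hedged sketch of polygon-to-point conversion (illustrative only, not
# VIR): sample each triangle with a point count proportional to its
# area, using uniform barycentric sampling.
import random

def triangle_area(a, b, c):
    # Half the magnitude of the cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def sample_mesh(triangles, points_per_unit_area=100, seed=0):
    """Return points sampled uniformly over the mesh surface."""
    rng = random.Random(seed)
    points = []
    for a, b, c in triangles:
        n = max(1, round(points_per_unit_area * triangle_area(a, b, c)))
        for _ in range(n):
            # Uniform barycentric sampling: fold samples from the
            # parallelogram's far half back into the triangle.
            r1, r2 = rng.random(), rng.random()
            if r1 + r2 > 1.0:
                r1, r2 = 1.0 - r1, 1.0 - r2
            points.append(tuple(a[i] + r1 * (b[i] - a[i]) + r2 * (c[i] - a[i])
                                for i in range(3)))
    return points

# A unit right triangle in the z = 0 plane (area 0.5) yields 50 points
# at the default density.
pts = sample_mesh([([0, 0, 0], [1, 0, 0], [0, 1, 0])])
```

Because the resulting point set depends only on the geometry, not on any camera, the same samples can serve every view in a multi-view configuration, which is the property the dissertation exploits.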

Abstract
New measures of user experience must be defined that can combine the scalability and unobtrusiveness of activity traces with the richness of more traditional measures. Machine learning can be used to predict established UX measures from such activity traces. We advocate research into the type of activity traces needed as input for such measures, the machine learning technology needed, and the user experience components and measures to be predicted.
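A minimal sketch of the proposed direction, using entirely hypothetical data: fit a least-squares model that predicts a survey-style UX score from a single aggregate activity-trace feature (here, error clicks per session), so that future sessions can be scored from traces alone.

```python
# Illustrative sketch (hypothetical data, not the authors' model):
# predict an established survey-based UX score from an activity-trace
# feature via ordinary least squares.
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: sessions with more error clicks received
# lower satisfaction ratings on a traditional questionnaire.
error_clicks = [0, 2, 4, 6, 8]
ux_scores = [90, 80, 70, 60, 50]
slope, intercept = fit_line(error_clicks, ux_scores)

# The fitted model can now score a new session from its trace alone,
# with no questionnaire: 3 error clicks -> predicted score of 75.
predicted = slope * 3 + intercept
```

In practice the input would be far richer (many trace features, nonlinear models) and the target would be a validated UX instrument, but the train-on-paired-data, predict-from-traces structure is the core of the proposal.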

Abstract
Existing graphics hardware parallelizes view generation poorly, placing many multi-view effects, such as soft shadows, defocus blur, and reflections, out of reach for real-time applications. We present emerging solutions that address this problem using a high-density point set tailored per frame to the current multi-view configuration, coupled with relatively simple reconstruction kernels. Points are a more flexible rendering primitive, which we leverage to render many high-resolution views in parallel. Preliminary results show our approach accelerates point generation and the rendering of multi-view soft shadows by up to 9x.

Finding our way has always been necessary, and we have always tried to make it easier. Yet today, wayfinding is changing so rapidly that it makes our heads spin. What have we lost? What might we gain? I will use a review of wayfinding past, present, and future to raise such questions, arguing that the enjoyment we experience along the way is now just as important as the efficiency with which we find the way's end.