We had papers accepted at SIGGRAPH 2012 and EGSR 2012 describing our new "virtual ray lights" and "virtual beam lights" approaches to global illumination in participating media. The full papers will be posted shortly; for now, see the publications section for the project descriptions and abstracts.

Research Summary

My research in computer graphics is concerned with capturing, simulating, manipulating, and physically realizing how light interacts with its environment. In effect, I strive to understand why things look the way they do, how we can simulate their interaction with light efficiently, how we can intuitively author or edit that appearance, and how we can create physical objects with control over their appearance. My work in these areas has been incorporated into production rendering systems and has been used in the making of feature films, including Disney's Tangled, Frozen, and Big Hero 6. In 2013, I received the Eurographics Young Researcher Award and in 2019 the NSF CAREER Award.

Some overlapping focus areas include:

Light transport simulation

One of the core challenges in rendering is accurately simulating how light is transported between surfaces and within volumes. I have been working to eliminate the point-centric legacy of volume light transport algorithms by reformulating them over higher-dimensional samples, such as light path segments instead of path vertices. This work led to the development of the beam radiance estimate, photon beams, virtual ray lights, joint path importance sampling, and higher-dimensional primitives such as "photon planes".
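To make the point-sample baseline concrete, here is an illustrative Monte Carlo sketch (my own, not code from any of the papers below): estimating single scattering along a camera ray through a homogeneous medium using point samples placed uniformly along the ray, checked against the closed-form answer.

```python
import math
import random

def single_scatter_uniform(sigma_t, sigma_s, L_i, length, n, rng):
    """Estimate the single-scattering integral
       int_0^length exp(-sigma_t * t) * sigma_s * L_i dt
    with n uniformly placed point samples along the ray."""
    total = 0.0
    for _ in range(n):
        t = rng.random() * length            # uniform sample, pdf = 1/length
        total += math.exp(-sigma_t * t) * sigma_s * L_i * length
    return total / n

def single_scatter_analytic(sigma_t, sigma_s, L_i, length):
    """Closed form of the same integral for a homogeneous medium."""
    return sigma_s * L_i * (1.0 - math.exp(-sigma_t * length)) / sigma_t

rng = random.Random(1)
est = single_scatter_uniform(0.5, 0.3, 1.0, 10.0, 200_000, rng)
ref = single_scatter_analytic(0.5, 0.3, 1.0, 10.0)
print(est, ref)   # the estimate converges to the closed form
```

Higher-dimensional samples such as beams replace each point `t` with a whole segment, integrating part of this expression analytically instead of estimating it.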

@article{singh19fourier,
author = "Singh, Gurprit and Subr, Kartic and Coeurjolly, David and Ostromoukhov, Victor and Jarosz, Wojciech",
title = "Fourier Analysis of Correlated Monte Carlo Importance Sampling",
journal = "Computer Graphics Forum",
volume = "38",
number = "1",
year = "2019",
month = "mar",
pubstate = "awaiting publication",
abstract = "Fourier analysis is gaining popularity in image synthesis as a tool for the analysis of error in Monte Carlo (MC) integration. Still, existing tools are only able to analyze convergence under simplifying assumptions (such as randomized shifts) which are not applied in practice during rendering. We reformulate the expressions for bias and variance of sampling-based integrators to unify non-uniform sample distributions (importance sampling) as well as correlations between samples while respecting finite sampling domains. Our unified formulation hints at fundamental limitations of Fourier-based tools in performing variance analysis for MC integration. At the same time, it reveals that, when combined with correlated sampling, importance sampling (IS) can impact convergence rate by introducing or inhibiting discontinuities in the integrand. We demonstrate that the convergence of multiple importance sampling (MIS) is determined by the strategy which converges slowest and propose several simple approaches to overcome this limitation. We show that smoothing light boundaries (as commonly done in production to reduce variance) can improve (M)IS convergence (at a cost of introducing a small amount of bias) since it removes C0 discontinuities within the integration domain. We also propose practical integrand- and sample-mirroring approaches which cancel the impact of boundary discontinuities on the convergence rate of estimators."
}
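The multiple importance sampling (MIS) behavior analyzed above can be seen in a toy setting. The following is an illustrative sketch (not the paper's code) of the standard one-sample-per-technique balance heuristic combining two sampling strategies for a 1D integral:

```python
import math
import random

def f(x):
    return x * x                        # integrand; its integral over [0,1] is 1/3

def p_uniform(x):
    return 1.0                          # pdf of technique A (uniform on [0,1])

def p_linear(x):
    return 2.0 * x                      # pdf of technique B (sampled as sqrt(u))

def mis_estimate(n, rng):
    """One sample from each technique per iteration, combined with the
    balance heuristic w_i = p_i / (p_A + p_B)."""
    total = 0.0
    for _ in range(n):
        xa = rng.random()               # draw from technique A
        xb = math.sqrt(rng.random())    # draw from technique B (inverse CDF)
        for x, p in ((xa, p_uniform(xa)), (xb, p_linear(xb))):
            w = p / (p_uniform(x) + p_linear(x))
            total += w * f(x) / p
    return total / n

rng = random.Random(7)
est = mis_estimate(100_000, rng)
print(est)   # converges to 1/3
```

The paper's point is about what such a toy hides: the overall convergence *rate* of MIS is limited by the worse of the combined techniques when the integrand has discontinuities.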

@article{huang19augmented,
author = "Huang, Jonathan and Kinateder, Max and Dunn, Matt J. and Jarosz, Wojciech and Yang, Xing-Dong and Cooper, Emily A.",
title = "An augmented reality sign-reading assistant for users with reduced vision",
journal = "PLOS ONE",
publisher = "Public Library of Science",
year = "2019",
month = "jan",
volume = "14",
number = "1",
pages = "1–9",
keywords = "augmented reality",
abstract = "People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.",
doi = "10.1371/journal.pone.0210630"
}

@article{bitterli18framework,
author = "Bitterli, Benedikt and Ravichandran, Srinath and Müller, Thomas and Wrenninge, Magnus and Novák, Jan and Marschner, Steve and Jarosz, Wojciech",
title = "A radiative transfer framework for non-exponential media",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "37",
number = "6",
pages = "225:1–225:17",
year = "2018",
month = "nov",
doi = "10.1145/3272127.3275103",
keywords = "physically based rendering, participating media, non-classical transport, artistic editing, appearance modeling",
abstract = "We develop a new theory of volumetric light transport for media with non-exponential free-flight distributions. Recent insights from atmospheric sciences and neutron transport demonstrate that such distributions arise in the presence of correlated scatterers, which are naturally produced by processes such as cloud condensation and fractal-pattern formation. Our theory formulates a non-exponential path integral as the result of averaging stochastic classical media, and we introduce practical models to solve the resulting averaging problem efficiently. Our theory results in a generalized path integral which allows us to handle non-exponential media using the full range of Monte Carlo rendering algorithms while enriching the range of achievable appearance. We propose parametric models for controlling the statistical correlations by leveraging work on stochastic processes, and we develop a method to combine such unresolved correlations (and the resulting non-exponential free-flight behavior) with explicitly modeled macroscopic heterogeneity. This provides a powerful authoring approach where artists can freely design the shape of the attenuation profile separately from the macroscopic heterogeneous density, while our theory provides a physically consistent interpretation in terms of a path space integral. We address important considerations for graphics including reciprocity and bidirectional rendering algorithms, all in the presence of surfaces and correlated media."
}
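For intuition, free paths under a non-exponential transmittance can still be drawn by CDF inversion, exactly as in the classical case. The sketch below (illustrative parameters and a power-law family from the atmospheric-sciences literature, not the paper's specific model) shows how a heavier-than-exponential tail lengthens the mean free path:

```python
import random

def sample_power_law_free_path(sigma, gamma, u):
    """Invert the power-law transmittance T(t) = (1 + sigma*t/gamma)^(-gamma):
    solving T(t) = 1 - u for t yields the free-path distance. As gamma -> inf
    this family recovers the classical exponential exp(-sigma*t)."""
    return gamma / sigma * ((1.0 - u) ** (-1.0 / gamma) - 1.0)

rng = random.Random(4)
n = 200_000
mean_path = sum(sample_power_law_free_path(1.0, 3.0, rng.random())
                for _ in range(n)) / n
# For gamma = 3, sigma = 1 the mean free path is gamma/(sigma*(gamma-1)) = 1.5,
# longer than the 1.0 an exponential medium with the same initial extinction gives.
print(mean_path)
```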

@inproceedings{novak18monte-sig,
author = "Novák, Jan and Georgiev, Iliyan and Hanika, Johannes and Křivánek, Jaroslav and Jarosz, Wojciech",
title = "Monte Carlo Methods for Physically Based Volume Rendering",
booktitle = "ACM SIGGRAPH Courses",
year = "2018",
month = "aug",
isbn = "978-1-4503-5809-5",
doi = "10.1145/3214834.3214880",
abstract = "We survey methods that utilize Monte Carlo (MC) integration to simulate light transport in scenes with participating media. The goal of this course is to complement a recent Eurographics 2018 state-of-the-art report providing a broad overview of most techniques developed to date, including a few methods from neutron transport, with a focus on concepts that are most relevant to CG practitioners.

The wide adoption of path-tracing algorithms in high-end realistic rendering has stimulated many diverse research initiatives aimed at efficiently rendering scenes with participating media. More computational power has enabled holistic approaches that tie volumetric effects and surface scattering together and simplify authoring workflows. Methods that were previously assumed to be incompatible have been unified to allow renderers to benefit from each method's respective strengths. Generally, investigations have shifted away from specialized solutions, e.g. for single- or multiple-scattering approximations or analytical methods, towards the more versatile Monte Carlo algorithms that are currently enjoying a widespread success in many production settings.

The goal of this course is to provide the audience with a deep, up-to-date understanding of key techniques for free-path sampling, transmittance estimation, and light-path construction in participating media, including those that are presently utilized in production rendering systems. We present a coherent overview of the fundamental building blocks and we contrast the various advanced methods that build on them, providing attendees with guidance for implementing existing solutions and developing new ones."
}
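Two of the building blocks such a course covers fit in a few lines. The code below is an illustrative sketch with assumed names (not course material): analytic free-path sampling in a homogeneous medium, and Woodcock/delta tracking for a heterogeneous extinction bounded by a majorant.

```python
import math
import random

def sample_free_path_homogeneous(sigma_t, u):
    """Invert the free-flight CDF 1 - exp(-sigma_t * t)."""
    return -math.log(1.0 - u) / sigma_t

def delta_tracking(sigma_t_of, sigma_majorant, rng, t_max):
    """Woodcock/delta tracking: sample a collision distance in a
    heterogeneous medium whose extinction is bounded by sigma_majorant.
    Returns the sampled distance, or None if the ray escapes t_max."""
    t = 0.0
    while True:
        t += sample_free_path_homogeneous(sigma_majorant, rng.random())
        if t >= t_max:
            return None                 # no real collision before t_max
        # accept as a real collision with probability sigma_t / majorant
        if rng.random() < sigma_t_of(t) / sigma_majorant:
            return t

# Sanity check against the homogeneous closed form: with a constant density
# the tracked collision probability matches 1 - exp(-sigma_t * t_max).
rng = random.Random(3)
sigma_t, t_max, n = 0.7, 2.0, 100_000
hits = sum(delta_tracking(lambda _: sigma_t, 1.0, rng, t_max) is not None
           for _ in range(n))
print(hits / n, 1.0 - math.exp(-sigma_t * t_max))
```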

@article{muller16efficient,
author = "Müller, Thomas and Papas, Marios and Gross, Markus and Jarosz, Wojciech and Novák, Jan",
title = "Efficient Rendering of Heterogeneous Polydisperse Granular Media",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "35",
number = "6",
pages = "168:1–168:14",
year = "2016",
month = "dec",
doi = "10.1145/2980179.2982429",
keywords = "physically based rendering, granular media, discrete random media",
abstract = "We address the challenge of efficiently rendering massive assemblies of grains within a forward path-tracing framework. Previous approaches exist for accelerating high-order scattering for a limited, and static, set of granular materials, often requiring scene-dependent precomputation. We significantly expand the admissible regime of granular materials by considering heterogeneous and dynamic granular mixtures with spatially varying grain concentrations, pack rates, and sizes. Our method supports both procedurally generated grain assemblies and dynamic assemblies authored in off-the-shelf particle simulation tools. The key to our speedup lies in two complementary aggregate scattering approximations which we introduced to jointly accelerate construction of short and long light paths. For low-order scattering, we accelerate path construction using novel grain scattering distribution functions (GSDF) which aggregate intra-grain light transport while retaining important grain-level structure. For high-order scattering, we extend prior work on shell transport functions (STF) to support dynamic, heterogeneous mixtures of grains with varying sizes. We do this without a scene-dependent precomputation and show how this can also be used to accelerate light transport in arbitrary continuous heterogeneous media. Our multi-scale rendering automatically minimizes the usage of explicit path tracing to only the first grain along a light path, or can avoid it completely, when appropriate, by switching to our aggregate transport approximations. We demonstrate our technique on animated scenes containing heterogeneous mixtures of various types of grains that could not previously be rendered efficiently. We also compare to previous work on a simpler class of granular assemblies, reporting significant computation savings, often yielding higher accuracy results."
}

@article{papas13fabricating,
author = "Papas, Marios and Regg, Christian and Jarosz, Wojciech and Bickel, Bernd and Jackson, Philip and Matusik, Wojciech and Marschner, Steve and Gross, Markus",
title = "Fabricating Translucent Materials using Continuous Pigment Mixtures",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "32",
number = "4",
year = "2013",
month = "jul",
doi = "10.1145/2461912.2461974",
keywords = "subsurface scattering, material design, fabrication",
abstract = "We present a method for practical physical reproduction and design of homogeneous materials with desired subsurface scattering. Our process uses a collection of different pigments that can be suspended in a clear base material. Our goal is to determine pigment concentrations that best reproduce the appearance and subsurface scattering of a given target material. In order to achieve this task we first fabricate a collection of material samples composed of known mixtures of the available pigments with the base material. We then acquire their reflectance profiles using a custom-built measurement device. We use the same device to measure the reflectance profile of a target material. Based on the database of mappings from pigment concentrations to reflectance profiles, we use an optimization process to compute the concentration of pigments to best replicate the target material appearance. We demonstrate the practicality of our method by reproducing a variety of different translucent materials. We also present a tool that allows the user to explore the range of achievable appearances for a given set of pigments."
}
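The optimization step above can be caricatured in a few lines. The sketch below assumes, purely for illustration, that measured profiles mix *linearly* in pigment concentration; real subsurface scattering does not mix linearly, which is precisely why the paper fits against a measured database of mixtures instead. With that caveat, a projected-gradient fit with nonnegative concentrations looks like this:

```python
def fit_concentrations(basis, target, iters=5000, lr=0.01):
    """Projected gradient descent for nonnegative mixing weights c that
    minimize || sum_j c[j] * basis[j] - target ||^2. Illustrative only:
    assumes linear mixing of profiles."""
    k, m = len(basis), len(target)
    c = [1.0 / k] * k
    for _ in range(iters):
        resid = [sum(c[j] * basis[j][i] for j in range(k)) - target[i]
                 for i in range(m)]
        grads = [2.0 * sum(resid[i] * basis[j][i] for i in range(m))
                 for j in range(k)]
        c = [max(0.0, c[j] - lr * grads[j]) for j in range(k)]  # project to c >= 0
    return c

# Recover known weights of a synthetic two-pigment mixture.
b0, b1 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
target = [0.3 * x + 0.7 * y for x, y in zip(b0, b1)]
c = fit_concentrations([b0, b1], target)
print(c)   # approximately [0.3, 0.7]
```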

@article{papas11goal,
author = "Papas, Marios and Jarosz, Wojciech and Jakob, Wenzel and Rusinkiewicz, Szymon and Matusik, Wojciech and Weyrich, Tim",
title = "Goal-based Caustics",
journal = "Computer Graphics Forum (Proceedings of Eurographics)",
volume = "30",
number = "2",
year = "2011",
month = "jun",
pages = "503–511",
doi = "10.1111/j.1467-8659.2011.01876.x",
abstract = "We propose a novel system for designing and manufacturing surfaces that produce desired caustic images when illuminated by a light source. Our system is based on a nonnegative image decomposition using a set of possibly overlapping anisotropic Gaussian kernels. We utilize this decomposition to construct an array of continuous surface patches, each of which focuses light onto one of the Gaussian kernels, either through refraction or reflection. We show how to derive the shape of each continuous patch and arrange them by performing a discrete assignment of patches to kernels in the desired caustic. Our decomposition provides for high fidelity reconstruction of natural images using a small collection of patches. We demonstrate our approach on a wide variety of caustic images by manufacturing physical surfaces with a small number of patches."
}

@article{jarosz11comprehensive,
author = "Jarosz, Wojciech and Nowrouzezahrai, Derek and Sadeghi, Iman and Jensen, Henrik Wann",
title = "A Comprehensive Theory of Volumetric Radiance Estimation Using Photon Points and Beams",
journal = "ACM Transactions on Graphics (Presented at SIGGRAPH)",
volume = "30",
number = "1",
month = "jan",
year = "2011",
pages = "5:1–5:19",
doi = "10.1145/1899404.1899409",
keywords = "photon beams, photon mapping, beam radiance estimate, density estimation, participating media",
abstract = {We present two contributions to the area of volumetric rendering. We develop a novel, comprehensive theory of volumetric radiance estimation that leads to several new insights and includes all previously published estimates as special cases. This theory allows for estimating in-scattered radiance at a point, or accumulated radiance along a camera ray, with the standard photon particle representation used in previous work. Furthermore, we generalize these operations to include a more compact, and more expressive intermediate representation of lighting in participating media, which we call "photon beams." The combination of these representations and their respective query operations results in a collection of nine distinct volumetric radiance estimates.

Our second contribution is a more efficient rendering method for participating media based on photon beams. Even when shooting and storing less photons and using less computation time, our method significantly reduces both bias (blur) and variance in volumetric radiance estimation. This enables us to render sharp lighting details (e.g. volume caustics) using just tens of thousands of photon beams, instead of the millions to billions of photon points required with previous methods.}
}

@article{clarberg05wavelet,
author = "Clarberg, Petrik and Jarosz, Wojciech and Akenine-Möller, Tomas and Jensen, Henrik Wann",
title = "Wavelet Importance Sampling: Efficiently Evaluating Products of Complex Functions",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "24",
number = "3",
month = "aug",
year = "2005",
pages = "1166–1175",
doi = "10.1145/1073204.1073328",
abstract = "We present a new technique for importance sampling products of complex functions using wavelets. First, we generalize previous work on wavelet products to higher dimensional spaces and show how this product can be sampled on-the-fly without the need of evaluating the full product. This makes it possible to sample products of high-dimensional functions even if the product of the two functions in itself is too memory consuming. Then, we present a novel hierarchical sample warping algorithm that generates high-quality point distributions, which match the wavelet representation exactly. One application of the new sampling technique is rendering of objects with measured BRDFs illuminated by complex distant lighting — our results demonstrate how the new sampling technique is more than an order of magnitude more efficient than the best previous techniques."
}
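The hierarchical sample warping idea is easiest to see in 1D: descend a tree of coefficient sums, rescaling the uniform sample at each split so that stratification is preserved. An illustrative sketch (not the paper's 2D wavelet version):

```python
import random

def hierarchical_warp(weights, u):
    """Warp a uniform sample u in [0,1) into an index distributed in
    proportion to `weights` (length a power of two), by descending a
    binary tree of partial sums. The sample is rescaled at each split,
    so stratified inputs stay stratified -- the key property of
    hierarchical sample warping."""
    lo, hi = 0, len(weights)
    total = sum(weights)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = sum(weights[lo:mid])
        p_left = left / total
        if u < p_left:
            u /= p_left                        # rescale within the left child
            total, hi = left, mid
        else:
            u = (u - p_left) / (1.0 - p_left)  # rescale within the right child
            total -= left
            lo = mid
    return lo

# Empirical check: sampled frequencies follow the weights.
w = [1.0, 3.0, 2.0, 2.0]
rng = random.Random(11)
counts = [0] * 4
for _ in range(80_000):
    counts[hierarchical_warp(w, rng.random())] += 1
print([x / 80_000 for x in counts])   # roughly [0.125, 0.375, 0.25, 0.25]
```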


@article{kinateder18using,
author = "Kinateder, Max and Gualtieri, Justin and Dunn, Matt J. and Jarosz, Wojciech and Yang, Xing-Dong and Cooper, Emily A.",
title = "Using an Augmented Reality Device as a Distance-based Vision Aid—Promise and Limitations",
issn = "1538-9235",
journal = "Optometry and Vision Science",
month = "jun",
year = "2018",
doi = "10.1097/OPX.0000000000001232",
abstract = "For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. We developed an AR application that translates spatial information into high-contrast visual patterns and ran two experiments to assess its efficacy in improving performance on a range of visual tasks (identifying the location, pose, and gesture of a person, identifying objects, and moving around in an unfamiliar space). Our findings indicate that consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system; however, current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome."
}

@article{novak18monte,
author = "Novák, Jan and Georgiev, Iliyan and Hanika, Johannes and Jarosz, Wojciech",
title = "Monte Carlo Methods for Volumetric Light Transport Simulation",
journal = "Computer Graphics Forum (Proceedings of Eurographics - State of the Art Reports)",
volume = "37",
number = "2",
month = "may",
year = "2018",
abstract = "The wide adoption of path-tracing algorithms in high-end realistic rendering has stimulated many diverse research initiatives. In this paper we present a coherent survey of methods that utilize Monte Carlo integration for estimating light transport in scenes containing participating media. Our work complements the volume-rendering state-of-the-art report by Cerezo et al. [2005]; we review publications accumulated since its publication over a decade ago, and include earlier methods that are key for building light transport paths in a stochastic manner. We begin by describing analog and non-analog procedures for free-path sampling and discuss various expected-value, collision, and track-length estimators for computing transmittance. We then review the various rendering algorithms that employ these as building blocks for path sampling. Special attention is devoted to null-collision methods that utilize fictitious matter to handle spatially varying densities; we import two “next-flight” estimators originally developed in nuclear sciences. Whenever possible, we draw connections between image-synthesis techniques and methods from particle physics and neutron transport to provide the reader with a broader context."
}
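Among the transmittance estimators such a survey covers, ratio tracking is particularly compact. An illustrative sketch (assumed function names, not code from the report): step through tentative collisions drawn from a majorant and multiply in the probability of each being a null collision.

```python
import math
import random

def ratio_tracking(sigma_t_of, sigma_majorant, rng, t_max):
    """Ratio-tracking transmittance estimator: unbiased for any extinction
    function bounded above by sigma_majorant."""
    t, T = 0.0, 1.0
    while True:
        t += -math.log(1.0 - rng.random()) / sigma_majorant
        if t >= t_max:
            return T
        T *= 1.0 - sigma_t_of(t) / sigma_majorant   # null-collision weight

# Compare against the analytic transmittance of a linear density
# sigma_t(t) = 0.4 * t on [0, 2]: exp(-int_0^2 0.4 t dt) = exp(-0.8).
rng = random.Random(5)
n = 100_000
est = sum(ratio_tracking(lambda t: 0.4 * t, 1.0, rng, 2.0)
          for _ in range(n)) / n
print(est, math.exp(-0.8))
```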

@article{marco18second,
author = "Marco, Julio and Jarabo, Adrian and Jarosz, Wojciech and Gutierrez, Diego",
title = "Second-Order Occlusion-Aware Volumetric Radiance Caching",
journal = "ACM Transactions on Graphics (Presented at SIGGRAPH)",
volume = "37",
number = "2",
year = "2018",
month = "apr",
doi = "10.1145/3185225",
abstract = "We present a second-order gradient analysis of light transport in participating media and use this to develop an improved radiance caching algorithm for volumetric light transport. We adaptively sample and interpolate radiance from sparse points in the medium using a second-order Hessian-based error metric to determine when interpolation is appropriate. We derive our metric from each point's incoming lightfield, computed by using a proxy triangulation-based representation of the radiance reflected by the surrounding medium and geometry. We use this representation to efficiently compute the first- and second-order derivatives of the radiance at the cache points while accounting for occlusion changes. We also propose a self-contained two-dimensional model for light transport in media and use it to validate and analyze our approach, demonstrating that our method outperforms previous radiance caching algorithms both in terms of accurate derivative estimates and final radiance extrapolation. We generalize these findings to practical three-dimensional scenarios, where we show improved results while reducing computation time by up to 30\% compared to previous work."
}

@inproceedings{maguire18modelling,
author = "Maguire, Luke and Papas, Marios and Jarosz, Wojciech and Fox, Phillip and Dicinoski, Greg and Olivares, Maria",
title = "The Modelling of Caustics to Produce a Projection Image",
booktitle = "Optical Security Documents Conference",
year = "2018",
month = "jan",
abstract = "The Projection-Based Image is a new banknote security feature that projects a desired image when an array of microlenses embossed into the banknote substrate is exposed to a light source. The feature is designed computationally to make the projected image difficult to discern by inspection of the lens structures, and to accommodate a range of light sources, allowing wide accessibility to users with a simple mobile phone flash. The projected image is formed from a collection of Gaussian-shaped caustic profiles, each arising due to light refracting through a single microlens."
}

@article{bitterli18reversible,
author = "Bitterli, Benedikt and Jakob, Wenzel and Novák, Jan and Jarosz, Wojciech",
title = "Reversible Jump Metropolis Light Transport Using Inverse Mappings",
journal = "ACM Transactions on Graphics (Presented at SIGGRAPH)",
volume = "37",
number = "1",
year = "2018",
month = "jan",
doi = "10.1145/3132704",
abstract = "We study Markov Chain Monte Carlo (MCMC) methods operating in primary sample space and their interactions with multiple sampling techniques. We observe that incorporating the sampling technique into the state of the Markov Chain, as done in Multiplexed Metropolis Light Transport (MMLT), impedes the ability of the chain to properly explore the path space, as transitions between sampling techniques lead to disruptive alterations of path samples. To address this issue, we reformulate Multiplexed MLT in the Reversible Jump MCMC framework (RJMCMC) and introduce inverse sampling techniques that turn light paths into the random numbers that would produce them. This allows us to formulate a novel perturbation that can locally transition between sampling techniques without changing the geometry of the path, and we derive the correct acceptance probability using RJMCMC. We investigate how to generalize this concept to non-invertible sampling techniques commonly found in practice, and introduce probabilistic inverses that extend our perturbation to cover most sampling methods found in light transport simulations. Our theory reconciles the inverses with RJMCMC yielding an unbiased algorithm, which we call Reversible Jump MLT (RJMLT). We verify the correctness of our implementation in canonical and practical scenarios and demonstrate improved temporal coherence, decrease in structured artifacts, and faster convergence on a wide variety of scenes."
}
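As background for the Metropolis-style methods in the entry above: a Markov chain explores the integration domain by proposing local perturbations of the current state and accepting them with a probability that preserves the target distribution. The following minimal Metropolis sampler on the unit interval (an illustrative stand-in for primary sample space, not the RJMLT algorithm itself; the target function is made up for this sketch) shows the core accept/reject loop:

```python
import math
import random

def metropolis(target, n, step=0.5):
    """Basic Metropolis sampler on the unit interval (treated as a torus).

    Proposes a symmetric perturbation of the current state and accepts it
    with probability min(1, target(y) / target(x)).
    """
    x = random.random()
    fx = target(x)
    samples = []
    for _ in range(n):
        # Wrap-around proposal keeps the transition kernel symmetric.
        y = (x + random.uniform(-step, step)) % 1.0
        fy = target(y)
        if fy >= fx or random.random() < fy / fx:
            x, fx = y, fy
        samples.append(x)
    return samples

# Toy target: a bright peak near x = 0.7 on a small constant floor.
random.seed(3)
xs = metropolis(lambda x: math.exp(-50.0 * (x - 0.7) ** 2) + 0.01, 20000)
mean_x = sum(xs) / len(xs)
print(mean_x)  # concentrates near the peak at 0.7
```

Perturbation strategies like those in the paper differ chiefly in how the proposal is constructed (and, for RJMCMC, in how moves between sampling techniques are made reversible), but the acceptance logic has this same shape.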

@article{singh17convergence,
author = "Singh, Gurprit and Jarosz, Wojciech",
title = "Convergence Analysis for Anisotropic Monte Carlo Sampling Spectra",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "36",
number = "4",
year = "2017",
month = "jul",
doi = "10.1145/3072959.3073656",
keywords = "stochastic sampling, signal processing, Fourier transform, Power spectrum",
abstract = "Traditional Monte Carlo (MC) integration methods use point samples to numerically approximate the underlying integral. This approximation introduces variance in the integrated result, and this error can depend critically on the sampling patterns used during integration. Most of the well-known samplers used for MC integration in graphics—e.g. jittered, Latin-hypercube (N-rooks), multijittered—are anisotropic in nature. However, there are currently no tools available to analyze the impact of such anisotropic samplers on the variance convergence behavior of Monte Carlo integration. In this work, we develop a Fourier-domain mathematical tool to analyze the variance, and subsequently the convergence rate, of Monte Carlo integration using any arbitrary (anisotropic) sampling power spectrum. We also validate and leverage our theoretical analysis, demonstrating that judicious alignment of anisotropic sampling and integrand spectra can improve variance and convergence rates in MC rendering, and that similar improvements can apply to (anisotropic) deterministic samplers."
}
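The convergence-rate gaps analyzed in the entry above can be observed with a small numerical experiment. This Python sketch (illustrative only, not code from the paper) compares the empirical variance of plain random sampling against jittered (stratified) sampling for a smooth 1D integrand; the jittered estimator's variance falls off much faster with sample count, which is the kind of behavior the Fourier-domain tools predict from the sampling power spectrum:

```python
import random

def mc_estimate(n, jittered):
    """One Monte Carlo estimate of the integral of f(x) = x^2 over [0, 1)."""
    total = 0.0
    for i in range(n):
        if jittered:
            u = (i + random.random()) / n  # one sample per stratum
        else:
            u = random.random()            # uniform random
        total += u * u
    return total / n  # true value is 1/3

def empirical_variance(n, jittered, trials=2000):
    """Empirical variance of the estimator over repeated trials."""
    estimates = [mc_estimate(n, jittered) for _ in range(trials)]
    mean = sum(estimates) / trials
    return sum((e - mean) ** 2 for e in estimates) / (trials - 1)

random.seed(0)
for n in (16, 64, 256):
    vr = empirical_variance(n, False)
    vj = empirical_variance(n, True)
    print(f"N={n:4d}  random var={vr:.2e}  jittered var={vj:.2e}")
```

For this smooth integrand the random estimator's variance decays roughly as O(N^-1) while the jittered one decays roughly as O(N^-3), consistent with the stratified-sampling analyses these papers build on.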

@article{singh17variance,
author = "Singh, Gurprit and Miller, Bailey and Jarosz, Wojciech",
title = "Variance and Convergence Analysis of Monte Carlo Line and Segment Sampling",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
year = "2017",
volume = "36",
number = "4",
month = "jun",
doi = "10.1111/cgf.13226",
publisher = "The Eurographics Association",
keywords = "stochastic sampling, signal processing, Fourier transform, Power spectrum",
abstract = "Recently researchers have started employing Monte Carlo-like line sample estimators in rendering, demonstrating dramatic reductions in variance (visible noise) for effects such as soft shadows, defocus blur, and participating media. Unfortunately, there is currently no formal theoretical framework to predict and analyze Monte Carlo variance using line and segment samples which have inherently anisotropic Fourier power spectra. In this work, we propose a theoretical formulation for lines and finite-length segment samples in the frequency domain that allows analyzing their anisotropic power spectra using previous isotropic variance and convergence tools. Our analysis shows that judiciously oriented line samples not only reduce the dimensionality but also pre-filter C0 discontinuities, resulting in further improvement in variance and convergence rates. Our theoretical insights also explain how finite-length segment samples impact variance and convergence rates only by pre-filtering discontinuities. We further extend our analysis to consider (uncorrelated) multi-directional line (segment) sampling, showing that such schemes can increase variance compared to unidirectional sampling. We validate our theoretical results with a set of experiments including direct lighting, ambient occlusion, and volumetric caustics using points, lines, and segment samples."
}

@inproceedings{marco17transient,
author = "Marco, Julio and Jarosz, Wojciech and Gutierrez, Diego and Jarabo, Adrian",
booktitle = "Spanish Computer Graphics Conference (CEIG)",
title = "Transient Photon Beams",
month = "jun",
year = "2017",
publisher = "The Eurographics Association",
isbn = "978-3-03868-046-8",
doi = "10.2312/ceig.20171216",
abstract = "Recent advances in transient imaging and its applications have created the need for forward models that allow precise generation and analysis of time-resolved light transport data. However, traditional steady-state rendering techniques are not suitable for computing transient light transport due to the aggravation of the inherent Monte Carlo variance over time. These issues are especially problematic in participating media, which demand a high number of samples to achieve noise-free solutions. We address this problem by presenting the first photon-based method for transient rendering of participating media that performs density estimations on time-resolved precomputed photon maps. We first introduce the transient integral form of the radiative transfer equation to the computer graphics community, including transient delays in the scattering events. Based on this formulation, we leverage the high density and parameterized continuity provided by photon beam algorithms to present a new transient method that significantly mitigates variance and efficiently renders participating media effects in transient state."

}

@article{rousselle16image,
author = "Rousselle, Fabrice and Jarosz, Wojciech and Novák, Jan",
title = "Image-space Control Variates for Rendering",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "35",
number = "6",
pages = "169:1–169:12",
year = "2016",
month = "dec",
doi = "10.1145/2980179.2982443",
keywords = "physically based rendering, gradient-domain, Poisson reconstruction",
abstract = "We explore the theory of integration with control variates in the context of rendering. Our goal is to optimally combine multiple estimators using their covariances. We focus on two applications, re-rendering and gradient-domain rendering, where we exploit coherence between temporally and spatially adjacent pixels. We propose an image-space (iterative) reconstruction scheme that employs control variates to reduce variance. We show that recent works on scene editing and gradient-domain rendering can be directly formulated as control-variate estimators, despite using seemingly different approaches. In particular, we demonstrate the conceptual equivalence of screened Poisson image reconstruction and our iterative reconstruction scheme. Our composite estimators offer practical and simple solutions that improve upon the current state of the art for the two investigated applications."
}
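The control-variate idea underlying the entry above is classical: subtract a correlated estimator whose mean is known, scaled by a coefficient derived from the covariance. A minimal single-control-variate sketch (illustrative only; the paper's image-space, iterative reconstruction scheme is far more involved, and the integrand and control here are made up for the example):

```python
import math
import random

def control_variate_estimate(f, g, g_mean, n):
    """Estimate E[f] using g as a control variate with known mean g_mean.

    Combines the two as F' = F - c * (G - E[G]), with the (near-)optimal
    coefficient c = Cov(F, G) / Var(G) estimated from the same samples.
    """
    fs, gs = [], []
    for _ in range(n):
        u = random.random()
        fs.append(f(u))
        gs.append(g(u))
    f_bar = sum(fs) / n
    g_bar = sum(gs) / n
    cov = sum((fi - f_bar) * (gi - g_bar) for fi, gi in zip(fs, gs)) / (n - 1)
    var_g = sum((gi - g_bar) ** 2 for gi in gs) / (n - 1)
    c = cov / var_g if var_g > 0.0 else 0.0
    return f_bar - c * (g_bar - g_mean)

# Integrate f(x) = exp(x) on [0, 1] with control g(x) = 1 + x (mean 3/2).
random.seed(1)
est = control_variate_estimate(math.exp, lambda x: 1.0 + x, 1.5, 4096)
print(est)  # close to e - 1
```

Because g closely tracks f, the combined estimator only has to integrate the small residual f - c*g by Monte Carlo, which is where the variance reduction comes from; the paper generalizes this to optimally combining multiple correlated pixel estimators.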

@article{muller16efficient,
author = "Müller, Thomas and Papas, Marios and Gross, Markus and Jarosz, Wojciech and Novák, Jan",
title = "Efficient Rendering of Heterogeneous Polydisperse Granular Media",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "35",
number = "6",
pages = "168:1–168:14",
year = "2016",
month = "dec",
doi = "10.1145/2980179.2982429",
keywords = "physically based rendering, granular media, discrete random media",
abstract = "We address the challenge of efficiently rendering massive assemblies of grains within a forward path-tracing framework. Previous approaches exist for accelerating high-order scattering for a limited, and static, set of granular materials, often requiring scene-dependent precomputation. We significantly expand the admissible regime of granular materials by considering heterogeneous and dynamic granular mixtures with spatially varying grain concentrations, pack rates, and sizes. Our method supports both procedurally generated grain assemblies and dynamic assemblies authored in off-the-shelf particle simulation tools. The key to our speedup lies in two complementary aggregate scattering approximations which we introduced to jointly accelerate construction of short and long light paths. For low-order scattering, we accelerate path construction using novel grain scattering distribution functions (GSDF) which aggregate intra-grain light transport while retaining important grain-level structure. For high-order scattering, we extend prior work on shell transport functions (STF) to support dynamic, heterogeneous mixtures of grains with varying sizes. We do this without a scene-dependent precomputation and show how this can also be used to accelerate light transport in arbitrary continuous heterogeneous media. Our multi-scale rendering automatically minimizes the usage of explicit path tracing to only the first grain along a light path, or can avoid it completely, when appropriate, by switching to our aggregate transport approximations. We demonstrate our technique on animated scenes containing heterogeneous mixtures of various types of grains that could not previously be rendered efficiently. We also compare to previous work on a simpler class of granular assemblies, reporting significant computation savings, often yielding higher accuracy results."
}

@article{christensen16path,
author = "Christensen, Per H. and Jarosz, Wojciech",
title = "The Path to Path-Traced Movies",
journal = "Foundations and Trends in Computer Graphics and Vision",
volume = "10",
number = "2",
year = "2016",
month = "oct",
doi = "10.1561/0600000073",
issn = "1572-2740",
pages = "103–175",
keywords = "special effects, animated films, industry, rendering",
abstract = "Path tracing is one of several techniques to render photorealistic images by simulating the physics of light propagation within a scene. The roots of path tracing are outside of computer graphics, in the Monte Carlo simulations developed for neutron transport. A great strength of path tracing is that it is conceptually, mathematically, and often-times algorithmically simple and elegant, yet it is very general. Until recently, however, brute-force path tracing techniques were simply too noisy and slow to be practical for movie production rendering. They therefore received little usage outside of academia, except perhaps to generate an occasional reference image to validate the correctness of other (faster but less general) rendering algorithms. The last ten years have seen a dramatic shift in this balance, and path tracing techniques are now widely used. This shift was partially fueled by steadily increasing computational power and memory, but also by significant improvements in sampling, rendering, and denoising techniques. In this survey, we provide an overview of path tracing and highlight important milestones in its development that have led to it becoming the preferred movie rendering technique today."
}

@inproceedings{prevost16balancing,
author = "Prévost, Romain and Bächer, Moritz and Jarosz, Wojciech and Sorkine-Hornung, Olga",
title = "Balancing 3{D} Models with Movable Masses",
booktitle = "Proceedings of the Vision, Modeling and Visualization Workshop (VMV)",
publisher = "Eurographics Association",
month = "oct",
year = "2016",
doi = "10.2312/vmv.20161337",
abstract = "We present an algorithm to balance 3D printed models using movable embedded masses. As input, the user provides a 3D model together with the desired suspension, standing, and immersion objectives. Our technique then determines the placement and suitable sizing of a set of hollow capsules with embedded metallic spheres, leveraging the resulting multiple centers of mass to simultaneously satisfy the combination of these objectives. To navigate the non-convex design space in a scalable manner, we propose a heuristic that leads to near-optimal solutions when compared to an exhaustive search. Our method enables the design of models with complex and surprising balancing behavior, as we demonstrate with several manufactured examples."
}

@article{blumer16reduced,
author = "Blumer, Adrian and Novák, Jan and Habel, Ralf and Nowrouzezahrai, Derek and Jarosz, Wojciech",
title = "Reduced Aggregate Scattering Operators for Path Tracing",
journal = "Computer Graphics Forum (Proceedings of Pacific Graphics)",
volume = "35",
number = "7",
pages = "461–473",
month = "oct",
year = "2016",
doi = "10.1111/cgf.13043",
abstract = "Aggregate scattering operators (ASOs) describe the overall scattering behavior of an asset (i.e., an object or volume, or collection thereof) accounting for all orders of its internal scattering. We propose a practical way to precompute and compactly store ASOs and demonstrate their ability to accelerate path tracing. Our approach is modular avoiding costly and inflexible scene-dependent precomputation. This is achieved by decoupling light transport within and outside of each asset, and precomputing on a per-asset level. We store the internal transport in a reduced-dimensional subspace tailored to the structure of the asset geometry, its scattering behavior, and typical illumination conditions, allowing the ASOs to maintain good accuracy with modest memory requirements. The precomputed ASO can be reused across all instances of the asset and across multiple scenes. We augment ASOs with functionality enabling multi-bounce importance sampling, fast short-circuiting of complex light paths, and compact caching, while retaining rapid progressive preview rendering. We demonstrate the benefits of our ASOs by efficiently path tracing scenes containing many instances of objects with complex inter-reflections or multiple scattering."
}

@techreport{singh16monte,
author = "Singh, Gurprit and Jarosz, Wojciech",
title = "Monte Carlo Convergence Analysis for Anisotropic Sampling Power Spectra",
institution = "Dartmouth College, Computer Science",
address = "Hanover, NH",
number = "TR2016-816",
year = "2016",
month = "aug",
abstract = "Traditional Monte Carlo (MC) integration methods use point samples to numerically approximate the underlying integral. This approximation introduces variance in the integrated result, and this error can depend critically on the sampling patterns used during integration. Most of the well-known samplers used for MC integration in graphics, e.g. jitter, Latin hypercube (n-rooks), multi-jitter, are anisotropic in nature. However, there are currently no tools available to analyze the impact of such anisotropic samplers on the variance convergence behavior of Monte Carlo integration. In this work, we propose a mathematical tool in the Fourier domain that allows analyzing the variance, and subsequently the convergence rate, of Monte Carlo integration using any arbitrary (anisotropic) sampling power spectrum. We apply our analysis to common anisotropic point sampling strategies in Monte Carlo integration, and extend our analysis to recent Monte Carlo approaches relying on line samples which have inherently anisotropic power spectra. We validate our theoretical results with several experiments using both point and line samples."
}

@inproceedings{subr16fourier,
author = "Subr, Kartic and Singh, Gurprit and Jarosz, Wojciech",
title = "Fourier Analysis of Numerical Integration in Monte Carlo Rendering: Theory and Practice",
booktitle = "ACM SIGGRAPH Courses",
year = "2016",
month = "jul",
location = "Anaheim, California",
doi = "10.1145/2897826.2927356",
publisher = "ACM",
address = "New York, NY, USA",
abstract = "Since being introduced to graphics in the 1980s, Monte Carlo sampling and integration has become the cornerstone of most modern rendering algorithms. Originally introduced to combat the effect of aliasing when estimating pixels values, Monte Carlo has since become a more general tool for solving complex, multi-dimensional integration problems in rendering. In this context, MC integration involves sampling a function at various stochastically placed points to approximate an integral, e.g. the radiance through a pixel integrated across the multi-dimensional space of possible light transport paths. Unfortunately, this estimation is error-prone, and the visual manifestation of this error depends critically on the properties of the integrand, placement of the stochastic sample points used, and the type of problem (integration vs. reconstruction) that is being solved with these samples.

We describe how errors present in rendered images may be analyzed as a function of the spectral (Fourier domain) statistics of the underlying sampling patterns fed to the renderer. Fourier analysis, along with the Nyquist theorem, has long been used in graphics to motivate more intelligent sampling strategies which try to minimize errors due to noise and aliasing in the pixel reconstruction problem. Only more recently, however, has the community started to apply these same Fourier tools to analyze error in the Monte Carlo integration problem. Loosely speaking, in the context of rendering a 2D image, these two problems are concerned with errors introduced across pixels (reconstruction) vs. the errors introduced within any individual pixel (integration). In this course, we focus on the latter, and survey the recent developments and insights that Fourier analyses have provided about the magnitude and convergence rate of Monte Carlo integration error. We provide a historical perspective of Monte Carlo in graphics, review the necessary mathematical background, summarize the most recent developments, discuss the practical implications of these analyses on the design of Monte Carlo rendering algorithms, and identify important remaining research problems that can propel the field forward."
}

@article{bitterli16nonlinearly,
author = "Bitterli, Benedikt and Rousselle, Fabrice and Moon, Bochang and Iglesias-Guitián, José A. and Adler, David and Mitchell, Kenny and Jarosz, Wojciech and Novák, Jan",
title = "Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
volume = "35",
number = "4",
pages = "107–117",
month = "jun",
year = "2016",
doi = "10.1111/cgf.12954",
abstract = "We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empirical point of view, relating the strengths and limitations of their corresponding components with an emphasis on production requirements. The observations of our analysis instruct the design of our new filter that offers high-quality results and stable performance. A key observation of our analysis is that using auxiliary buffers (normal, albedo, etc.) to compute the regression weights greatly improves the robustness of zero-order models, but can be detrimental to first-order models. Consequently, our filter performs a first-order regression leveraging a rich set of auxiliary buffers only when fitting the data, and, unlike recent works, considers the pixel color alone when computing the regression weights. We further improve the quality of our output by using a collaborative denoising scheme. Lastly, we introduce a general mean squared error estimator, which can handle the collaborative nature of our filter and its nonlinear weights, to automatically set the bandwidth of our regression kernel."
}

@inproceedings{koerner16subdivision,
author = "Koerner, David and Novák, Jan and Kutz, Peter and Habel, Ralf and Jarosz, Wojciech",
title = "Subdivision Next-Event Estimation for Path-Traced Subsurface Scattering",
booktitle = "Proceedings of EGSR (Experimental Ideas \& Implementations)",
year = "2016",
month = "jun",
publisher = "The Eurographics Association",
doi = "10.2312/sre.20161214",
abstract = "We present subdivision next-event estimation (SNEE) for unbiased Monte Carlo simulation of subsurface scattering. Our technique is designed to sample high frequency illumination through geometrically complex interfaces with highly directional scattering lobes enclosing a scattering medium. This case is difficult to sample and a common source of image noise. We explore the idea of exploiting the degree of freedom given by path vertices within the interior medium to find two-bounce connections to the light that satisfy the law of refraction. SNEE first finds a surface point by tracing a probe ray and then performs a subdivision step to place an intermediate vertex within the medium according to the incoming light at the chosen surface point. Our subdivision construction ensures that the path will connect to the light while obeying Fermat's principle of least time. We discuss the details of our technique and demonstrate the benefits of integrating SNEE into a forward path tracer."
}

@article{prevost16large,
author = "Prévost, Romain and Jacobson, Alec and Jarosz, Wojciech and Sorkine-Hornung, Olga",
title = "Large-Scale Painting of Photographs by Interactive Optimization",
journal = "Computers \& Graphics",
volume = "55",
month = "apr",
year = "2016",
pages = "108–117",
doi = "10.1016/j.cag.2015.11.001",
abstract = "We propose a system for painting large-scale murals of arbitrary input photographs. To that end, we choose spray paint, which is easy to use and affordable, yet requires skill to create interesting murals. An untrained user simply waves a programmatically actuated spray can in front of the canvas. Our system tracks the can's position and determines the optimal amount of paint to disperse to best approximate the input image. We accurately calibrate our spray paint simulation model in a pre-process and devise optimization routines for run-time paint dispersal decisions. Our setup is light-weight: it includes two webcams and QR-coded cubes for tracking, and a small actuation device for the spray can, attached via a 3D-printed mount. The system performs at haptic rates, which allows the user – informed by a visualization of the image residual – to guide the system interactively to recover low frequency features. We validate our pipeline for a variety of grayscale and color input images and present results in simulation and physically realized murals."
}

@article{schmidt16star,
author = "Schmidt, Thorsten-Walther and Pellacini, Fabio and Nowrouzezahrai, Derek and Jarosz, Wojciech and Dachsbacher, Carsten",
title = "State of the Art in Artistic Editing of Appearance, Lighting, and Material",
journal = "Computer Graphics Forum",
volume = "35",
number = "1",
month = "feb",
year = "2016",
pages = "216–233",
doi = "10.1111/cgf.12721",
keywords = "artistic editing, appearance editing, material design, lighting design, relighting, material appearance, artistic control",
abstract = "Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature-film, architecture and medical industries. Images with well-designed shading are an important tool for conveying information about the world, be it the shape and function of a CAD model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly-trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearances, lighting, and materials, as well as entailing the introduction of new interaction paradigms and specialized preview rendering techniques. In this review we provide a comprehensive survey of artistic appearance, lighting, and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students, and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering backends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research."
}

@article{hostettler15dispersion,
author = "Hostettler, Rafael and Habel, Ralf and Gross, Markus and Jarosz, Wojciech",
title = "Dispersion-based Color Projection using Masked Prisms",
journal = "Computer Graphics Forum (Proceedings of Pacific Graphics)",
volume = "34",
number = "7",
month = "oct",
year = "2015",
doi = "10.1111/cgf.12771",
abstract = "We present a method for projecting arbitrary color images using a white light source and an optical device with no colored components—consisting solely of one or two prisms and two transparent masks. When illuminated, the first mask creates structured white light that is then dispersed in the prism and attenuated by the second mask to create the color projection. We derive analytical expressions for the mask parameters from the physical components and validate our approach both in simulation and also demonstrate it on a wide variety of images using two different physical setups (one consisting of two inexpensive triangular prisms, and the other using a single rhombic prism). Furthermore, we show that optimizing the masks simultaneously enables obfuscating the image content, and provides a tradeoff between increased light throughput (by up to a factor of three) and maximum color saturation."
}

@article{meng15granular,
author = "Meng, Johannes and Papas, Marios and Habel, Ralf and Dachsbacher, Carsten and Marschner, Steve and Gross, Markus and Jarosz, Wojciech",
title = "Multi-Scale Modeling and Rendering of Granular Materials",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "34",
number = "4",
year = "2015",
month = "jul",
doi = "10.1145/2766949",
keywords = "physically based rendering, granular media, discrete random media",
abstract = "We address the problem of modeling and rendering granular materials—such as large structures made of sand, snow, or sugar—where an aggregate object is composed of many randomly oriented, but discernible grains. These materials pose a particular challenge as the complex scattering properties of individual grains, and their packing arrangement, can have a dramatic effect on the large-scale appearance of the aggregate object. We propose a multi-scale modeling and rendering framework that adapts to the structure of scattered light at different scales. We rely on path tracing the individual grains only at the finest scale, and—by decoupling individual grains from their arrangement—we develop a modular approach for simulating longer-scale light transport. We model light interactions within and across grains as separate processes and leverage this decomposition to derive parameters for classical radiative transport, including standard volumetric path tracing and a diffusion method that can quickly summarize the large scale transport due to many grain interactions. We require only a one-time precomputation per exemplar grain, which we can then reuse for arbitrary aggregate shapes and a continuum of different packing rates and scales of grains. We demonstrate our method on scenes containing mixtures of tens of millions of individual, complex, specular grains that would be otherwise infeasible to render with standard techniques."
}

@inproceedings{chapiro15stereo,
author = "Chapiro, Alexandre and O'Sullivan, Carol and Jarosz, Wojciech and Gross, Markus and Smolic, Aljoscha",
title = "Stereo from Shading",
booktitle = "Proceedings of EGSR (Experimental Ideas \& Implementations)",
month = "jun",
year = "2015",
doi = "10.2312/sre.20151173",
abstract = "We present a new method for creating and enhancing the stereoscopic 3D (S3D) sensation without using the parallax disparity between an image pair. S3D relies on a combination of cues to generate a feeling of depth, but only a few of these cues can easily be modified within a rendering pipeline without significantly changing the content. We explore one such cue—shading stereopsis—which to date has not been exploited for 3D rendering. By changing only the shading of objects between the left and right eye renders, we generate a noticeable increase in perceived depth. This effect can be used to create depth when applied to flat images, and to enhance depth when applied to shallow depth S3D images. Our method modifies the shading normals of objects or materials, such that it can be flexibly and selectively applied in complex scenes with arbitrary numbers and types of lights and indirect illumination. Our results show examples of rendered stills and video, as well as live action footage."
}

@article{zimmer15path,
author = "Zimmer, Henning and Rousselle, Fabrice and Jakob, Wenzel and Wang, Oliver and Adler, David and Jarosz, Wojciech and Sorkine-Hornung, Olga and Sorkine-Hornung, Alexander",
title = "Path-space Motion Estimation and Decomposition for Robust Animation Filtering",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
volume = "34",
number = "4",
month = "jun",
year = "2015",
doi = "10.1111/cgf.12685",
abstract = "Renderings of animation sequences with physics-based Monte Carlo light transport simulations are exceedingly costly to generate frame-by-frame, yet much of this computation is highly redundant due to the strong coherence in space, time and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image-based spatio-temporal upsampling and denoising.

These methods can provide significant performance gains, though major issues remain: firstly, in a multiple scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image-based methods. Secondly, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g. an object seen through a curved glass panel).

To reduce these ambiguities, we propose a general decomposition framework, where the final pixel color is separated into components corresponding to disjoint subsets of the space of light paths. Each component is accompanied by motion vectors and other auxiliary features such as reflectance and surface normals. The motion vectors of specular paths are computed using a temporal extension of manifold exploration and the remaining components use a specialized variant of optical flow. Our experiments show that this decomposition leads to significant improvements in three image-based applications: denoising, spatial upsampling, and temporal interpolation."
}

@article{bitterli15portal,
author = "Bitterli, Benedikt and Novák, Jan and Jarosz, Wojciech",
title = "Portal-Masked Environment Map Sampling",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
volume = "34",
number = "4",
month = "jun",
year = "2015",
doi = "10.1111/cgf.12674",
keywords = "product importance sampling, sample guides, sample magnets, Monte Carlo",
abstract = "We present a technique to efficiently importance sample distant, all-frequency illumination in indoor scenes. Standard environment sampling is inefficient in such cases, since the distant lighting is typically only visible through small openings (e.g. windows). This visibility is often addressed by manually placing a portal around each window to direct samples towards the openings; however, uniformly sampling the portal (its area or solid angle) disregards the possibly high frequency environment map. We propose a new portal importance sampling technique which takes into account both the environment map and its visibility through the portal, drawing samples proportional to the product of the two. To make this practical, we propose a novel, portal-rectified reparametrization of the environment map with the key property that the visible region induced by a rectangular portal projects to an axis-aligned rectangle. This allows us to sample according to the desired product distribution at an arbitrary shading location using a single (precomputed) summed-area table per portal. Our technique is unbiased, relevant to many renderers, and can also be applied to rectangular light sources with directional emission profiles, enabling efficient rendering of non-diffuse light sources with soft shadows."
}
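The core mechanism behind tabulated importance sampling of an environment map, as in the entry above, is drawing texels with probability proportional to their energy by inverting a cumulative distribution. A toy 1D sketch (illustrative only; the paper's actual contribution is the portal-rectified reparametrization and per-portal summed-area tables, which this does not implement, and the texel weights below are made up):

```python
import bisect
import random

def build_cdf(weights):
    """Normalized cumulative distribution over discrete texel weights."""
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return cdf

def sample_texel(cdf, u):
    """Invert the CDF: first texel whose cumulative mass exceeds u."""
    return bisect.bisect_left(cdf, u)

# Toy "environment row": texel 3 is a bright light and dominates the energy.
weights = [1.0, 1.0, 1.0, 20.0, 1.0]
cdf = build_cdf(weights)
random.seed(2)
counts = [0] * len(weights)
for _ in range(10000):
    counts[sample_texel(cdf, random.random())] += 1
print(counts)  # texel 3 is drawn roughly 20/24 of the time
```

In 2D this generalizes to a marginal/conditional pair of tabulated CDFs; the portal technique restricts the tabulation to the region of the map visible through the portal, so samples are never wasted on occluded directions.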

@article{klehm15star,
author = "Klehm, Oliver and Rousselle, Fabrice and Papas, Marios and Bradley, Derek and Hery, Christophe and Bickel, Bernd and Jarosz, Wojciech and Beeler, Thabo",
title = "Recent Advances in Facial Appearance Capture",
journal = "Computer Graphics Forum (Proceedings of Eurographics - State of the Art Reports)",
volume = "34",
number = "2",
month = "may",
year = "2015",
doi = "10.1111/cgf.12594",
pages = "709–733",
keywords = "faces, reflectance, scattering",
abstract = "Facial appearance capture is now firmly established within academic research and used extensively across various application domains, perhaps most prominently in the entertainment industry through the design of virtual characters in video games and films. While significant progress has occurred over the last two decades, no single survey currently exists that discusses the similarities, differences, and practical considerations of the available appearance capture techniques as applied to human faces. A central difficulty of facial appearance capture is the way light interacts with skin—which has a complex multi-layered structure—and the interactions that occur below the skin surface can, by definition, only be observed indirectly. In this report, we distinguish between two broad strategies for dealing with this complexity. “Image-based methods” try to exhaustively capture the exact face appearance under different lighting and viewing conditions, and then render the face through weighted image combinations. “Parametric methods” instead fit the captured reflectance data to some parametric appearance model used during rendering, allowing for a more lightweight and flexible representation but at the cost of potentially increased rendering complexity or inexact reproduction. The goal of this report is to provide an overview that can guide practitioners and researchers in assessing the tradeoffs between current approaches and identifying directions for future advances in facial appearance capture."
}

@article{zwicker15star,
author = "Zwicker, Matthias and Jarosz, Wojciech and Lehtinen, Jaakko and Moon, Bochang and Ramamoorthi, Ravi and Rousselle, Fabrice and Sen, Pradeep and Soler, Cyril and Yoon, Sung-Eui",
title = "Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering",
journal = "Computer Graphics Forum (Proceedings of Eurographics - State of the Art Reports)",
volume = "34",
number = "2",
month = "may",
year = "2015",
doi = "10.1111/cgf.12592",
pages = "667–681",
keywords = "denoising, NL-means, bilateral filtering, joint filtering, gradients, hessians, derivatives, frequency analysis",
abstract = "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements."
}

@article{jarabo14framework,
author = "Jarabo, Adrian and Marco, Julio and Munoz, Adolfo and Buisan, Raul and Jarosz, Wojciech and Gutierrez, Diego",
title = "A Framework for Transient Rendering",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "33",
number = "6",
year = "2014",
month = "nov",
keywords = "femto photography, time-resolved rendering, path integral formulation, transient rendering, participating media, importance sampling, density estimation, progressive photon mapping",
abstract = "Recent advances in ultra-fast imaging have triggered many promising applications in graphics and vision, such as capturing transparent objects, estimating hidden geometry and materials, or visualizing light in motion. There is, however, very little work regarding the effective simulation and analysis of transient light transport, where the speed of light can no longer be considered infinite. We first introduce the transient path integral framework, formally describing light transport in transient state. We then analyze the difficulties arising when considering the light's time-of-flight in the simulation (rendering) of images and videos. We propose a novel density estimation technique that allows reusing sampled paths to reconstruct time-resolved radiance, and devise new sampling strategies that take into account the distribution of radiance along time in participating media. We then efficiently simulate time-resolved phenomena (such as caustic propagation, fluorescence or temporal chromatic dispersion), which can help design future ultra-fast imaging devices using an analysis-by-synthesis approach, as well as to achieve a better understanding of the nature of light transport.",
doi = "10.1145/2661229.2661251"
}
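
Transient rendering replaces a steady-state pixel value with a time-resolved histogram: every path contribution is binned by its optical time of flight. Below is a minimal sketch of that bookkeeping, with hypothetical function names, a constant index of refraction, and fixed-width bins; the paper's path reuse and time-aware sampling strategies are far more involved.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s, in vacuum

def path_time_of_flight(vertices, ior=1.0):
    """Propagation delay of a light path given its vertices (in meters).
    A constant index of refraction `ior` scales the speed of light in the medium."""
    length = 0.0
    for a, b in zip(vertices, vertices[1:]):
        length += math.dist(a, b)
    return length * ior / SPEED_OF_LIGHT

def bin_contribution(histogram, t, dt, value):
    """Accumulate one path contribution into a fixed-width time histogram."""
    k = int(t / dt)
    if 0 <= k < len(histogram):
        histogram[k] += value
```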

@article{novak14residual,
author = "Novák, Jan and Selle, Andrew and Jarosz, Wojciech",
title = "Residual Ratio Tracking for Estimating Attenuation in Participating Media",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "33",
number = "6",
year = "2014",
month = "nov",
keywords = "transmittance, fractional visibility, opacity, ray marching, woodcock tracking, delta tracking, pseudo-scattering, null-collision",
abstract = "Evaluating transmittance within participating media is a fundamental operation required by many light transport algorithms. We present ratio tracking and residual tracking, two complementary techniques that can be combined into an efficient, unbiased estimator for evaluating transmittance in complex heterogeneous media. In comparison to current approaches, our new estimator is unbiased, yields high efficiency, gracefully handles media with wavelength dependent extinction, and bridges the gap between closed form solutions and purely numerical, unbiased approaches. A key feature of ratio tracking is its ability to handle negative densities. This in turn enables us to separate the main part of the transmittance function, handle it analytically, and numerically estimate only the residual transmittance. In addition to proving the unbiasedness of our estimators, we perform an extensive empirical analysis to reveal parameters that lead to high efficiency. Finally, we describe how to integrate the new techniques into a production path tracer and demonstrate their benefits over traditional unbiased estimators.",
doi = "10.1145/2661229.2661292"
}

@article{subr14error,
author = "Subr, Kartic and Nowrouzezahrai, Derek and Jarosz, Wojciech and Kautz, Jan and Mitchell, Kenny",
title = "Error analysis of estimators that use combinations of stochastic sampling strategies for direct illumination",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
volume = "33",
number = "4",
month = "jun",
year = "2014",
pages = "93–102",
doi = "10.1111/cgf.12416",
keywords = "variance analysis, Gaussian copula, convergence rate",
abstract = "We present a theoretical analysis of error of combinations of Monte Carlo estimators used in image synthesis. Importance sampling and multiple importance sampling are popular variance-reduction strategies. Unfortunately, neither strategy improves the rate of convergence of Monte Carlo integration. Jittered sampling (a type of stratified sampling), on the other hand, is known to improve the convergence rate. Most rendering software optimistically combines importance sampling with jittered sampling, hoping to achieve both. We derive the exact error of the combination of multiple importance sampling with jittered sampling. In addition, we demonstrate a further benefit to the convergence rate of introducing negative correlations (antithetic sampling) between estimates. As with importance sampling, antithetic sampling is known to reduce error for certain classes of integrands without affecting the convergence rate. In this paper, our analysis and experiments reveal that importance and antithetic sampling, if used judiciously and in conjunction with jittered sampling, may improve convergence rates. We show the impact of such combinations of strategies on the convergence rate of estimators for direct illumination."
}
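
The two variance-reduction ingredients analyzed here are easy to state in code. The sketch below shows jittered (stratified) sampling of [0, 1) and an antithetic estimator that pairs each sample u with its mirror 1 - u; for any integrand that is locally close to linear, the pair average cancels the first-order variation, which is the effect the analysis quantifies. Names are mine; this is an illustration, not the paper's estimator.

```python
import random

def jittered_samples(n, rng):
    """One uniform sample per stratum of [0, 1): stratified, a.k.a. jittered, sampling."""
    return [(i + rng.random()) / n for i in range(n)]

def antithetic_estimate(f, samples):
    """Average f over each sample paired with its mirror 1 - u.
    For a linear f the pair average is exact, so the variance is zero."""
    return sum(0.5 * (f(u) + f(1.0 - u)) for u in samples) / len(samples)
```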

@inproceedings{schmidt14star,
author = "Schmidt, Thorsten-Walther and Pellacini, Fabio and Nowrouzezahrai, Derek and Jarosz, Wojciech and Dachsbacher, Carsten",
title = "State of the Art in Artistic Editing of Appearance, Lighting, and Material",
booktitle = "Eurographics 2014 - State of the Art Reports",
year = "2014",
month = "apr",
address = "Strasbourg, France",
publisher = "Eurographics Association",
doi = "10.2312/egst.20141041",
keywords = "relighting, material appearance, artistic control",
abstract = "Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature-film, architecture and medical industries. Images with well-designed shading are an important tool for conveying information about the world, be it the shape and function of a CAD model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly-trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearances, lighting, and materials, as well as entailing the introduction of new interaction paradigms and specialized preview rendering techniques. In this STAR we provide a comprehensive survey of artistic appearance, lighting, and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students, and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering backends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research."
}

@article{nowrouzezahrai14visibility,
author = "Nowrouzezahrai, Derek and Baran, Ilya and Mitchell, Kenny and Jarosz, Wojciech",
title = "Visibility Silhouettes for Semi-Analytic Spherical Integration",
journal = "Computer Graphics Forum",
volume = "33",
number = "1",
month = "feb",
year = "2014",
pages = "105–117",
doi = "10.1111/cgf.12257",
keywords = "all-frequency shadowing, image-based rendering, spherical visibility",
abstract = "At each shade point, the spherical visibility function encodes occlusion from surrounding geometry, in all directions. Computing this function is difficult and point-sampling approaches, such as ray-tracing or hardware shadow mapping, are traditionally used to efficiently approximate it. We propose a semi-analytic solution to the problem where the spherical silhouette of the visibility is computed using a search over a 4D dual mesh of the scene. Once computed, we are able to semi-analytically integrate visibility-masked spherical functions along the visibility silhouette, instead of over the entire hemisphere. In this way, we avoid the artifacts that arise from using point-sampling strategies to integrate visibility, a function with unbounded frequency content. We demonstrate our approach on several applications, including direct illumination from realistic lighting and computation of PRT data. Additionally, we present a new frequency-space method for exactly computing all-frequency shadows on diffuse surfaces. Our results match ground truth computed using importance-sampled stratified Monte Carlo ray-tracing, with comparable performance on scenes with low-to-moderate geometric complexity."
}

@article{georgiev13joint,
author = "Georgiev, Iliyan and Křivánek, Jaroslav and Hachisuka, Toshiya and Nowrouzezahrai, Derek and Jarosz, Wojciech",
title = "Joint Importance Sampling of Low-Order Volumetric Scattering",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "32",
number = "6",
year = "2013",
month = "nov",
keywords = "photon beams, virtual ray lights, virtual beam lights, VRLs, VBLs, path tracing, bidirectional path tracing",
abstract = "Central to all Monte Carlo-based rendering algorithms is the construction of light transport paths from the light sources to the eye. Existing rendering approaches sample path vertices incrementally when constructing these light transport paths. Paths should ideally be constructed according to a joint probability density function proportional to the integrand, yet current incremental sampling strategies only locally account for certain terms in the integrand. The resulting probability density is thus a product of the conditional densities of each local sampling step, constructed without explicit control over the form of the final joint distribution of the complete path. We analyze why current incremental construction schemes often lead to high variance in the presence of participating media, and reveal that such approaches are an unnecessary legacy inherited from traditional surface-based rendering algorithms. We devise joint importance sampling of path vertices in participating media to construct paths that explicitly account for the product of all scattering and geometry terms along a sequence of vertices instead of just locally at a single vertex. This leads to a number of practical importance sampling routines to explicitly construct single- and double-scattering subpaths in anisotropically-scattering media. We demonstrate the benefit of our new sampling techniques, integrating them into several path-based rendering algorithms such as path tracing, bidirectional path tracing, and many-light methods. We also use our sampling routines to generalize deterministic shadow connections to connection subpaths consisting of two or three random decisions, to efficiently simulate higher-order multiple scattering. Our algorithms significantly reduce noise and increase performance in renderings with both isotropic and highly anisotropic, low-order scattering.",
doi = "10.1145/2508363.2508411"
}
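
For intuition on what explicitly accounting for the geometry term buys: a well-known precursor for a single scattering vertex is equiangular sampling (Kulla and Fajardo), which places a distance along a ray proportional to the inverse squared distance to a point light. The sketch below implements that prior routine, not this paper's joint vertex sampling; parameter names are mine.

```python
import math

def equiangular_sample(u, delta, D, t_min, t_max):
    """Sample a distance t along a ray proportional to 1 / (squared distance
    to a point light). `delta` is the light's projection onto the ray and `D`
    its perpendicular distance from the ray (D > 0).
    Returns the sampled distance and its pdf."""
    theta_a = math.atan((t_min - delta) / D)
    theta_b = math.atan((t_max - delta) / D)
    theta = theta_a + u * (theta_b - theta_a)   # uniform in angle
    t = delta + D * math.tan(theta)
    pdf = D / ((theta_b - theta_a) * (D * D + (t - delta) ** 2))
    return t, pdf
```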

@article{papas13fabricating,
author = "Papas, Marios and Regg, Christian and Jarosz, Wojciech and Bickel, Bernd and Jackson, Philip and Matusik, Wojciech and Marschner, Steve and Gross, Markus",
title = "Fabricating Translucent Materials using Continuous Pigment Mixtures",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "32",
number = "4",
year = "2013",
month = "jul",
doi = "10.1145/2461912.2461974",
keywords = "subsurface scattering, material design, fabrication",
abstract = "We present a method for practical physical reproduction and design of homogeneous materials with desired subsurface scattering. Our process uses a collection of different pigments that can be suspended in a clear base material. Our goal is to determine pigment concentrations that best reproduce the appearance and subsurface scattering of a given target material. In order to achieve this task we first fabricate a collection of material samples composed of known mixtures of the available pigments with the base material. We then acquire their reflectance profiles using a custom-built measurement device. We use the same device to measure the reflectance profile of a target material. Based on the database of mappings from pigment concentrations to reflectance profiles, we use an optimization process to compute the concentration of pigments to best replicate the target material appearance. We demonstrate the practicality of our method by reproducing a variety of different translucent materials. We also present a tool that allows the user to explore the range of achievable appearances for a given set of pigments."
}
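
The optimization step the abstract mentions amounts to fitting nonnegative pigment concentrations so that a combination of measured per-pigment responses matches a target profile. Below is a toy projected-gradient sketch of such a nonnegative least-squares fit; the actual system works with measured reflectance profiles and nonlinear mixing, and the basis matrix here is purely hypothetical.

```python
import numpy as np

def fit_concentrations(basis, target, iters=2000):
    """Nonnegative least squares via projected gradient descent:
    find x >= 0 minimizing ||basis @ x - target||^2.
    `basis` holds one measured pigment response per column."""
    A = np.asarray(basis, dtype=np.float64)
    b = np.asarray(target, dtype=np.float64)
    # step size from the largest eigenvalue of A^T A (Lipschitz constant)
    L = np.linalg.eigvalsh(A.T @ A).max()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.clip(x - (A.T @ (A @ x - b)) / L, 0.0, None)
    return x
```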

@article{habel13pbd,
author = "Habel, Ralf and Christensen, Per H. and Jarosz, Wojciech",
title = "Photon Beam Diffusion: A Hybrid Monte Carlo Method for Subsurface Scattering",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
volume = "32",
number = "4",
month = "jun",
year = "2013",
doi = "10.1111/cgf.12148",
keywords = "subsurface scattering, dipole, sss, photon beams, quantized diffusion, BSSRDF, translucent, diffusion theory",
abstract = "We present photon beam diffusion, an efficient numerical method for accurately rendering translucent materials. Our approach interprets incident light as a continuous beam of photons inside the material. Numerically integrating diffusion from such extended sources has long been assumed computationally prohibitive, leading to the ubiquitous single-depth dipole approximation and the recent analytic sum-of-Gaussians approach employed by Quantized Diffusion. In this paper, we show that numerical integration of the extended beam is not only feasible, but provides increased speed, flexibility, numerical stability, and ease of implementation, while retaining the benefits of previous approaches. We leverage the improved diffusion model, but propose an efficient and numerically stable Monte Carlo integration scheme that gives equivalent results using only 3–5 samples instead of 20–60 Gaussians as in previous work. Our method can account for finite and multi-layer materials, and additionally supports directional incident effects at surfaces. We also propose a novel diffuse exact single-scattering term which can be integrated in tandem with the multi-scattering approximation. Our numerical approach furthermore allows us to easily correct inaccuracies of the diffusion model and even combine it with more general Monte Carlo rendering algorithms. We provide practical details necessary for efficient implementation, and demonstrate the versatility of our technique by incorporating it on top of several rendering algorithms in both research and production rendering systems."
}

@article{papas12magic,
author = "Papas, Marios and Houit, Thomas and Nowrouzezahrai, Derek and Gross, Markus and Jarosz, Wojciech",
title = "The Magic Lens: Refractive Steganography",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "31",
number = "6",
year = "2012",
month = "nov",
doi = "10.1145/2366145.2366205",
keywords = "steganography, cryptography, image morphing, lens fabrication, novel display devices",
abstract = "We present an automatic approach to design and manufacture passive display devices based on optical hidden image decoding. Motivated by classical steganography techniques we construct Magic Lenses, composed of refractive lenslet arrays, to reveal hidden images when placed over potentially unstructured printed or displayed source images. We determine the refractive geometry of these surfaces by formulating and efficiently solving an inverse light transport problem, taking into account additional constraints imposed by physical manufacturing processes. We fabricate several variants on the basic magic lens idea including using a single source image to encode several hidden images which are only revealed when the lens is placed at prescribed rotational orientations or viewed from different angles. We also present an important special case, the universal lens, that forms an injection with the source image grid and can be applied to arbitrary source images. We use this type of lens to generate hidden animation sequences. We validate our simulation results with many real-world manufactured magic lenses, and experiment with two separate manufacturing processes."
}

@article{schwarzhaupt12practical,
author = "Schwarzhaupt, Jorge and Jensen, Henrik Wann and Jarosz, Wojciech",
title = "Practical Hessian-Based Error Control for Irradiance Caching",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "31",
number = "6",
year = "2012",
month = "nov",
doi = "10.1145/2366145.2366212",
keywords = "global illumination, irradiance caching, Monte Carlo ray tracing, illumination derivatives",
abstract = "This paper introduces a new error metric for irradiance caching that significantly outperforms the classic Split-Sphere heuristic. Our new error metric builds on recent work using second order gradients (Hessians) as a principled error bound for the irradiance. We add occlusion information to the Hessian computation, which greatly improves the accuracy of the Hessian in complex scenes, and this makes it possible for the first time to use a radiometric error metric for irradiance caching. We enhance the metric making it based on the relative error in the irradiance as well as robust in the presence of black occluders. The resulting error metric is efficient to compute, numerically robust, supports elliptical error bounds and arbitrary hemispherical sample distributions, and unlike the Split-Sphere heuristic it is not necessary to arbitrarily clamp the computed error thresholds. Our results demonstrate that the new error metric outperforms existing error metrics based on the Split-Sphere model and occlusion-unaware Hessians."
}
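
For context on the "Split-Sphere heuristic" this abstract compares against: classic irradiance caching (Ward et al.) reuses a cached record i at a new shade point with a weight that penalizes translation relative to the record's validity radius R_i and rotation between surface normals. A sketch of that standard weight, with my own function signature:

```python
import math

def cache_weight(p, n, p_i, n_i, R_i):
    """Classic irradiance caching weight (Ward et al. style): record i
    contributes when both error terms, translation relative to the validity
    radius R_i and rotation between normals, are small."""
    d = math.dist(p, p_i)
    ndot = sum(a * b for a, b in zip(n, n_i))
    denom = d / R_i + math.sqrt(max(0.0, 1.0 - ndot))
    return float('inf') if denom == 0.0 else 1.0 / denom
```

Records whose weight exceeds a user threshold are blended; the papers above replace this ad hoc geometric error with radiometric, Hessian-based error control.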

@article{jarosz12theory,
author = "Jarosz, Wojciech and Schönefeld, Volker and Kobbelt, Leif and Jensen, Henrik Wann",
title = "Theory, Analysis and Applications of 2D Global Illumination",
journal = "ACM Transactions on Graphics",
volume = "31",
number = "5",
year = "2012",
abstract = {We first derive a full theory of 2D light transport by introducing 2D analogues to radiometric quantities such as flux and radiance, and deriving a 2D rendering equation. We use our theory to show how to implement algorithms such as Monte Carlo ray tracing, path tracing, irradiance caching, and photon mapping in 2D, and demonstrate that these algorithms can be analyzed more easily in this domain while still providing insights for 3D rendering.

We apply our theory to develop several practical improvements to the irradiance caching algorithm. We perform a full second-order analysis of diffuse indirect illumination, first in 2D, and then in 3D by deriving the irradiance Hessian, and show how this leads to increased accuracy and performance for irradiance caching. We propose second-order Taylor expansion from cache points, which results in more accurate irradiance reconstruction. We also introduce a novel error metric to guide cache point placement by analyzing the error produced by irradiance caching. Our error metric naturally supports anisotropic reconstruction and, in our preliminary study, resulted in an order of magnitude less error than the "split-sphere" heuristic when using the same number of cache points.}
}

@article{baran12manufacturing,
author = "Baran, Ilya and Keller, Philipp and Bradley, Derek and Coros, Stelian and Jarosz, Wojciech and Nowrouzezahrai, Derek and Gross, Markus",
title = "Manufacturing Layered Attenuators for Multiple Prescribed Shadow Images",
journal = "Computer Graphics Forum (Proceedings of Eurographics)",
volume = "31",
number = "2",
year = "2012",
month = "may",
pages = "603–610",
issn = "0167-7055",
doi = "10.1111/j.1467-8659.2012.03039.x",
abstract = "We present a practical and inexpensive method for creating physical objects that cast different color shadow images when illuminated by prescribed lighting configurations. The input to our system is a number of lighting configurations and corresponding desired shadow images. Our approach computes attenuation masks, which are then printed on transparent materials and stacked to form a single multi-layer attenuator. When illuminated with the input lighting configurations, this multi-layer attenuator casts the prescribed color shadow images. Alternatively, our method can compute layers so that their permutations produce different prescribed shadow images under fixed lighting. Each multi-layer attenuator is quick and inexpensive to produce, can generate multiple full-color shadows, and can be designed to respond to different types of natural or synthetic lighting setups. We illustrate the effectiveness of our multi-layer attenuators in simulation and in reality, with the sun as a light source."
}

@article{sadeghi11physically,
author = "Sadeghi, Iman and Munoz, Adolfo and Laven, Philip and Jarosz, Wojciech and Seron, Francisco and Gutierrez, Diego and Jensen, Henrik Wann",
title = "Physically-based Simulation of Rainbows",
journal = "ACM Transactions on Graphics (Presented at SIGGRAPH)",
volume = "31",
number = "1",
year = "2012",
month = "feb",
pages = "3:1–3:12",
doi = "10.1145/2077341.2077344",
abstract = "In this paper we derive a physically-based model for simulating rainbows. Previous techniques for simulating rainbows have used either geometric optics (ray tracing) or Lorenz-Mie theory. Lorenz-Mie theory is by far the most accurate technique as it takes into account optical effects such as dispersion, polarization, interference, and diffraction. These effects are critical for simulating rainbows accurately. However, as Lorenz-Mie theory is restricted to scattering by spherical particles, it cannot be applied to real raindrops which are non-spherical, especially for larger raindrops. We present the first comprehensive technique for simulating the interaction of a wavefront of light with a physically-based water drop shape. Our technique is based on ray tracing extended to account for dispersion, polarization, interference, and diffraction. Our model matches Lorenz-Mie theory for spherical particles, but it also enables the accurate simulation of non-spherical particles. It can simulate many different rainbow phenomena including double rainbows and supernumerary bows. We show how the non-spherical raindrops influence the shape of the rainbows, and we provide a simulation of the rare twinned rainbow, which is believed to be caused by non-spherical water drops."
}
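
A quick geometric-optics check of the physics involved: the primary rainbow sits where the deviation of a ray after one internal reflection in a spherical drop is stationary, D(i) = pi + 2i - 4 asin(sin i / n). Scanning for that minimum with n around 1.33 lands near the familiar 42 degrees. This is only the ray-optics limit; the paper's point is precisely that dispersion, polarization, interference, diffraction, and non-spherical drops matter beyond it.

```python
import math

def primary_deviation(i, n):
    """Total deviation of a ray entering a spherical drop at incidence angle i,
    refracting in, reflecting once internally, and refracting out."""
    r = math.asin(math.sin(i) / n)
    return math.pi + 2.0 * i - 4.0 * r

def rainbow_angle(n, steps=20000):
    """Scan incidence angles for the minimum deviation; the primary rainbow
    appears at 180 degrees minus that minimum."""
    d_min = min(primary_deviation(k * (math.pi / 2) / steps, n)
                for k in range(steps + 1))
    return math.degrees(math.pi - d_min)
```

A larger index of refraction (shorter wavelengths) gives a smaller rainbow angle, which is why violet sits inside red in the primary bow.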

@article{jarosz11progressive,
author = "Jarosz, Wojciech and Nowrouzezahrai, Derek and Thomas, Robert and Sloan, Peter-Pike and Zwicker, Matthias",
title = "Progressive Photon Beams",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "30",
number = "6",
year = "2011",
month = "dec",
doi = "10.1145/2070781.2024215",
abstract = "We present progressive photon beams, a new algorithm for rendering complex lighting in participating media. Our technique is efficient, robust to complex light paths, and handles heterogeneous media and anisotropic scattering while provably converging to the correct solution using a bounded memory footprint. We achieve this by extending the recent photon beams variant of volumetric photon mapping. We show how to formulate a progressive radiance estimate using photon beams, providing the convergence guarantees and bounded memory usage of progressive photon mapping. Progressive photon beams can robustly handle situations that are difficult for most other algorithms, such as scenes containing participating media and specular interfaces, with realistic light sources completely enclosed by refractive and reflective materials. Our technique handles heterogeneous media and also trivially supports stochastic effects such as depth-of-field and glossy materials. Finally, we show how progressive photon beams can be implemented efficiently on the GPU as a splatting operation, making it applicable to interactive and real-time applications. These features make our technique scalable, providing the same physically-based algorithm for interactive feedback and reference-quality, unbiased solutions."
}
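
The "provably converging with bounded memory" claim rests on a progressive radius schedule in the spirit of progressive photon mapping (the Knaus and Zwicker formulation): shrinking the kernel as r_{i+1}^2 = r_i^2 (i + alpha)/(i + 1) drives both bias and variance to zero for 0 < alpha < 1. A sketch of that schedule only; the beam-specific radiance estimate itself is not shown.

```python
import math

def radius_schedule(r1, alpha, num_passes):
    """Per-pass kernel radii: r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1).
    For 0 < alpha < 1 the radius shrinks slowly enough that variance still
    vanishes while the bias also goes to zero in the limit."""
    radii = [r1]
    for i in range(1, num_passes):
        radii.append(radii[-1] * math.sqrt((i + alpha) / (i + 1)))
    return radii
```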

@inproceedings{nowrouzezahrai11light,
author = "Nowrouzezahrai, Derek and Geiger, Stefan and Mitchell, Kenny and Sumner, Robert and Jarosz, Wojciech and Gross, Markus",
title = "Light Factorization for Mixed-Frequency Shadows in Augmented Reality",
booktitle = "10th IEEE International Symposium on Mixed and Augmented Reality (Proceedings of ISMAR 2011)",
month = "oct",
year = "2011",
doi = "10.1109/ISMAR.2011.6092384",
abstract = "Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radiometric consistency. We focus on the latter of these problems and present an approach that captures and factorizes external lighting in a manner that allows for realistic relighting of both animated and static virtual objects. Our factorization facilitates a combination of hard and soft shadows, with high-performance, in a manner that is consistent with the surrounding scene lighting."
}

@inproceedings{vanbaar11perceptually,
author = "van Baar, Jeroen and Poulakos, Steven and Jarosz, Wojciech and Nowrouzezahrai, Derek and Tamstorf, Rasmus and Gross, Markus",
title = "Perceptually-Based Compensation of Light Pollution in Display Systems",
booktitle = "Proceedings of the 2011 ACM Symposium on Applied Perception in Graphics and Visualization",
month = "aug",
year = "2011",
publisher = "ACM",
address = "New York, NY, USA",
doi = "10.1145/2077451.2077460",
abstract = "This paper addresses the problem of unintended light contributions due to physical properties of display systems. An example of such unintended contribution is crosstalk in stereoscopic 3D display systems, often referred to as ghosting. Ghosting results in a reduction of visual quality, and may lead to an uncomfortable viewing experience. The latter is due to conflicting (depth) edge cues, which can hinder the human visual system's (HVS) proper fusion of stereo images (stereopsis). We propose an automatic, perceptually-based computational compensation framework, which formulates pollution elimination as a minimization problem. Our method aims to distribute the error introduced by the pollution in a perceptually optimal manner. As a consequence, ghost edges are smoothed locally, resulting in a more comfortable stereo viewing experience. We show how to make the computation tractable by exploiting the structure of the resulting problem, and also propose a perceptually-based pollution prediction. We show that our general framework is applicable to other light pollution problems, such as descattering."
}

@article{jakob11progressive,
author = "Jakob, Wenzel and Regg, Christian and Jarosz, Wojciech",
title = "Progressive Expectation–Maximization for Hierarchical Volumetric Photon Mapping",
journal = "Computer Graphics Forum (Proceedings of EGSR)",
volume = "30",
number = "4",
month = "jun",
year = "2011",
doi = "10.1111/j.1467-8659.2011.01988.x",
abstract = "State-of-the-art density estimation methods for rendering participating media rely on a dense photon representation of the radiance distribution within a scene. A critical bottleneck of such kernel-based approaches is the excessive number of photons that are required in practice to resolve fine illumination details, while controlling the amount of noise. In this paper, we propose a parametric density estimation technique that represents radiance using a hierarchical Gaussian mixture. We efficiently obtain the coefficients of this mixture using a progressive and accelerated form of the Expectation–Maximization algorithm. After this step, we are able to create noise-free renderings of high-frequency illumination using only a few thousand Gaussian terms, where millions of photons are traditionally required. Temporal coherence is trivially supported within this framework, and the compact footprint is also useful in the context of real-time visualization. We demonstrate a hierarchical ray tracing-based implementation, as well as a fast splatting approach that can interactively render animated volume caustics."
}
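
The fitting machinery here is Expectation-Maximization on a Gaussian mixture. A plain, non-progressive, non-hierarchical 1D version conveys the two alternating steps; everything beyond that (acceleration, hierarchy, volumetric kernels) is the paper's contribution and is not reflected below.

```python
import math

def em_gmm_1d(data, means, sigmas, weights, iters=50):
    """Plain EM for a 1D Gaussian mixture: the E-step computes soft assignments
    (responsibilities), the M-step re-estimates each component from them."""
    k = len(means)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / s
                 for m, s, w in zip(means, sigmas, weights)]
            tot = sum(p)
            resp.append([pi / tot for pi in p])
        # M-step: re-fit means, stddevs, and mixing weights
        for j in range(k):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)  # floor to avoid collapse
            weights[j] = nj / len(data)
    return means, sigmas, weights
```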

@article{papas11goal,
author = "Papas, Marios and Jarosz, Wojciech and Jakob, Wenzel and Rusinkiewicz, Szymon and Matusik, Wojciech and Weyrich, Tim",
title = "Goal-based Caustics",
journal = "Computer Graphics Forum (Proceedings of Eurographics)",
volume = "30",
number = "2",
year = "2011",
month = "jun",
pages = "503–511",
doi = "10.1111/j.1467-8659.2011.01876.x",
abstract = "We propose a novel system for designing and manufacturing surfaces that produce desired caustic images when illuminated by a light source. Our system is based on a nonnegative image decomposition using a set of possibly overlapping anisotropic Gaussian kernels. We utilize this decomposition to construct an array of continuous surface patches, each of which focuses light onto one of the Gaussian kernels, either through refraction or reflection. We show how to derive the shape of each continuous patch and arrange them by performing a discrete assignment of patches to kernels in the desired caustic. Our decomposition provides for high fidelity reconstruction of natural images using a small collection of patches. We demonstrate our approach on a wide variety of caustic images by manufacturing physical surfaces with a small number of patches."
}

@inproceedings{chen11realtime,
author = "Chen, Jiawen and Baran, Ilya and Durand, Frédo and Jarosz, Wojciech",
title = "Real-Time Volumetric Shadows Using 1D Min-Max Mipmaps",
booktitle = "Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games",
year = "2011",
month = "feb",
abstract = "In this paper we present a new real-time algorithm for computing volumetric shadows in single-scattering media on the GPU. This computation requires evaluating the scattering integral over the intersections of camera rays with the shadow map, expressed as a 2D height field. We observe that by applying epipolar rectification to the shadow map, each camera ray only travels through a single row of the shadow map (an epipolar slice), which allows us to find the visible segments by considering only 1D height fields. At the core of our algorithm is the use of an acceleration structure (a 1D min-max mipmap) which allows us to quickly find the lit segments for all pixels in an epipolar slice in parallel. The simplicity of this data structure and its traversal allows for efficient implementation using only pixel shaders on the GPU."
}
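
The 1D min-max mipmap at the heart of this algorithm is easy to prototype on the CPU. The Python sketch below is illustrative only (all names are hypothetical): the paper's version runs in pixel shaders and handles rays whose height varies across the epipolar slice, whereas here the ray height is assumed constant. It builds the mipmap over one shadow-map row and prunes whole nodes that are entirely lit or entirely shadowed:

```python
def build_minmax_mipmap(heights):
    """Levels of (min, max) pairs over a 1D height field; level 0 is the raw data."""
    levels = [[(h, h) for h in heights]]
    while len(levels[-1]) > 1:
        prev, nxt = levels[-1], []
        for i in range(0, len(prev), 2):
            pair = prev[i:i + 2]
            nxt.append((min(p[0] for p in pair), max(p[1] for p in pair)))
        levels.append(nxt)
    return levels

def lit_segments(levels, ray_height):
    """Texel ranges [start, end) where ray_height clears the height field,
    found by pruning mipmap nodes that are fully lit or fully shadowed."""
    n, out = len(levels[0]), []
    def visit(level, idx):
        if idx >= len(levels[level]):
            return
        lo = idx << level
        hi = min(lo + (1 << level), n)
        nmin, nmax = levels[level][idx]
        if ray_height >= nmax:        # above every occluder: fully lit
            out.append([lo, hi])
        elif ray_height < nmin:       # below every occluder: fully shadowed
            return
        else:                         # mixed: descend to the two children
            visit(level - 1, 2 * idx)
            visit(level - 1, 2 * idx + 1)
    visit(len(levels) - 1, 0)
    merged = []                       # coalesce adjacent lit ranges
    for s, e in out:
        if merged and merged[-1][1] == s:
            merged[-1][1] = e
        else:
            merged.append([s, e])
    return [tuple(m) for m in merged]
```

Because whole spans are accepted or rejected at coarse levels, only the boundaries between lit and shadowed regions force descent to the leaves, which is what makes the per-slice parallel GPU traversal cheap.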

@article{jarosz11comprehensive,
author = "Jarosz, Wojciech and Nowrouzezahrai, Derek and Sadeghi, Iman and Jensen, Henrik Wann",
title = "A Comprehensive Theory of Volumetric Radiance Estimation Using Photon Points and Beams",
journal = "ACM Transactions on Graphics (Presented at SIGGRAPH)",
volume = "30",
number = "1",
month = "jan",
year = "2011",
pages = "5:1–5:19",
doi = "10.1145/1899404.1899409",
keywords = "photon beams, photon mapping, beam radiance estimate, density estimation, participating media",
abstract = {We present two contributions to the area of volumetric rendering. We develop a novel, comprehensive theory of volumetric radiance estimation that leads to several new insights and includes all previously published estimates as special cases. This theory allows for estimating in-scattered radiance at a point, or accumulated radiance along a camera ray, with the standard photon particle representation used in previous work. Furthermore, we generalize these operations to include a more compact, and more expressive intermediate representation of lighting in participating media, which we call "photon beams." The combination of these representations and their respective query operations results in a collection of nine distinct volumetric radiance estimates.

Our second contribution is a more efficient rendering method for participating media based on photon beams. Even when shooting and storing fewer photons and using less computation time, our method significantly reduces both bias (blur) and variance in volumetric radiance estimation. This enables us to render sharp lighting details (e.g. volume caustics) using just tens of thousands of photon beams, instead of the millions to billions of photon points required with previous methods.}
}

@article{hachisuka10progressive,
author = "Hachisuka, Toshiya and Jarosz, Wojciech and Jensen, Henrik Wann",
title = "A Progressive Error Estimation Framework for Photon Density Estimation",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)",
volume = "29",
number = "6",
month = "dec",
year = "2010",
issn = "0730-0301",
pages = "144:1–144:12",
articleno = "144",
doi = "10.1145/1882261.1866170",
abstract = "We present an error estimation framework for progressive photon mapping. Although estimating rendering error has been well investigated for unbiased rendering algorithms, there is currently no error estimation framework for biased rendering algorithms. We characterize the error by the sum of a bias estimate and a stochastic noise bound based on stochastic error bounds in biased methods. As a part of our error computation, we extend progressive photon mapping to operate with smooth kernels. This enables the calculation of illumination gradients with arbitrary accuracy, which we use to progressively compute the local bias in the radiance estimate. We also show how variance can be computed in progressive photon mapping, which is used to estimate the error due to noise. As an example application, we show how our stochastic error bound can be used to compute images with a given error threshold. For this example application, our framework only requires the error threshold and a confidence level to automatically terminate rendering. Our results demonstrate how our error estimation framework works well in realistic synthetic scenes."
}

@article{jarosz09importance,
author = "Jarosz, Wojciech and Carr, Nathan A. and Jensen, Henrik Wann",
title = "Importance Sampling Spherical Harmonics",
journal = "Computer Graphics Forum (Proceedings of Eurographics)",
volume = "28",
number = "2",
year = "2009",
month = "apr",
pages = "577–586",
address = "Munich, Germany",
doi = "10.1111/j.1467-8659.2009.01398.x",
abstract = "In this paper we present the first practical method for importance sampling functions represented as spherical harmonics (SH). Given a spherical probability density function (PDF) represented as a vector of SH coefficients, our method warps an input point set to match the target PDF using hierarchical sample warping. Our approach is efficient and produces high quality sample distributions. As a by-product of the sampling procedure we produce a multi-resolution representation of the density function as either a spherical mip-map or Haar wavelet. By exploiting this implicit conversion we can extend the method to distribute samples according to the product of an SH function with a spherical mip-map or Haar wavelet. This generalization has immediate applicability in rendering, e.g., importance sampling the product of a BRDF and an environment map where the lighting is stored as a single high-resolution wavelet and the BRDF is represented in spherical harmonics. Since spherical harmonics can be efficiently rotated, this product can be computed on-the-fly even if the BRDF is stored in local-space. Our sampling approach generates over 6 million samples per second while significantly reducing precomputation time and storage requirements compared to previous techniques."
}
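
The hierarchical sample warping step generalizes beyond spherical harmonics. The sketch below (Python, hypothetical names, not the paper's implementation) shows the core descent for a square, power-of-two grid of nonnegative densities with positive total mass: at each mip level the uniform sample is split according to the probability masses of the four children, then rescaled back into the unit square:

```python
def build_pyramid(density):
    """Sum-pyramid over a square, power-of-two 2D density grid (coarse to fine)."""
    levels = [density]
    while len(levels[-1]) > 1:
        p, m = levels[-1], len(levels[-1]) // 2
        levels.append([[p[2*i][2*j] + p[2*i][2*j+1] +
                        p[2*i+1][2*j] + p[2*i+1][2*j+1]
                        for j in range(m)] for i in range(m)])
    return levels[::-1]

def warp(pyramid, u, v):
    """Descend the pyramid, splitting (u, v) in [0,1)^2 at each level by the
    masses of the 2x2 children; returns a cell (row, col) chosen with
    probability proportional to its density."""
    i = j = 0
    for level in range(1, len(pyramid)):
        p = pyramid[level]
        i, j = 2 * i, 2 * j
        a, b = p[i][j], p[i][j + 1]          # top row of the 2x2 block
        c, d = p[i + 1][j], p[i + 1][j + 1]  # bottom row
        top = (a + b) / (a + b + c + d)
        if v < top:                          # pick the top row, rescale v
            v = v / top
            left = a / (a + b)
        else:                                # pick the bottom row
            v = (v - top) / (1.0 - top)
            i += 1
            left = c / (c + d)
        if u < left:                         # pick the left column, rescale u
            u = u / left
        else:
            u = (u - left) / (1.0 - left)
            j += 1
    return i, j
```

Feeding a low-discrepancy point set through `warp` preserves its stratification, which is why hierarchical warping produces higher-quality distributions than independent inversion-method sampling.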

@article{hachisuka08multidimensional,
author = "Hachisuka, Toshiya and Jarosz, Wojciech and Weistroffer, Richard Peter and Dale, Kevin and Humphreys, Greg and Zwicker, Matthias and Jensen, Henrik Wann",
title = "Multidimensional Adaptive Sampling and Reconstruction for Ray Tracing",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "27",
number = "3",
month = "aug",
year = "2008",
pages = "33:1–33:10",
doi = "10.1145/1360612.1360632",
abstract = "We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for existing image based adaptive sampling techniques as they operate on pixels, which are possibly noisy results of a Monte Carlo ray tracing process. Our sampling technique operates on samples in the multidimensional space given by the rendering equation and as a consequence the value of each sample is noise-free. Our algorithm consists of two passes. In the first pass we adaptively generate samples in the multidimensional space, focusing on regions where the local contrast between samples is high. In the second pass we reconstruct the image by integrating the multidimensional function along all but the image dimensions. We perform a high quality anisotropic reconstruction by determining the extent of each sample in the multidimensional space using a structure tensor. We demonstrate our method on scenes with a 3 to 5 dimensional space, including soft shadows, motion blur, and depth of field. The results show that our method uses fewer samples than Mitchell's adaptive sampling technique while producing images with less noise."
}
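
The adapt-then-reconstruct split can be illustrated in one dimension; the paper operates in 3 to 5 dimensions with structure tensors and anisotropic reconstruction, so the Python sketch below (hypothetical names) is only a cartoon, using midpoint deviation from a linear fit as the contrast measure and trapezoidal reconstruction:

```python
def adaptive_samples(f, a, b, eps=0.01, depth=8):
    """Pass 1: adaptively sample f on [a, b], refining intervals where the
    midpoint deviates from a linear fit (a proxy for high local contrast)."""
    pts = {a: f(a), b: f(b)}
    def refine(lo, flo, hi, fhi, d):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        pts[mid] = fmid
        if d > 0 and abs(fmid - 0.5 * (flo + fhi)) > eps:
            refine(lo, flo, mid, fmid, d - 1)
            refine(mid, fmid, hi, fhi, d - 1)
    refine(a, pts[a], b, pts[b], depth)
    return sorted(pts.items())

def integrate(samples):
    """Pass 2: reconstruct and integrate over the adaptive samples (trapezoids)."""
    return sum(0.5 * (y0 + y1) * (x1 - x0)
               for (x0, y0), (x1, y1) in zip(samples, samples[1:]))
```

The key property mirrored from the paper is that `f` is sampled in the full integration domain (where each sample is noise-free), and the noisy pixel value only appears after integrating out the non-image dimensions.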

@article{paris08hair,
author = "Paris, Sylvain and Chang, Will and Kozhushnyan, Oleg I. and Jarosz, Wojciech and Matusik, Wojciech and Zwicker, Matthias and Durand, Frédo",
title = "Hair Photobooth: Geometric and Photometric Acquisition of Real Hairstyles",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "27",
number = "3",
month = "aug",
year = "2008",
pages = "30:1–30:9",
doi = "10.1145/1360612.1360629",
abstract = "We accurately capture the shape and appearance of a person's hairstyle. We use triangulation and a sweep with planes of light for the geometry. Multiple projectors and cameras address the challenges raised by the reflectance and intricate geometry of hair. We introduce the use of structure tensors to infer the hidden geometry between the hair surface and the scalp. Our triangulation approach affords substantial accuracy improvement and we are able to measure elaborate hair geometry including complex curls and concavities. To reproduce the hair appearance, we capture a six-dimensional reflectance field. We introduce a new reflectance interpolation technique that leverages an analytical reflectance model to alleviate cross-fading artifacts caused by linear methods. Our results closely match the real hairstyles and can be used for animation."
}

@article{jarosz08beam,
author = "Jarosz, Wojciech and Zwicker, Matthias and Jensen, Henrik Wann",
title = "The Beam Radiance Estimate for Volumetric Photon Mapping",
journal = "Computer Graphics Forum (Proceedings of Eurographics)",
volume = "27",
number = "2",
year = "2008",
month = "apr",
pages = "557–566",
doi = "10.1111/j.1467-8659.2008.01153.x",
abstract = "We present a new method for efficiently simulating the scattering of light within participating media. Using a theoretical reformulation of volumetric photon mapping, we develop a novel photon gathering technique for participating media. Traditional volumetric photon mapping samples the in-scattered radiance at numerous points along the length of a single ray by performing costly range queries within the photon map. Our technique replaces these multiple point-queries with a single beam-query, which explicitly gathers all photons along the length of an entire ray. These photons are used to estimate the accumulated in-scattered radiance arriving from a particular direction and need to be gathered only once per ray. Our method handles both fixed and adaptive kernels, is faster than regular volumetric photon mapping, and produces images with less noise."
}
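
The beam query itself is conceptually simple. The brute-force Python sketch below (hypothetical names) gathers every photon within a fixed radius of a camera ray in one pass; the paper accelerates this with a spatial hierarchy and supports adaptive per-photon kernels, and each returned photon would then be weighted by a kernel in its perpendicular distance:

```python
def beam_query(photons, origin, direction, radius, t_max):
    """All (photon, t) pairs whose perpendicular distance to the ray
    origin + t * direction is at most `radius`, with t in [0, t_max].
    `direction` is assumed normalized; photons are 3D points."""
    hits = []
    for p in photons:
        d = [p[k] - origin[k] for k in range(3)]
        t = sum(d[k] * direction[k] for k in range(3))  # projection onto the ray
        if 0.0 <= t <= t_max:
            perp2 = sum(dk * dk for dk in d) - t * t    # squared perpendicular distance
            if perp2 <= radius * radius:
                hits.append((p, t))
    return hits
```

A single such query replaces the many point-queries of traditional volumetric photon mapping, since every photon contributing anywhere along the ray is found at once.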

@article{jarosz08radiance,
author = "Jarosz, Wojciech and Donner, Craig and Zwicker, Matthias and Jensen, Henrik Wann",
title = "Radiance Caching for Participating Media",
journal = "ACM Transactions on Graphics (Presented at SIGGRAPH)",
volume = "27",
number = "1",
month = "mar",
year = "2008",
pages = "7:1–7:11",
issn = "0730-0301",
doi = "10.1145/1330511.1330518",
abstract = "In this article we present a novel radiance caching method for efficiently rendering participating media using Monte Carlo ray tracing. Our method handles all types of light scattering including anisotropic scattering, and it works in both homogeneous and heterogeneous media. A key contribution in the article is a technique for computing gradients of radiance evaluated in participating media. These gradients take the full path of the scattered light into account including the changing properties of the medium in the case of heterogeneous media. The gradients can be computed simultaneously with the inscattered radiance with negligible overhead. We compute gradients for single scattering from lights and surfaces and for multiple scattering, and we use a spherical harmonics representation in media with anisotropic scattering. Our second contribution is a new radiance caching scheme for participating media. This caching scheme uses the information in the radiance gradients to sparsely sample as well as interpolate radiance within the medium utilizing a novel, perceptually based error metric. Our method provides several orders of magnitude speedup compared to path tracing and produces higher quality results than volumetric photon mapping. Furthermore, it is view-driven and well suited for large scenes where methods such as photon mapping become costly."
}

@article{clarberg05wavelet,
author = "Clarberg, Petrik and Jarosz, Wojciech and Akenine-Möller, Tomas and Jensen, Henrik Wann",
title = "Wavelet Importance Sampling: Efficiently Evaluating Products of Complex Functions",
journal = "ACM Transactions on Graphics (Proceedings of SIGGRAPH)",
volume = "24",
number = "3",
month = "aug",
year = "2005",
pages = "1166–1175",
doi = "10.1145/1073204.1073328",
abstract = "We present a new technique for importance sampling products of complex functions using wavelets. First, we generalize previous work on wavelet products to higher dimensional spaces and show how this product can be sampled on-the-fly without the need of evaluating the full product. This makes it possible to sample products of high-dimensional functions even if the product of the two functions in itself is too memory consuming. Then, we present a novel hierarchical sample warping algorithm that generates high-quality point distributions, which match the wavelet representation exactly. One application of the new sampling technique is rendering of objects with measured BRDFs illuminated by complex distant lighting — our results demonstrate how the new sampling technique is more than an order of magnitude more efficient than the best previous techniques."
}

@inproceedings{hart02using,
author = "Hart, John C. and Bachta, Ed and Jarosz, Wojciech and Fleury, Terry",
title = "Using Particles to Sample and Control More Complex Implicit Surfaces",
booktitle = "SMI '02: Proceedings of the Shape Modeling International 2002 (SMI'02)",
year = "2002",
month = "aug",
pages = "129",
publisher = "IEEE Computer Society",
address = "Washington, DC, USA",
doi = "10.1109/SMI.2002.1003537",
abstract = "In 1994, Witkin and Heckbert developed a method for interactively modeling implicit surfaces by simultaneously constraining a particle system to lie on an implicit surface and vice-versa. This interface was demonstrated to be effective and easy to use on example models containing a few blobby spheres and cylinders. This system becomes much more difficult to implement and operate on more complex implicit models. The derivatives needed for the particle system behavior can become laborious and error-prone when implemented for more complex models. We have developed, implemented and tested techniques for automatic and numerical differentiation of the implicit surface function. Complex models also require a large number of parameters, and the management and control of these parameters is often not intuitive. We have developed adapters, which are special shape-transformation operators that automatically adjust the underlying parameters to yield the same effect as the transformation. These new techniques allow constrained particle systems to sample and control more complex models than previously possible."
}
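
The numerical-differentiation idea is easy to illustrate: with a central-difference gradient, a particle can be Newton-projected onto any implicit surface f(x) = 0 without hand-derived derivatives. A minimal Python sketch (hypothetical names, not the paper's system):

```python
def grad(f, x, h=1e-5):
    """Central-difference gradient, so models need no hand-derived derivatives."""
    g = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def project(f, x, iters=20):
    """Newton-project the point x onto the level set f(x) = 0."""
    for _ in range(iters):
        fx, g = f(x), grad(f, x)
        g2 = sum(c * c for c in g)
        if g2 == 0.0:          # degenerate gradient: give up
            break
        x = [x[k] - fx * g[k] / g2 for k in range(len(x))]
    return x

# Example implicit surface: the unit sphere f(q) = |q|^2 - 1.
sphere = lambda q: sum(c * c for c in q) - 1.0
```

Swapping `sphere` for any other scalar field lets the same projection step constrain particles to arbitrarily complex models, which is the point of automating the differentiation.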

In contrast to previous methods using similar techniques, our system does not use precomputed lighting, and is capable of achieving interactive feedback to object and light manipulations. Such a feature is invaluable to lighting designers dealing with complex globally-illuminated scenes. The progressive refinement algorithm allows for rapid preview during interaction, while producing higher quality images over time. The images produced also maintain a high correlation to the appearance of full renderings using a conventional Monte Carlo ray tracer."
}

Hobbies & Miscellaneous

When I can find the spare time I enjoy hiking and photography. In a previous life I enjoyed creating 3D models and animations (some of which you can find in the gallery section), primarily using Lightwave 3D.

You can also find some old material below, which I keep mostly for archival purposes:

CSE 168 final project: The final project writeup for the global illumination renderer I created for CSE 168 at UCSD.

Hand modeling tutorial: A hand modeling tutorial I wrote a few years back. Written for Lightwave, but it should translate easily to other packages.