Miloš Hašan

milos dot hasan at gmail dot com

I am currently at Autodesk in San Francisco. Before that, I was a postdoc at UC Berkeley, advised by Ravi Ramamoorthi, and before that I spent one year as a postdoc at Massachusetts General Hospital and Harvard University (working with John Wolfgang and Hanspeter Pfister). I received my Ph.D. in Computer Science from Cornell in August 2009, under the supervision of Kavita Bala. My main focus is physically-based rendering.

Complex specular surfaces under sharp point lighting show a fascinating
glinty appearance, but rendering this appearance is an unsolved problem.
Using Monte Carlo pixel sampling for this purpose is impractical:
the energy is concentrated in tiny highlights that take up a minuscule
fraction of the pixel. We instead compute an accurate solution
using a completely different deterministic approach. Our method
considers the true distribution of normals on a surface patch seen
through a single pixel, which can be highly complex. We show how
to evaluate this distribution efficiently, assuming a Gaussian pixel
footprint and Gaussian intrinsic roughness. We also take advantage
of hierarchical pruning of position-normal space to rapidly find texels
that might contribute to a given normal distribution evaluation.
Our results show complex, temporally varying glints from materials
such as bumpy plastics, brushed and scratched metals, metallic
paint and ocean waves.
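
As a rough illustration of the core idea (not the paper's actual data structures), the effective normal distribution seen through a pixel can be written as a footprint-weighted sum of Gaussians around the texels' normals. A brute-force NumPy sketch, assuming 2D projected normals and omitting the hierarchical pruning that makes the real method fast:

```python
import numpy as np

def effective_ndf(positions, normals, center, sigma_p, sigma_r, query):
    """Brute-force evaluation of a pixel's effective normal distribution:
    each texel contributes a Gaussian pixel-footprint weight around `center`
    times a Gaussian intrinsic-roughness lobe around its normal."""
    d2 = np.sum((positions - center) ** 2, axis=1)
    w_pos = np.exp(-d2 / (2.0 * sigma_p ** 2))          # footprint weight
    n2 = np.sum((normals - query) ** 2, axis=1)
    w_nrm = np.exp(-n2 / (2.0 * sigma_r ** 2))          # roughness lobe
    return (w_pos * w_nrm).sum() / max(w_pos.sum(), 1e-12)
```

In this normalization, a patch whose normals all match the query direction evaluates to 1; the hierarchy in the paper exists to skip the (vast majority of) texels whose position or normal Gaussians are negligible.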

This paper investigates the rendering of glittery surfaces, those that exhibit shifting
random patterns of glints as the surface or viewer moves. It applies both to
dramatically glittery surfaces that contain mirror-like flakes and to rough
surfaces that exhibit more subtle small-scale glitter, without which most glossy surfaces
appear too smooth in close-up. These phenomena can in principle be simulated by
high-resolution normal maps, but maps with tiny features create severe aliasing
problems under narrow-angle illumination. In this paper we present a stochastic model
for the effects of random subpixel structures that generates glitter and spatial
noise that behave correctly under different illumination conditions and viewing
distances, while also being temporally coherent so that they look right in motion.
The model is based on microfacet theory, but it replaces the usual continuous microfacet
distribution with a discrete distribution of scattering particles on the surface. A novel
stochastic hierarchy allows efficient evaluation in the presence of large numbers of
random particles, without ever having to consider the particles individually. This leads
to a multiscale procedural BRDF that is readily implemented in standard rendering systems,
and which converges back to the smooth case in the limit.
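
The stochastic hierarchy can be illustrated in one dimension (the paper works in a four-dimensional position-direction space; everything below is a hypothetical sketch): to count how many of N uniformly scattered particles fall in a query interval, the total count is split recursively by binomial draws from an RNG keyed on each cell, so no particle is ever instantiated and repeated queries always agree.

```python
import numpy as np

def count_particles(total, lo, hi, q_lo, q_hi, depth=0, seed=1):
    """Count how many of `total` particles, uniform in [lo, hi), fall in
    [q_lo, q_hi) -- without ever generating them. The count is split
    recursively by deterministic binomial draws keyed on the cell, so
    repeated queries are consistent (temporal coherence)."""
    if total == 0 or q_hi <= lo or q_lo >= hi:
        return 0                                   # no overlap with query
    if q_lo <= lo and hi <= q_hi:
        return total                               # cell fully inside query
    if depth > 40:                                 # cell far below query scale
        return total                               # (approximation cutoff)
    mid = 0.5 * (lo + hi)
    # RNG keyed on the cell, not on global state -> reproducible split
    rng = np.random.default_rng(hash((seed, depth, round(lo, 12))) & 0xFFFFFFFF)
    n_left = rng.binomial(total, 0.5)
    return (count_particles(n_left, lo, mid, q_lo, q_hi, depth + 1, seed)
            + count_particles(total - n_left, mid, hi, q_lo, q_hi, depth + 1, seed))
```

Because the splits are deterministic, counts over disjoint queries sum exactly to the total, which is what makes the resulting BRDF consistent across scales and frames.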

Graphics Processing Units (GPUs) recently became general enough to enable implementation of a
variety of light transport algorithms. However, the efficiency of these GPU implementations has received relatively little attention in the research literature and no systematic study on the topic exists to date. The goal of our work is to fill this gap. Our main contribution is a comprehensive and in-depth investigation of the efficiency of the GPU implementation of a number of classic as well as more recent progressive light transport simulation algorithms. We present several improvements over the state-of-the-art. In particular, our Light Vertex Cache, a new approach to mapping connections of sub-path vertices in Bidirectional Path Tracing on the GPU, outperforms the existing implementations by 30-60%. We also describe the first GPU implementation of the recently introduced Vertex Connection and Merging algorithm [Georgiev et al. 2012], showing that even relatively complex light transport algorithms can be efficiently mapped to the GPU. With the implementation of many of the state-of-the-art algorithms within a single system at our disposal, we present a unique direct comparison and analysis of their relative performance.
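
The Light Vertex Cache idea can be sketched in a few lines of illustrative CPU-side Python (names and the contribution callback are hypothetical): all light sub-path vertices are pooled into one flat buffer, and each eye-path vertex connects to a small number of uniformly drawn cache entries instead of to its own paired light path, which keeps per-thread work uniform on the GPU.

```python
import random

def connect_via_lvc(eye_vertex, cache, n_connections, contrib, rng):
    """Connect one eye-path vertex to a few light vertices drawn uniformly
    from the global Light Vertex Cache; the len(cache)/n_connections factor
    keeps the estimate unbiased w.r.t. connecting to every cached vertex."""
    picks = [cache[rng.randrange(len(cache))] for _ in range(n_connections)]
    return sum(contrib(eye_vertex, lv) for lv in picks) * len(cache) / n_connections
```

The uniform draws decouple the number of connections per eye vertex from the (highly variable) length of any individual light sub-path, which is the source of the reported speedup.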

The highest fidelity images to date of complex materials like cloth use extremely high-resolution volumetric models. However, rendering such complex volumetric media is expensive, with brute-force path tracing often the only viable solution. Fortunately, common volumetric materials (fabrics, finished wood, synthesized solid textures) are structured, with repeated patterns approximated by tiling a small number of exemplar blocks. In this paper, we introduce a precomputation-based rendering approach for such volumetric media with repeated structures based on a modular transfer formulation. We model each exemplar block as a voxel grid and precompute voxel-to-voxel, patch-to-patch, and patch-to-voxel flux transfer matrices. At render time, when blocks are tiled to produce a high-resolution volume, we accurately compute low-order scattering, with modular flux transfer used to approximate higher-order scattering. We achieve speedups of up to 12X over path tracing on extremely complex volumes, with minimal loss of quality. In addition, we demonstrate that our approach outperforms photon mapping on these materials.
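
A much-simplified sketch of the modular idea: suppose a single precomputed matrix M maps a block's incoming interface flux to its outgoing flux, and a hypothetical `exchange` operator routes outgoing flux to neighboring blocks. Iterating this exchange approximates the higher-order scattering across the tiled volume (the actual method uses separate voxel-to-voxel, patch-to-patch, and patch-to-voxel matrices):

```python
import numpy as np

def tiled_multiple_scattering(M, F0, exchange, n_iters=50):
    """Iterate flux exchange between tiled blocks: each block scatters its
    incoming interface flux through the precomputed transfer matrix M, and
    the result is handed to neighbors as new incoming flux."""
    F = F0.copy()
    for _ in range(n_iters):
        F = F0 + exchange(M @ F)   # direct flux plus scattered neighbor flux
    return F
```

Since each application of M adds one scattering order, the iteration converges geometrically whenever the per-bounce transfer loses energy, mirroring why the modular approximation is cheap for the high orders.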

Materials such as clothing or carpets, or complex assemblies of small leaves, flower petals or mosses, do not fit well into either BRDF or BSSRDF models. Their appearance is a complex combination of reflection, transmission, scattering, shadowing and inter-reflection. This complexity can be handled by simulating the full volumetric light transport within these materials by Monte Carlo algorithms, but there is no easy way to construct the necessary distributions of local material properties that would lead to the desired global appearance. In this paper, we consider one way to alleviate the problem: an editing algorithm that enables a material designer to set the local (single-scattering) albedo coefficients interactively, and see an immediate update of the emergent appearance in the image. This is a difficult problem, since the function from materials to pixel values is neither linear nor low-order polynomial. We combine the following two ideas to achieve high-dimensional heterogeneous edits: precomputing the homogeneous mapping of albedo to intensity, and a large Jacobian matrix, which encodes the derivatives of each image pixel with respect to each albedo coefficient. Combining these two datasets leads to an interactive editing algorithm with a very good visual match to a fully path-traced ground truth.
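
The Jacobian half of the method amounts to a first-order update of the image under an albedo edit. A sketch (the paper additionally composes this with the precomputed nonlinear homogeneous albedo-to-intensity mapping, which is omitted here):

```python
import numpy as np

def preview_edit(base_image, jacobian, albedo0, albedo_new):
    """First-order preview: move each pixel along its precomputed
    derivative with respect to every albedo coefficient.
    base_image: (P,) rendered at albedo0; jacobian: (P, K)."""
    return base_image + jacobian @ (albedo_new - albedo0)
```

Because the map from albedo to pixel values is neither linear nor low-order polynomial, the linear term alone drifts for large edits; the precomputed homogeneous mapping corrects the per-pixel response curve so the preview matches the path-traced ground truth.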

Accurately rendering glossy materials in design applications, where
previewing and interactivity are important, remains a major challenge. While many fast global
illumination solutions have been proposed, all of them work under limiting assumptions on the
materials and lighting in the scene. In the presence of
many glossy (directionally scattering) materials,
fast solutions either fail or degenerate
to inefficient, brute-force simulations of the underlying light transport. In particular,
many-light algorithms are able to provide fast
approximations by clamping elements of the light transport matrix,
but they eliminate the part of the transport that contributes to accurate glossy appearance. In
this paper we introduce a solution that
separately solves for the global (low-rank, dense) and local (high-rank, sparse) illumination
components. For the low-rank component we introduce visibility clustering and approximation, while
for the high-rank component we introduce a local light technique
to correct for the missing illumination. Compared to competing
techniques we achieve superior gloss rendering in minutes, making
our technique suitable for applications such as industrial design and
architecture, where material appearance is critical.

We investigate a complete pipeline for measuring, modeling, and fabricating
objects with specified subsurface scattering behaviors.
The process starts with measuring the scattering properties of
a given set of base materials, determining their radial
reflection and transmission profiles.
We describe a mathematical model that predicts the profiles of
different stackings of base materials, at arbitrary thicknesses.
In an inverse process, we can then specify a desired reflection
profile and compute a layered composite material that best
approximates it. Our algorithm efficiently searches the space of possible
combinations of base materials, pruning unsatisfactory states imposed by
physical constraints.
We validate our process by producing both homogeneous and heterogeneous composites fabricated
using a multi-material 3D printer. We demonstrate reproductions that have
scattering properties approximating complex materials.
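
The inverse search can be sketched as exhaustive enumeration over layer stackings with constraint-based pruning. Everything below is illustrative: the placeholder `np.mean` stands in for the actual model of how reflection/transmission profiles combine under stacking, and the thickness budget stands in for the physical constraints.

```python
import itertools
import numpy as np

def best_stacking(base_profiles, target, max_layers, max_thickness, thickness):
    """Search all orderings of base-material layers up to `max_layers`,
    pruning stackings that exceed the thickness budget, and return the
    stacking whose combined profile best matches `target`."""
    best, best_err = None, np.inf
    for n in range(1, max_layers + 1):
        for combo in itertools.product(range(len(base_profiles)), repeat=n):
            if sum(thickness[i] for i in combo) > max_thickness:
                continue      # pruned: violates a fabrication constraint
            # placeholder combination model (the real one predicts profiles
            # of stacked layers from measured reflection/transmission data)
            profile = np.mean([base_profiles[i] for i in combo], axis=0)
            err = np.linalg.norm(profile - target)
            if err < best_err:
                best, best_err = combo, err
    return best, best_err
```

Pruning matters because the number of stackings grows exponentially with layer count; infeasible branches are discarded before their profiles are ever evaluated.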

In this paper, we aim to lift the accuracy limitations of many-light algorithms by introducing a new
light type, the virtual spherical light (VSL). The illumination contribution of a VSL is computed over
a non-zero solid angle, thus eliminating the illumination spikes for which
the virtual point lights used in traditional many-light methods are notorious. The VSL
enables application of many-light
approaches in scenes with glossy materials and complex illumination that could previously be rendered
only by much slower algorithms. By combining VSLs with the matrix row-column sampling
algorithm, we achieve high-quality images in one to four minutes,
even in scenes where path tracing or photon mapping takes hours to
converge.
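
The key property of the VSL is easy to see numerically: a VPL's 1/d² geometry term is unbounded as the shading point approaches the light, while spreading the same energy over the nonzero solid angle subtended by a sphere keeps the contribution finite. A toy comparison (the normalization here is illustrative, not the paper's exact formula):

```python
import math

def vpl_contribution(intensity, d, cos_s, cos_l):
    """Virtual point light: the 1/d^2 geometry term spikes as d -> 0."""
    return intensity * cos_s * cos_l / (d * d)

def vsl_contribution(intensity, d, cos_s, radius):
    """Virtual spherical light (toy version): spread the energy over the
    solid angle subtended by a sphere of `radius`, which saturates at a
    hemisphere as d -> 0, so the contribution stays bounded."""
    sin2 = min(1.0, (radius / max(d, radius)) ** 2)
    solid_angle = 2.0 * math.pi * (1.0 - math.sqrt(1.0 - sin2))
    return intensity * cos_s * solid_angle / (2.0 * math.pi * radius * radius)
```

Clamping the VPL spike instead, as traditional methods do, removes exactly the energy that glossy highlights need; integrating over the solid angle redistributes it rather than discarding it.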

This paper describes a technique to automatically adapt programmable shaders for use in
physically-based rendering algorithms. Programmable shading provides great flexibility and power
for creating rich local material detail, but only allows the material
to be queried in one limited way: point sampling. Physically-based
rendering algorithms simulate the complex global flow of light
through an environment but rely on higher level information about
the material properties, such as importance sampling and bounding,
to intelligently solve high dimensional rendering integrals.
We propose using a compiler to automatically generate interval
versions of programmable shaders that can be used to provide the
higher level query functions needed by physically-based rendering
without the need for user intervention or expertise. We demonstrate
the use of programmable shaders in two such algorithms, multidimensional lightcuts and photon mapping,
for a wide range of scenes
including complex geometry, materials and lighting.
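
The interval query the compiler generates can be mimicked by hand with a tiny interval-arithmetic type: any shader built from overloaded operators then returns conservative bounds when evaluated over a region instead of a point. A minimal sketch supporting only addition and multiplication:

```python
class Interval:
    """Minimal interval type: a shader written with overloaded operators
    returns conservative bounds when run on Intervals instead of floats."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo
    def __add__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    __rmul__ = __mul__

def shader(x, y):                 # an ordinary point-sampled shader
    return 0.5 * x * x + x * y

# ... evaluated over a whole region instead of a point; the bounds are
# conservative (they may overestimate the true range of the shader)
bounds = shader(Interval(-1.0, 1.0), Interval(0.0, 2.0))
```

Such bounds are exactly the "higher level" query a bounding-based algorithm like lightcuts needs, and the point here is that the shader author never writes the interval version by hand.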

Rendering animations of scenes with deformable objects,
camera motion, and complex illumination, including
indirect lighting and arbitrary shading, is a long-standing challenge.
Prior work has shown that complex lighting
can be accurately approximated by a large collection of point lights.
In this formulation, rendering of animation
sequences becomes the problem of efficiently shading many surface samples
from many lights across several
frames. This paper presents a tensor formulation of the animated many-light
problem, where each element of the
tensor expresses the contribution of one light to one pixel in one frame.
We sparsely sample rows and columns
of the tensor, and introduce a clustering algorithm to select a small number
of representative lights to efficiently
approximate the animation. Our algorithm achieves efficiency by reusing
representatives across frames, while
minimizing temporal flicker. We demonstrate our algorithm in a variety of
scenes that include deformable objects,
complex illumination and arbitrary shading and show that a surprisingly
small number of representative lights is
sufficient for high quality rendering. We believe out algorithm will
find practical use in applications that require
fast previews of complex animations.

Rendering complex scenes with indirect illumination, high dynamic
range environment lighting, and many direct light sources remains
a challenging problem. Prior work has shown that all these effects
can be approximated by many point lights. This paper presents
a scalable solution to the many-light problem suitable for a GPU
implementation. We view the problem as a large matrix of sample-light interactions;
the ideal final image is the sum of the matrix
columns. We propose an algorithm for approximating this sum by
sampling entire rows and columns of the matrix on the GPU using
shadow mapping. The key observation is that the inherent structure of the transfer matrix can be
revealed by sampling just a small
number of rows and columns. Our prototype implementation can
compute the light transfer within a few seconds for scenes with
indirect and environment illumination,
area lights, complex geometry
and arbitrary shaders. We believe this approach can be very useful
for rapid previewing in applications like cinematic and architectural
lighting design.
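
The matrix view can be sketched on the CPU: sample a few rows to obtain "reduced columns", group similar reduced columns, and render one norm-weighted representative column per group, scaled to stand in for the whole group. The grouping below is deliberately crude (the paper uses a cost-driven partitioning, and the row/column renders map to shadow-mapping passes on the GPU):

```python
import numpy as np

def approx_column_sum(A, n_rows, n_clusters, rng):
    """Estimate the sum of all columns of A (the final image) by sampling
    a few rows, grouping the resulting 'reduced columns', and rendering
    one norm-weighted representative column per group, scaled up."""
    m, n = A.shape
    rows = rng.choice(m, size=n_rows, replace=False)
    R = A[rows, :]                                  # reduced columns
    norms = np.linalg.norm(R, axis=0)
    # crude grouping: sort by dominant row, then norm (illustrative only)
    order = np.argsort(np.argmax(R, axis=0) * 1e6 + norms)
    image = np.zeros(m)
    for c in np.array_split(order, n_clusters):
        c = c[norms[c] > 0]
        if len(c) == 0:
            continue
        p = norms[c] / norms[c].sum()
        rep = rng.choice(c, p=p)                    # representative column
        image += A[:, rep] * (norms[c].sum() / norms[rep])
    return image
```

Picking each representative with probability proportional to its reduced norm and scaling by the group's total norm makes the per-group estimate unbiased, so accuracy hinges only on how well the grouping captures the matrix's low-rank structure.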

This paper presents an interactive GPU-based system for cinematic
relighting with multiple-bounce indirect illumination from a fixed
view-point. We use a deep frame-buffer containing a set of view
samples, whose indirect illumination is recomputed from the direct
illumination on a large set of gather samples, distributed around the
scene. This direct-to-indirect transfer is a linear transform which is
particularly large, given the size of the view and gather sets. This
makes it hard to precompute, store and multiply with. We address
this problem by representing the transform as a set of sparse matrices
encoded in wavelet space. A hierarchical construction is used
to impose a wavelet basis on the unstructured gather cloud, and an
image-based approach is used to map the sparse matrix computations to the GPU.
We precompute the transfer matrices using a hierarchical algorithm and a variation
of photon mapping in less than
three hours on one processor. We achieve high-quality indirect illumination at
10-20 frames per second for complex scenes with over
2 million polygons, with diffuse and glossy materials, and arbitrary
direct lighting models (expressed using shaders). We compute per-pixel indirect
illumination without the need for irradiance caching
or other subsampling techniques.
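
The sparse-matrix idea can be sketched with a 1D Haar transform: wavelet-transform each row of the transfer matrix, keep only the largest coefficients, and multiply in wavelet space (illustrative only; the paper imposes the basis hierarchically on an unstructured gather cloud rather than on a 1D signal):

```python
import numpy as np

def haar(v):
    """Orthonormal Haar transform of a power-of-two-length vector."""
    v = np.asarray(v, dtype=float).copy()
    n = len(v)
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)   # averages
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)   # details
        v[:n // 2], v[n // 2:n] = a, d
        n //= 2
    return v

def compress_transfer(T, keep):
    """Haar-transform each row of the transfer matrix, then keep only the
    `keep` largest-magnitude coefficients per row (sparse approximation)."""
    W = np.apply_along_axis(haar, 1, T)
    for row in W:
        row[np.argsort(np.abs(row))[:-keep]] = 0.0
    return W

def apply_compressed(W, lighting):
    # multiply in wavelet space: transform the input, then (sparse) dot
    return W @ haar(lighting)
```

Because the Haar basis is orthonormal, keeping all coefficients reproduces the original product exactly; dropping the small ones trades a controlled error for the sparsity that makes the transform storable and fast to multiply on the GPU.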

Ph.D. Thesis

Global illumination is the problem of rendering images by simulating the light transport
in a scene, also considering the inter-reflection of light between surfaces. One general
approach to global illumination that gained popularity during the last decade is the
many-light formulation, whose idea is to
approximate global illumination by many automatically generated virtual point lights. In
this thesis, we address two fundamental issues that arise with the many-light formulation:
scalability and generality. We present a new view of the many-light approach, by treating
it as a large matrix of light-surface contributions. Our insight is that there is usually
a significant amount of structure and redundancy in the matrix; this suggests that only a
tiny subset of the elements might be needed for accurate reconstruction. First, we present
a scalable rendering algorithm that exploits this insight by sampling a small subset of matrix
rows and columns to reconstruct the image. This algorithm is very flexible in terms of the material
and light types it can handle, and achieves high-quality rendering of complex scenes in several seconds
on consumer-level graphics hardware. Furthermore, we extend this approach to render whole
animations, by considering a 3D tensor of light-surface contributions over time. This allows
us to further decrease the necessary number of samples by exploiting temporal coherence. We
also address a long-standing limitation of all previous many-light approaches that leads to
fundamentally incorrect results in scenes with glossy materials, by introducing a new virtual
light type that does not have this limitation. Finally, we describe an algorithm that computes
a wavelet-compressed approximation to the lighting matrix, which allows for interactive light
placement in a scene with global illumination.

Other

Volume Rendering of Dosimetric Distribution and Biological Response from 3D/4D Treatment and Delivery
Miloš Hašan, Hanspeter Pfister, George Chen, John Wolfgang
Appears in the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM)

Interactive 4D Visualization of Radiological Path Length Variation for Proton Treatment Port Selection
Miloš Hašan, Hanspeter Pfister, George Chen, John Wolfgang
Poster in the 2010 Annual Meeting of the American Society for Therapeutic Radiology and Oncology (ASTRO)