Rain GL – Part One

Introduction:

While I was at Hiroshima University from 2007 to 2009, I worked with a friend, Marcos Slomp, a Brazilian PhD student. My research was on rendering raindrops, an extension of Dr. Kaneda’s research on raindrops on the front glass of a vehicle. We experimented with a couple of interesting ideas before we discovered the method that would work for us, and I find it an intriguing story about discovery, the development of ideas, and overcoming a difficult challenge. I also find this story particularly interesting because it shows that we didn’t just wake up one day with the solution; it was necessary to attack the problem iteratively, step by step, from a variety of different directions.

So I am going to tell this story in three parts. Part one is today, where I will show you where we started and what kinds of results we derived. In part two I will discuss how we changed our approach and tried a new idea, which also failed to work. Part three will conclude with the final solution.

I like this story because it is from real life and because, for me, it does such an excellent job of capturing the nature of research from start to finish. While our results answer some questions, including our primary task, they also, as with most good research, pose their fair share of new questions which have yet to be answered.

The Challenge:

So while I was at Hiroshima University, my advisor let me choose one of many topics to study during my master’s. To be honest I have forgotten most of the other options. One had to do with parabolic motion parallax, that is, calculating motion parallax as the moving object travels along a curve. A second one had to do with calculating the rainbow colors you see on a thin film, like when you spill some oil, or the rainbow you can see in a soap bubble. The topic I wanted to study was raindrops, for two reasons. One is that raindrops are pretty tactile; everyone knows what rain looks like, or thinks they know. The second reason was that my professor, Dr. Kaneda, had done a fair amount of his own research on raindrops, and if you are going to study under someone, you might as well study the topic they are most interested in.

The idea:

So, by 2001 Dr. Kaneda had basically solved the problem of rendering raindrops on the front glass of a vehicle.

Figure 1: This is an image from Kaneda's 1999 work. The rain is computer generated and is affected by the CG-wiper.

The image above is from a one-minute video in which the rain, rendered on the CPU using an offline approach, slides down the front glass and is wiped away by the wiper as it moves.

The challenge then was to speed up the process of rendering these raindrops and the initial idea was that we could do that through interpolation. To understand how this works, let’s first take a look at raindrops.

Rain – A Natural Phenomenon:

One interesting side note about rain: some very smart, highly skilled researchers back in the 1940s and 1980s did a fair amount of research on the sizes and shapes of raindrops. Marshall and Palmer in 1948 [2] showed that essentially most rain that falls is small in size. Beard and Chuang [1] in 1987 showed that when rain is small in size it tends to be spherical in shape. These are two important assumptions about rain that we needed in order for our methods to work. I really appreciate and respect the work done by these gentlemen, and I wonder if they realize that their work was still being used in 2007–2009, and is being written about now, in 2012.
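To make the "most rain is small" claim concrete, here is a minimal sketch of the Marshall–Palmer exponential drop-size distribution, N(D) = N0·exp(−ΛD). The constants below are the commonly cited values from the literature, not something taken from our own work:

```python
import math

def marshall_palmer(diameter_mm, rain_rate_mm_h):
    """Drop concentration N(D), in drops per cubic meter per mm of diameter."""
    n0 = 8000.0                              # intercept, m^-3 mm^-1
    lam = 4.1 * rain_rate_mm_h ** -0.21      # slope, mm^-1
    return n0 * math.exp(-lam * diameter_mm)

# In moderate rain (5 mm/h), small drops dominate overwhelmingly:
small = marshall_palmer(0.5, 5.0)  # 0.5 mm diameter
large = marshall_palmer(2.0, 5.0)  # 2.0 mm diameter
```

Plugging in a moderate rain rate, the 0.5 mm drops outnumber the 2 mm drops by nearly two orders of magnitude, which is why assuming small (and therefore spherical) drops is reasonable.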

In order to understand how to render a raindrop in CG, it is of course necessary to understand what a raindrop looks like, particularly in mathematical terms. To do this we read a couple of very helpful works from two researchers who, in our circle, were often referred to as Garg and Nayar, a name which came from the author list of their primary works we used [3, 4]. Looking back now I continue to have a great deal of respect for these gentlemen, and oddly enough I believe I appreciate their work even more now than when I was using it.

In any case I will borrow an image from their work to help describe what is happening in a raindrop visually.

Figure 2: The image depicts a view vector (v-hat), a pinhole camera, reflection (s-hat), internal refraction (not labeled, but with endpoints B and A), and total internal reflection (also not labeled, but with endpoint A, exiting at the bottom of the drop), as well as the internally reflected ray's refraction (p-hat). The second refracted ray is (r-hat). This raindrop model is borrowed from the works of Garg and Nayar [3].

One important challenge when trying to render a raindrop is that all of the vector values change depending on the position of the camera and the position of the raindrop, meaning this is a view-dependent problem. Previous solutions focused on refraction or, as with the works from Garg and Nayar, on rain streaks, which is what rain looks like in nature, because it is falling.
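As a small sketch of that view dependence, the reflected and refracted directions in the figure can be derived from just the view vector and the surface normal, using the standard reflection formula and Snell's law. This is illustrative only, not our renderer's code; the vector naming loosely follows the figure:

```python
import math

N_WATER = 1.33  # index of refraction of water

def reflect(d, n):
    """Reflect unit direction d about unit normal n."""
    k = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * k * ni for di, ni in zip(d, n))

def refract(d, n, eta=1.0 / N_WATER):
    """Refract unit direction d through a surface with unit normal n
    (n pointing against d). Returns None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))
```

For example, a ray hitting the drop at 45 degrees bends to roughly 32 degrees inside the water, which is Snell's law with n = 1.33. Every pixel of the drop has its own normal, so both directions change with the camera.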

Figure 3: Raindrop distribution and the corresponding geometrical shape: in a typical rainy scene, most raindrops are smaller than 1 mm in radius, thus being nearly spherical in shape. Image adapted from Garg and Nayar [5] using distribution and shape data from Marshall and Palmer [2] and Beard and Chuang [1].

In the image above, Marshall and Palmer, back in 1948, did a study in which they determined the size of raindrops based on the density of rainfall in a cubic meter. I am not very familiar with meteorology and these kinds of physical experiments, but if I understood their method correctly, they did a bunch of work with radar (stuff – which is clearly the technical term) and, I believe, measured the amount of sound generated when a drop hits a particular surface. Beard and Chuang did similar physical experiments, dropping water down a 3 meter tall tower, i.e. a controlled environment, and took a variety of different measurements. Notice one important fact: the rain does not have any teardrop shape, and it does not look like any raindrop streak, despite the media’s constant inaccurate depiction.

Assumptions:

Based on the images and works mentioned above, we’ll assume a spherical shape. Next we will address the Garg and Nayar model, which describes how water affects light, as dictated by Snell’s law and the index of refraction of water, which is 1.33. Calculating all of the values shown in the Garg and Nayar model is likely expensive, with lots of reflection and refraction, so we first need to determine which portions are the most important. To do that we will address the Fresnel equations. Basically those equations show that as the angle between the view vector and the surface normal approaches perpendicular, you get more reflection and less refraction. How much reflection and how much refraction? Good question. To figure this out we ray traced a spherical raindrop and generated a mask where white depicts the reflection component and black depicts the refraction component.
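For reference, the per-pixel weight behind a mask like this can be computed with the unpolarized Fresnel equations for an air-to-water interface. This is a minimal sketch of the standard equations, not our actual ray tracer:

```python
import math

N_WATER = 1.33

def fresnel_reflectance(theta_i, n1=1.0, n2=N_WATER):
    """Unpolarized Fresnel reflectance for incidence angle theta_i (radians),
    going from medium n1 (air) into medium n2 (water)."""
    sin_t = (n1 / n2) * math.sin(theta_i)      # Snell's law
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)
```

Head-on (theta = 0), water reflects only about 2% of the light; near grazing incidence the reflectance approaches 1, which is exactly the bright rim visible in the mask.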

Figure 4: Fresnel mask for a sphere. This was ray-traced.

For the image above, we have a camera aimed directly at the center of the raindrop, at a distance chosen so the entire drop is within our camera’s view frame. As expected, where the normal is perpendicular to our view vector, i.e. along the edges of the sphere, we have high reflection and low refraction. So reflection looks important, and so does refraction. To make sure, let us take a look at an actual image with an environment map.

Figure 5: On the top-left we have a raindrop rendered with no Fresnel, meaning the two components are simply added. On the top-right we have just the reflection contribution. On the bottom-left we have just the refraction contribution. On the bottom-right we have the combined contribution, modulated using our Fresnel mask.

In this image, we have rendered each component individually and combined. As we can see there is a bit of reflection visible around the edge, so we should, if possible, preserve this component. One important note about the image above: it uses a low-dynamic-range (LDR) environment map. With a high-dynamic-range (HDR) image the reflection component becomes even more important.

The next issue, based on Garg and Nayar’s model, is Total Internal Reflection (TIR). Total internal reflection is a component of refraction: the ray refracts first, then reflects on the inside. But where does this happen in relation to our raindrop mask? Let’s take a look.

Figure 6: On the left we have our Fresnel Mask. In the center we have the contribution generated by Total Internal Reflection (TIR) and on the right we have TIR modulated by our Fresnel mask.

Total Internal Reflection looks pretty insignificant. Oddly enough, it is a contribution of the refraction component of Fresnel: it happens when the angle between our view vector and the raindrop’s surface normal is very high, near 90 degrees. This means it only happens near the very edge of our spherical raindrop. In turn, this also happens to be where the refraction portion of Fresnel is the weakest and reflection is the strongest. If you look at the image on the far right of the figure above, you can see that when TIR is modulated by Fresnel the effect is very weak, so in the end we ignored this.
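A quick sanity check of why TIR is confined to the rim: inside a sphere, the refracted ray strikes the far surface at the same angle it refracted to (the chord and the two radii form an isosceles triangle), and that angle only approaches water's critical angle as the incidence angle approaches grazing. A small illustrative sketch, not our renderer's code:

```python
import math

N_WATER = 1.33

def critical_angle_deg(n=N_WATER):
    """Angle (inside water) beyond which light cannot exit into air."""
    return math.degrees(math.asin(1.0 / n))

def internal_angle_deg(theta_i_deg, n=N_WATER):
    """Angle at which the refracted ray strikes the sphere's far surface,
    equal to the refraction angle by the sphere's geometry."""
    return math.degrees(math.asin(math.sin(math.radians(theta_i_deg)) / n))
```

The critical angle for water comes out to about 48.8 degrees, and the internal angle only reaches it as the incidence angle nears 90 degrees, i.e. at the very edge of the drop, right where Fresnel says refraction carries the least energy.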

Conclusions:

That will conclude part one, where we have identified the characteristics that affect the visual appearance of a raindrop. Next time we will look at how we went about generating faster results through interpolation. So far all of these results were ray traced, which is a slow process because there are a number of expensive calculations to be made, but doing so let us identify some assumptions and determine a couple of characteristics of raindrops, so it was a useful and meaningful exercise.

Acknowledgments:

I want to thank Dr. Marcos Slomp, Dr. Toru Tamaki, and of course my advisor at Hiroshima University, Dr. Kazufumi Kaneda. I also owe thanks to the various researchers who came before me whose work I used and referenced; without their previous works this research would not have been possible.