Hello. Having finally completed the shadow volume example, and before starting the long journey into shadow mapping, I would like to build a little water example.

I'm reading the ShaderX2 article. I'll explain what I'm doing, hoping that someone will spot the error.

Rendering the Underwater Scene (First Render Pass): We want to simulate a close-to-reality view into the water. The seabed and objects like fish or plants should be distortable by faked bumps (see the section "Animated Surface Bumping"). So, we have to render the underwater scene view each frame again into a render-target texture. For this job, we use the original camera. A clip plane cuts off the invisible part of the scene above the water surface.

For this, I created another texture and, after getting the back buffer, I used CopyResource, and it works. So no render target is needed (I hope).

The second pass is the "Modifications Dependent on Water Depth". It says to use a vertex and pixel shader, but I did not understand where. On the water-grid mesh? Anyway, if I use them, I can't see the plane on screen.

The second part of the article says to modify the texture depending on the water depth. OK, but where do I have to apply the texture? On the grid plane mesh? Either way, the second pass is supposed to do this, but it doesn't seem to work.

I know the algorithm you're talking about; you're kicking a bit of butt, and if you get this into your game, it looks pretty cool. I imagine it looks really nice with specular. I haven't implemented it myself yet, I've only got a rough idea in my head of how it works, so I can't really help you. Good going though, hope you get it into your game system.

I'm sorry, but it's not entirely clear to me what your problem is. You're trying to render realistic water when your viewpoint is above the water, right (or at least not under the water surface)?

I'm also not totally familiar with the wording the book is using. I do know exactly the technique you're talking about, though, and it was MUCH easier to implement than I had predicted (much less complicated than shadow volumes, anyway).

So is your first problem that you need to get the image of the refraction (the underwater scene) onto the water itself? That involves a much simpler concept than the book may have you believe.

I can kind of see where you thought that might be right, placing the texture coordinates according to world space, but I can't mathematically see how that would work, and it's certainly more complicated than the way I did it.

It's funny you should mention shadow maps, because both of these techniques use the same concept: projective texturing. The camera is essentially projecting the image from your viewpoint onto the water surface. All it takes to modify it is a += or -=, and there are literally hundreds of ways to do that.

EDIT: by the way, what's with the "transpose" calls? The raw matrices themselves should correctly transform the geometry.

I looked at that article... and this is ****ing ridiculous. I've never understood why the people who write these articles have to take the simplest tasks and stretch them out into the most impossible-to-understand gibberish.

Personally, I would just forget the article and start over in your own words, because they are making it WAY more complicated to take in than it needs to be.

Here are some ground rules for rendering water, and this is the way I do it:

Render the water plane normally: no weird transformations, no "transposing", no inverting, etc. Just render it like you do everything else.

In the vertex shader, add the semantic that I did (the position2 variable) and make it equal to the final output of position1 (after all matrix transformations).

Now, this should give you just a flat, normal water plane.

In the pixel shader, use the position2 parameter in the equation described to calculate the ViewTexC vector. These are the texture coordinates you will use to sample the refraction image (the copied back buffer). Note: if you're wondering why you need a second parameter for position2, I actually have no idea. The shader simply won't compile if you use the output position.

Now the water should become "invisible". If it's not invisible, you did something wrong. The image will be perfectly projected onto the plane, so it should look like a window. To make the plane visible again, darken the colors a slight bit (or raise them if you're doing HDR; totally up to you).
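Putting those steps together, a minimal HLSL sketch might look like this. All the names here (World, View, Projection, RefractionTex, samLinear) are placeholders, not the poster's actual variables, and it assumes D3D10-style shaders with the row-vector mul convention:

```hlsl
struct VS_OUT
{
    float4 Pos  : SV_POSITION;  // consumed by the rasterizer, not readable in the PS
    float4 Pos2 : TEXCOORD0;    // copy of the clip-space position we CAN read
};

VS_OUT VS( float4 inPos : POSITION )
{
    VS_OUT o;
    o.Pos  = mul( inPos, World );
    o.Pos  = mul( o.Pos, View );
    o.Pos  = mul( o.Pos, Projection );
    o.Pos2 = o.Pos;             // duplicate the final transformed position
    return o;
}

Texture2D    RefractionTex;     // the copied back buffer
SamplerState samLinear;

float4 PS( VS_OUT i ) : SV_Target
{
    // Perspective divide, then remap [-1, 1] to [0, 1] for the texture lookup.
    // Depending on your conventions, you may need to negate y before the remap.
    float2 ViewTexC = i.Pos2.xy / i.Pos2.w;
    ViewTexC = ViewTexC * 0.5 + 0.5;
    return RefractionTex.Sample( samLinear, ViewTexC );
}
```

With just this, the plane should look like the "window" described above; darkening the sampled color is what makes it read as water.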

There are lots of ways you can make realistic water. Personally, I stretch a normal map across the plane and modify the ViewTexC coordinates with those and with Fresnel effects.

I looked at that article... and this is ****ing ridiculous. I've never understood why the people who write these articles have to take the simplest tasks and stretch them out into the most impossible-to-understand gibberish. Personally, I would just forget the article and start over in your own words, because they are making it WAY more complicated to take in than it needs to be.

Yes, I noticed that. And the same thing happens when reading the GPU Gems articles... they are considered the Bible of graphics programming, but in my opinion, people who write articles should choose their words better... anyway...

Render the water plane normally: no weird transformations, no "transposing", no inverting, etc. Just render it like you do everything else.

OK, even if I would really like to understand why all those things matter...

In the vertex shader, add the semantic that I did (the position2 variable) and make it equal to the final output of position1 (after all matrix transformations).

Now, this should give you just a flat, normal water plane.

In the pixel shader, use the position2 parameter in the equation described to calculate the ViewTexC vector. These are the texture coordinates you will use to sample the refraction image (the copied back buffer). Note: if you're wondering why you need a second parameter for position2, I actually have no idea. The shader simply won't compile if you use the output position.

Done. I know about Position being an unusable value... maybe because it's a system-value semantic ( : POSITION).

Now the water should become "invisible". If it's not invisible, you did something wrong. The image will be perfectly projected onto the plane, so it should look like a window. To make the plane visible again, darken the colors a slight bit (or raise them if you're doing HDR; totally up to you).

Yes, it works, but I would like to understand better why I have to use that particular equation:

In that code snippet you posted, what's going on is pos2.xy / pos2.w is the perspective projection: dividing eye space x and y by eye space depth, stored in w, giving you screen space x and y. That gives you x and y both in the range of -1 to 1, but for a texture lookup you want them to be in the range of 0 to 1 instead. So, you multiply by 0.5 and add 0.5 to each component and that remaps them from [-1, 1] to [0, 1].
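In HLSL, that explanation boils down to two lines (a sketch; pos2 is assumed to be the interpolated copy of the clip-space position described above):

```hlsl
float2 ViewTexC = pos2.xy / pos2.w;   // perspective divide -> screen space in [-1, 1]
ViewTexC = ViewTexC * 0.5 + 0.5;      // remap [-1, 1] to [0, 1] for the texture lookup
```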

Well, unfortunately I never did find a resource that didn't have the same problems as all the other articles (that is, overly and unnecessarily complicated). Note that I have not quite completed my algorithm yet, so this is simply a rough model to follow:

First, something to keep in mind: the water plane is essentially an object, so treat it as such. You can apply normal mapping to it as you would to any other object; this is the first step.

Use the normals to distort the image. I currently use a VERY rough approximation just to get the job done; I plan on refining this later:

ViewTexC += (normals.xy * (1-VdotN) ) * .001;

or something like that... really, you'll just need to play around with it.

Now for some primitive reflection. First off: lights. Obviously, for this we would use a specular component (Phong shading). Raise the power very high and adjust the intensity accordingly. The normal maps should be enough to provide very good lighting detail. As you can probably guess, water doesn't really need a diffuse component unless it's really... nasty water... but that's kind of a different subject.

Now for world reflections. This early on, I would stick to cube maps. Just environment-map it like you would anything else, and add a Fresnel term ( pow(1-VdotN, 10) ). Tweak it to your liking.
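As a sketch of that specular-plus-cube-map idea (EnvMap, the vector names, and the constants here are my assumptions, not code from this thread; V is taken to point from the eye to the surface):

```hlsl
TextureCube  EnvMap;        // static environment cube map
SamplerState samLinear;

// N = surface normal (e.g. from the normal map), V = normalized eye-to-surface
// vector, L = normalized direction toward the light; all in the same space.
float3 WaterShade( float3 N, float3 V, float3 L, float3 baseColor )
{
    float3 R = reflect( V, N );

    // High-power specular highlight, as suggested above.
    float spec = pow( saturate( dot( R, L ) ), 256.0 );

    // Environment reflection weighted by the pow(1-VdotN, 10) Fresnel term.
    float3 env     = EnvMap.Sample( samLinear, R ).rgb;
    float  fresnel = pow( 1.0 - saturate( dot( N, -V ) ), 10.0 );

    return baseColor + env * fresnel + spec;
}
```

The exponent and how the terms are combined are very much tuning knobs; "tweak it to your liking" applies to every constant here.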

Now, doing realtime, local reflections is not as advanced a topic as you might think. In essence, you just flip the scene upside down and render it, along with some other steps, but it's surprisingly hard to implement because of the many complications. So local reflections are awesome, but for now just stick to cube mapping and see where you get.

I figured I would show you what this algorithm gives you. Honestly, I don't entirely remember if this is exactly it, because these pics are about a year old. The first shows more refraction and the second shows better realtime reflection.

In that code snippet you posted, what's going on is pos2.xy / pos2.w is the perspective projection: dividing eye space x and y by eye space depth, stored in w, giving you screen space x and y. That gives you x and y both in the range of -1 to 1, but for a texture lookup you want them to be in the range of 0 to 1 instead. So, you multiply by 0.5 and add 0.5 to each component and that remaps them from [-1, 1] to [0, 1].

1) I do not understand yet... after the three matrix multiplications, shouldn't I already have the position in perspective? Why do I have to divide by pos.w?
2) Can't I use saturate or clamp instead of adding 0.5? (Just to understand.)
3) What a nice rendering! I'd love to reach the same result!!!
4) What can you tell me about water movement?
5) I would avoid using a cube map. Anyway, today I will try to write some more code. Thank you again!

1) I do not understand yet... after the three matrix multiplications, shouldn't I already have the position in perspective? Why do I have to divide by pos.w?

I believe it's because the X and Y coords output by the vertex shader are not divided by W yet, so they still have extremely high values. I think the W divide is done automatically by the system. I could be wrong though; that's just a guess.

2) Can't I use saturate or clamp instead of adding 0.5? (Just to understand.)

Saturate or clamp only restricts the values to 0-1; it doesn't move the coords. Doing that would cut off half the screen and give you a weird "line" effect.

Anyway, really don't worry about why you need it. I didn't understand what it did until about a month after I started using it; I just knew that I needed it to work right. I think one mistake a lot of people make is assuming they need to understand everything about a method to use it properly. My experience has been that that's a waste of time that just keeps you from your work. It's good to understand what things do, but that knowledge will come to you as you continue working on it, and dwelling keeps you from moving on.

3) What a nice rendering! I'd love to reach the same result!!!

Thanks!

4) What can you tell me about water movement?

That's one I haven't quite gotten yet. A while ago I implemented two techniques that I decided to use in parallel for level of detail.

One method used an off-screen render target, drawing dynamic ripples onto it in the form of normal maps. With a little simple algebra, it was possible to convert objects' world-space positions into positions over the pool of water. From there I just treated the ripples like a particle system and emitted them from the bodies of the characters. The result was pretty damn cool.

It wasn't convincing at steep angles, though, so up close I converted the water to a higher-tessellation mesh and used a mass-spring system. I also experimented with using texture reads in the vertex shader to animate, but this was not very cost-effective, as the tessellation had to be impractically high to get visually pleasing results. I admit, though, that I never finished this implementation.

5) I would avoid using a cube map. Anyway, today I will try to write some more code. Thank you again!

Well, for things like small puddles you don't have much choice, because realtime reflections would be totally impractical for those. One thing you should consider is combining local reflections and cube maps.

You can use local reflections for moving objects and cube maps for the environment around them. The technique is pretty effective and cheap; I believe it's currently used by Mirror's Edge.

I believe it's because the X and Y coords output by the vertex shader are not divided by W yet, so they still have extremely high values. I think the W divide is done automatically by the system. I could be wrong though; that's just a guess.

If you use a projective texture function like tex2Dproj (TXP in assembly, I think), then it will do the divide by w for you. However, there's then a minor complication: you still have to do the [-1, 1] to [0, 1] remapping. tex2Dproj does *not* do this remapping after it divides by w and before looking up in the texture. So you have to tweak the coordinates slightly before you send them to tex2Dproj. It's not hard, and it can in fact be rolled into the projection matrix so it doesn't need a shader calculation at all.
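For example, the remap can be folded into a scale/bias matrix applied after the projection (a sketch; matTexScaleBias is a made-up name, and this assumes the row-vector mul convention):

```hlsl
// Scale/bias matrix that maps clip space [-1, 1] to texture space [0, 1]
// *before* the w-divide, so tex2Dproj's divide lands directly in [0, 1].
// (y is negated because texture v grows downward.)
static const float4x4 matTexScaleBias =
{
    0.5,  0.0, 0.0, 0.0,
    0.0, -0.5, 0.0, 0.0,
    0.0,  0.0, 1.0, 0.0,
    0.5,  0.5, 0.0, 1.0
};

// In the vertex shader:
//   o.TexProj = mul( clipPos, matTexScaleBias );
// In a D3D9-style pixel shader:
//   float4 refr = tex2Dproj( RefractionSampler, i.TexProj );
```

Since (x * 0.5 + w * 0.5) / w equals (x / w) * 0.5 + 0.5, the divide tex2Dproj performs produces exactly the remapped coordinates.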

Honestly, I'm not really sure what that shader code is supposed to do... The only thing it's really doing is making waves in the water mesh; it's not doing image distortion or anything.

Actually, the method's really not expensive at all. It's just a few instructions more than regular lighting. The screenshots you saw (if I remember right) ran at about 240 fps on a GeForce 7900 GS. There's really nothing expensive about it.

Following the article, how can I implement the light absorption?

What do you mean by absorption? Do you mean having the water be cloudy or dirty?

I'm going to assume that's it for the moment (but correct me if I'm wrong). This involves a few more steps, and it can get more expensive, but computing the fog effect itself is very simple.

The complicated issue here is "sceneDepth". The only thing you need to know to calculate the fog is the amount of water between your eye and the surface you're looking at. For this you need two pieces of info:

distance from eye to water, which is -> input.Pos2.z
distance from eye to surface, which is -> sceneDepth
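Those two distances plug into a fog term along these lines (a sketch; FogDensity, WaterColor, and refractionColor are made-up names, and the exponential falloff is my assumption; tune the density to taste):

```hlsl
// Amount of water the view ray passes through.
float waterAmount = sceneDepth - input.Pos2.z;

// Simple exponential absorption: the more water the ray crosses,
// the more the refracted image is pulled toward the water color.
float  fog   = saturate( exp( -FogDensity * waterAmount ) );
float3 color = lerp( WaterColor.rgb, refractionColor.rgb, fog );
```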

Getting the scene depth is a bit more complicated, though. What you need to do is draw the scene onto a render target in the format D3DFMT_R32F. You're going to have to create a new shader as well. The vertex shader for this pass consists of just the normal world position transformation; as in, just draw as you normally would, but forget about normals and texture coordinates. Also, you need to include the Pos2 variable, like with the water.

The pixel shader for it is one line: color.x = input.Pos2.z;

Now, once you've drawn this info into the render target, use it as a texture in the water shader. The sceneDepth variable is filled by:

sceneDepth = DepthTex.Sample( samLinear, ViewTexC ).x;

Now, keep in mind there is a feature new to DirectX 10 that lets you access depth information in the shader, but according to NVIDIA it disables early-z culling, and that could cut your framerate in half.

Thank you again for your reply. I'm getting really confused between your ideas (which are very good) and the ShaderX article.

About the Z access in the shader: in D3D10, the depth-stencil surface is managed as a texture and, if it was given D3D10_BIND_SHADER_RESOURCE as a bind option, it can be passed to a shader like a simple texture and used there. So I may go that way.

I now have the complete "glass effect" in my scene. Now, can you please make a todo list for proceeding with the effect?

Use the texture coordinates from the mesh in the shader (the incoming UVs, not the screen projection). In other words, place the normal map on it just like any ordinary texture on any ordinary object. No special steps here.

Modify the image (ViewTexC) coordinates so that the refraction in the water becomes wavy. Modify them (of course) before you use them for the image lookup.

Animate the water. You do this by simply declaring a new float at the top and using it as a timer. Outside the shader you increment it by 1 each frame and then call the "SetFloat" function. In the shader, just offset the UV coordinates with it: normal = tex2D(normalMap, input.uv + modify.xy);

This will give you a slow, steady flow that will have you go "woah..." the first time you see it.
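Putting those three steps together, a pixel-shader sketch in D3D10 style might look like this (NormalMap, RefractionTex, samLinear, Timer, and the scale constants are all placeholder assumptions to tune):

```hlsl
Texture2D    NormalMap;        // tiled over the mesh's ordinary UVs
Texture2D    RefractionTex;    // the copied back buffer
SamplerState samLinear;

cbuffer PerFrame { float Timer; };   // incremented by the app each frame

float4 PS( float4 Pos2 : TEXCOORD0, float2 uv : TEXCOORD1 ) : SV_Target
{
    // 1) Sample the normal map with the mesh UVs, scrolled over time.
    float2 flowUV = uv + Timer * float2( 0.01, 0.005 );
    float3 normal = NormalMap.Sample( samLinear, flowUV ).xyz * 2.0 - 1.0;

    // 2) Project Pos2 to screen-space texture coordinates, as before...
    float2 ViewTexC = ( Pos2.xy / Pos2.w ) * 0.5 + 0.5;

    // 3) ...then perturb them by the normal so the refraction goes wavy.
    ViewTexC += normal.xy * 0.02;

    return RefractionTex.Sample( samLinear, ViewTexC );
}
```

The 0.01/0.005 scroll speeds and the 0.02 distortion strength are pure guesses; play with them until the flow looks right.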

It's not bad for a basic water effect, even if the water looks... so... dirty? Maybe it's a normal map problem? Anyway, with your help, I would like to go further and reach better results. The first issue I can see is that not all of the surface is watered... here is an example.

Umm, how much experience do you have in graphics programming? I think we need to get on the same page here.

It's not bad for a basic water effect, even if the water looks... so... dirty? Maybe it's a normal map problem?

You're outputting the normal map color, not the distorted image. So, as the question above suggests: do you know how to do normal mapping (tangent-space normal mapping)? If not, that's fine, there are other options, but stretching a normal map over a polygon is not normal mapping.

Umm, how much experience do you have in graphics programming? I think we need to get on the same page here.

Not so much! I've been using Direct3D 10 for less than a year, and I used D3D9 many years ago, so I've forgotten a lot of concepts. What about this, for example?

You're outputting the normal map color, not the distorted image. So, as the question above suggests: do you know how to do normal mapping (tangent-space normal mapping)? If not, that's fine, there are other options, but stretching a normal map over a polygon is not normal mapping.

Normal mapping... if I remember well, normal mapping involved a particular multiplication with the tangent, normal, and binormal, even if I no longer remember why... I did it so long ago... So, do we have to stop for now?