I coded clouds in a similar way to theirs (well, based on their noise texture ideas). The mapping is done by raymarching through the 3D noise texture and evaluating density, coverage, refinement and lighting at each raymarch sample.

You can start raymarching by intersecting a lower cloud "plane" with the eye-vector at a pixel, then you march in steps and do your calculations.

Later on you can do an intersection with a sphere around your map so you get a nice curvature at the horizon.

So reading up on raymarching and volume rendering is a good idea.
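
To make the marching concrete, here is a minimal CPU-side sketch under assumed parameters: a flat cloud layer between two heights, a camera below the layer, and a placeholder constant-density function standing in for the 3D noise lookup (`marchClouds` and `sampleDensity` are illustrative names, not the poster's code):

```cpp
#include <algorithm>
#include <cmath>

// Placeholder density: constant fog inside the layer. A real version
// samples the baked 3D noise texture at the given position instead.
float sampleDensity(float /*x*/, float /*y*/, float /*z*/) { return 0.02f; }

// March from the camera through a flat cloud layer bounded by two
// heights. eyeDir must be normalized; the camera is assumed to be
// below the layer. Returns accumulated opacity in [0, 1].
float marchClouds(const float eyePos[3], const float eyeDir[3],
                  float layerBottom, float layerTop, int steps)
{
    if (eyeDir[1] <= 0.0f) return 0.0f;   // looking at or below horizon
    // Distances along the ray to the lower and upper cloud "planes".
    float tEnter = std::max((layerBottom - eyePos[1]) / eyeDir[1], 0.0f);
    float tExit  = (layerTop - eyePos[1]) / eyeDir[1];
    float stepLen = (tExit - tEnter) / steps;

    float transmittance = 1.0f;
    for (int i = 0; i < steps; ++i) {
        float t = tEnter + (i + 0.5f) * stepLen;
        float x = eyePos[0] + eyeDir[0] * t;
        float y = eyePos[1] + eyeDir[1] * t;
        float z = eyePos[2] + eyeDir[2] * t;
        // Beer's law: attenuate by the density sampled at this step.
        transmittance *= std::exp(-sampleDensity(x, y, z) * stepLen);
        if (transmittance < 0.01f) break;  // early out when opaque
    }
    return 1.0f - transmittance;
}
```

A shader version does the same per pixel, with `sampleDensity` fetching the baked noise texture and the per-step lighting folded into the loop.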

As for how one could generate the weather texture, that's something I need ideas for as well. You could precompute/prepaint some weather textures for a single cloud.

Then, based on the weather you want, pack some of the premade textures into an FBO, add some magic, and use that in the final raymarch shader.

The cloud shape depends a lot on the baked 3D noise texture and how you combine its channels later on. You have four channels with different octaves and only need a single float density value. For me it was a lot of experimenting.
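
One hedged sketch of collapsing the four channels into one density, using the fbm weights from the GPU Pro 7 chapter as a stand-in for whatever combination the poster actually uses (`combineChannels` and the 0.3 erosion strength are illustrative):

```cpp
// Collapse four noise channels (different octaves in r, g, b, a) into a
// single float density. The 0.625/0.25/0.125 weights follow the
// GPU Pro 7 chapter; they are illustrative, not the poster's values.
float combineChannels(float r, float g, float b, float a)
{
    // Low-frequency fbm from the three higher-octave channels.
    float fbm = g * 0.625f + b * 0.25f + a * 0.125f;
    // Use the fbm value to erode the main shape channel r.
    float density = r - (1.0f - fbm) * 0.3f;
    return density > 0.0f ? density : 0.0f;  // clamp to non-negative
}
```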

thanks for your reply.

I have already implemented the raymarching and volume rendering.

The problem is how to get the cloud density. I think I need to use the weather texture to get the cloud base shape and the empty sky regions.

So at every raymarch step we need weather data to calculate the cloud density, which means we need to derive a UV from the sample position to sample the weather texture.

My problem is: how do you calculate the weather texture UV? How do you map the weather texture onto the sky dome?

For me the key points are the noise texture, the weather texture, the mapping method, and the composition algorithm.

For mapping the weather texture I use a simple planar projection, or rather the xy position of the sample pos with some scaling and a wind offset. It works quite well as long as the curvature of your cloud sphere is small.
As I said, I use a ray-sphere intersection as the starting point, but with quite a big sphere which is centered in xy at the camera's position and at a height such that above the camera the clouds start at the height you want.
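
That starting point can be sketched like this; `intersectCloudSphere`, `cloudStart`, and `sphereRadius` are illustrative names, and the math is just the ray-sphere quadratic for a camera inside the sphere:

```cpp
#include <cmath>

// Distance along a normalized ray to a large sphere that follows the
// camera in x/z, with its center pushed down so the cloud shell sits
// cloudStart units above the camera. Because the camera is inside the
// sphere, the useful root is the far one. Returns -1 on a miss.
float intersectCloudSphere(const float rayDir[3],
                           float cloudStart, float sphereRadius)
{
    // Vector from sphere center to camera is (0, R - cloudStart, 0).
    float ocY = sphereRadius - cloudStart;
    float b = rayDir[1] * ocY;                          // dot(rayDir, oc)
    float c = ocY * ocY - sphereRadius * sphereRadius;  // < 0: inside
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    return -b + std::sqrt(disc);
}
```

Straight up this returns exactly `cloudStart`; toward the horizon the distance grows, which is what gives the curvature mentioned above.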

For the weather texture, what you can try out is using two premade tiling Perlin noise textures (different noise in r, g, b, a), scrolling them, combining them, and in my case applying a global coverage value to the result (remapping with some exp magic).
This gives changing patterns: you have a global coverage value and, as a result, local coverage values too.
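
A minimal sketch of that composition, assuming the two textures have already been sampled and using a linear remap as a stand-in for the "exp magic" (`composeCoverage` is an illustrative name):

```cpp
#include <algorithm>

// Blend two scrolled noise samples and shape the result with a global
// coverage control. The linear remap is one stand-in for the exp-based
// remapping mentioned above.
float composeCoverage(float noiseA, float noiseB, float globalCoverage)
{
    if (globalCoverage <= 0.0f) return 0.0f;  // fully clear sky
    float n = 0.5f * (noiseA + noiseB);       // combine the two patterns
    // Erode: only values above (1 - globalCoverage) survive, rescaled
    // to [0,1], so the global knob also yields local coverage values.
    float t = (n - (1.0f - globalCoverage)) / globalCoverage;
    return std::min(std::max(t, 0.0f), 1.0f);
}
```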

From my experiments, the way you combine the base noise octaves has the most impact on how detailed the clouds are. That said, I'm not using Perlin-Worley, just Worley noise with 3 octaves per channel at a size of 123^3.

So the way I combine them is totally different from what they do: some kind of fbm plus in-between remapping with coverage.

And of course the powder effect adds a lot.

For the cloud top, I think the main difference is that I don't modify the density of the clouds with the height signal, but the coverage instead.

Then, sure, the smaller high-frequency noise adds the very fine detail and is applied like this:

base_cloud = base_cloud * high_freq_noise * (1.0 - base_cloud);

You see, I simply assume that an edge is wherever the cloud has a density below 1.

thanks.

To get a more detailed base cloud noise, do I need higher octaves, or something else?

This is my base noise texture.

I found the r channel does look fuzzy indeed; I will change the noise arithmetic and test. Thanks.

For the noise texture, do you use an RGBA FP16 texture or RGBA8?

some kind of fbm+inbetween remapping with coverage.

Something like Remap(low_freq_FBM, coverage, 1, 0, 1)?
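
For reference, the Remap helper from the GPU Pro 7 / SIGGRAPH cloud material is usually written as:

```cpp
// Linearly remap v from the range [l0, h0] into the range [l1, h1].
float Remap(float v, float l0, float h0, float l1, float h1)
{
    return l1 + (v - l0) / (h0 - l0) * (h1 - l1);
}
```

With the call quoted above, it rescales low_freq_FBM from [coverage, 1] into [0, 1].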

Modify coverage according to the height, so the top gets less coverage?

Using this formula, when base_cloud is 1.0, it means we're inside the cloud.

Really great results guys! I wish I could achieve at least something relatively close to what you have for my game.

The GPU Pro 7 article mostly describes ways to create realistic clouds in terms of density functions and overall weather simulation, but it somehow assumes the reader is already familiar with ray marching techniques. While I can understand the algorithm behind a single ray cast from a given point to sample various density functions and so on, I can't get my head around the more basic stuff, like: in what render pass does this all happen? Is it during rendering of the skybox? Or do you render some special shape and do all this in its shader? What does it mean to do raycasts in this case? Is it per pixel? Per point in world space? They say in the article that they assume some spherical shell around the camera of some thickness, but how do you pick points on that sphere for the actual raycast? And how is the result from such a raycast used to actually shade the sky?

Are there some papers on basics like this? If you could provide some pseudocode used to actually "draw" and execute the shader that's supposed to create the clouds, including what you are rendering and what the input to the raymarching algorithm is, it would greatly help me understand this concept.

It's a post-process render pass, so you just render a full-screen quad.

First you build a ray according to the screen pixel's coordinates, then calculate the intersection point of the camera ray with the bottom cloud sphere.
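
A hedged sketch of that first step, assuming a simple axis-aligned camera looking down -Z (a real implementation would also rotate the direction by the camera orientation / inverse view matrix; `pixelRay` is an illustrative name):

```cpp
#include <cmath>

// Reconstruct a normalized world-space view ray for one pixel of a
// full-screen quad. fovY is the vertical field of view in radians.
void pixelRay(int px, int py, int width, int height, float fovY,
              float outDir[3])
{
    // Pixel center to normalized device coordinates in [-1, 1].
    float ndcX = 2.0f * (px + 0.5f) / width - 1.0f;
    float ndcY = 1.0f - 2.0f * (py + 0.5f) / height;
    float aspect = float(width) / float(height);
    float tanHalf = std::tan(0.5f * fovY);
    // Point on the view plane at unit distance in front of the camera.
    float x = ndcX * aspect * tanHalf;
    float y = ndcY * tanHalf;
    float z = -1.0f;                       // camera looks down -Z
    float len = std::sqrt(x * x + y * y + z * z);
    outDir[0] = x / len;
    outDir[1] = y / len;
    outDir[2] = z / len;
}
```

Feeding this direction into the ray-sphere intersection from earlier gives the start point of the march for that pixel.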

Hm, where does the buffer come from? This looks like C++ code; do you run it on the CPU? Do you run it for every pixel of the rendered image, or only for those that are actually "sky"?

Thanks for the paper, I will read it. I need to find some good explanations of how raymarching works; I see a lot of weird equations about the actual volume sampling, but nothing about how this is actually rendered.

RGBA8 should be enough precision; after all, it's just a simple density value. For the base detail you can play around with the scaling. My clouds start at around 1.8 km in height and have a scaling of 68 m per texel. The detail noise gets a 16 times higher scaling.
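
Applying those numbers, one axis of the noise lookup could be sketched like this (the names, the wrapping, and the 128-texel edge used in the example are illustrative assumptions, not the poster's exact setup):

```cpp
#include <cmath>

// Convert a world position (meters) into a wrapped [0, 1) texture
// coordinate for a tiling noise texture, given a meters-per-texel
// scale and the texture edge size. windOffset scrolls the lookup.
float noiseCoord(float worldPos, float metersPerTexel, int texSize,
                 float windOffset)
{
    float u = (worldPos + windOffset) / (metersPerTexel * texSize);
    return u - std::floor(u);  // wrap so the texture tiles
}
```

The detail noise would use the same mapping with a 16 times smaller meters-per-texel value.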
