1- Computing normals in the domain shader, which is slow because I must call the noise function multiple times (for adjacent vertices based on an offset/delta)

One general piece of advice to improve shader performance is to pull calculations up the pipeline where possible (to the hull shader or even the vertex shader). In this case that's unlikely to help. How about precalculating the noise to a texture and using SampleLevel in the domain shader? You could even prepare the normals this way, i.e. generate a normal map with a Sobel filter.
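To illustrate the Sobel idea: when baking the noise into a texture, each normal-map texel can be derived from the 3x3 height neighborhood. A rough CPU-side sketch (function and parameter names are my own, not an established API; `strength` scales the bumpiness):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Build one normal-map texel from a W x H array of precomputed noise
// values, using 3x3 Sobel kernels for the horizontal/vertical gradients.
Vec3 sobelNormal(const float* height, int w, int h, int x, int y, float strength)
{
    auto sample = [&](int sx, int sy) {
        // Clamp to the edge of the heightfield.
        sx = sx < 0 ? 0 : (sx >= w ? w - 1 : sx);
        sy = sy < 0 ? 0 : (sy >= h ? h - 1 : sy);
        return height[sy * w + sx];
    };
    // Sobel kernels approximate the height gradient in x and y.
    float gx = -sample(x-1,y-1) - 2*sample(x-1,y) - sample(x-1,y+1)
             +  sample(x+1,y-1) + 2*sample(x+1,y) + sample(x+1,y+1);
    float gy = -sample(x-1,y-1) - 2*sample(x,y-1) - sample(x+1,y-1)
             +  sample(x-1,y+1) + 2*sample(x,y+1) + sample(x+1,y+1);
    // Tangent-space normal: gradient pushes the normal away from +Z.
    Vec3 n { -gx * strength, -gy * strength, 1.0f };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x/len, n.y/len, n.z/len };
}
```

On the GPU you would run the same filter in a compute or pixel shader over the noise texture once, then just SampleLevel the resulting normal map in the domain shader.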

2- Using stream output from the geometry shader, and in a second pass, passing the buffer data with "TRIANGLE_ADJ" and computing normals in the geometry shader

I expect pressure on the bandwidth at high tessellation. Also, I have no idea how to deal with the patch edges. Actually, I wonder if this is even straightforward in the interior. Doesn't the domain shader just give you triangles? Do we know in which order they come? Maybe you need to output points anyway to get something sensible to work with.

Why not do the noise lookup in your vertex shader? That would allow you to grab the sample early in the pipeline, and then the interpolation between points after you tessellate could be based on that early lookup. This would effectively reduce your computational load, and if you choose your vertex density correctly there will be very little difference between what you are proposing and doing it this way.
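To make the idea concrete: the expensive noise call happens once per coarse control-point vertex, and the domain shader then only blends those precomputed samples with the barycentric (u, v) coordinates it receives, exactly like it blends positions. A minimal CPU-side sketch (names are mine):

```cpp
// Blend three per-vertex noise samples with the same barycentric
// weights the tessellator hands to the domain shader. h0..h2 are the
// heights sampled in the vertex shader for the patch corners.
float lerpHeight(float h0, float h1, float h2, float u, float v)
{
    float w = 1.0f - u - v;     // third barycentric weight
    return w * h0 + u * h1 + v * h2;
}
```

The trade-off is that tessellated vertices between the corners get an interpolated height rather than a true noise evaluation, which is why vertex density matters.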

In fact, since Perlin noise is more or less an interpolation technique, you should be able to get away very easily with this type of approximation...

Don't worry - your English is more than sufficient. I understand your current approach, but this is actually what I am talking about changing. When you use only a single vertex for hundreds of kilometers of area, you are effectively not taking advantage of the vertex shader for any significant computation. Instead of having a fixed resolution icosphere, why not start with a screen space set of vertices which would be generated based on the current view location? That would let you put the resolution where it needs to be, and let your tessellation algorithms be more effective. Remember, there is a limit to the amount of tessellation that can be achieved in one pass, so you can't count on it for too much amplification.

Instead of having a fixed resolution icosphere, why not start with a screen space set of vertices which would be generated based on the current view location?

Can you tell me more about what you mean?

I don't know if it's the same process... But in an earlier project, I was using a projective grid... Some good results, but a lot of artifacts and problems when the camera was too close...

[EDITED]

My proposal: have a first pass only to tessellate, and a second pass using the vertex shader to compute the noise. That would not be too much slower, I think? And it could totally resolve the problem? (With a much more detailed sphere as the base model to avoid the tessellator limits.)

That could work - using one pass to tessellate, and then a second pass to add in the noise. But I think you could just do the noise lookup in the domain shader in that case and produce the whole thing in one pass. This is something you would have to try out and see which way works faster - either with stream output or directly working in the domain shader. Just architect your code to be able to be done in discrete chunks so that you can swap them in and out for profiling.

The projective grid is one possibility, but you could just as easily have a flat, uniform grid of vertices based on the portion of the planet that you are near. Just think of it as a set of patches that make up the planet. You could have multiple resolution patches too, so that when you are close by you switch to a smaller vertex-to-vertex distance.

The thing you describe (one pass, with the noise applied to the vertices in the domain shader) is what is working now!

The problem I was concerned about (in my first post) was how to compute the normals with this method...

Generating some extra noise points close to the base vertex and computing the normal from them does work... But it's a lot of GPU work... and not a very good solution...
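For reference, the approach being described is essentially a finite-difference normal: evaluate the noise at small offsets around the vertex and build the normal from the two slopes. A sketch (the `noise` callable stands in for the real Perlin function, `eps` for the sample offset; both names are mine), which makes the cost visible: four extra noise calls per vertex.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Central-difference normal for a heightfield h(x, z) displaced along +Y.
// Costs 4 extra evaluations of 'noise' per vertex.
template <typename NoiseFn>
Vec3 finiteDifferenceNormal(NoiseFn noise, float x, float z, float eps)
{
    float dhdx = (noise(x + eps, z) - noise(x - eps, z)) / (2.0f * eps);
    float dhdz = (noise(x, z + eps) - noise(x, z - eps)) / (2.0f * eps);
    Vec3 n { -dhdx, 1.0f, -dhdz };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x/len, n.y/len, n.z/len };
}
```

On a sphere you would offset along two tangent directions instead of x/z, but the cost structure is the same.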

I wanted (at the start) to compute the normals in the geometry shader...

But unfortunately, it is not possible in the same pass because the domain shader's output is not compatible (it cannot output triangle_adj).

My ideas were:

A - First pass for tessellation; second pass (with TRIANGLE_ADJ) for noise in the vertex shader and computing the normals in the geometry shader

B - Like A, but using the compute shader for noise and normals after tessellation (and a third pass for the pixel shader)

I find it not "clean" to have to compute extra noise points just to get one normal... Whereas if I render in multiple passes, the noise is computed only one time per vertex and I could use vertex adjacency to compute the normals...

How about having a pre-calculated noise normal function? Using the same method that you use for a scalar value in the noise function now, you can expand that to use a vector value field instead. Then the noise lookup is essentially the same 'cost' as the regular noise lookup. That is probably how I would approach the problem...

Regarding your options, I think you won't be any better off doing it in two passes instead of one. You can output a triangle list to your geometry shader, which means you will have one face to calculate a normal vector from. However, if you do a first pass for tessellation and then calculate the normal in the second pass, you still only know about one face per vertex - there is no way to get the adjacency information back. So you would be just as well off using the single pass with a geometry shader working on the face normal.
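For clarity, with a plain triangle (no adjacency) the geometry shader can still produce one flat normal per face from the cross product of two edges. A sketch of that per-primitive computation (CPU-side, names mine); the result is faceted shading, but it works in one pass:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Flat face normal from the three vertices of one triangle,
// i.e. what a geometry shader can do without adjacency data.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 e1 { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2 { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n  { e1.y*e2.z - e1.z*e2.y,     // cross(e1, e2)
              e1.z*e2.x - e1.x*e2.z,
              e1.x*e2.y - e1.y*e2.x };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x/len, n.y/len, n.z/len };
}
```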

How about having a pre-calculated noise normal function? Using the same method that you use for a scalar value in the noise function now, you can expand that to use a vector value field instead. Then the noise lookup is essentially the same 'cost' as the regular noise lookup. That is probably how I would approach the problem...

Oh! Very interesting... I never thought about that... I must investigate how to do it (do you have some links?)... But your approach seems to be the best!! Thank you very much ;)

Regarding your options, I think you won't be any better off doing it in two passes instead of one. You can output a triangle list to your geometry shader, which means you will have one face to calculate a normal vector from. However, if you do a first pass for tessellation and then calculate the normal in the second pass, you still only know about one face per vertex - there is no way to get the adjacency information back. So you would be just as well off using the single pass with a geometry shader working on the face normal.

And if I cannot get adjacency information in the second pass... I don't have much choice...

Can I ask: if you were working on the same project, what "solution" do you think you would use? Big steps, not details ;)

For info, it's only a personal project; I'm not an expert in graphics development ;)

I would go for the pre-calculated solution. That would let you use as complicated an algorithm as you could possibly want to generate the normal vectors from your noise function, and then store the noise value and the normal vector together in your noise structure. Then when you do the Perlin lookup, just look up the 4-component value and normalize the normal vector portion of it.

I think that would simplify the process, keep the number of noise lookups down, and let you work in a single pass - which should handle all of your requirements! The only work is to pre-calculate the normal vectors and ensure that you won't run into any situations where you end up with a <0,0,0> value from your routine.
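A minimal sketch of that packed layout (the struct and field names are my own assumption, not an established format): store the normal and the height together, and at lookup time renormalize the vector part, with a guard for the degenerate <0,0,0> case mentioned above.

```cpp
#include <cmath>

// One precomputed 4-component noise sample: normal + height together.
struct NoiseSample { float nx, ny, nz, height; };
struct Result      { float nx, ny, nz, height; };

// "Perlin lookup" step: same cost as a scalar lookup, since the
// normal was baked in ahead of time; only a normalize remains.
Result lookup(const NoiseSample& s)
{
    float len = std::sqrt(s.nx*s.nx + s.ny*s.ny + s.nz*s.nz);
    if (len < 1e-6f)                       // degenerate <0,0,0> guard
        return { 0.0f, 1.0f, 0.0f, s.height };
    return { s.nx/len, s.ny/len, s.nz/len, s.height };
}
```

In the shader this would simply be a single 4-component SampleLevel followed by a normalize of the xyz part.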

I haven't done any larger scale terrain renderers, although it does seem to be a popular topic. You might be interested in checking out the tiled resources in D3D11.2, as they could help you with texture variation over the surface of the planet.

One thing that comes to mind is that you should generate the geometry used for rendering in caches. I presume your geometry will change based on the location of your camera, so if your camera stays in one general area then you should build the geometry and then cache it if possible to minimize the time used to build it up. This would play nicely with your ideas of direct modification and re-saving too. And you would also be able to use the GPU with stream output for optimized computation.

Again, thank you ;) Interesting feature in D3D11.2... I just need to move to Windows 8!!

Because of the LOD (tessellation, noise lookups, etc.), the geometry will change very often... But it could be very efficient to limit the world building to when it's actually necessary...

I must work on it ;)

Apart from an old screenshot from when I worked on shadows, my project is divided into several branches, where I separately test (with specific geometries) the different parts of the main project. So for the moment I don't have good-looking screenshots... Perhaps I will start a little blog when the project is more advanced ;)

The graphics engine is coming along well, but there is so much work to do on the world builder (and not so much time)...

That's always the case - more features to add than time permits ;) That screenshot looks pretty good in any case, so keep up the good work! You can always start a development journal here on gamedev.net if it would be useful for your purposes...