Is it a good idea to use just dynamic vertex buffers, and almost never static ones?

I'm talking about having something like 2 dynamic vertex buffers, always adding the vertices to those 2 buffers, rendering them, then when needed putting other vertices in those buffers, rendering again... you get the idea.

From a GPU point of view, it's a pretty bad idea. For most GPUs, reading from a dynamic vertex buffer will be slower than reading from a static vertex buffer, since dynamic vertex buffers need to be placed in CPU-accessible memory. From a CPU point of view it might let you reduce draw calls quite a bit, which can remove a lot of CPU overhead, but of course you need to spend some CPU time copying the vertex data into the dynamic buffer as well (you may also need to transform the vertex position/normal/tangent data, depending on how you set things up). In general, instancing will be a better way of reducing draw calls, if you can make use of it.
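To make the trade-off concrete, here is a minimal sketch of the CPU side of this approach. It is not D3D code: a `std::vector` stands in for a dynamic vertex buffer that would be filled once per frame (e.g. with a Lock/DISCARD pattern), and the names are mine, not from the thread.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A toy vertex; a real engine would match its vertex declaration.
struct Vertex { float x, y, z, u, v; };

// Append several small meshes into one shared buffer. One draw call can
// then cover what would otherwise be one call per mesh. The insert() is
// the per-frame CPU copy cost the answer above mentions.
std::size_t batchInto(std::vector<Vertex>& sharedBuffer,
                      const std::vector<std::vector<Vertex>>& meshes)
{
    std::size_t meshCount = 0;
    for (const auto& mesh : meshes) {
        sharedBuffer.insert(sharedBuffer.end(), mesh.begin(), mesh.end());
        ++meshCount;
    }
    // Draw calls saved compared to drawing each mesh separately.
    return meshCount > 0 ? meshCount - 1 : 0;
}
```

The point of the sketch: the draw-call saving is real, but so is the copy, and it happens every frame the buffer is refilled.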

Well, this whole idea is from a game engine book. The author says that rendering more than 100 vertices at a time is better than rendering 2-3 at a time. So he created a manager that holds a number of dynamic vertex buffers, and fills each one like this: every vertex that uses texture X, no matter which model it comes from, goes into dynamic buffer X; every vertex that uses texture Z, no matter which model it comes from, goes into dynamic buffer Z. Now, the book is a little old, but it says something like this: dynamic buffers are placed in AGP RAM, which gives faster transfers between normal RAM and VRAM. Is this a good idea? I was kind of shocked by the idea of using lots of dynamic buffers. You can also use static buffers of course (he provides a function for that), but he is very proud of this concept of organizing vertices in dynamic buffers based on the texture they need. Please express an opinion about this.
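For readers who haven't seen the book, here is a hedged sketch of the manager as I understand it from the description above (the class and names are mine, not the author's). Vertices are bucketed by the texture they use, one buffer per texture, so each texture can be drawn with a single call:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

struct Vertex { float x, y, z, u, v; };
using TextureId = int;

class BatchManager {
public:
    // Any model's vertices that use `tex` land in that texture's bucket,
    // regardless of which model they came from.
    void submit(TextureId tex, const std::vector<Vertex>& verts) {
        auto& bucket = buckets_[tex];
        bucket.insert(bucket.end(), verts.begin(), verts.end());
    }
    // One draw call per texture instead of one per model.
    std::size_t drawCallCount() const { return buckets_.size(); }
    std::size_t vertexCount(TextureId tex) const {
        auto it = buckets_.find(tex);
        return it == buckets_.end() ? 0 : it->second.size();
    }
    // Refilled next frame, analogous to locking with DISCARD.
    void clear() { buckets_.clear(); }
private:
    std::map<TextureId, std::vector<Vertex>> buckets_;
};
```

Three models sharing two textures collapse into two draw calls, which is the win the author is after; the answers below discuss what it costs.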

BTW: I know the concert of agp ram...is outdated,but is he still right about the current pcs?

Well, he's correct that rendering a large number of vertices per draw call is better than rendering a small number, but you'll get more mileage by ordering your data at load time (or - even better - in your content creation pipeline) so that you can achieve this with static buffers. And MJP is absolutely correct that you can burn more CPU than you save by doing too much work to order stuff at runtime; generally you should be aiming to get your runtime work as low as possible, preferably just setting states and issuing draw calls (which is not always achievable - e.g. some particle-system approaches - but should be possible for ~90% of what you see on screen).
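The load-time alternative can be sketched very simply: sort your draw items by texture (or, in a real engine, by a full render-state key) once when loading, so adjacent items share state and you keep static buffers. This is illustrative only, with names I've made up:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct DrawItem { std::string texture; int meshId; };

// Sort the submission order by texture once at load time, then count how
// many texture changes the sorted order needs at runtime. Fewer changes
// means fewer state switches between draw calls, with no per-frame copy.
std::size_t sortAndCountTextureChanges(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.texture < b.texture;
              });
    std::size_t changes = 0;
    for (std::size_t i = 0; i < items.size(); ++i)
        if (i == 0 || items[i].texture != items[i - 1].texture)
            ++changes;
    return changes;
}
```

An interleaved submission like stone, wood, stone, wood needs four texture binds; sorted, it needs two, and all the work happened before the first frame.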

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

Anyway, big question:
Can you give me some examples of when I should use a dynamic buffer? I thought I should use it for animated models. Would that be a good idea?
Because there are certain types of animation. I mean, I could make a cube animate like this: put a translation in its world transform, so the vertex shader would use the world transform to animate it. Then there is absolutely no point in using a dynamic vertex buffer.
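That cube example is worth spelling out, because it shows why the buffer can stay static: the vertices themselves never change, only the world transform does. Here is a tiny sketch where plain C++ stands in for what the vertex shader computes per vertex (a translation-only transform, to keep it short):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<float, 3>;

// In the real pipeline this would be a 4x4 world matrix set as a shader
// constant; a bare translation is enough to show the idea.
struct WorldTransform { Vec3 translation; };

// What the vertex shader effectively does each frame: transform the
// unchanged model-space position by the current world transform.
Vec3 applyWorld(const Vec3& v, const WorldTransform& w) {
    return { v[0] + w.translation[0],
             v[1] + w.translation[1],
             v[2] + w.translation[2] };
}
```

To animate, you only update the `WorldTransform` between draws; the cube's vertex buffer is written once and never locked again, which is exactly the case where a static buffer wins.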

For systems that procedurally generate a lot of tiny meshes, like dynamic 2D GUI elements, then yes, a system like this can help you batch them all together.

For general rendering though (e.g. with static models of 100+ polygons), it's a bad idea. If it's possible to use the vertex shader for animation, then you should prefer that over the CPU and dynamic buffers.

An alternative mode of operation is to create a library of dynamic textures and build texture atlases, so that all the vertices belonging to one model can access all their different textures from one atlas.
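The atlas approach boils down to a UV remap done once at build time: each model's original [0,1] UVs are squeezed into the sub-rectangle its texture occupies inside the atlas. A minimal sketch (rectangle values and names are illustrative, not from the thread):

```cpp
#include <cassert>

struct UV { float u, v; };

// The sub-rectangle a texture occupies inside the atlas, in atlas space.
struct AtlasRect { float u0, v0, u1, v1; };

// Remap a [0,1] texture coordinate into the atlas sub-rectangle. Done
// once offline, it lets many models share one texture bind at runtime.
UV remapToAtlas(UV uv, const AtlasRect& r) {
    return { r.u0 + uv.u * (r.u1 - r.u0),
             r.v0 + uv.v * (r.v1 - r.v0) };
}
```

After remapping, all those models can be drawn with a single texture set, which removes the very state change the book's per-texture dynamic buffers were built around.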

I'm also curious about what the trade-offs are. I think it's cool he got it to work, but how does he handle other operations that break a frame's worth of rendering into multiple draw calls? Stuff like different vertex declarations between two models, different render states, different shader constant registers, etc.? Unless you have one uber vertex declaration, one uber vertex/pixel shader, and do instancing on everything so that you don't need to change a world matrix using SetVertexShaderConstantF between each new model draw, I don't see how he can even win more than once or twice per frame.

Well, I actually saw 10 pages later that he suggests using static buffers as the primary resource. I have no idea what kind of vertex or pixel shaders are used; I'm not even halfway through the book.

All dynamic buffers will need to be refilled upon losing the device, i.e. a full-screen app that you Alt-Tab away from. Until the device has been reset and the dynamic buffers refilled, the renderer will not render anything.

Typically, dynamic buffers tend to be used for things like particle effects, fonts, and any of the ID3DX objects.
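Particles are the canonical example of data that genuinely needs a dynamic buffer: the positions change every frame, so the buffer has to be rewritten every frame anyway. A hedged sketch, with a `std::vector` again standing in for a Lock(DISCARD)-style dynamic buffer:

```cpp
#include <cassert>
#include <vector>

struct Particle { float x, y, z; float vx, vy, vz; };

// Advance the simulation and rewrite the vertex data in one pass.
// clear() plays the role of locking the buffer with a DISCARD flag:
// last frame's contents are thrown away, not updated in place.
void stepAndFill(std::vector<Particle>& particles, float dt,
                 std::vector<float>& vertexBuffer)
{
    vertexBuffer.clear();
    for (auto& p : particles) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        vertexBuffer.push_back(p.x);
        vertexBuffer.push_back(p.y);
        vertexBuffer.push_back(p.z);
    }
}
```

Since every vertex is new every frame, there is nothing a static buffer could cache here, which is what separates this case from the static-model rendering discussed above.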