Hey, I'm working on a 2D project, but I'm rendering the textures to geometry instead of using the Sprite Manager. The problem is I don't know how to render only part of the image. With the sprite manager, the draw function had a 'SrcRect' parameter. Does anyone know how to do this? I can't find anything online about it =\

Inside your shader, you have a texture sampling function (tex2D in HLSL). For a 2D texture it takes a sampler and a pair of "uv coordinates", which range from 0 to 1 and cover the whole texture. So (0, 0) is the top-left pixel of the texture, and (1, 1) is the bottom-right pixel. These uv coordinates are defined per vertex of your geometry and interpolated across the pixels in between. (You can also define what happens when you sample the texture outside the [0..1] range: for instance it can wrap/tile, clamp, or mirror.)
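Those address modes can be illustrated in plain C++. The function names here (wrapUV, clampUV, mirrorUV) are made up for this sketch; with actual D3D9 hardware you'd just pick the mode via SetSamplerState rather than computing it yourself:

```cpp
#include <cmath>

// Software illustrations of the three common texture address modes.
// On D3D9 you'd select one with e.g.
//   device->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);

// WRAP (tile): keep only the fractional part, so u = 1.25 samples at 0.25.
float wrapUV(float u) {
    return u - std::floor(u);
}

// CLAMP: anything outside [0, 1] sticks to the edge texel.
float clampUV(float u) {
    if (u < 0.0f) return 0.0f;
    if (u > 1.0f) return 1.0f;
    return u;
}

// MIRROR: the texture flips direction every repeat, so u = 1.25 samples at 0.75.
float mirrorUV(float u) {
    float t = std::fabs(u);
    t = t - 2.0f * std::floor(t * 0.5f);   // period-2 sawtooth in [0, 2)
    return (t <= 1.0f) ? t : 2.0f - t;
}
```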

If you only want to sample, say, the top-left quarter of the texture, then you need your uv's to be between 0 and 0.5.

If your geometry were a simple rectangle covering that quarter, you'd give its four vertices these uv's:

  top-left     (0.0, 0.0)
  top-right    (0.5, 0.0)
  bottom-left  (0.0, 0.5)
  bottom-right (0.5, 0.5)
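In code, a quad textured with the top-left quarter might look like this (Vertex is a made-up layout for the sketch, not tied to any particular D3D9 vertex declaration):

```cpp
// A minimal vertex layout for this sketch: position + uv.
struct Vertex {
    float x, y;   // screen/world position
    float u, v;   // texture coordinates
};

// A quad textured with the top-left quarter of its texture:
// uv's run from (0, 0) to (0.5, 0.5) instead of (0, 0) to (1, 1).
Vertex quad[4] = {
    {  0.0f,  0.0f,   0.0f, 0.0f },  // top-left
    { 64.0f,  0.0f,   0.5f, 0.0f },  // top-right
    {  0.0f, 64.0f,   0.0f, 0.5f },  // bottom-left
    { 64.0f, 64.0f,   0.5f, 0.5f },  // bottom-right
};
```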

These coordinates are often set inside your 3D modelling program (you might have heard the term "uv-unwrapping" before; it is a method to "wrap" a model with correct uv coordinates so the texture doesn't get distorted), but you can also set them yourself in code, in your vertex data.

There is no built-in way to just say "render the texture between pixels (20, 13) and (100, 49)" in a shader, because the shader doesn't know or care what size your texture is. If you want to work in pixel coordinates, you have to divide by the texture's width and height yourself.
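That divide is all a SrcRect really is. Here's a sketch of doing it by hand; UVRect and srcRectToUV are made-up helper names for this example, not part of the D3D9 API:

```cpp
// Convert a source rectangle in pixels into uv coordinates by dividing
// by the texture dimensions.
struct UVRect {
    float u0, v0;   // top-left in uv space
    float u1, v1;   // bottom-right in uv space
};

UVRect srcRectToUV(int left, int top, int right, int bottom,
                   int texWidth, int texHeight) {
    UVRect r;
    r.u0 = (float)left   / (float)texWidth;
    r.v0 = (float)top    / (float)texHeight;
    r.u1 = (float)right  / (float)texWidth;
    r.v1 = (float)bottom / (float)texHeight;
    return r;
}
```

For example, the rect (20, 13) to (100, 49) on a 256x256 texture gives uv's from (0.078125, 0.05078125) to (0.390625, 0.19140625), which you'd assign to your quad's corners.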

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

K soo... I would have to create separate geometry for each part of the texture I want to use?

Yes, in general: a 3D model is made of lots of little triangles, and each triangle has its very own "texture chunk" delimited by its vertices' uv coordinates. But this is usually automated through 3D modelling programs like 3ds Max, Maya, etc., so you don't have to worry about it: you just read whatever uv coordinates were provided with your model and use them in your shader.

What are you trying to do exactly?

Well, I'm experimenting with a side scroller, and I wanted to do lighting with it. But I don't know how, or if you even can, do lighting with just the SpriteManager that DirectX 9 gives you. So I started rendering everything to geometry in order to do the lighting. I was just having a hell of a time trying to get only part of the texture so I could draw my animations lol.

If you're doing animations, such as from a sprite sheet, you don't necessarily need to use a different piece of geometry for each sprite. You can modify the texture coordinates that are output from the vertex shader according to some shader constant (which would define the cel of animation you want).
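In plain C++ that per-vertex remapping looks like the function below (the names are made up for this sketch). In an HLSL vertex shader it would be a single line, something like out.uv = in.uv * celScale + celOffset, with celScale and celOffset supplied as shader constants:

```cpp
// Remap a "base" uv in [0..1] (covering one cel) to the uv of a given
// cel in a sprite sheet laid out as a grid of celsPerRow x celRows.
void celUV(float baseU, float baseV,      // uv in [0..1] over one cel
           int celIndex,                  // which frame of animation
           int celsPerRow, int celRows,   // sheet layout
           float& outU, float& outV) {
    float celW = 1.0f / (float)celsPerRow;
    float celH = 1.0f / (float)celRows;
    int col = celIndex % celsPerRow;
    int row = celIndex / celsPerRow;
    // Scale the base uv into the cel's footprint, then offset to its slot.
    outU = (col + baseU) * celW;
    outV = (row + baseV) * celH;
}
```

Because every vertex's base uv goes through the same scale-and-offset, all four corners of the quad move together; you don't have to shift each texCoord individually.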

I can see how you could offset the very first texCoord, but what about the others after it?