2 Answers

I believe 32 is the maximum number of textures that can be bound currently. As far as I can tell, even the 8800 series had 32 texture units.

As far as I know, for OpenGL 4.x support you will need a Fermi or newer NVIDIA card (or a corresponding AMD card); the higher-end models all seem to have 32 units, while the lowest-end cards (the GT 430, for example) have 16.
However, AMD spec sheets list numbers like 80 or 128 texture units, but only 32 color ROP units, which seem to have remained constant across generations.

The GTX 480, on the other hand, is listed with 60 texture units and 48 ROP units, while lower-end cards like the GT 430 reportedly have only 16 texture units and 4 ROP units.
So on the whole I'm not really convinced either of those is the number you are actually looking for.

You can check the number of texture units available for non-fixed-function pipeline rendering with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &texture_units);, though, so if you have access to some diverse hardware you could check for yourself.
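For instance, a minimal sketch of that check (assuming a current OpenGL context has already been created with GLFW, SDL, or similar, and that a loader such as GLEW exposes the enum):

    // Minimal sketch: query the fragment-stage texture unit limit.
    // Assumes a current OpenGL context and a loader such as GLEW.
    #include <GL/glew.h>
    #include <cstdio>

    void print_texture_unit_limit()
    {
        GLint texture_units = 0;
        // Maximum texture image units the fragment shader can access;
        // guaranteed to be at least 16 on GL 3.x-class hardware.
        glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &texture_units);
        std::printf("Fragment shader texture units: %d\n", texture_units);
    }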

PS: AMD and NVIDIA have recently introduced "bindless textures" (AMD has a different name for it), which allow you to use large numbers of textures without binding them to texture units; at the moment this is only available in OpenGL.
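For reference, the cross-vendor form of this eventually became the GL_ARB_bindless_texture extension (NVIDIA's original is GL_NV_bindless_texture). A rough sketch of its use, where tex is an assumed existing texture object, program an assumed linked shader program with a hypothetical bindless sampler uniform u_tex, and extension support has already been verified:

    // Rough sketch of ARB_bindless_texture; 'tex', 'program' and the
    // uniform name 'u_tex' are assumptions, not fixed API names.
    GLuint64 handle = glGetTextureHandleARB(tex);

    // A handle must be made resident before shaders may sample it.
    glMakeTextureHandleResidentARB(handle);

    // Pass the 64-bit handle to the shader instead of binding the
    // texture to a unit; in GLSL the sampler is declared as
    // "layout(bindless_sampler) uniform sampler2D u_tex;".
    glUseProgram(program);
    glUniformHandleui64ARB(glGetUniformLocation(program, "u_tex"), handle);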

Following that, the real answer is something like 'you have as much as the hardware can offer'. Or, to word it differently, everything should run as fast as it can (and in OpenGL that covers almost every field).
– Darkwings Jun 3 '12 at 12:13

@Darkwings I suppose so, though binding more textures than the hardware can handle will usually lead to an error or the driver timing out. I also misread: he said shader model 4+, not OpenGL 4+. Looking at the first DX10 cards (the X2900 and 8800 families), they all seem to have at least 16 texture units.
– melak47 Jun 3 '12 at 12:29

I was just underlining the fact that since it can be parameterized, it should be. Even if the game were to crawl on older cards, it shouldn't be something hard-coded but a 'suggested requirement'.
– Darkwings Jun 3 '12 at 14:14

@Darkwings I don't think parameterizing the number of textures used in your shader code is that easy. Sure, you could maybe drop some textures for reduced quality and a simplified effect: say, drop the normal and specular color maps and stuff a single specular level and bump channel into some other textures' channels, or drop them altogether. At some point you might even have to rely on multi-pass effects if you can't make do with the number of texture units, so if you can count on the target hardware having at least 16 units, that should simplify things a bit.
– melak47 Jun 3 '12 at 16:43

Of course it could even be necessary to skip entire effects; that would make the game less eye-catching, but not less playable. Having a decent video configuration option in the menu is not really optional (unless you're developing for consoles).
– Darkwings Jun 3 '12 at 16:47

The number of textures that can be bound in OpenGL is not 32 or 16. It is not what you get with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &texture_units); either: that query retrieves the number of textures that can be accessed by the fragment shader alone.

So there are two limits: the textures-per-stage, and the textures-total-bound.

OpenGL 3.x defines the minimum per-stage limit to be 16, so hardware cannot have fewer than 16 textures-per-stage. It can have a higher limit, but you know you get at least 16. 3.x defines the minimum for textures-total-bound as 48, aka 16 * 3 stages (vertex, geometry, fragment). Similarly, GL 4.x defines the numbers to be 16 textures-per-stage and 80 textures-total-bound (ie: 16 * 5 stages, adding the two tessellation stages).

Again, these are the minimum numbers. Hardware can (and does) vary.
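To see what a given card actually offers, you can query each per-stage limit as well as the combined total; a small sketch, assuming a current GL 4.x context:

    // Query each per-stage limit plus the combined total
    // (the tessellation enums require a GL 4.x context).
    GLint vert, tesc, tese, geom, frag, combined;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS,          &vert);
    glGetIntegerv(GL_MAX_TESS_CONTROL_TEXTURE_IMAGE_UNITS,    &tesc);
    glGetIntegerv(GL_MAX_TESS_EVALUATION_TEXTURE_IMAGE_UNITS, &tese);
    glGetIntegerv(GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS,        &geom);
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS,                 &frag);
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS,        &combined);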

As for specific hardware, you can expect any DX10-class hardware to match these numbers. DX11-class hardware has some variance; NVIDIA (GeForce 4xx+) and higher-end AMD chips (aka GCN cores) may have more than the 16-per-stage.