Your theory is probably way out compared to the practice. Do you really expect to be able to get 16MB of textures on screen at once in a 2D game? Answer: no, not in a thousand years! The reason we have so much texture ram these days is because in a 3D scene we can easily find ourselves looking at vastly more stuff than a 2D scene. In a 2D scene you're going to have maybe 3-4 layers of textures covering the whole screen - very unlikely to thrash the cache.

Your theory is probably way out compared to the practice. Do you really expect to be able to get 16MB of textures on screen at once in a 2D game?

I should have given more details of how the 2D game shall be: three or four background layers shall scroll constantly and consist of large blocks (256x256 or even 512x512 pixels) so that they can look very interesting. Needless to say, the front layers will have to have an (8bpp) alpha channel. Then there shall be many sprites, again with 8bpp alpha. They shall be animated: on average ~50 frames for one animation. I pack them onto large texture pages, but even if a sprite is on average ~100x100 pixels in size, that leads to ~2 MB for just one animated RGBA sprite. Sigh.

So... a dozen such sprites and several background tiles on screen at once, if they're RGBA: voila, you exceed 16 MB of video RAM within a few seconds.
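The arithmetic above can be checked with a few lines. This is just an illustrative sketch using the figures from the post (50 frames of ~100x100 RGBA, a dozen sprites, four 512x512 layer tiles), not measured numbers:

```java
// Rough VRAM budget check for uncompressed RGBA8 textures.
public class VramBudget {
    // Bytes for one uncompressed RGBA8 texture (4 bytes per pixel).
    static long rgbaBytes(int width, int height) {
        return (long) width * height * 4;
    }

    // One animated sprite: ~50 frames of ~100x100 RGBA pixels.
    static long animatedSpriteBytes() {
        return 50 * rgbaBytes(100, 100);   // 2,000,000 bytes, i.e. ~2 MB
    }

    public static void main(String[] args) {
        long sprite = animatedSpriteBytes();
        // A dozen such sprites plus four 512x512 background tiles:
        long total = 12 * sprite + 4 * rgbaBytes(512, 512);
        System.out.println(sprite + " bytes per sprite, " + total + " bytes total");
        // total comes to ~28 MB - well past a 16 MB card.
    }
}
```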

Since it's constantly scrolling, I'd like to avoid un-/binding new textures during a level. On older 3D cards that's not possible, but so be it. With newer cards compression could help: according to the NVIDIA documentation, compression ratios from 4:1 up to 8:1 shall be possible.
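Those ratios line up with the standard S3TC block rates, assuming DXT1 at 4 bits per pixel and DXT3/DXT5 at 8 bits per pixel (a sketch of the arithmetic, not anything vendor-specific):

```java
// Back-of-envelope S3TC sizes versus uncompressed RGBA8.
public class CompressedSize {
    static long rgba8Bytes(int w, int h) { return (long) w * h * 4; } // 32 bpp
    static long dxt1Bytes(int w, int h)  { return (long) w * h / 2; } // 4 bpp
    static long dxt5Bytes(int w, int h)  { return (long) w * h;     } // 8 bpp

    public static void main(String[] args) {
        long raw = rgba8Bytes(512, 512);  // 1,048,576 bytes
        System.out.println("DXT1 ratio: " + raw / dxt1Bytes(512, 512) + ":1");
        System.out.println("DXT5 ratio: " + raw / dxt5Bytes(512, 512) + ":1");
    }
}
```

Note that DXT1 only carries 1-bit alpha, so the sprites with full 8bpp alpha would need DXT5 and get the 4:1 end of the range.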

Y'know, if you sort all your drawing by texture (which you should be doing anyway) I doubt you'd actually notice that you couldn't fit all your textures into video ram at the same time.

What exactly do you mean by "sort your drawing by texture"? ... I pack all animation frames of one sprite into one (or two or three) large textures, for example 512x512 pixels in size. Is this the kind of sorting you mention?

It just means minimizing calls to glBindTexture() by ordering your sprite drawing to use textures in sequence.
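In other words: group the draw calls so that sprites sharing a texture are drawn back to back, and only rebind when the texture actually changes. A minimal sketch (the `Sprite` type and texture ids are made up for illustration; the commented-out line is where the real `glBindTexture` call would go):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TextureSortedDraw {
    record Sprite(int textureId, float x, float y) {}

    // Draws sprites sorted by texture; returns how many binds were needed.
    static int drawSorted(List<Sprite> sprites) {
        List<Sprite> sorted = new ArrayList<>(sprites);
        sorted.sort(Comparator.comparingInt(Sprite::textureId));
        int binds = 0;
        int bound = -1; // no texture bound yet
        for (Sprite s : sorted) {
            if (s.textureId() != bound) {
                // GL11.glBindTexture(GL11.GL_TEXTURE_2D, s.textureId());
                bound = s.textureId();
                binds++;
            }
            // ... emit the textured quad for s here ...
        }
        return binds;
    }

    public static void main(String[] args) {
        List<Sprite> sprites = List.of(
            new Sprite(2, 0, 0), new Sprite(1, 10, 0),
            new Sprite(2, 20, 0), new Sprite(1, 30, 0));
        // Unsorted this would take 4 binds; sorted it takes 2.
        System.out.println(drawSorted(sprites));
    }
}
```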

Technically glBindTexture can be a very expensive operation, as it can involve uploading a texture from system RAM to VRAM if it's not cached. Typically this happens at most once per texture per frame, of course; after that it's in the cache and tends to get reused. But as the cache is an LRU cache, if you've got 17MB of textures in a single frame and there's only 16MB of free VRAM, you'll overwrite them all every single frame and end up uploading 17MB of texture data every single frame. You know when this happens because the frame rate drops to about 10fps.
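A tiny simulation makes the cliff-edge obvious: with an LRU cache one unit too small for the working set, *every* access misses, while a working set that just fits misses never after warm-up. (Units stand in for MB; this is a sketch of the eviction behaviour, not of any real driver.)

```java
import java.util.LinkedHashSet;

public class LruThrash {
    // Returns the number of uploads (cache misses) in a steady-state frame.
    static int uploadsPerFrame(int cacheSize, int texturesPerFrame) {
        LinkedHashSet<Integer> cache = new LinkedHashSet<>(); // insertion order = LRU order
        int uploads = 0;
        // One warm-up frame, then one measured frame.
        for (int frame = 0; frame < 2; frame++) {
            uploads = 0;
            for (int tex = 0; tex < texturesPerFrame; tex++) {
                if (!cache.remove(tex)) uploads++;      // miss -> upload
                cache.add(tex);                         // mark most recently used
                if (cache.size() > cacheSize)           // evict least recently used
                    cache.remove(cache.iterator().next());
            }
        }
        return uploads;
    }

    public static void main(String[] args) {
        System.out.println(uploadsPerFrame(16, 17)); // all 17 re-uploaded every frame
        System.out.println(uploadsPerFrame(16, 16)); // 0: everything stays resident
    }
}
```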


Cas

Do you know of any good benchmarking apps that would let me try and work out myself why this GF2Go is so incredibly slow for current AF and some JOGL games, but fine for others?

(for instance, survivor runs considerably faster on this than on a WinXP machine with a CPU twice as fast, twice as much RAM, and more graphics RAM but a puny GF2MX - and, as previously mentioned, it runs Quake3 fine)

Obviously there are a wide range of "my [card] is bigger than yours" style things that call themselves "benchmarks" but are really just a series of not-very-smart micro-benchmarks only used by crap lazy "hardware reviewers" who've never written a single line of high performance code in their life and make statements such as "This card gets 7500 winmarks, whereas that one gets 7511, so the second card is obviously much better in this area" (which means: "I don't have the faintest clue what this benchmark really does, but it gives me some numbers to post on my website!").

...but I was kind of hoping there might be some more useful apps - something akin to the Sony PS2 devkits (although much less powerful) which give detailed graphs and stats on how you're using the graphics pipeline etc so you can see if your code is (ab)using the hw. Perhaps something started by Carmack, as a stick to beat hw providers over the head with?

I'm guessing that if there are any such things, Cas (or someone else here) would probably know of them...?

Or...alternatively, if someone has / wants to write a java-based OGL tool to do stuff like this, perhaps as a "perf-test for [JOGL, LWJGL, etc]" I could do lots of testing for you .

You don't have a performance problem; you've got a driver problem. Try opening a 16 bit fullscreen window with no alpha, depth, or stencil requirements, at 800x600. If it doesn't run like the clappers then something in your x config is preventing that mode from getting hardware acceleration.


As I said, survivor (using Xith + JOGL) works fine. It's doing an 800x600 window faster than a considerably faster Win32 PC (GF2MX). I don't have a handy fullscreen app that runs at 800x600 (lots of LWJGL games posted here have half-broken screen-mode selection, and the JOGL games can't do fullscreen, so...).

There isn't ANY config for screen modes on this machine. It's all autosensed by the nv driver, except for the refresh-rate override I mentioned earlier (the refresh rate is locked at 60 Hz in the X config; LWJGL games report this as 0 Hz). If you have a hand-crafted X config that does anything better, send me a copy and I'll analyse it, but I'm just doing everything nv tells you to, and seeing this sometimes-on, sometimes-off behaviour (well, in fact, worse than that - with the Zoltar game I get less than 1 frame every 3 seconds, which I would have thought I could beat in software!)

Also, IIRC, games always report the current mode as hw-accelerated (e.g. I wrote a tiny JOGL app to check which renderer was being used, and it reported hardware rather than software, as expected).

It seems that just going out of "maximized window mode" is enough to bring performance back to roughly where it should be.

Is there a way I could start one of your games (AF, HG) in windowed mode (perhaps a cmd-line switch if I manually download JARs) so I could test this out with LWJGL too? See if the same action has the same effect?

About your texture-animation problem: why not put all the frames for one set of animation on a single texture? I.e. you'd have a run texture and a shooting texture, etc., so that you swap positions for the anim and swap textures when you swap animations. Don't know if that would work, but I thought I'd suggest it.

BTW, I have a related question, since I'm also planning a 2D game with LWJGL. According to the OpenGL tutorials I've read, you're supposed to not use textures above something like 512x512. If that's the case, then how does one go about doing big objects, like a background, or a huge sprite sheet?

About your texture-animation problem, why not put all the frames for one set of animation on a single texture?

I do something similar already: all sprites of one animation cycle are inside one large texture (512x512).
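Addressing an individual frame inside such a page then comes down to index arithmetic. A sketch, assuming fixed 128x128 frames packed left to right, top to bottom in a 4x4 grid (the layout is illustrative, not the poster's actual packing):

```java
import java.util.Arrays;

public class Atlas {
    static final int PAGE = 512, FRAME = 128, PER_ROW = PAGE / FRAME; // 4 per row

    // Texture coordinates {u0, v0, u1, v1} of frame i, in [0, 1].
    static float[] frameUv(int i) {
        int px = (i % PER_ROW) * FRAME;   // pixel x of frame's left edge
        int py = (i / PER_ROW) * FRAME;   // pixel y of frame's top edge
        return new float[] {
            px / (float) PAGE, py / (float) PAGE,
            (px + FRAME) / (float) PAGE, (py + FRAME) / (float) PAGE };
    }

    public static void main(String[] args) {
        // Frame 5 sits in row 1, column 1: u and v both run 0.25 .. 0.5.
        System.out.println(Arrays.toString(frameUv(5)));
    }
}
```

Animating is then just changing the four texcoords per quad - no glBindTexture at all while the animation stays on one page.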

Quote

BTW, I have a related question, since I'm also planning a 2D game with LWJGL. According to the OpenGL tutorials I've read, you're supposed to not use textures above something like 512x512. If that's the case, then how does one go about doing big objects, like a background, or a huge sprite sheet?
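One common answer (not from this thread, so take it as a suggestion): cut the large image into tiles no bigger than the card's limit and draw it as a grid of quads. The sketch below just computes the tile grid; the hypothetical sizes are for illustration:

```java
public class TileGrid {
    // Number of tileSize x tileSize textures needed to cover an image.
    static int tilesNeeded(int imageW, int imageH, int tileSize) {
        int cols = (imageW + tileSize - 1) / tileSize;  // ceiling division
        int rows = (imageH + tileSize - 1) / tileSize;
        return cols * rows;
    }

    public static void main(String[] args) {
        // A 2048x1536 background with a 512x512 limit -> a 4x3 grid of 12 tiles.
        System.out.println(tilesNeeded(2048, 1536, 512));
    }
}
```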

When you draw a (textured) polygon with glVertex2i(..), is the top-right corner's coordinate pair inclusive or exclusive? I couldn't find a hint in the OpenGL red book. Currently I'm treating it as inclusive and it looks OK. So, for example, when a bitmap texture is 100 pixels in size (square) and you want your magical 1:1 texel-to-pixel ratio, I do: glVertex(0, 0), glVertex(100, 0), glVertex(100, 100), glVertex(0, 100) ... and it looks OK. Which needn't mean anything - maybe it actually draws 101 pixels.
