The balls can overlap; the transparent parts of the image must actually be transparent (e.g. alpha blending is one way to achieve that); and the balls must not leave the screen (they should bounce off the edges). When adding balls, just add them at a random position, moving in a random direction at a random speed.
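The rules above can be sketched in a few lines. This is just an illustrative shape, not any entrant's code; the names (Ball, SCREEN_W, BALL_SIZE) and the screen dimensions are made up for the example.

```java
import java.util.Random;

// Minimal sketch of the ball rules: spawn at a random position with a
// random direction and speed, then bounce off the screen edges on update.
public class Ball {
    static final int SCREEN_W = 800, SCREEN_H = 600, BALL_SIZE = 32;
    static final Random rnd = new Random();

    float x, y, vx, vy;

    Ball() {
        x = rnd.nextFloat() * (SCREEN_W - BALL_SIZE);
        y = rnd.nextFloat() * (SCREEN_H - BALL_SIZE);
        double angle = rnd.nextDouble() * 2 * Math.PI;  // random direction
        float speed = 1f + rnd.nextFloat() * 4f;        // random speed
        vx = (float) (Math.cos(angle) * speed);
        vy = (float) (Math.sin(angle) * speed);
    }

    void update() {
        x += vx;
        y += vy;
        // Bounce off the edges so the ball never leaves the screen.
        if (x < 0)                    { x = 0;                    vx = -vx; }
        if (x > SCREEN_W - BALL_SIZE) { x = SCREEN_W - BALL_SIZE; vx = -vx; }
        if (y < 0)                    { y = 0;                    vy = -vy; }
        if (y > SCREEN_H - BALL_SIZE) { y = SCREEN_H - BALL_SIZE; vy = -vy; }
    }
}
```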

Wouldn't it make more sense to time how long it takes to render 10,000 frames with a fixed number of sprites? Then it doesn't matter what machine you run it on, and you can compare the engines directly.
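Kev's suggestion is easy to harness: render a fixed number of frames and report the elapsed time. A rough sketch, where `renderFrame()` stands in for whatever the engine under test actually does:

```java
// Time a fixed number of frames at a fixed sprite count, so the elapsed
// time is directly comparable between engines regardless of the machine.
public class FixedFrameBenchmark {
    static final int FRAMES = 10_000;

    interface Renderer { void renderFrame(); }

    static long timeFrames(Renderer r) {
        long start = System.nanoTime();  // monotonic clock, safe for timing
        for (int i = 0; i < FRAMES; i++) {
            r.renderFrame();
        }
        return (System.nanoTime() - start) / 1_000_000;  // elapsed ms
    }
}
```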

Kev

Smooth movement and a constant frame rate, including the user's perception of these, should be an important part of the first benchmark. I guess we can use the above method in the next round, when things like logic speed and the engine/library's ability to manage many sprites quickly in different situations could be tested. If the results are really close between some libraries/engines, we'll run as many further benchmarks as it takes until there is a single winner.

Of course, it's going to be slightly tricky to show off the awesome animation / special-effect interleaving capabilities of the SPGL sprite engine, but meh.

Any Photoshop/GIMP/etc. experts around who can help clean this 60-frame GIF into a nice usable PNG spritesheet for the contest, hopefully with an alpha background that doesn't have horrid dots around the globe? Or does anyone have another nice round 60-frame animatable image?

With a lot of sprites on the screen, this skews the results toward the most highly optimized collision-detection algorithm... which has nothing to do with the performance of a sprite engine.

Ah, good point. So how do you suggest we handle this? No collision at all, or a standard algorithm that everyone must use? Or something else? One concern is not to let the contest tests get too low-level (i.e. just forcing everyone to write pure Java/OpenGL optimised directly for the test), but to leave some parts where the actual libraries/engines built on top of those can compete on efficiency.
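If a standard algorithm were mandated, the obvious candidate for round sprites is plain circle-circle overlap: compare the squared distance between centres against the squared sum of radii, which avoids the sqrt. A minimal sketch (class and method names are just illustrative):

```java
// Standard circle-circle collision test every entrant could share, so
// the benchmark measures the sprite engine rather than collision cleverness.
public final class CircleCollision {
    static boolean overlaps(float x1, float y1, float r1,
                            float x2, float y2, float r2) {
        float dx = x2 - x1, dy = y2 - y1;
        float rSum = r1 + r2;
        // Compare squared distances to avoid the sqrt call.
        return dx * dx + dy * dy < rSum * rSum;
    }
}
```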

If you want competitive performance, the only option is OpenGL anyway.

Besides that, I think what we really want is something that does a little more than sprite rendering; say, it has to support multiple layers. Your engine might be able to analyze a scene and determine that the 'terrain layer' is mostly static, and bake it into a single texture. Obviously, every once in a while the terrain texture will change, invalidating any clever optimizations.

I think the best way to turn this into a contest is this:

code a rolling demo, where every sprite has a deterministic position (transformation-matrix?) for each frame.
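The key property of the rolling demo is that each sprite's position is a pure function of (spriteId, frameNumber), so every engine renders exactly the same scene and nothing depends on wall-clock time or update order. A sketch of what that function could look like (the circular-orbit formula is just an arbitrary deterministic example, not a proposed standard):

```java
// Deterministic rolling-demo positions: same inputs always give the same
// scene, so different engines can be compared frame-for-frame.
public final class RollingDemo {
    static final int SCREEN_W = 800, SCREEN_H = 600;

    // Each sprite follows its own circular orbit around the screen centre.
    static float[] position(int spriteId, int frame) {
        double phase = spriteId * 0.7;            // per-sprite offset
        double t = frame * 0.01 + phase;
        float radius = 50 + (spriteId % 200);
        float x = (float) (SCREEN_W / 2 + Math.cos(t) * radius);
        float y = (float) (SCREEN_H / 2 + Math.sin(t) * radius);
        return new float[] { x, y };
    }
}
```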

Just to get things rolling, let's stick with the first benchmark suggestion for the first run, mainly since it's simple and doesn't require more than a few minutes of coding to implement. We can just include an option to turn off collision detection.

Yeah, so, even the simple benchmark rules are pretty much useless. How big should the viewport be? (fill-rate). Should blending be enabled or not? Should stuff rotate/scale? Riven's suggestion makes the most sense. Provide a simple testbed driver and let people hook it up to their stuff.

Collision detection is quite easy with circular sprites. Why not make it really difficult with an irregularly shaped sprite and per-pixel collision detection? One might be able to do something with stencils. It would certainly make things interesting.

Edit: Found some examples, but apparently the performance is poor, although I guess that's relative to a rectangular bounding box.
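One CPU-side alternative to the stencil idea: precompute a bitmask per sprite row from the image's alpha channel, then AND the overlapping rows of two sprites. This sketch assumes sprites up to 64 pixels wide so a single `long` covers a row; names and layout are illustrative:

```java
// Per-pixel collision via row bitmasks built from the alpha channel.
public final class PixelMask {
    final long[] rows;   // bit set where the pixel is opaque
    final int w, h;

    PixelMask(int[] argb, int w, int h) {
        this.w = w; this.h = h;
        rows = new long[h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if ((argb[y * w + x] >>> 24) != 0)  // alpha > 0
                    rows[y] |= 1L << x;
    }

    // a at (ax,ay), b at (bx,by); true if any opaque pixels overlap.
    static boolean collide(PixelMask a, int ax, int ay,
                           PixelMask b, int bx, int by) {
        int dx = bx - ax;
        if (dx >= 64 || dx <= -64) return false;  // no horizontal overlap
        int top = Math.max(ay, by), bottom = Math.min(ay + a.h, by + b.h);
        for (int y = top; y < bottom; y++) {
            long rowA = a.rows[y - ay];
            // Shift b's row into a's coordinate frame before comparing.
            long rowB = dx >= 0 ? b.rows[y - by] << dx : b.rows[y - by] >>> -dx;
            if ((rowA & rowB) != 0) return true;
        }
        return false;
    }
}
```

Building the masks once at load time makes each collision test just a few shifts and ANDs per overlapping row.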

It was just a quick mock-up, so it's likely not optimal; it's mostly immediate-mode OpenGL calls, so it doesn't even touch the fancier things OpenGL makes possible.

Use the 1-9 keys to change the number of balls. Use the + and - keys to keep adding/removing balls (100 at a time) until the FPS reaches about 60 and adding any more balls pushes it below that; this is the maximum the code can manage on your computer. Use the V key to enable/disable VSync.

Just for the hell of it, and since CommanderKeith asked earlier about the bubblemark stuff, an Applet version can be found here. None of the other browser techs on the bubblemark page can even touch this sort of raw performance.

On my machine it can draw 28,000 balls before it starts to drop below 60fps. I think the test might be too weak for modern computers.

I still think this microbenchmark is flawed. We should do what Riven said, and also target a sprite count instead of an FPS count. I also believe that a specialized renderer for the task at hand could outperform libgdx, but that wouldn't be a generic solution usable in a game, I'd say (and if it were, I'd like to integrate it into libgdx).

Thanks, yep, that's a feather in the cap for LWJGL and Java.

One thing about that applet which is a bit weird, though, is that the sprites don't appear to move smoothly. I'm testing on a computer at uni that I don't have admin access to, so I can't provide the system specs, sorry. I'll test at home and see what it's like.

It depends on the GPU. The mobile Nvidia crap has some very strange performance characteristics. Apple and Kappa also posted some of the results on IRC. GPUs were 8800 GT and 9600 GT, showing as big a gap as the tests by Nate and myself.

This laptop's only got Intel embedded graphics: libgdx: 4000ish, Slick: 3500ish. My old Mac PowerBook G4 was even slower, even though it has a proper GPU (can't remember which offhand).

Edit: I've been thinking about how I'd write a fast sprite library. Initial thoughts centred around vertex arrays, then moved to VBOs. However, if the demo moves every sprite a bit every frame, then I'd only use each VBO once, which seems pointless. Maybe I could reduce the amount of data to transfer with a custom shader that takes a parameter list of x,y values. I'd still need to supply this per vertex, which would be an overhead if I used textured quads. Maybe an indexed parameter list. Maybe look at glDrawPixels instead, although most cards are optimised for 3D functions, so textured quads would probably be faster.
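One common answer to the "use each VBO once" worry is to embrace it: rebuild one big vertex array each frame (two triangles per sprite, x/y plus u/v) and upload it in a single call, e.g. glBufferData with GL_STREAM_DRAW. A sketch of the geometry-building half, with the GL upload omitted so the code stays self-contained (names like `SpriteBatch` are made up for the example):

```java
// Build an interleaved (x, y, u, v) vertex array for a batch of sprites,
// ready to stream to the GPU in one upload per frame.
public final class SpriteBatch {
    static final int FLOATS_PER_SPRITE = 6 * 4;  // 6 verts * (x,y,u,v)

    static float[] buildVertices(float[] xs, float[] ys, float size) {
        float[] v = new float[xs.length * FLOATS_PER_SPRITE];
        int i = 0;
        for (int s = 0; s < xs.length; s++) {
            float x0 = xs[s], y0 = ys[s], x1 = x0 + size, y1 = y0 + size;
            // Triangle 1
            i = put(v, i, x0, y0, 0, 0);
            i = put(v, i, x1, y0, 1, 0);
            i = put(v, i, x1, y1, 1, 1);
            // Triangle 2
            i = put(v, i, x0, y0, 0, 0);
            i = put(v, i, x1, y1, 1, 1);
            i = put(v, i, x0, y1, 0, 1);
        }
        return v;
    }

    private static int put(float[] v, int i, float x, float y, float u, float t) {
        v[i++] = x; v[i++] = y; v[i++] = u; v[i++] = t;
        return i;
    }
}
```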

Maybe a custom shader that takes the velocity and the time of the last position update as a parameter list, and gets the current time from somewhere (another parameter list, or a write to a sub-range of this one). The shader then calculates position(clip space) = position(object space) + parameter(velocity) * (parameter(current time) - parameter(vertex timestamp)). Then I'd only need to send the time each frame, and update sub-parts of the VBO and parameter list where a sprite had changed velocity. In an ideal world I would send a delta time and have the shader write the updates directly back into the VBO, but I don't think I can do that directly; might be able to cheat around it.
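The extrapolation the proposed shader would perform can be written out on the CPU to make it concrete: each vertex stores its position at some timestamp plus a velocity, and only the current time needs uploading per frame. A sketch of that formula (names are illustrative):

```java
// position(now) = position(at timestamp) + velocity * (now - timestamp)
// This is the per-vertex maths the proposed shader would run on the GPU.
public final class VelocityExtrapolation {
    static float positionAt(float posAtStamp, float velocity,
                            float currentTime, float stampTime) {
        return posAtStamp + velocity * (currentTime - stampTime);
    }
}
```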

Trouble is, I'm not sure whether the bottleneck will be fill rate or CPU-to-GPU vertex transfer (which will be a lot with a silly number of sprites). I've only occasionally used OpenGL, and only in immediate mode, so I don't really know what would perform best.
