I'm trying to get decent speed out of Java2D and it just isn't happening. I don't know if I'm doing it some dumb way or not, but I usually use OpenGL so I haven't worried about it before. This time, though, I want to stick with Java2D if possible.

Here is a simple test app I made that creates a bunch of images and draws them while spinning them around and moving them - basically a simple stress test. But I find that even with very simple images, and not very many of them, I'm getting really bad speed. If I increase the window size to fill my screen, I end up getting only around 20 fps almost no matter what, which is just crazy.

Same results. I changed the target and the fps counter to match as well, obviously (using 1,000,000,000 instead of 1,000). It ended up giving the exact same fps.
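For reference, here's a minimal sketch of the nanosecond-based fps counter being described (the class name, constructor, and method are mine, not from the original code). The only real change from a millisecond timer is comparing elapsed time against 1,000,000,000 instead of 1,000:

```java
// Hypothetical fps counter driven by System.nanoTime(); pass the current
// time in so it can be exercised without a real clock.
public class FpsCounter {
    private static final long ONE_SECOND_NANOS = 1000000000L;
    private long windowStart;
    private int frames;
    private int lastFps;

    public FpsCounter(long startNanos) {
        this.windowStart = startNanos;
    }

    /** Call once per rendered frame, passing System.nanoTime().
     *  Returns the fps measured over the last full second. */
    public int frameRendered(long nowNanos) {
        frames++;
        if (nowNanos - windowStart >= ONE_SECOND_NANOS) {
            lastFps = frames;
            frames = 0;
            windowStart = nowNanos;
        }
        return lastFps;
    }
}
```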

[EDIT] As a note, I just did the same test on OpenGL and when I have 2000 entities on screen at a bigger resolution I still get 50 fps, which is obviously better than Java2D's 20 fps with 20 entities. The OpenGL test was using currentTimeMillis.

[EDIT2] It takes 5,000 entities on OpenGL to match the fps obtained with 20 entities on Java2D. That's 250:1. There's no way that can be right.

Are you using the Java 6u10 runtime? If not, the AffineTransform(s) will be relegating all of your rendering to software.

Other than that, you aren't using a BufferStrategy, so the target surface of all your rendering may not be in graphics memory - again potentially causing all of the rendering to occur in software. Of course this again depends on the Java version - I'm not sure if, under the hood, Canvas has been altered in the most recent JRE releases so that it always uses an accelerated surface for rendering.
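For anyone following along, the canonical BufferStrategy render loop looks roughly like the sketch below (GameCanvas and renderFrame are hypothetical names of mine, not from the original code). The nested loops handle the case where the buffer's video memory is lost or restored mid-frame:

```java
import java.awt.Canvas;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferStrategy;

public class GameCanvas extends Canvas {

    /** Call once per frame, after the Canvas is displayable
     *  (i.e. added to a visible Frame). */
    public void renderOneFrame() {
        if (getBufferStrategy() == null) {
            createBufferStrategy(2);               // request double buffering
        }
        BufferStrategy strategy = getBufferStrategy();
        do {
            do {
                Graphics2D g = (Graphics2D) strategy.getDrawGraphics();
                try {
                    renderFrame(g, getWidth(), getHeight());
                } finally {
                    g.dispose();                   // always release the Graphics
                }
            } while (strategy.contentsRestored()); // buffer recreated mid-frame
            strategy.show();                       // flip/blit to the screen
        } while (strategy.contentsLost());         // buffer memory was lost
    }

    /** The actual drawing, kept separate so it works on any Graphics2D. */
    void renderFrame(Graphics2D g, int width, int height) {
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, width, height);           // clear the frame
        g.setColor(Color.WHITE);
        g.fillRect(1, 1, 2, 2);                    // stand-in for sprite drawing
    }
}
```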

Thirdly, because the "RCR" image is obtained directly from ImageIO it will not be accelerated in older JREs (releases prior to 1.5, or maybe 1.4.2, I forget when that flaw was fixed). In these JREs you need to copy it onto a compatible image for it to be eligible for acceleration.
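The compatible-image copy mentioned above looks roughly like this (a sketch; the class and method names are mine). It copies the ImageIO result into an image whose pixel layout matches the screen, which is what makes it eligible for caching in video memory:

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public final class ImageLoader {

    /** Copy an image (e.g. straight from ImageIO.read) into a
     *  screen-compatible image so it can be accelerated. */
    public static BufferedImage toCompatibleImage(BufferedImage src) {
        if (GraphicsEnvironment.isHeadless()) {
            return src;                       // no screen to be compatible with
        }
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage copy = gc.createCompatibleImage(
                src.getWidth(), src.getHeight(), Transparency.TRANSLUCENT);
        Graphics2D g = copy.createGraphics();
        g.drawImage(src, 0, 0, null);         // one-time copy at load time
        g.dispose();
        return copy;
    }
}
```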

You need to use VolatileImages instead of BufferedImages in your sprite class - that's your problem. BufferedImages just don't cut it any more; for some reason they usually end up un-accelerated.
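A sketch of how that might look (names are mine; the headless fallback is my addition so the snippet runs anywhere). The important part is that a VolatileImage can lose its contents at any moment - e.g. when another application grabs the video memory - so it has to be validated before each use:

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
import java.awt.image.VolatileImage;

public final class Sprites {

    /** VRAM-backed sprite surface; falls back to a plain BufferedImage
     *  when there is no display (my addition, for portability). */
    public static Image createSpriteSurface(int w, int h) {
        if (GraphicsEnvironment.isHeadless()) {
            return new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        }
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        return gc.createCompatibleVolatileImage(w, h, Transparency.TRANSLUCENT);
    }

    /** Call before drawing the VolatileImage each frame; repaints its
     *  contents if the surface was lost or restored. */
    public static void ensureValid(VolatileImage img, GraphicsConfiguration gc,
                                   Runnable repaintContents) {
        int code = img.validate(gc);
        if (code != VolatileImage.IMAGE_OK || img.contentsLost()) {
            // IMAGE_RESTORED: surface recreated, contents gone.
            // IMAGE_INCOMPATIBLE: you would normally also recreate the
            // image for the new GraphicsConfiguration.
            repaintContents.run();
        }
    }
}
```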

Like Abuse said, if you run your program with the trace option (-Dsun.java2d.trace=count) you'll see lots of non-D3D calls, which means it's using software loops instead of hardware.

When I use Java2D to do something fast, I just limit myself to just doing simple things.

I think your main problem is using AffineTransform to scale (scaling using drawImage usually works quite fast), and Graphics.rotate is also slowing you down. Without those things I still got a steady 60fps instead of a slideshow when rendering 2000 sprites (I also changed your run() method as Daniel_F suggested). For rotation I use a class that pre-rotates an image.
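The pre-rotation idea mentioned above could be sketched like this (class and method names are mine): render N rotated copies once at load time, then pick the nearest frame at draw time instead of calling Graphics2D.rotate() every frame.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class PreRotatedSprite {
    private final BufferedImage[] frames;

    /** Pre-render `steps` rotated copies of the source image. */
    public PreRotatedSprite(BufferedImage src, int steps) {
        frames = new BufferedImage[steps];
        int w = src.getWidth(), h = src.getHeight();
        for (int i = 0; i < steps; i++) {
            BufferedImage frame = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = frame.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                               RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            // rotate about the image centre; pay the cost once, at load time
            g.rotate(2 * Math.PI * i / steps, w / 2.0, h / 2.0);
            g.drawImage(src, 0, 0, null);
            g.dispose();
            frames[i] = frame;
        }
    }

    /** Nearest pre-rendered frame for an angle in radians. */
    public BufferedImage frameFor(double radians) {
        double turns = radians / (2 * Math.PI);
        int i = (int) Math.round(turns * frames.length) % frames.length;
        if (i < 0) i += frames.length;               // handle negative angles
        return frames[i];
    }
}
```

The trade-off is memory for speed: with, say, 36 steps you get 10-degree granularity, which is usually plenty for small sprites.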

And the trouble is this is all entirely random advice when applied to any particular combination of OS and VM :/ It's no wonder we made LWJGL.

Cas

Indeed, IMO Java2D was a poorly thought out API from the beginning. The numerous attempts (between 1.3 & 1.6) to graft it on top of different h/w accelerated native libraries across multiple platforms have left much of its functionality unreliable due to this fragmentation.

While each release has for the most part conformed to the formal specifications set down by the API, the most important aspect of a graphics rendering API - performance - has no formal specification, and consequently has not been properly managed.

I wonder - do any common development languages impose performance specifications upon formal design interfaces? I presume this is a far greater consideration for real-time systems?

The only one that springs to mind is the specification for the C++ standard library. The containers and algorithms all have minimum big-O performance specified (e.g. random access to a vector is guaranteed to be O(1)). This leads to the interesting side effect that while it's possible to implement a (say) std::map with whatever data structure / algorithm the implementer chooses, the performance restrictions usually mean there's a canonical data structure that everyone uses (like a red-black tree).

I'm not sure how well big-O notation would work for a graphics api though.

Even OpenGL has no actual requirement for performance. It just so happens you can more or less rely on most of this basic stuff without worrying about it.

Cas

It's been one of the major gripes recently, though (especially amongst the opengl.org community). It might be nice and whizzy for basic stuff, but when you start getting into some of the more advanced extensions it can still be something of a performance lottery, it seems.

Looks like putting in a buffer strategy is what's really going to make the big difference (I can't actually test that at the moment), because I already tried all of the other suggestions at one iteration or another - including volatile images, turning off rotation, turning off the affine transform, etc. - without much luck.

In the past I've only used buffer strategies for full-screen modes; I didn't even know it made sense to use them in Swing components and applets.

BufferStrategies use VolatileImages under the hood, except that BufferStrategies can also do pointer flipping (which is faster) when in full-screen mode. But you should still use VolatileImages for your sprites.

To rotate, scale and all that, just change the Graphics2D's AffineTransform and then paint your VolatileImage. That way everything should be accelerated, provided you've got Java 6u10 and a non-Intel video card.
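A minimal sketch of that pattern (the helper class and its parameters are mine): set the Graphics2D transform for one sprite, draw it, then restore the old transform so the next sprite starts clean.

```java
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.geom.AffineTransform;

public final class SpriteRenderer {

    /** Draw one sprite at (x, y) with the given rotation (radians)
     *  and uniform scale, leaving the Graphics2D unchanged afterwards. */
    public static void draw(Graphics2D g, Image sprite,
                            double x, double y, double angle, double scale) {
        AffineTransform saved = g.getTransform();   // remember the old state
        g.translate(x, y);
        g.rotate(angle);
        g.scale(scale, scale);
        g.drawImage(sprite, 0, 0, null);
        g.setTransform(saved);                      // restore for the next sprite
    }
}
```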

See this super-useful thread for the latest tips on using Java2D - Kirill G of Substance fame talks to Dmitri Trembovetski, our hero who's making Java2D better every day:

Also, does anyone know a way to just reduce the resolution in a window? I want the resolution to be, say, 300 x 300 while the window is 900 x 900; it should just draw all the pixels at 3x normal size. I'm not talking about a way of simulating this (drawing everything bigger) but actually doing it.

Of course, if we're only talking images here, you can simply draw the 300x300 image into a 900x900 image once, and draw the 900x900 image from then on (sort of like the intermediate-image technique). However, if you want everything to scale in real time, you'll just have to scale on the fly.
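The real-time variant of that idea could be sketched as follows (names are mine): render the whole scene into a small off-screen image each frame, then blit it to the window with a single scaled drawImage call.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public final class LowResScaler {

    /** Scale a low-resolution frame up by an integer factor,
     *  e.g. 300x300 -> 900x900 with factor 3. */
    public static BufferedImage upscale(BufferedImage lowRes, int factor) {
        BufferedImage out = new BufferedImage(
                lowRes.getWidth() * factor, lowRes.getHeight() * factor,
                BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        // Nearest-neighbour keeps the chunky-pixel look instead of blurring.
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
        g.drawImage(lowRes, 0, 0, out.getWidth(), out.getHeight(), null);
        g.dispose();
        return out;
    }
}
```

In a real game loop you would draw straight onto the window's Graphics2D with a scaled drawImage rather than allocating a new image per frame; returning a new image here just keeps the sketch self-contained.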

Oh OK, well that explains things - I just assumed you were using Windows. In that case using the OpenGL pipeline should work OK, but it got a lot better in Java 6, so it's a pity there's no Java 6 for Mac yet.

I only have Java 5, unfortunately. That could potentially be the problem, but Sun is slow about releasing new VMs for Mac.

It is not Sun's problem, and it is not Sun's JVM on the Mac! It is Apple's JVM, and there is already a JVM 1.6 for the Mac, but only for Mac OS X 10.5 (software update 1). It has nothing to do with Sun, as Apple has always refused to let Sun write a JVM for the Mac!
