So, I've got a dual-core phone, and because of the way Android works it seems I'm forced to run my game in two threads. For the moment, anyway. I might figure out some way around that.

My problem is that the producer-consumer relationship between the logic thread and the rendering thread seems to be very vague. I don't know if it's the Android thread scheduler (which should be just plain ol' Linux thread scheduling) or what, but the actual length of time spent in the meat of the logic thread and the rendering thread can vary by a factor of up to 3 on the same data each frame. The only thing that can slow it down is a synchronized block waiting for a render queue to become available to write to or read from, so I surmise that the way the thread scheduler wakes up from this block is what's causing the latency (after all, it only takes a 30ms delay to effectively double the time any particular unit of work takes!).

Has anybody got any ideas as to how I can get this idea of writing to one queue and reading from another queue to work without the random latencies?
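For what it's worth, one common way to keep the cross-thread hand-off cheap is double buffering: the logic thread fills one command list while the render thread drains the other, and the only contended section is a brief buffer swap. A minimal sketch (the RenderQueue name and its methods are my own invention, not from any post here):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical double-buffered render queue: the logic thread fills one
// buffer while the render thread drains the other. The only synchronized
// section is the brief buffer swap, so neither thread blocks for long.
class RenderQueue<T> {
    private List<T> writeBuffer = new ArrayList<>();
    private List<T> readBuffer = new ArrayList<>();
    private final Object lock = new Object();

    // Called from the logic thread once per command.
    void submit(T command) {
        synchronized (lock) {
            writeBuffer.add(command);
        }
    }

    // Called from the render thread once per frame: swap the buffers
    // under the lock, then drain the read buffer with no lock held.
    List<T> drain() {
        synchronized (lock) {
            List<T> tmp = readBuffer;
            readBuffer = writeBuffer;
            writeBuffer = tmp;
        }
        List<T> out = new ArrayList<>(readBuffer);
        readBuffer.clear();
        return out;
    }
}
```

This doesn't cure scheduler wake-up latency, but it shrinks the synchronized section down to a pointer swap, so a slow wake-up stalls at most one swap rather than a whole frame's worth of queue traffic.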

Shall I just give up on dual-core efficiency and do the logic in the rendering thread? (After all, there aren't that many phones with dual cores.)

It most likely is ... I've tried setting the GL rendering thread and the logic thread to Thread.MAX_PRIORITY, but there still seem to be other things pre-empting them. My suspicions currently fall on the display event loop, sorta like your AWT event thread of old. I was calling display.requestRender() from my logic thread when it was expedient to render a new frame; using RENDERMODE_CONTINUOUSLY, the frame rate appears to be more stable. Investigation continues.

I'm a complete Android n00b as I've said, but maybe you should just let other programs and the AWT-like thread have the second core, and settle for the increased stability? xD It's difficult enough getting good scaling on a quad-core PC!

Right, I have shamefully admitted defeat and moved to a single-threaded rendering-and-logic game loop using continuous rendering. Apart from being vastly less complex, it maintains a far more consistent rendering rate. It also means that my Galaxy S2 is no longer ridiculously fast compared to all the other phones out there, which is probably a good thing when you're a developer. I'm still getting about 2000 sprites before the game drops noticeably below about 28fps, so that'll probably do. I'll aim for 1000 sprites max, more likely about 500, and that should be enough to do some nice games with.
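For reference, a single-threaded logic-plus-render loop of this kind is often written with a fixed logic timestep and an accumulator, so logic stays deterministic while rendering runs once per iteration. A minimal sketch (GameLoop and its fields are hypothetical names; the counters stand in for where updateLogic() and render() calls would go):

```java
// Hypothetical single-threaded game loop: fixed-rate logic ticks,
// one render per iteration, driven by elapsed wall-clock time.
class GameLoop {
    static final long STEP_MS = 16;  // ~60Hz logic tick length
    long accumulatorMs = 0;          // unconsumed elapsed time
    int logicTicks = 0;              // stand-in for updateLogic() calls
    int framesRendered = 0;          // stand-in for render() calls

    // Advance the loop by 'elapsedMs' milliseconds of wall time.
    void frame(long elapsedMs) {
        accumulatorMs += elapsedMs;
        while (accumulatorMs >= STEP_MS) { // catch up logic in fixed steps
            logicTicks++;                  // updateLogic() would go here
            accumulatorMs -= STEP_MS;
        }
        framesRendered++;                  // render() would go here
    }
}
```

Each frame() call gets fed the time since the previous frame; logic catches up in fixed 16 ms steps, so a slow frame produces extra logic ticks rather than slow-motion gameplay.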

It's massively undefined (one of the many glaring holes in the OS docs for Android).

My impression is that it's a fairly naive scheduler, and on any thread context switch(es) it says to itself:

"Hey! You finished! Cool. I'll go do lots and lots and LOTS of OS-stuff now. I don't know much there is to do - I haven't actually been tracking it! HaHa! It'll only take me a few hundred milliseconds to check everything ... just in case... Bye now!"

Yeah.. I've stuck with a single thread this whole time, but use the Java concurrent executor API to kick off longer-running tasks. I have a little framework built up around the executor API to get callbacks and such, i.e. more powerful than AsyncTask and cross-platform. It seems a single-threaded approach works best for the time being, and I easily get 60+ FPS for Q3 levels on Galaxy S2 / Tegra 2+. Cas, it also seems that you may be playing around with the stock GLSurfaceView. I wrote my own "GLSurfaceView" that works with my clock/timing thread, handles GLES context creation, and uses eglSwapBuffers directly. It works well.
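The framework itself isn't shown in the post, but the core idea (submit work to a java.util.concurrent executor and get a callback with the result) can be sketched roughly like this. TaskRunner and its method names are hypothetical; a real game would marshal the callback back onto the game thread rather than run it on the pool thread:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

// Hypothetical sketch of an executor-based task framework: kick off a
// long-running task on a pool thread and invoke a callback with the result.
class TaskRunner {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    <T> Future<?> run(Callable<T> task, Consumer<T> onDone) {
        return pool.submit(() -> {
            try {
                T result = task.call();  // long-running work, off the game loop
                onDone.accept(result);   // a real game would post this back
                                         // to the game thread's own queue
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }

    void shutdown() { pool.shutdown(); }
}
```

Compared with AsyncTask, a plain executor like this has no Android dependency, so the same code runs on desktop Java too, which matches the "cross-platform" point above.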

Yeah.. I'm still on the fence with multiple threads. I know I've got to get there, but for now it seems like a specialized case. For instance, in the NVidia glow ball demo, which looks great, they are spawning separate threads to deal with the cloth / physics simulation, but that is possible because in the demo each tapestry / cloth does not interact with the others. http://www.youtube.com/watch?v=eBvaDtshLY8 I'm first going to take a look at Scala, actors, and message passing in Scala, and evaluate how to proceed. I haven't had a chance to dig into Scala yet, but I gather the flavor of immutable message passing won't be efficient per se on Android. So I foresee some sort of generalized COP (component oriented) message system similar to how I handle input events.

That being said, I don't know how much one can get out of separating basic / general logic and render threads. It doesn't help much in the general case, since unless the rendering thread gets updates it's just rendering duplicate frames. There is much to do in this area, especially for Java oriented frameworks.

Why do people have so much trouble with multithreading? It's not hard to implement at all.

What about network threads? Right now I am doing a game mockup/prototype. I use a thread for rendering (AWT right now, but LWJGL later), a thread for the "game engine", and threads for reading data from socket connections, as it's multiplayer.

Will this not work on Android? Do I need to consider a fully single-threaded flow with non-blocking IO? That would be a PITA.
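Blocking socket reads and a single-threaded game loop do coexist fine: a common pattern (not necessarily what anyone in this thread ships) is a dedicated reader thread that blocks on the socket and hands complete messages to the game thread through a concurrent queue, which the game loop polls once per frame. A sketch, with NetworkReader as a made-up name and a plain Reader standing in for the socket's input stream:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: a dedicated thread blocks on reads and pushes
// complete messages into a lock-free queue; the single game thread
// polls the queue once per frame and never blocks on I/O.
class NetworkReader {
    final ConcurrentLinkedQueue<String> inbox = new ConcurrentLinkedQueue<>();

    // 'in' would wrap socket.getInputStream() in a real game; any Reader
    // works for the sketch. One message per line, for simplicity.
    Thread start(Reader in) {
        Thread t = new Thread(() -> {
            try (BufferedReader reader = new BufferedReader(in)) {
                String line;
                while ((line = reader.readLine()) != null) {
                    inbox.add(line);   // hand off to the game thread
                }
            } catch (IOException e) {
                e.printStackTrace();   // real code would signal a disconnect
            }
        }, "net-reader");
        t.start();
        return t;
    }

    // Called from the game loop each frame; returns null when empty.
    String poll() { return inbox.poll(); }
}
```

The game thread calls poll() in its update step and never blocks; ConcurrentLinkedQueue is lock-free, so the hand-off stays cheap even if the reader thread gets scheduled erratically.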

I have no special talents. I am only passionately curious.--Albert Einstein

My experiments with threading were only concerned with visible jitter - I was rendering things, and the eye is adept at picking up tiny discrepancies in movement that are caused by the frame rate varying. You have to use threads for asynchronous IO, end of story. That is, in fact, the very thing they were designed to do.

Libgdx is massive and fancy pants. I'd use my own SPGL library but it too is massive and fancy pants. However: I think it is time for another rant. <edit>Doh, wrong thread, completely got my wires crossed. My SPGL library predates libgdx by years, and is vastly better for my purposes. Not to mention it only took about 2 weeks to wrangle it on to Android. The process of wrangling it also made it so much nicer that I'm going to backport it back to J2SE.

Why do people have so much trouble with multithreading? It's not hard to implement at all.

Uh, I highly doubt you have an efficient and general purpose framework for separating various logic oriented sub systems from rendering. Working with threads is not difficult on the surface and of course is the essential approach with networking, I/O, or long running processes that can easily be separated from the main update / render loop.

As I mentioned in the particular example case of glow ball NVidia has great multithreading here because the separate cloth / tapestries _do not_ interact with each other. I just recently attended an NVidia presentation at AnDevCon where they showed some very fancy tools for debugging GLES on Android and such; I do believe they will be public soon and are amazing for seeing thread / core utilization and debugging shaders and frame by frame rendering by each step / GL call.

What this example translates to is that it's easy to implement a trivial example in a multithreaded context if you know exactly how things will interact and they don't overlap. It's much harder to create a general purpose framework, without advance knowledge of the task at hand, which can scale from one to X processors.

There is absolutely a reason why Carmack stuck with the single threaded approach until Rage. Even then, if Wikipedia is to be believed, it was likely only in the last ~3-4 years that the switch got traction with id Tech 5, given the WWDC '07 anecdote. http://en.wikipedia.org/wiki/Id_Tech_5 Even with all the neat potential cool stuff id Tech 5 does, it still is a relatively closed system with 100% known sub systems, so the multithreading could be baked into the engine in a relatively specialized manner.

Massive yes, but it is only as fancy pants as you want. If you only want to use the OpenGL abstraction and game loop, you can do that. You are going to run into issue after issue on your own (Android is a f**king minefield), and it is all time wasted not working on your game.

... that may have been the case, but it turns out I've solved them all on my own in 3 weeks, and in the bargain I got to learn about Android and keep all my existing tooling and workflow. The only thing stopping me from writing some games now is the fact that I've only got two hands and have a shedload of Steam stuff to do.

I'm pretty sure it'll misbehave on a couple of devices out there but it's really so brain dead simple, this code, it really shouldn't misbehave. The code itself is near as dammit identical to the code I've used for the last however many years on the desktop too - it's had a lot of lovely testing.

As it happens it's been tested on a pretty wide variety of devices and basically it seems to function fine on all of them, just with rather varying performance depending on the phone in question (obviously!). I'm aiming at sort of mid-range phones available right now, and hoping that by the time I actually release something most people will own such a phone, and get me about 500 sprites a frame to play with.

Very cool. Android caused me to step up my game as things go too. Testing never hurts.

Some of the silliness on Android is, well, just blatant lack of testing on big G's part. I'll throw out some stuff starting with Froyo, though it goes back to the beginning. The "official" Java GL/ES 2.0 bindings in API level 8 are incomplete: code gen was used to create them but was never tested, so some methods have wrong signatures, preventing some VBO usage. Gingerbread for the most part had no new problems for game dev, as far as I'm aware.

Honeycomb (3.0 / 3.1) had a major Java NIO regression where calling duplicate() on a non-ByteBuffer caused an _immutable_ endian swap on the buffer! Way non-kosher, and it broke all my GL code, as I have a little extended NIO API that used duplicate() on a FloatBuffer. That took me 45 hours of debugging to find and then provide big G a test case; I bitched them out over that though. ;P It's the only time I've seen the Android team fix a major regression in ~3 hours after a test case was provided; most items in the bug tracker languish forever.

The last big thing to cause a ~4 day workaround in my efforts (i.e. 2 days to fix, 2 to make clean) is the piss-poor annotation support in Class, likely taken from Harmony; horrible GC-triggering code. All API levels are affected up to Ice Cream Sandwich; I had to implement caching myself. And MotionEvent and such is an abortion of a proper API. So yeah, there is always another big bombshell waiting around the corner. Not exactly an exhaustive list, and of course off topic.

You should be able to get far more than 500 sprites on screen with mid-level hardware. I'm sure the OG Droid can pull that off. Fill rate / overdraw is the biggest issue, etc. Err.. this is not about threading anymore.. ;P

Don't worry, it's still on topic! Indeed, on the Galaxy S2 I managed some daft figure like 2000 sprites @ 60fps, but the problem was that to do it reliably everywhere I had to return to single-threaded code and the lowest common denominator that I wanted to support. 500 sprites is still quite a lot!

Sprites and 3d polys are very much like comparing apples to oranges. Sprites are the worst possible case for 3d cards to render, generally. GPUs are designed to render models super-fast by virtue of shared vertices already being transformed. Furthermore they rarely have to worry about transparency. And of course each sprite needs rotating and scaling individually, probably on the CPU. And so on.
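To make that per-sprite CPU cost concrete, here is a rough sketch of the transform work described: each sprite's four corners get rotated and translated on the CPU before being written into a vertex array for a batched draw. The Sprite class and its fields are illustrative, not from any library mentioned here:

```java
// Hypothetical per-sprite CPU transform: rotate each corner about the
// sprite's centre, then translate, writing x,y pairs into a shared
// vertex array that would be uploaded for a single batched draw call.
class Sprite {
    float x, y;            // centre position
    float halfW, halfH;    // half extents (scale baked in)
    float angle;           // rotation in radians

    Sprite(float x, float y, float halfW, float halfH, float angle) {
        this.x = x; this.y = y;
        this.halfW = halfW; this.halfH = halfH;
        this.angle = angle;
    }

    // Write the four transformed corners (8 floats) starting at 'offset'.
    void writeCorners(float[] verts, int offset) {
        float c = (float) Math.cos(angle);
        float s = (float) Math.sin(angle);
        float[][] corners = {
            {-halfW, -halfH}, {halfW, -halfH},
            {halfW, halfH}, {-halfW, halfH}
        };
        for (int i = 0; i < 4; i++) {
            float cx = corners[i][0], cy = corners[i][1];
            verts[offset + i * 2]     = x + cx * c - cy * s; // rotate, translate
            verts[offset + i * 2 + 1] = y + cx * s + cy * c;
        }
    }
}
```

Multiply that by thousands of sprites per frame and it's clear why sprite throughput on a phone CPU, not the GPU, often becomes the ceiling.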
