Audio is usually triggered by switches (boolean values indicating that something has occurred, like moving into a new area), so that is pretty easy to do.
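A minimal sketch of that switch-driven approach, assuming a hypothetical `AudioTriggers` class (returning the cue name here rather than actually playing it, so the idea is visible without an audio library):

```java
// Switch-based audio triggering: game logic flips a boolean, and the
// per-frame update plays the cue exactly once. Names are invented.
public class AudioTriggers {
    private boolean enteredNewArea = false;  // switch set by game logic
    private boolean areaCuePlayed = false;

    public void enterArea() { enteredNewArea = true; }

    // Called once per frame; returns the cue to play (a stand-in for
    // something like playSound("area-theme.wav")), or null if nothing fires.
    public String update() {
        if (enteredNewArea && !areaCuePlayed) {
            areaCuePlayed = true;
            return "area-theme.wav";
        }
        return null;
    }
}
```

The second boolean is what makes the switch edge-triggered, so the sound fires once on entry instead of every frame.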

For rendering, I just have each class contain its own rendering definitions; after calling the game processing function, I pass the game object into the Render class, which calls the game class's render method.

For input I just use a couple of boolean switches, which then end up calling the appropriate methods.

On the rendering side, one approach is to have a Render Queue. Your loop is basically doLogic, then render(renderQueue). As you go through the game logic, objects can decide whether they are visible and place themselves on the queue.

When you render, you can sort the objects by render state and transparency and call them back to render themselves.
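A sketch of that queue, assuming invented names (`Renderable`, `renderState`, `isTransparent`); the sort puts opaque objects first and groups by render state to minimise state changes:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Objects placed on the queue during doLogic; names are illustrative.
interface Renderable {
    int renderState();        // e.g. a texture/shader group id
    boolean isTransparent();  // transparent objects are drawn last
    void render();            // object is called back to draw itself
}

class RenderQueue {
    private final List<Renderable> queue = new ArrayList<>();

    // Called during game logic by objects that decided they are visible.
    public void submit(Renderable r) { queue.add(r); }

    public void renderAll() {
        // Opaque before transparent; within each group, sort by render
        // state so similar objects are drawn together.
        queue.sort(Comparator
                .comparing(Renderable::isTransparent)
                .thenComparingInt(Renderable::renderState));
        for (Renderable r : queue) r.render();
        queue.clear();  // the queue is rebuilt every frame
    }
}
```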

I think I've ranted on about separating out the data model and rendering before. Another way to organise things is to keep a model of the actual game world as pure data. Then run through your Renderable (i/f) objects asking them to render themselves. They update themselves based on the data object they represent and decide whether or not to render themselves.

It's worked quite nicely for me in the past, the benefit being the ability to change rendering details without worrying about game logic.
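A sketch of that split, with invented names (`Ship`, `ShipView`): the data class knows nothing about rendering, and the view decides for itself whether to draw. The `render()` here returns a boolean so the decision is visible; a real version would take a `Graphics2D` and return void.

```java
// Pure game data: no rendering code at all.
class Ship {
    double x, y;
    boolean alive = true;
}

// The Renderable interface from the post above, simplified.
interface Renderable {
    boolean render();  // true if the object drew itself this frame
}

// A view object that represents one Ship and renders on its behalf.
class ShipView implements Renderable {
    private final Ship ship;  // the data object this view represents
    ShipView(Ship ship) { this.ship = ship; }

    public boolean render() {
        if (!ship.alive) return false;  // the view decides whether to draw
        // draw the ship sprite at (ship.x, ship.y) here
        return true;
    }
}
```

Swapping rendering details then means swapping view classes; `Ship` and the game logic never change.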

Incidentally, I've always considered audio part of the rendering layer, since you're rendering stuff, it's just not visual. Input is always tricky; there's this whole thing about using a controller interface, but it's never really worked out well for me. I tend to stick it in the main loop, although I normally abstract away how the controls are actually being delivered, i.e. keyboard/joypad/etc.
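A sketch of that delivery abstraction, with invented names (`Controls`, `KeyboardControls`): the main loop reads logical actions, and whether they come from a keyboard or a joypad is hidden behind the interface.

```java
// Logical actions the main loop cares about; delivery is abstracted away.
interface Controls {
    boolean left();
    boolean right();
    boolean fire();
}

// Keyboard-backed implementation; in a real game these flags would be
// flipped by KeyListener events. A joypad version would implement the
// same interface, and the loop wouldn't know the difference.
class KeyboardControls implements Controls {
    boolean leftDown, rightDown, fireDown;
    public boolean left()  { return leftDown; }
    public boolean right() { return rightDown; }
    public boolean fire()  { return fireDown; }
}

class MainLoop {
    // Reads the logical controls once per frame: -1 left, +1 right, 0 none.
    static int steer(Controls c) {
        if (c.left() && !c.right()) return -1;
        if (c.right() && !c.left()) return 1;
        return 0;
    }
}
```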

I have used something like the GAGE sprite interface. I have various sprite classes for all my game objects. My parent sprite class is a composite of a java.awt.geom.Area instance (object geometry for stuff like collision detection) and a renderer instance which the sprite delegates to for rendering. Game object behaviour is determined by the methods in the sprite sub-classes.
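A sketch of that composite, assuming the parent class looks roughly like this (`Sprite` and `SpriteRenderer` are invented names; `java.awt.geom.Area` is real and supports constructive-geometry intersection, which is handy for collision tests):

```java
import java.awt.Graphics2D;
import java.awt.geom.Area;

// The renderer delegate the sprite hands drawing off to.
interface SpriteRenderer {
    void render(Graphics2D g, Sprite s);
}

// Parent sprite: a composite of geometry (for collisions) and a renderer.
class Sprite {
    final Area geometry;            // object geometry, e.g. for collisions
    final SpriteRenderer renderer;  // rendering is delegated entirely

    Sprite(Area geometry, SpriteRenderer renderer) {
        this.geometry = geometry;
        this.renderer = renderer;
    }

    // Area-based collision test: intersect the two geometries and see
    // whether anything is left.
    boolean collidesWith(Sprite other) {
        Area overlap = new Area(geometry);
        overlap.intersect(other.geometry);
        return !overlap.isEmpty();
    }

    void render(Graphics2D g) { renderer.render(g, this); }
}
```

Behaviour then lives in `Sprite` subclasses, as described above, while drawing stays in the delegate.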

I use renderer factories to construct the renderer delegates. At the moment I have two factories, the normal game one and a geometric one. The geometric one renders sprites according to their java.awt.geom.Shape and I use it to debug collision detection.
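A sketch of the two factories, under assumed names (`RendererFactory` etc.): the geometric one draws the sprite's `java.awt.geom.Shape` outline so the exact collision geometry is visible, while the game one would blit the normal artwork (filling the shape stands in for that here).

```java
import java.awt.Graphics2D;
import java.awt.Shape;

// Simplified renderer delegate: draws one sprite's geometry.
interface SpriteRenderer {
    void render(Graphics2D g, Shape geometry);
}

interface RendererFactory {
    SpriteRenderer create(String spriteType);
}

// Normal game factory: would look up the artwork for the sprite type and
// blit it; filling the shape is a stand-in here.
class GameRendererFactory implements RendererFactory {
    public SpriteRenderer create(String spriteType) {
        return (g, shape) -> g.fill(shape);
    }
}

// Debug factory: renders only the collision outline, so mismatches between
// artwork and geometry show up immediately.
class GeometricRendererFactory implements RendererFactory {
    public SpriteRenderer create(String spriteType) {
        return (g, shape) -> g.draw(shape);
    }
}
```

Swapping one factory for the other at construction time toggles the whole game between normal and debug rendering.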

I have no sound currently, and I handle input in the game frame class using boolean switches. Oh, I also went as far as to abstract the rendering loop, so I can easily swap in different frame scheduling algorithms.
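One way that loop abstraction might look, with invented names (`FrameScheduler` and both implementations): the loop stays fixed and only the scheduling policy is swapped.

```java
// The abstracted scheduling policy: how to wait between frames.
interface FrameScheduler {
    void waitForNextFrame(long frameStartNanos);
}

// Fixed-rate policy: sleep out the remainder of each frame's budget.
class FixedRateScheduler implements FrameScheduler {
    private final long frameNanos;
    FixedRateScheduler(int fps) { this.frameNanos = 1_000_000_000L / fps; }

    public void waitForNextFrame(long frameStartNanos) {
        long remaining = frameNanos - (System.nanoTime() - frameStartNanos);
        if (remaining > 0) {
            try {
                Thread.sleep(remaining / 1_000_000L, (int) (remaining % 1_000_000L));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // preserve interrupt status
            }
        }
    }
}

// Uncapped policy: no waiting at all, e.g. for benchmarking.
class UncappedScheduler implements FrameScheduler {
    public void waitForNextFrame(long frameStartNanos) { /* no wait */ }
}

class GameLoop {
    // The loop itself never changes; only the scheduler is swapped.
    static int runFrames(FrameScheduler scheduler, int frames) {
        int rendered = 0;
        for (int i = 0; i < frames; i++) {
            long start = System.nanoTime();
            // doLogic(); render(); would go here
            rendered++;
            scheduler.waitForNextFrame(start);
        }
        return rendered;
    }
}
```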
