Programming a Game Interface

Ok, so as I continue to plow through all the things I need to do with my project, the next bit of business is an interface. I've noticed some screenshots around here of some very, very nice interfaces, and after a bit of research, I learned that they were indeed created by hand. So my next mini-project is to create an interface I can use. I'd like to model it after Apple's structure: Responder, Window, View, Control.

The only problem is that I'm having trouble not only wrapping my head around how the polymorphism will be structured, but also figuring out where to begin. I'm calling out specifically to those of you with experience in this area, but by all means, if others have suggestions, please feel free to post them. So I was thinking something along the lines of:

Responder (no real clue what goes here or what it does)
Window (extends Responder; multiple instances of windows)
View (these get placed inside windows; does it extend the Window class?)
Control (this extends View; it gets more specific, and um... yea)
Button (just an example, but this extends Control and looks like a button)

I've read the docs on all of these, but I'm still a little fuzzy. I'd like an explanation of how these interact, and what the best way of implementing my own version of them would be. For instance, how do I process mouse actions? Do I just listen for a mouse action and send it to the windows?

I've read this a few times, but once again, it's vague and, on top of that, DirectX-specific. I'm just really stuck on where to begin and how to handle all the events in a streamlined way.

Any help is appreciated, I need a serious nudge in the right direction.

Then you can have the arrow keys move the cursor to the center of the next and previous elements in an ordered list.
You can define different lists for different menus too; it should be really easy this way.
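To make that concrete, here's a minimal sketch of the idea ( names are illustrative, not from any real API ): the menu keeps its elements in an ordered list and the arrow keys just move a focus index forward or backward, wrapping at the ends, and the cursor then snaps to the focused element's center.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Each menu item records where its center is on screen.
struct MenuItem { float centerX, centerY; };

class Menu {
public:
    explicit Menu(std::vector<MenuItem> items) : items_(std::move(items)) {}

    // Advance focus by +1 (down/right) or -1 (up/left), wrapping around.
    void moveFocus(int direction) {
        if (items_.empty()) return;
        std::size_t n = items_.size();
        focus_ = (focus_ + n + direction) % n;
    }

    // The cursor would be snapped to focused().centerX / centerY.
    const MenuItem& focused() const { return items_[focus_]; }
    std::size_t focusIndex() const { return focus_; }

private:
    std::vector<MenuItem> items_;
    std::size_t focus_ = 0;
};
```

A different Menu instance per screen gives you the "different lists for different menus" part for free.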

I wrote a more or less "full" GUI API for my engine, it only took me about a month. My GUI's "full" in the sense that it has layout management, event notifications/delegation, nested widgets, etc. You can see a screenshot of an example GUI here: http://zakariya.net/shamyl/Trees/Screens...-02-20.png

Now, here's the approach I took ( in C++ ) -- and I intentionally kept it simple.

Widget -- the base class for things laid out on screen. Widgets have a basic set of methods for drawing, resizing, adding children, recursively drawing them, etc.

RootWidget -- inherits from Widget and acts like a "window" drawn on screen in an orthographic GL sense. All the basic stuff like mouse movement, clicks, etc. is sent to the root widget by the application and distributed recursively to child widgets.

Notification -- an object which contains event information, such as a message, a source, a list of targets, etc.

NotificationSource -- a mixin class for objects which want to send notifications. By being a notification source an object just calls something like 'post( Notification( XYZ ))'

NotificationListener -- a pure virtual interface for objects which want to receive notifications.

My GUI controls, such as sliders, buttons, etc. all inherit from Widget and NotificationSource so they can distribute messages when they need to. Controller classes inherit from NotificationListener so they can respond to messages. It's fairly Java-like, in that you might do something like:
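For example, a minimal sketch of that pattern ( the names here are illustrative, not the actual API ): a NotificationSource posts Notification objects to registered NotificationListeners, Java-listener style.

```cpp
#include <string>
#include <vector>

// Carries event information: a message, the sender, etc.
struct Notification {
    std::string message;
    void* source = nullptr;
};

// Pure virtual interface for anything that wants to receive notifications.
class NotificationListener {
public:
    virtual ~NotificationListener() = default;
    virtual void notify(const Notification& n) = 0;
};

// Mixin for anything that wants to send notifications.
class NotificationSource {
public:
    void addListener(NotificationListener* l) { listeners_.push_back(l); }
    void post(const Notification& n) {
        for (NotificationListener* l : listeners_) l->notify(n);
    }
private:
    std::vector<NotificationListener*> listeners_;
};

// A control posts; a controller listens.
class Button : public NotificationSource {
public:
    void click() { post(Notification{"buttonPressed", this}); }
};

class Controller : public NotificationListener {
public:
    void notify(const Notification& n) override { lastMessage = n.message; }
    std::string lastMessage;
};
```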

It's easy to over-engineer the GUI for a game. If you look at WoW, well, they *need* a hard-core GUI, but for most situations you probably don't need that much. I tried to write the minimum system for what I need.

My recommendation is to scale your aspirations down a bit. It is often enough just to have a big list of widgets that respond to mouse-over and mouse-click -- and perhaps drag if you're going for fancy. Most GUI problems appear when you want auto-layout and/or nested controls. If you're just looking to display some info, buttons, and sliders, then go with a base "Widget" class. It should have Render, Update, Click and MouseOver methods, but it doesn't need anything more. Make a "Window" or "Root" object that just holds a list of such widgets. When the mouse is moved or clicked, the root object just walks the list of widgets and compares the mouse position with the widgets' hit rectangles.
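A minimal sketch of that flat approach ( names assumed, not from any particular engine ): a Widget with a hit rectangle, and a Root that walks its list on each event and dispatches to whichever widget is under the mouse.

```cpp
#include <vector>

struct Rect {
    float x, y, w, h;
    bool contains(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

class Widget {
public:
    explicit Widget(Rect r) : bounds(r) {}
    virtual ~Widget() = default;
    virtual void render() {}          // draw yourself
    virtual void update(float /*dt*/) {}
    virtual void click() { clicked = true; }
    virtual void mouseOver() { hovered = true; }
    Rect bounds;
    bool clicked = false, hovered = false;
};

class Root {
public:
    void add(Widget* w) { widgets_.push_back(w); }

    // Walk the flat list and forward the event to whichever widget is hit.
    void onClick(float mx, float my) {
        for (Widget* w : widgets_)
            if (w->bounds.contains(mx, my)) w->click();
    }
    void onMouseMove(float mx, float my) {
        for (Widget* w : widgets_)
            if (w->bounds.contains(mx, my)) w->mouseOver();
    }
private:
    std::vector<Widget*> widgets_;
};
```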

Zakariya's GUI is super nice, but possibly overkill for a smaller project.

TomorrowPlusX: I was hoping you'd respond. I really admire the work you've done, and I was kind of hoping to model something after it. Thanks for sharing your approach. I saw screenshots of stuff you'd done and was just going crazy -- "how did he do that?! Did he actually do that himself? That looks so clean!" So yea, I really like what you've done.

Fenris, thanks for the reply. I think I'm going to buckle down and go all out as TomorrowPlusX did. Even if I end up not needing it, it will A) be fun and challenging to create, and B) I might use it later on. I've been looking forward to this part of my project.

Now, do you use image files, or do you generate the appearance at run time?

Ok, so you've got an instance of your root window/widget, and you just send all the events you want handled to that? It figures out what to do with the rest; it doesn't "really" listen for the events itself -- you tell it when to act.

I'm so excited about this! Ok thanks so much everyone for your help/replies. I'm going to get started.

Fenris *is* right -- if this is the first time you've done this, start small. Keep it simple; don't bite off more than you can chew! I did what I did because I've written custom layout systems for several toolkits, and in general, I've been doing complicated widget stuff for about 10 years...

That said, regarding the look and feel, it's a little of both. Everything's drawn procedurally except the rounded corners, which are a transparent PNG. I *thought* about making the look and feel dynamic, but then decided it might be a big waste of time. Right now, if I want to change the look and feel ( aside from colors and fonts ), I have to change the display() methods in my widgets. Too bad...

Quote:Ok, so you've got an instance of your root window/widget, and you just send all the events you want handled to that? It figures out what to do with the rest; it doesn't "really" listen for the events itself -- you tell it when to act.

And the HUD class -- which maintains a list of root widgets -- distributes those events to the "active" root widget, which then propagates the events to children. The important part is that the HUD translates those events into mouseEnter, mouseLeave, etc. It also makes the mouse coordinates local to the coordinate system of the widget. And so on... and so on...
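Here's a rough sketch of that event translation ( names are my own, not the actual HUD code ): the dispatcher tracks which widget the mouse was over last, turns raw mouse moves into mouseEnter/mouseLeave, and makes the coordinates local to the widget under the cursor.

```cpp
#include <vector>

struct HudWidget {
    float x, y, w, h;            // position and size in screen space
    bool inside = false;         // is the mouse currently over this widget?
    float localX = 0, localY = 0;

    bool hit(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
    void mouseEnter() { inside = true; }
    void mouseLeave() { inside = false; }
    void mouseMoved(float lx, float ly) { localX = lx; localY = ly; }
};

class Hud {
public:
    void add(HudWidget* w) { widgets_.push_back(w); }

    // Translate a raw mouse-move into enter/leave/move events, with the
    // position converted into the hit widget's local coordinate system.
    void onMouseMove(float mx, float my) {
        HudWidget* over = nullptr;
        for (HudWidget* w : widgets_)
            if (w->hit(mx, my)) { over = w; break; }

        if (over != current_) {
            if (current_) current_->mouseLeave();
            if (over) over->mouseEnter();
            current_ = over;
        }
        if (over) over->mouseMoved(mx - over->x, my - over->y);
    }

private:
    std::vector<HudWidget*> widgets_;
    HudWidget* current_ = nullptr;   // widget the mouse was last over
};
```

A real version would recurse into children the same way, subtracting each parent's origin as it descends.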

I would start, like Fenris says, with just a flat list of children and don't bother with layout management. Layout management is *hard*. I spent a while making a good system, and in the end I only made three layouts: VBox, HBox, and Grid. They all use one generic single-dimensional layout algorithm, so at least the dirty stuff is in one place. Good layout management is hard because widgets need to know how big they want to be, and when you've got nesting that gets non-trivial quickly.

I'd recommend an approach like the one Cocoa takes, where instead of formal nesting and recursive layout, you just give widgets edge-affinity: on resize, an edge follows left | right | top | bottom | center.
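A sketch of that edge-affinity idea ( Cocoa-style "springs and struts"; names assumed, and shown in one dimension for brevity ): each widget records which edge it follows, and on a parent resize its position is recomputed from that edge.

```cpp
// Which edge the widget's position follows when the parent resizes.
enum class Affinity { Left, Right, Top, Bottom, CenterX };

struct Anchored {
    Affinity affinity;
    float x, w;   // 1-D for brevity; the same idea applies vertically

    // Recompute x when the parent's width changes from oldW to newW.
    void parentResized(float oldW, float newW) {
        switch (affinity) {
            case Affinity::Left:    break;                         // x unchanged
            case Affinity::Right:   x += newW - oldW; break;       // keep right margin
            case Affinity::CenterX: x += (newW - oldW) / 2; break; // stay centered
            default: break; // Top/Bottom affect y, omitted here
        }
    }
};
```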

Hmmm... ok, thanks for the advice, but I have confidence in my abilities. Say, TomorrowPlusX, I'm trying to think of the most efficient way to do the texturing. I was thinking about loading the textures from files for each type of element, but then I wondered how to do it without loading a set of textures for EACH element, which would be a huge waste of memory. I was just wondering what your approach was. I'm assuming you used a polymorphic approach to the actual interface design.

(I may not be TomorrowPlusX, but hey, I can still answer this one.) In the game I'm currently making, it's small enough that I can just have global variables for the textures I use more than once. The next game I make, however, will be much larger. For that, I'll use a binary tree to store textures (which I've already written, and though I haven't tested it with textures, I know it works in general).

Basically, I store textures keyed on two things: the file path and the dimension. Within each node, I then store the texture object. The way I'll use this tree is: whenever you create an object, try to insert whatever textures it needs into the tree. If a texture is already there, just return the existing texture object. If it isn't there yet, create a new texture object, insert it, and return it. I also keep a count of the number of times a texture is inserted, incremented whenever something inserts it and decremented whenever it's removed (i.e., when an object that uses it is destroyed). When this count reaches 0, the texture is removed from the tree.

My main reason for using a binary tree was that it can hold an arbitrary number of objects with minimal effort. Well, that and the fact that it's what I thought of first. The dimension key was more of a just-in-case sort of thing; I didn't know if a situation would ever arise where I needed the same texture at multiple dimensions. It'll probably never come up, but oh well. It only matters when the names are the same.
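The scheme above can be sketched like this ( my own reconstruction with assumed names, using std::map -- itself a binary search tree -- rather than a hand-rolled one; Texture is a stand-in for a real GL texture object ):

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <utility>

struct Texture { int id; };

class TextureCache {
public:
    using Key = std::pair<std::string, int>;  // (file path, dimension)

    // Return the cached texture if present, else "load" a new one.
    // Every acquire bumps the reference count.
    Texture* acquire(const std::string& path, int dimension) {
        Key key{path, dimension};
        auto it = entries_.find(key);
        if (it == entries_.end())
            it = entries_.emplace(key, Entry{Texture{nextId_++}, 0}).first;
        ++it->second.refs;
        return &it->second.texture;
    }

    // Drop a reference; the texture is evicted when the count hits zero.
    void release(const std::string& path, int dimension) {
        auto it = entries_.find({path, dimension});
        if (it == entries_.end()) return;
        if (--it->second.refs == 0) entries_.erase(it);
    }

    std::size_t size() const { return entries_.size(); }

private:
    struct Entry { Texture texture; int refs; };
    std::map<Key, Entry> entries_;
    int nextId_ = 1;  // stands in for glGenTextures + file loading
};
```

Because std::map nodes don't move on insert or erase, the pointers handed out stay valid for as long as the texture is held.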

I have to ask: why are you modelling it after Apple's design instead of just using Apple's design?

I've had great success using Cocoa as my UI. In BitRacer last year I used OpenGL views that rendered a button and handled that code, composed with images and text in IB. This let me position the images, buttons, and text really easily and quickly.

With Tracktor Beam this year I just use Cocoa straight off; it's ridiculously easy to create a window in IB, add an image for the background, and make it transparent. It looks great. For the UI I just wrote a 15-line subclass of NSButton that renders the buttons in a style that fits in with the rest of the interface. All the logic and links are still handled in Cocoa and IB. Why replicate all of Apple's work?

Seriously, 15 lines of code and maybe an hour or so messing around in IB to get the interface right is all Tracktor Beam's interface took.