For those that don’t know, the two characters on the couch of today’s comic are Gabe and Tycho of Penny Arcade, possibly the most popular web-comic around.

I am an avid gamer and have been since the days of Space Invaders. I took a beginner’s class in Japanese just so I could understand some of the words in the imported games. When, on my 17th straight hour, a character in Metal Gear Solid 2 said, ‘haven’t you been playing long enough’, I was too delirious to recognize it was a pre-programmed script.

My tastes in games have always been quite broad, spanning both console and PC and encompassing most genres. Everything from Massively Multiplayer Online Role-Playing Games (MMORPG) to sports games has sucked my life away at some point.

All of this background isn’t just so I could bond with you, as much as I’m sure I like you. I’m simply giving some context before any gamers come out accusing me of being a h4+3R and not L337 enough.

Videogames have, like most applications, been rather hit and miss with their interfaces. By ‘interface’ I mean everything about the game, including, to some degree, the machine it’s played on. The menu system, the mapping of buttons to actions, the intuitiveness of the in-game interface, and so on all require attention. With all these factors to take into consideration, coupled with the need for overall appeal and challenge to the gamer, creating a usable game becomes a non-trivial task.

Yet most gaming companies seem to be lagging behind when it comes to hiring HCI specialists. Sure, console games are especially well known for quality control, because crashing on a 12-year-old’s Playstation is probably bad marketing, but when it comes to investment in specialists, even large companies haven’t caught on. Microsoft has a usability department for games, which it also loans to the companies for which MS is publishing. Other than this, I’m not aware at this time of any companies that have dedicated usability or HCI staff: EA, Blizzard, and Sony among them.

Why the slow uptake? Traditionally, the games industry has been on the front lines of software innovation and even development. EA Sports has got the regular releases of its franchise titles down to a science (although conversely, Valve has gotten vapourware down to a science). I have a few theories:

Challenge the Gamer: Making something easy to use is almost contrary to what games are about. The key is in recognizing which parts are supposed to actually challenge players and which parts should support them in overcoming those challenges. E.g., rotating blocks in Tetris vs. removing lines in Tetris.

Masochistic Gamers: I speak for myself when I say I sometimes revel in the challenge of making something work. ‘I figured out this ultra-complex menu system!’ With a community that is willing to put up with so much, why bother making life easier?

Pride (and Prejudice): In the gaming industry, you have graphic designers, modelers, game designers and UI programmers all vying for a piece of the UI. Handing off the ownership to one person or one department would require a fairly drastic shift in a lot of people’s mindset.

Don’t Need a Specialist: The industry has recognized that using an army of teenagers to test their game is no longer sufficient. A knowledgeable QA department with full testing suites has become the norm. With HCI, however, there’s still the belief that the UI programmer or the graphic designer can design it as part of their job. While I don’t doubt many are quite competent at this, many games on the market prove otherwise.

Games Still Sell: Everquest, upon original release, had possibly one of the worst user interfaces I’d encountered in a high-profile game. Windows wouldn’t resize, hotkeys were unintuitive and arbitrary, and icons were ambiguous. The interface was so bad I had to stop playing out of frustration. They’ve since revamped the UI, I’m told, but it took them about a year and I’m not sure what the new one looks like. The problem is that Everquest is still the most popular MMORPG in existence. It still makes money every month. If Everquest can be one of the most popular games ever with such a poor interface, what’s the business case?

Patch the Game: Forget the interface, a lot of companies are now releasing incomplete features to their customers, spawning an age of ‘paid beta testing’ where you buy the game only to find you’re playing a beta version of the product until they release a patch next month.

These points are not meant to imply that all games have bad UI. Many of them are excellent, through a combined use of beta testing and smart game designers. The Sims, for example, employed context-sensitive radial menus perfectly. Blizzard listened to its customers and made adjustments to the ever-popular Warcraft 3. However, these cases are the exception, not the norm, and for games on consoles, where patches are not always possible, fixing after the fact isn’t always even an option.

Ultimately, videogames are businesses, too. As long as games keep selling, there’s little reason to change tack. As long as customers keep supporting the sale of unfinished and unusable products, the games will keep selling and the cycle will continue.

Hi KC — just wanted to say that on Mozilla Firebird this page is now very difficult to read (for me anyway). The combination of the tight font and no spacing between paragraphs makes it quite difficult. It was easier to read in the email I got notifying of this week’s update, which can’t be good!

Great comic this week, btw … and I think I’ll go back to my email to read the article with less eye-strain…

Kevin is making the argument that some games have bad UIs and some have good ones, but that overall, game UIs are somehow inferior to office-productivity or other work-related apps.

To me, inferior is taken to mean ‘a UI not like other apps in terms of consistency, ease of use, or some other metric.’

Okay, that is certainly true, but it’s entirely rational when you consider the equivalent world that games live in. Imagine your favorite shareware program; it runs nicely on Windows but not on Mac. Fine. The point is that the Windows platform (and Windows programmers) has been around essentially since ‘95. (I’m saying if you could program Win 95, you could reasonably be expected to program in Win XP; we could extend this back to 3.1, but it doesn’t matter for the purpose of this discussion.)

The point is that the platform has been around for 10 years. What video game platform has been around that long? They all change about every two years don’t they? Imagine now that you are a programmer trying to write a spreadsheet, but in addition to the program, you also have to write the platform, or you have to use a platform written by other people who don’t write games and you are constrained by their system. In this case, I’m guessing everyone would be using a different UI because they would effectively be writing their own GUI.

In fact, when I used to write games, I had a book on how to design a proper GUI … not how to put buttons and crap on a screen, HOW TO WRITE THE SOURCE CODE to, say, move a cursor around a screen and then bring up a pop-up menu. This was how you did UI back then, and this is the tradition that games come from. Any old-time programmer is familiar with a project completely contained in two files: gui.c and guts.c; I’m sure you can guess what each file contains.

So to me, the current state isn’t a problem but more of an artifact of how games grew. The question is whether this is a good or bad thing, and I think we need to consider an important question first.

Are game users in the same class as typical HCI users in the first place? Why do we think what works for office people works for gaming people? We know people hate reading manuals, yet game manuals are (or at least, were, from my dinosaur age of game playing) beautiful pieces of art designed to immerse you into the world of the game. (Perhaps this was because the graphics sucked ass back then, but I doubt it because they only seemed to suck ass from the perspective of the present. Back then, my little ASCII character that I could move around in a 2D space, composed of two sprites was really, utterly neat and unlike anything else you could experience.)

Support for this claim of difference comes from friends I have who go out and buy additional manuals and strategy guides for the games they play. These are the same people whose clocks still blink 12:00 because they won’t read one paragraph of the manual, but they’ll spend weeks reading online articles and books learning about, say, a new firing technique. (In fact, the strategy guides often seem to have supplanted manuals… maybe that was intentional from the game designers… they can sell more things that way.)

So, coming to a point: perhaps there is a difference because gamers are a different breed of users. They are passionate about their software and actually WANT to use it, as opposed to the worker forced to use theirs.

Finally, Kevin implies that all games should have the same GUI, or the same set of GUI principles. This to me seems counter to the whole game culture. We play games to leave reality and enter another world. Having an interface that works just like this world would, to me, always be a reminder that I was playing a game, drawing me out of the immersive experience. Imagine yourself on the Enterprise and having it look like Windows XP. It just doesn’t work.

However, having what is basically a new system to learn and deal with AS IS, I think, is immersive. You have to deal with it; you can’t resize windows, etc., because in this world, they don’t believe in resizable windows, or in this world, they don’t have HCI researchers dictating to them how to create their tools.

I know, it’s just a lame rationalization for crappy UIs, but it is one that will ensure I’ll be happy with my next cool game with a miserable UI.

Besides, hard-core gamers all know the best kinds of interfaces are those like in Half-Life/CS, i.e., COMPLETELY CONFIGURABLE and minimalistic. When I play CS, *every* button on the keyboard, keypad, and 7-button mouse is used during the game. No game designer could ever figure out the best way for everyone to lay out the keys, so already we see one area where games are actually MORE sophisticated than productivity software… they are generally more customizable in their interfaces.

Why the difference? Because gamers CARE about their program enough to figure out how to customize it. Most people don’t care enough about Word or Powerpoint to customize it, even if you tell them that it will make their lives better, work easier, etc.
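That rebinding culture is visible right down in the engine: Half-Life and CS read their controls from a plain-text config, and any action can be bound to any key. A sketch of what a fragment of such a config.cfg looks like; the specific key choices here are just an illustration, not anyone’s recommended layout:

```
// hypothetical config.cfg fragment (Quake-engine syntax)
bind "w" "+forward"
bind "s" "+back"
bind "a" "+moveleft"
bind "d" "+moveright"
bind "SPACE" "+jump"
bind "CTRL" "+duck"
bind "MOUSE1" "+attack"
sensitivity "2.5"
```

Because every control is just a text line, players trade whole layouts with each other, something almost unheard of in productivity software.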

You bring up a lot of interesting points. I’ll try to address them one at a time:

1. I never said office software was BETTER than games in HCI though I suppose it could be inferred based on a lack of HCI in games.

2. Standardization of game interface: I also don’t imply all games should have the same GUI. However, there are certainly cases where this is valid. Sports games, for example, are easy for me to pick up because they generally keep the same controls. Sequels should also keep controls consistent if applicable. We also see games of similar genres use similar UIs to minimize the ramp up. I’m not saying ALL games should follow a set of standards but there is certainly room. As always, it depends on the game and the task.

3. Game Users vs “Typical” users: I agree, they are NOT the same style of users. In HCI, however, we focus on user centred design. This focus means that if the users are different, so are the solutions.

4. Customizable controls: On consoles, customizing controls is much more difficult and also infrequently used by gamers. Moreover, not all gamers customize controls, and a large subset are casual gamers. Also, default controls are still important, as they dictate the learning curve for new genres. Finally, WHAT functionality to expose to users isn’t always obvious.

I want to give some examples of what I see HCI contributing to games. The crux of my stance is that the gameplay can be great but needs the support of solid interaction. Here are some questions that HCI might answer:

“What do I automate?”
“Should I force users to move EACH unit in Warcraft?”
“Should I let users hotkey a command to call for help in Counterstrike?”
“What data do I show in Baldur’s Gate? Do I show ALL the stats all the time? If not, what do I show and how?”

Admittedly, these decisions are closely tied with gameplay and the game designer’s domain. This tie is also why game designers are typically the decision makers in this area. However, the game designer is not the only user. HCI provides a tool for the designer to learn through the users what works and what doesn’t before too much is invested.

“Admittedly, these decisions are closely tied with gameplay and the game designer’s domain. This tie is also why game designers are typically the decision makers in this area. However, the game designer is not the only user. HCI provides a tool for the designer to learn through the users what works and what doesn’t before too much is invested.”

I see room for HCI in video games, but I definitely have to say a lot of it is tied to the gameplay. I would be very skeptical of an HCI consultant who tells me he can help me design a better interface but has only worked in the “enterprise” or “office apps” domain and, when asked what games he loves to play, replies with “Minesweeper” or “Solitaire.”

If I was going to talk to/hire an HCI person, I would like one that ideally has played tons of games and loves playing games. One thing to keep in mind is that at the better game companies, game designers have usually played *A LOT* of different games and, very importantly, subconsciously or not, analyze them, picking the good and the bad apart. I would dare to say even a reasonably good game designer will probably be able to at least rip apart a bad interface and talk about better solutions for it.

As a side comment, I would say the list of questions you give seems very good for HCI to help with (particularly Baldur’s Gate, which is by nature more statistics-heavy), but “Should I force users to move EACH unit in Warcraft?” I would put in the realm of a game design decision.

That, to me, points out what kind of game you are making more than an HCI thing, but maybe HCI can point out early that the type of game you are making may suck.

Did you maybe ask before using someone else’s characters (if so, mention it), or are you just trying to use popular characters to give yourselves a boost? Because the former is legal, and the latter is not…

One point you bring up: I neglected to include Gabe and Tycho’s copyright in the comic strip. I will add that ASAP.

We do mention who those characters belong to in the very first line of the article and we’ve informed both Gabe and Tycho of our strip, though we haven’t gotten any response yet. Nevertheless, I’m confident in their support and would be happy to co-operate with them if they took offense. As creators ourselves, we feel strongly about copyrights and ownership and do not take stealing lightly … nor accusations of such.

Also, there are laws surrounding the use of names and characters for satire. TheOnion.com is a great example of consistent satire.

We chose their characters because they are the webcomic embodiment of gamers and we were doing a strip on video games and HCI.

Hope that answers your questions. Rather than turn this engaging discussion about the place of HCI in games into a legal discussion, if you’d like to continue this discussion, I encourage you to contact me by e-mail.

I’m not convinced that the game industry is behind the curve — especially given how much HCI owes to software development and game theory as filtered through game design. I’d agree with a number of readers above who say that a lot of what you outline does fall under the domain of the game designer. Seems to me like this is just another one of those infuriating instances - What’s in a Name (or What Exactly Do We Call Ourselves) - where people in our line of work get hung up on what the job title is rather than the work being done.

Daniel: I agree that some things certainly fall on the game designer’s plate, and I, too, don’t want to get caught up defining who does what. Perhaps you’re right that the game industry isn’t behind the curve, but it is certainly AS behind as the general software industry. That is to say, there are plenty of companies that are completely unaware of the need for HCI. Some will not suffer from it because somebody else is already doing a great job of it (e.g., game designers) - that’s great. For the most part, however, nobody owns it and nobody’s really thinking too hard about it. I don’t say this arbitrarily, but based on conversations with friends and family in the industry and how they currently go about making these decisions.

One example that keeps coming to mind is a rather hyped game called Oni by Bungie Software. The game design, in terms of levels, concept, etc., seemed great. It innovatively combined a third-person fighter and a third-person shooter (hand-to-hand combat plus Tomb Raider-like action). However, the execution was horrendous. The main problem I found was that it required the player to manually aim their 3-D character, which has always been a difficult thing to do without a first-person view. Sure, a good game designer could have caught this, but I guess what I’m saying is that a person who knows how to come up with great game ideas and gameplay doesn’t necessarily know the best way to execute on the finer details. An HCI person has training on how to find out what the best way is.

Let me go back to my “Should I make users select each individual unit in Warcraft?” question. Raist3D contends it’s a game designer decision. Game designers have certainly made those decisions, yes and a good one may intuitively be able to come up with the right answer from experience. My opinion is that the game designer needs to be able to design the concepts (what are the races? what are the powers? how many races? what are the rules of the game? what is the story? what are the campaign modes?). Whether the same person needs to know the optimal mechanics to deliver this experience, I’m not so sure.

Admittedly, the lines are pretty blurry, but then, since when have a job’s boundaries been clear? HCI (as Raist3D points out, not just ANY HCI person) has the training and experience necessary to help make a good idea become reality in games. I don’t care whether that’s through an HCI department or a well-versed, multi-talented game designer. Right now, HCI in any form other than accidental seems rare in the industry.

“Let me go back to my “Should I make users select each individual unit in Warcraft?” question. Raist3D contends it’s a game designer decision. Game designers have certainly made those decisions, yes and a good one may intuitively be able to come up with the right answer from experience. My opinion is that the game designer needs to be able to design the concepts (what are the races? what are the powers? how many races? what are the rules of the game? what is the story? what are the campaign modes?). Whether the same person needs to know the optimal mechanics to deliver this experience, I’m not so sure.”

If we are talking about Warcraft specifically, I stand by my comment. Check out a change between when Warcraft first came out and when Frozen Throne came out: before, you could pick up a “power-up book” like a tome of experience and then use it; now you just pick it up and it gets used by the hero instantly.

I see there are HCI elements here, but there are also serious considerations from the game design point of view. The fact that you can’t let, say, one of your heroes get the power book and then give it to an ally who happens to be on the other side of the map is one of the ramifications.

Specifically when talking about a game like Warcraft, yes, I do believe that picking each single unit will alter the experience to the point where it touches the gameplay. Play Kohan: Immortal Sovereigns and compare: it’s like two different games, and they are.

By the same token, I can see games where, if I was the designer, I wouldn’t mind an HCI expert pitching in. I think Baldur’s Gate or Final Fantasy would be good examples, simply because in those cases the issue of displaying a lot of statistics is very generic and independent of the gameplay.

I can say this, though: if I am “the boss,” I still wouldn’t let an HCI guy pitch in unless I know he has a “gamer’s background.” He doesn’t need game designer credits, but he will have to survive an “okay, so you say you like games; pick your favorite game, what you liked, what you didn’t, and why” type of interview. I am sure there are many views on something like this, but I can say right now for a fact that Blizzard does grill you on this, and if you just read GamePro last night to prepare for the interview, they will find out and won’t hire you.

Anyway, more than a disagreement or agreement, this is a clarification. Thanks for reading.

(oh and just in case some are wondering, yes, I do work in the video games industry).

“I still wouldn’t let an HCI guy pitch in unless I know he has a ‘gamer’s background.’”

You’ve mentioned this a couple of times, and I just wanted to say that I absolutely agree. I don’t see why the assumption is otherwise, though. You hire people who are suited to your industry, preferably enthusiastic. Plenty of capable programmers exist, so you choose ones who are into games. Similarly, an HCIer in games would need to know what’s out there, what’s worked, and what hasn’t, preferably through personal experience, and must be enthusiastic about making a game absolutely the best experience.

Regarding your other point, there’s obviously overlap between game designer and HCI. By definition, HCI deals with user experience, and the design of a game is all ABOUT the experience (as you’ve pointed out with some great examples). This kind of overlap isn’t new, however. HCI overlaps with graphic designers, programmers, and even marketing. The key is not understanding where the lines should be drawn but how each person’s background and experience can aid the success of the project.

The most significant source of frustration in games for me is the intentional abuse of UI to fix game flaws. An example at the forefront of my thoughts is when a game has a “save point” before a “boss” but does not allow you to skip the 30-minute cut scene introducing the boss. My original conclusion was that this was a UI oversight. Having seen it very often lately, I have come to the conclusion that some game designers intentionally reduce usability by introducing this delay in order to compensate for something; in this case, the ease of killing the boss.

But some games do this for balancing reasons. My favorite “Warcraft-like” game was Total Annihilation. In that game, you could select a virtually infinite number of units and place them in a group. You could place buildings in a group, and the units they produced would thus join the group. You could issue cyclical orders to units so that you never had to micromanage. All of these things in a game many, many years old. But I strongly suspect these types of features were intentionally left out of subsequent games in order to make the system ‘balanced’.

It’s interesting how easily game designers can abuse their game’s usability in order to increase difficulty or add balance. Mortal Kombat games would have the computer opponent literally do moves the human player could not do, in addition to inhuman reaction times.

Anyway, the point of my long and rambling post is just to say that, to me, the more serious problem isn’t a lack of HCI people; it’s the fact that game designers rely on intentionally reducing usability to fix a flaw, because it is often the quickest way to do so.

I first became interested in usability after being shocked when I first played Link’s Awakening back in 1994; it was so easy to do such complex things and use so many items. It was very consistent, very flexible, and interestingly documented from the inside (characters told you how the controls worked). Only very few non-game applications have felt so “right”; Muine comes to mind.