Author: Norm

Input responsiveness is one of the most important aspects of any game. Often overlooked in favor of more glamorous features, a snappy input system is one of those things that the average gamer doesn’t realize they need until they pick up a game that fails to provide it.

When you bring up the subject of input lag most people think in terms of hardware and software processing latency – that is, the time it takes to translate a physical button press into something the game code can actually react to. For most of us the hardware side is out of our control, and on the software processing side the biggest factor is often pure update rate. You can get a lot more granularity out of 60 Hz than 30 Hz, which is why many games put their input handlers on a separate thread. It’s important to keep both of these latency sources in mind, but what about the design side of latency? The decisions we make when creating our game mechanics can have a significant impact on the responsiveness of our controls, and it’s critical that we think carefully about our choices in this area.

For the sake of simplicity, in this article we’ll be dealing specifically with standard console game pad buttons. Motion controls, touch screens and joysticks have their own unique challenges that are outside the scope of our discussion here. Some basic, standard state terms for buttons are:

Button Up (state) – the button is not being held down. Typically the neutral state.

Button Down (state) – the state of a button when it’s being held down.

Button Pressed (event) – the act of pushing the button down. More specifically, the button was in the up state on the previous update and is in the down state on this update.

Button Released (event) – the act of releasing a button. The button was in the down state on the previous update and is in the up state on this update.

Using these two events and two states you can define most of the core button actions. We’ll leave aside things like analog triggers, as they can be hammered into this digital framework if need be.
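In code, tracking these two states and deriving the two events each update is straightforward. Here’s a minimal Python sketch (the class and property names are mine, not from any particular engine):

```python
class Button:
    """Tracks one digital button across update cycles."""

    def __init__(self):
        self.down = False       # current state
        self.prev_down = False  # state on the previous update

    def update(self, is_down):
        """Call once per update cycle with the raw hardware state."""
        self.prev_down = self.down
        self.down = is_down

    @property
    def pressed(self):
        # Event: up on the previous update, down on this one.
        return self.down and not self.prev_down

    @property
    def released(self):
        # Event: down on the previous update, up on this one.
        return not self.down and self.prev_down
```

Note that the events only exist for a single update cycle – if you don’t act on (or record) a pressed event the frame it happens, it’s gone.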

How do we create designed latency?

The most responsive way to hook into our game pad’s buttons is to utilize the button pressed event. This event represents our earliest opportunity to handle an input, and as a result we’re guaranteed that all of our latency is on the hardware and processing side of the equation. For the purposes of this article we can consider this event to have zero latency.

Of course, there are two common situations that keep us from just limiting ourselves to button pressed events and calling it a day: player state changes and button holds. Pretty much every game has the former and many games (though not all) have the second as well. The way in which we design latency into these two systems is quite different, and each one requires a different set of tools to handle smoothly.

Dealing with holds.

To begin, let’s take a look at Halo’s famous Plasma Pistol. The weapon has two methods of firing: a standard semi-automatic projectile that fires if you repeatedly pull the trigger and a charged blast that fires if you hold the trigger down. This combination of inputs means that we can’t rely on the button pressed event to make a decision on what to do – we don’t yet know if the player wants to fire a single shot or start charging for the blast attack. Instead, we need to use the button released event, something that introduces a finite amount of latency into the system. No matter what, the player can’t both press and release the button in the same update cycle. At best, we’re dealing with a minimum of one update cycle’s worth of added lag and in reality we’re almost certainly talking about at least 100 milliseconds before the average player can get his finger back off the button.

What’s more, it’s often not viable to choose the smallest possible window to detect a hold state. In the case of a simple firearm it’s probably safe to hew close to the minimum, but if the accuracy of our hold time is important (in, say, a variable length jump) we need to be a little bit more generous. If we want to be casual friendly, the situation gets worse – it’s not unreasonable to expect some inputs to need as much as 200 milliseconds for a casual gamer to be able to intentionally choose a hold instead of a tap.
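A tap-versus-hold detector built on these ideas might look like the following Python sketch. The threshold here is a hypothetical tuning value in the casual-friendly range discussed above, not a number from any shipped game:

```python
HOLD_THRESHOLD = 0.2  # seconds; generous, per the casual-friendly figure above


class TapOrHold:
    """Distinguishes a tap from a hold.

    The decision can't be made on the pressed event: we must either
    wait for the release (a tap) or for the hold threshold to elapse
    (a hold). That wait is the designed latency."""

    def __init__(self, threshold=HOLD_THRESHOLD):
        self.threshold = threshold
        self.press_time = None
        self.charging = False

    def on_press(self, now):
        self.press_time = now
        self.charging = False

    def update(self, now):
        """Returns 'hold_start' once the threshold elapses while held."""
        if self.press_time is not None and not self.charging:
            if now - self.press_time >= self.threshold:
                self.charging = True
                return "hold_start"
        return None

    def on_release(self, now):
        """Returns 'tap' or 'hold_end' for this release."""
        result = "hold_end" if self.charging else "tap"
        self.press_time = None
        self.charging = False
        return result
```

In Plasma Pistol terms: "tap" fires the semi-automatic shot, "hold_start" begins the charge, and "hold_end" releases the charged blast.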

Hide your shame.

Using holds to delay or charge up attacks is a core part of Soul Calibur’s fighting system.

We’ve got a few different ways to contain the damage. The first option is to try and hide it, and this is a technique that applies particularly well to fighting games. In Namco’s popular Soul Calibur series, character attacks are never instantaneous – there’s always some amount of windup, even if it’s very short, so that opposing players have a small window in which to react. As a result, it’s possible to hide the input latency of the many charge attacks inside these windup animations. If the player has released the button by the time the attack window arrives, the move completes its animation. If the player is still holding the button, the move transitions into a hold state, usually a slow-motion animation or a pretty particle effect.

This technique is especially powerful because it turns a weakness (needing to wait on additional input data) into a strength. Once the player enters the hold state, you can allow them to vary the duration of the hold or cancel the hold into a different move altogether to create interesting mixups and mind games.
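As a rough illustration, the tap-or-hold decision gets deferred to the end of the windup animation rather than made at press time. Something like this Python sketch, with an entirely hypothetical windup duration:

```python
def attack_state(elapsed, button_down, windup=0.15):
    """State of a charge-capable attack, decided inside the windup.

    elapsed: seconds since the attack button was pressed.
    Returns 'windup' until the attack window arrives, then either
    completes the normal attack or transitions into the hold/charge
    state depending on whether the button is still down."""
    if elapsed < windup:
        return "windup"  # the input latency hides in this animation
    return "charge_hold" if button_down else "attack"
```

By the time the attack window arrives, the windup has already absorbed the wait for the extra input data – the player never perceives it as lag.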

Fake it ‘til you make it.

Another option is to just start doing whatever it is you’d do if the player decided to tap the button and then make adjustments later if it’s determined that, instead, he kept the button down. This is a technique that’s often applied to lofted jumping in platforming games: when the button is pressed the character immediately begins a standard jump, and then a few hundred milliseconds later that jump gets a boost if the button is still held down. The result is a jump that feels very responsive when tapping but still allows for some in-flight adjustment of the distance and height.

This works well enough, but its applicability depends somewhat on the type of game you are. In a more cartoony universe like Super Mario Bros. or Ratchet and Clank, this sort of technique blends in pretty well. In a more realistic setting, such as Red Dead Redemption or Uncharted, this sort of floaty solution sticks out a lot more. As with most solutions in game design, your mileage may vary.
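A sketch of that boosted jump, with entirely hypothetical tuning values – the key point is that the jump begins on the pressed event and the hold only modifies it later:

```python
import dataclasses

GRAVITY = -30.0     # units/s^2; hypothetical tuning values throughout
JUMP_SPEED = 10.0
BOOST_ACCEL = 18.0  # extra upward acceleration while boosting
BOOST_START = 0.1   # seconds after the press that the boost window opens
BOOST_END = 0.3     # boost stops here even if the button is still held


@dataclasses.dataclass
class Jump:
    vel: float = 0.0
    airtime: float = 0.0
    active: bool = False

    def press(self):
        # Respond on the pressed event: the jump starts immediately.
        self.vel = JUMP_SPEED
        self.airtime = 0.0
        self.active = True

    def update(self, dt, button_down):
        if not self.active:
            return
        self.airtime += dt
        accel = GRAVITY
        # A little later, boost the jump if the button is still held.
        if button_down and BOOST_START <= self.airtime <= BOOST_END:
            accel += BOOST_ACCEL
        self.vel += accel * dt
```

A tap and a hold start out identically; only during the boost window do their trajectories diverge, which is why the tap still feels instant.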

Utilize your latent psychic powers.

A third approach is to try to mitigate the impact of the latency by building a more robust set of detection rules. As a simple example, if your character’s jump action fires on the button released event but the player doesn’t get his thumb off of the button before he runs off a ledge, you could easily have the character automatically jump instead of letting him fall to his death. How generous you decide to be is a function of both how friendly you want your controls to feel and how confident you are that your predictions are correct.
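A minimal version of that ledge-forgiveness check might look like this (the grace window is a made-up number; tune it to taste):

```python
AUTO_JUMP_GRACE = 0.25  # seconds; a hypothetical generosity window


def should_auto_jump(button_down, time_since_press, grace=AUTO_JUMP_GRACE):
    """Called the moment the character runs off a ledge.

    The jump normally fires on button released, but if the player's
    thumb is still on the button and the press was recent, we guess
    that he meant to jump and fire it for him rather than letting him
    fall. The grace window limits how wrong our guess can be."""
    return button_down and time_since_press <= grace
```

The grace window embodies the confidence tradeoff: widen it and you rescue more players, but you also risk "jumping" for someone who was holding the button for an entirely different reason.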

Prediction like this is not without risks. Particularly if you have overloaded inputs – such as a button that doubles as reload and pick up weapon – you run the risk of guessing wrong and doing something the player doesn’t expect. The more complex and context sensitive your input is the more risk involved in trying to guess what the player wants to do. Generally speaking you can just avoid putting hold states on buttons with widely varying functions but, in case you do, it’s something to keep in mind.

A note on double tap inputs.

I didn’t include double taps in the description of a “hold” state because they’re not especially common; however, their impact on latency is essentially the same and, as a result, you can deal with them using the same techniques. For example, in Crysis 2 you switch to grenades by tapping the weapon switch button twice in rapid succession. Since both a single tap and a double tap result in a weapon switch action (and thus it’s just a question of which weapon you end up swapping to) this problem is easily handled via the “fake it ‘til you make it” technique.
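Here’s one way to sketch that in Python: act on the first press immediately, then upgrade the action if a second press arrives inside the window. The window length is a hypothetical value:

```python
DOUBLE_TAP_WINDOW = 0.25  # seconds; a hypothetical tuning value


class DoubleTap:
    """Fake-it-'til-you-make-it double tap detection.

    The first press starts the ordinary action right away; a second
    press inside the window upgrades it, so a single tap never pays
    any added latency."""

    def __init__(self, window=DOUBLE_TAP_WINDOW):
        self.window = window
        self.last_press = None

    def on_press(self, now):
        """Returns 'single' or 'double' for this press."""
        if self.last_press is not None and now - self.last_press <= self.window:
            self.last_press = None
            return "double"  # upgrade: e.g. switch to grenades instead
        self.last_press = now
        return "single"      # begin the ordinary weapon switch now
```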

Dealing with state changes.

When it comes to input latency, the only state changes we care about are the ones that require us to respond in different ways to player input. For example, the player might be able to press the “crouch” button while running or standing idle but not while soaring through the air (due to a jump or getting thrown by an explosion). Unfortunately, if what the player actually wanted to do was start crouching right when he hit the ground, this intent is lost unless we decide to keep the input around. Ignored inputs are often very frustrating for casual players, especially if they were ignored during a finite-duration state (again, like a jump) toward the very end of that state’s window.

Although not latency in the traditional sense, ignoring inputs feels very similar. The player wanted to do something, they asked the game to do it in a way consistent with their expectations, and the game did not react in a timely manner. There are two ways of dealing with this problem: buffering and interruption.

Buffering is helpful (unless you’re RealPlayer.)

Buffering input is pretty straightforward: we record incoming inputs but don’t respond immediately. Consider a standard player jump – unless your game has a double jump action, it just doesn’t make sense to let the player jump again before he reaches the ground. Of course, if he’s trying to do something particularly tricky (like a rapid series of timed hops) ignoring a jump request could end up ruining his day. Instead, we can buffer that input and then act on it the moment the player is back on the ground, maintaining the player’s intent without changing our mechanics to allow for a double jump.

Many complex combos can be created by buffering entire input streams during moves with long animations.

Actually recording the input is also simple: we remember a certain number of inputs for a defined window of time. Your buffer can be as large or small as your problem requires. Fighting games like Street Fighter and Tekken have buffer windows that can hold a half dozen or more inputs, and most action games will buffer at least one input when appropriate. How long you keep those inputs around also varies. In the case of short, finite duration states it’s usually best to keep them for the entire state. If the state is particularly lengthy, however, it’s possible that the player will “forget” about that input by the time you get around to processing it. For this reason, time sensitive inputs are typically dropped after a designated duration.
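A simple buffer along these lines, with a fighting-game-sized capacity and a drop-after-expiry rule (both values hypothetical):

```python
import collections

BUFFER_LIFETIME = 0.15  # seconds before a buffered input is "forgotten"


class InputBuffer:
    """Records inputs that can't be acted on yet, dropping stale ones."""

    def __init__(self, lifetime=BUFFER_LIFETIME, size=6):
        self.lifetime = lifetime
        # A bounded deque silently discards the oldest input when full.
        self.buffer = collections.deque(maxlen=size)

    def push(self, action, now):
        self.buffer.append((action, now))

    def drain(self, now):
        """Returns the still-fresh inputs, oldest first, and clears the buffer."""
        fresh = [a for a, t in self.buffer if now - t <= self.lifetime]
        self.buffer.clear()
        return fresh
```

The drain call would happen at a natural decision point – landing from a jump, an animation finishing – so buffered intent is honored the moment it can be.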

Despite the fact that buffered inputs are far friendlier, by their very nature they introduce a source of latency. Buffering is better than ignoring, but we’re still acting on these inputs well after they were actually received, and if your buffer window is large the character may end up responding to inputs in a way that feels unpredictable. In addition, unless you have some sort of visual reaction to a buffered input it’s likely that more casual players won’t even be aware that it’s happening (which is why intentional buffering is an advanced technique in fighting games). Since it’s often not possible to provide this feedback in a way that doesn’t feel artificial it’s best to keep your input window small.

As was mentioned earlier, great care must be taken when applying buffering to highly time sensitive inputs. For example, if your player was firing his weapon and it ends up auto-reloading when he runs out of ammo, he might decide to hit the weapon switch button to use a different one. The player’s intent is to swap to a weapon that has ammo, but if your game buffers that input the result will be a frustrating mess: the weapon finishes reloading, which is annoying enough since no firing could occur during that time, and then the moment the weapon becomes usable again it starts playing a weapon switch animation that further delays the desired shooting!

Of course, all inputs are time sensitive to some degree, but recognizing the extent to which this is true in each case is critical for determining the correct way to handle them. When dealing with time sensitive inputs, the more appropriate response is to allow a state to be interrupted.

I’m really happy for you, and I’mma let you finish, but…

Continuing with our reloading example from before, if the weapon switch input was acted on immediately (instead of being buffered) then the reload state would end and the player would switch to his usable weapon. What happens in this case is up to your design discretion – in most FPS games, interrupting a reload means that the weapon remains empty when you switch back to it (to avoid exploits). It’s probably more newbie friendly to let the weapon reload anyway, and which one you choose depends entirely on your audience and the particular actions in question.

Guilty Gear’s cancel system adds a ton of depth.

State interruptions are not an all or nothing matter – there’s no particular reason why you have to allow interruptions at every point in the state, and defining interruption windows can create some very compelling mechanics. For example, in Soul Calibur many attacks can be cancelled pre-hit – that is, their animations can be interrupted at any time prior to actually doing damage. Advanced players often use this mechanic to bait their opponents into a disadvantaged, punishable situation by starting an attack and then cutting it off to create a fake. In Guilty Gear, some moves can have their recovery (post-hit) animation cancelled to allow for guaranteed combos, and doing so is so strong that it actually requires the use of a finite player resource (the Tension bar).

I could easily write an entire article just on the topic of state interruptions and buffering, but I don’t need to because Eric Williams – a designer on the first God of War title – already did! His excellent article Combat Cancelled is required reading for all game designers, and I encourage you to absorb its deep wisdom if you haven’t already.

They go together like lamb and tuna fish.

Both buffering and interruption have their uses, and they’re most effective when used for their respective strengths. Buffering works well if you have a more complicated input possibility space – like most fighting games – or if you have a state where interruption simply doesn’t make sense (such as jumping during a jump). Interruption works well for time sensitive inputs and in situations where waiting for a buffered input to fire might result in unpredictable or undesirable behavior.

Further, there’s no reason that you can’t use both techniques simultaneously. Why not have a state with a defined interruption window and allow the actual input that causes the cancel to be buffered prior to the window? You get the best of both worlds: the latency minimizing aspects of an interrupt with the granular designer control of a buffer!
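Here’s a sketch of that combination: a state with a defined interruption window that buffers a too-early cancel input until the window opens. All timings are hypothetical:

```python
class CancellableState:
    """A state with an interruption window that also buffers early input.

    A cancel input inside the window interrupts immediately; one that
    arrives before the window opens is buffered and fires the moment
    the window does."""

    def __init__(self, window_start, window_end):
        self.window = (window_start, window_end)
        self.pending_cancel = False

    def on_cancel_input(self, elapsed):
        """Returns True if the state ends right now."""
        start, end = self.window
        if elapsed < start:
            self.pending_cancel = True   # buffer it for the window
            return False
        return start <= elapsed <= end   # interrupt if inside the window

    def update(self, elapsed):
        """Returns True when a buffered cancel fires at the window."""
        start, end = self.window
        if self.pending_cancel and start <= elapsed <= end:
            self.pending_cancel = False
            return True
        return False
```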

Wrapping it up.

As we discovered in the introduction, having your inputs react on the button pressed event is always the most responsive option. The addition of hold inputs forces you to react on button released events instead, and as a result introduces latency. When pondering adding hold inputs to your game, be sure to ask yourself if the benefit (another input) outweighs the cost (designed lag).

Even when dealing with normal button presses, player states can muck with your ability to respond immediately – adding buffering and interruptions whenever possible goes a long way toward alleviating this pain. Buffering input should be the rule, not the exception, and if you choose to ignore player inputs it should be a careful and considered design decision.

I didn’t mention it before, but even if true buffering isn’t an option, you should always check your inputs when exiting a state; in that case, we’re looking for the button down state as opposed to the button pressed event. This should go without saying, which is why I didn’t dwell on the point previously, but I wouldn’t be doing my job if I didn’t at least point it out.

Finally, it all comes down to feedback. Even when you can’t respond immediately to an input, it’s critical that you do something to let your player know what’s happening. Hiding your latency behind windup animations, particle effects or sounds can help the player feel like something is happening even if the important part of the action hasn’t yet occurred.

I’m sure that most of you have already seen the recently released Deus Ex 3 gameplay trailer. One of the game elements highlighted in the footage is Eidos’ method of calling out interactive items in the game world: a bold, bright yellow outline and highlight over anything you can interact with that’s more or less in the direction the player is looking.

Some of the fan reactions to the trailer have been quite surprising, with a few gamers going so far as to call themselves “outraged.” They don’t like having information in their face, and what’s more they seem to find the implicit assertion that they need it vaguely condescending. Some folks have even brought out the dreaded “immersion breaking” phrase. All this fuss over some simple object highlighting! Clearly, most modern games need some way of pointing out what is and isn’t relevant to help streamline gameplay, so what’s the big deal with Deus Ex’s method?

Framing the issue

First, let’s note that there are actually two different problems to address when it comes to interactive objects in modern games:

Showing the player objects that can be used, picked up or manipulated.

Indicating to the player which objects are actually important at this moment.

The concepts are similar but, importantly, the second problem is really a subset of the first. Further, the degree to which these two things differ (that is, how much of the former problem the latter encompasses) can vary a great deal between games. We can indicate the general range of this with the Internet’s favorite tool: a two-axis graph.

I’ve taken the liberty of inserting what I feel are four representative examples, one for each quadrant. I should emphasize that this graph does not assume any sort of value judgments. No quadrant is any better or worse than the others, it’s just a simplified way of quantifying our options.

Fleshing out the axes

Valve Software’s Portal gets categorized as both austere and interactive. Excepting the latter third of the game, there isn’t a lot of stuff sitting around in the Aperture Science testing grounds. This is not a game that focuses on props, but what is included is almost all interactive: auto-turrets that will attack you and can be knocked over; cubes to pick up and move around; giant red buttons to press; bouncing energy spheres to re-direct. We have Spartan environments combined with highly interactive game objects.

Note the clean, uncluttered areas and bold primary colors.

Mirror’s Edge sits at the intersection of austere and static. Though not nearly as focused as Portal, the level design of Mirror’s Edge is still beautifully direct. There are some props around to give the world flavor – piles of boxes on pallets, wooden planks to indicate good jump points; the occasional planter box or advertisement – but very few of these objects are meant to be interacted with. You’ll sometimes find a wheel that needs to be rotated or an enemy weapon that can be picked up, but that’s pretty much the extent of what you need to manipulate in the game.

The detail work in this shot is breathtaking.

Sticking with static but moving along the horizontal axis toward cluttered we have Uncharted 2. This game has some of the most striking environments in this generation (not to mention being one of my personal favorites!) and they’re filled to the brim with stuff. In particular, the Nepal level is practically overflowing with props: burned out cars; piles of rubble and trash; orphaned furniture and appliances; plastic crates and other detritus of a city in conflict. For the most part, though, these things are there purely for purposes of immersion. It’s fairly rare that you need to actually manipulate part of the environment, and these are mostly limited to either cinematic moments (such as bits of train car that start to break as you climb on them) or navigation elements like ropes and doors.

The final quadrant of our graph is represented by Half Life 2, a game renowned for being one of the first to introduce general physics objects as a core mechanic in a mainstream shooter. There is stuff all over the place in Half Life 2, and almost all of it can be picked up and manipulated with the iconic Gravity Gun. In fact, many of the game’s puzzles involve using the various physics objects to solve simple lever or weight problems (and this has been increasingly true in the episodes.) As a result, Half Life 2 is both cluttered and interactive.

The problem of consistency

Now that we’ve broken down our examples, it’s very interesting to note that the two games at the top of our interactive axis do almost nothing to indicate to the player what can or can’t be manipulated. The Gravity Gun in Half Life 2 does have a subtle animation that activates when an object can be picked up, but this is pretty much the extent of their signaling mechanism. There’s no indication at all if you’re just picking stuff up with your hands.

Both of the games at the static end of our graph, on the other hand, do attempt to indicate when objects are interactive. Mirror’s Edge applies the same red highlight treatment that they use to suggest efficient climbing routes, while Uncharted 2 has two different approaches depending on the item in question. For things that are meant to be picked up – which are primarily guns and grenades – they play a flashing effect on the item itself. For environmental objects such as statues or carts that can be pushed the game generates a HUD prompt when the player is sufficiently close to the object to initiate interaction.

Why would games that have fewer interactive objects feel the need to highlight them? The answer is actually fairly obvious: because these objects are the exception rather than the rule, it’s necessary to counteract the player’s expectation that items in the environment are there primarily for aesthetic reasons. In essence, these games need to momentarily break the immersion they’ve crafted in order to make certain that the player understands what needs to be done.

The problem of fidelity

It’s worth going back to the concept of immersion as it applies to interactive objects. Games like Uncharted 2 fill their environments with interesting props precisely because it makes the world feel more alive, more lived in. These objects typically don’t do anything, but they don’t really need to. From a development standpoint, it’s much easier (not to mention more efficient) to make great looking stuff to fill out the world when you don’t have to find a use for all of it or spend valuable CPU time handling its physics.

Furthermore, the high quality of the environment and the desire for seamless immersion creates pressure to make the objects that are interactive blend in as well as possible. This is, of course, exactly the opposite of what’s easiest from a game design perspective, and it isn’t a new problem. Back in the days of yore, adventure games found themselves in a similarly difficult situation. As hardware improved, background art got increasingly lavish and detailed and, as a result, it became important for interactive items to meld well with these more immersive scenes. One of the side effects of this progression can be found in the phrase “pixel hunt,” a derogatory term that came to be associated with many later adventure games.

Because the worlds were so detailed, filtering objects that were important to the game from the ones that were important for reasons of aesthetics became a matter of hunting around the scene looking for spots that would give you a “use” cursor. This was not a particularly fun mechanic, and the problem contributed to the eventual decline of the genre. More modern takes on adventure games offer various aids to reduce the issue, with many offering player abilities that cause all interactive items to flash or highlight briefly.

Jumping back to our modern game examples, yours truly once spent several minutes trapped in a tiny room in Uncharted 2 simply because I didn’t anticipate that a statue could be moved. There are dozens, if not hundreds, of statues scattered throughout Drake’s adventure, and seldom are they interactive objects. I ended up resorting to the 3D equivalent of pixel hunting, in which I systematically walked around the room looking for a HUD prompt to appear and indicate what it was I needed to do.

Bringing it on home

Let’s get back to our original topic: Deus Ex 3’s object highlighting scheme. We know that the problem it’s trying to solve is real, and that similar techniques have been employed in other games. Given that, why are people so unhappy about this particular example?

The crux of the matter is this: no matter what, any attempt to break interactive objects out of the environment results in a momentary loss of immersion. Even when it takes on a much more subtle form – such as exploding barrels that are all red, or electrical switch boxes that all happen to look exactly the same – the presence of these cues reminds us that our character exists in a world of limited interactions. The result of Deus Ex’s extensive, always-on highlighting is to constantly remind the player that no matter how alive and immersive the world feels, most of it isn’t actually interactive.

Tactical options available.

Of course, different sorts of games require different solutions. Slower paced games (such as adventure games) have more freedom to let the player dink around and discover interactivity, whereas fast-paced games with lots of time pressure situations (such as shooters) have to be more explicit. Some settings are more restrictive than others, as well. A game set in ancient Rome doesn’t have many hooks to integrate something like Deus Ex’s system into the narrative, whereas a more futuristic, sci-fi setting like Crysis 2 is less restrictive. In fact, it’s almost certain that Eidos was hoping to leverage the cyberpunk themes in Deus Ex to make it easier for players to accept their augmented-reality approach.

The ideal solution might be creating a world in which everything that should be interactive is interactive, but in reality this often isn’t practical. It’s easy enough to conceive of a game where most of the props can be picked up – just look at Half Life 2 – but what about one where all the doors open? All the vehicles can be driven? All the TVs turned on? Grand Theft Auto 4 probably comes closest, but it’s not clear to me if more linear experiences would be significantly improved by these additions.

I think the most important takeaway is this: the amount of player feedback you need to give is inversely proportional to the number of interactive objects in your game. That is, the more interactive your world is, the less you need to worry about getting the player’s attention, because you have already created the expectation that things are interactive. If interactivity is less frequent (or less important) in your game, you need to do something to remind the player that items in the environment sometimes need to be manipulated to succeed. It could be that Deus Ex 3’s scheme goes a little too far given its level of interactivity, in which case the best option is simply to scale it back until the proverbial Goldilocks is just right.

As such, I wasn’t able to secure a copy of it myself. If I’m honest, I can’t say I recall being aware of the game at the time. I’m sure I must have seen some media on it, but it certainly wasn’t on my list of must-have games. I bought it – or rather, I had a friend back in Michigan buy it for me – along with The Sands of Time because I saw a post on Evil Avatar that said their prices had been slashed to $20 within a week or two of release.

Looking back, it’s clear to me that this was one of the best gaming deals I’ve ever gotten. Perhaps, for reasons that will become clear later, it was the best deal of any kind.

I’m further ashamed to say that I didn’t play the game for quite some time. I couldn’t afford to have my friend actually ship the games to me, and in any case I was pretty busy just being in Japan. I finally got around to playing Beyond Good and Evil during Michigan Tech’s spring break of 2005. I had decided to stay in Houghton for the break to save on gas money and take care of some outstanding assignments, and as a result I had the apartment to myself (I lived with two other guys.) In exact accordance with expectations, I didn’t actually do much in the way of work and spent most of the time cranking through my backlog of games.

Beyond Good and Evil was the first one on the list.

It’s hard to say when, exactly, I first started to suspect that what I was playing was fundamentally different than the games I was used to. On the surface there isn’t anything particularly unique about what Beyond Good and Evil does: it’s Zelda with a different coat of paint. The combat system, though it flows well enough, isn’t very deep. There are stealth sections, but none of them do anything that hasn’t already been done better in other games. The photography angle is pretty unusual but it’s not the first game to have you take pictures of stuff.

Instead, what pushes Beyond Good and Evil past its contemporaries is Jade.

Good Characters Start With Firm Foundations

Subtle details, like crayon drawings and toys, add to the experience.

In the world of gaming, Jade is an enigma: a competent, strong, resourceful female lead that retains historically feminine elements like compassion and empathy. More than just a man with boobs, Jade’s character manages to present this without feeling heavy handed or clichéd. Like many game characters she doesn’t begin as a hero, but unlike most the critical aspects of her story are right there to see: she’s a freelance photographer who lives in a lighthouse and provides shelter for the orphaned children created by the war on Hillys. And this information isn’t presented simply for background – Jade’s devotion to the children is a core aspect of her personality, right from the opening attack and through the following sequences that let you explore the lighthouse and interact with the kids.

More evidence of Jade’s investment in the kids.

In that first 10 minutes we’re also treated to two emotional extremes – Jade the fighter, who doesn’t hesitate to risk her life in the protection of others; and Jade the vulnerable, injured and pessimistic but willing to lean on the support of her surrogate family. How many games manage to achieve this kind of character depth across their entire story, let alone in the opening sequence? For that matter, how many other games have had heroes whose defining qualities are empathy and caretaker instincts?

What’s more, most of this is done via the “show, don’t tell” philosophy also found at the core of Valve titles like Half-Life. We don’t learn this information by having it repeated to us via audio logs or presented as exposition. Most of it shows up simply via observing the way the characters act and the things you can find in the world: Jade’s room, full of drying photo prints of the kids and her uncle Pey’j; the lighthouse itself, full of crayon drawings on walls and other signs of the orphans’ presence; her fearlessness in the face of the DomZ attack; the fact that the group’s income source is Jade Reporting, a business she started herself.

Stylish with a hint of sex appeal, but still clearly utilitarian.

Even her choice of clothing conveys meaningful information. A sense of utility combined with style that isn’t ashamed of her womanly assets but also doesn’t go out of its way to promote them. It’s the kind of outfit one could actually imagine seeing on a young female photographer/martial artist.

I could write at length about how alive the world of Hillys feels, and how watching the progression of resistance within the city (general agitation, graffiti, eventual full-on protests) from behind the mask of Shauni (Jade’s IRIS Network codename) makes for a very interesting experience: you’re the hero, but nobody really knows it. That’s a theme that’s been explored in a lot of other mediums and even a few other games.

Instead, the defining feature of Beyond Good and Evil’s open world was, for me, the moment of the second attack on the lighthouse. The timing of that sequence was not only the most brilliant moment of that gaming generation – it was also the first time I felt like I understood how games could tell a story in a way that was genuinely different from all the mediums that preceded it.

Your Choices Have Consequences

When we talk about modern game storytelling, the element that gets most commonly thrown around is player choice. As designers, we’re always trying to think of ways to give players a legitimate sense that their decisions have an impact in the world. The way most games try to solve this is through simplistic branching trees: you make a decision, and certain paths open while others close. Characters you meet will react differently depending on the sorts of paths you choose, and the differences between them are generally fairly clear (to the point of exaggeration, in many cases.)

Because the world of Hillys is open, you can decide to head on back to the lighthouse any time you wish.

The problem with this is twofold. First, making all of that additional content is really expensive. In an industry as ruthlessly competitive as ours, the notion of making a game where the typical player might never see a significant percentage of your content is pretty hard to sell to a publisher – there’s a reason why massive, western-style RPGs remain the domain of a few proven studios with rabid fan followings. Second, branching storylines are rarely, if ever, complex enough to satisfy the critics. No matter what you do, it’s virtually impossible to make a branching storyline that approaches the complexity of even the most basic decisions a person makes every day in the course of normal life. This is a pretty significant problem when the primary purpose of your product is to entertain people with larger-than-life experiences, and it’s the reason why people like Chris Crawford have given up on games entirely and moved in a different direction.

Beyond Good and Evil gets around this problem in one of those ways that feels obvious as soon as someone points it out: it doesn’t actually give you a clearly defined point of decision.

She often seems to know more than she should.

Because the world of Hillys is open, you can decide to head on back to the lighthouse anytime you wish. For the first half of the game or so, the natural progression of the storyline takes you back there pretty regularly. Whenever I went back I made it a point to chat up the orphans because they often had interesting things to say about how events were proceeding. They had their own take on what Jade was doing and, in some cases, were slightly mysterious characters in and of themselves.

Around the time of the Nutripils Factory mission, however, that changes. Once Jade gets deeply involved with the IRIS network their headquarters in the city becomes your central mission hub, and as a result there’s no specific game requirement for you to visit the lighthouse anymore. Around this time the missions also start to get longer, and you’re traveling to the farther reaches of the map to accomplish them. In between missions there are Alpha Section warehouses to raid, animals to photograph, races to win, smugglers to chase and the increasing effects of your actions to observe: the citizenry is getting more and more riled, and it’s fun to bask in the secret glow of your success and the irony of NPCs who think you haven’t heard about Shauni’s latest exploit.

Arriving at the island to find all of the children gone and the lighthouse itself in ruins was one of the most painful moments I’ve ever had in a game.

The Slaughterhouse mission, in particular, is quite long and has a number of emotional moments, but even with those the overall feeling is one of triumph. Your investigative photography is cracking the abuses of the Alpha Section wide open and you can feel yourself progressing toward the inevitable showdown. As you finish that mission, you’re finally given a reason to return to the lighthouse to find the Beluga, a spaceship Pey’j was working on in secret. So excited was I to try out this new toy that I ran in and out without really talking to anybody and headed to the upgrade shop to purchase the space engine that it needed.

It’s after you purchase that engine, however, that the second lighthouse attack occurs.

I never had a choice in this. The Alpha Section was going to attack and destroy the lighthouse when I bought that space engine regardless of what I did or didn’t do over the course of the game – it was a scripted moment and it will play out the same way no matter how many times you play the game. But at that moment, when I exited the shop and saw the explosions, saw the foreboding pillar of smoke, and heard Jade’s frantic exclamations confirming what I could already see, that fact never occurred to me.

As I raced back to the lighthouse as fast as the hovercraft’s booster items could make it go, all I could think about was how long it had been since I’d really been there.

I could have gone back to that lighthouse at any moment, but I didn’t. It certainly wasn’t going anywhere…right?

Arriving at the island to find all of the children gone and the lighthouse itself in ruins was one of the most painful moments I’ve ever had in a game. There was a deep, profound sense that what had happened was my fault. That I (and by extension, Jade) had become so wrapped up in my adventures and the broader mission to save Hillys that I had completely forgotten what got me started: the desire to protect those kids. I wondered what I might have missed – what interesting story tidbits the kids might have had to say, if only I could have been bothered to go back home.

The scene that follows is heartbreaking, with Jade defeated and despondent over the loss of the kids. You can see the guilt on her: she knows that this is her fault, and her feelings closely mirror what the player is feeling in that moment. Never mind that this was a scripted event that couldn’t have been prevented; in the moment, there’s no thought of that. There’s only a shared emotional connection between what happened on the screen, what the character is feeling, and what the player is feeling.

Of course, this sort of thing is perfectly mechanical, and other mediums have been executing this framework in varying ways for centuries. Every good story needs its low point, when everything has fallen apart and there seems to be no hope of proceeding. The difference here was that I wasn’t just mirroring the emotions of the protagonist, but instead genuinely sharing aspects of her grief.

I could have gone back to that lighthouse at any moment, but I didn’t. It certainly wasn’t going anywhere…right?

The Personal Experience Is What Matters

I’ve never been certain whether or not the buildup to that scene was intentional on the part of Ancel and the other designers. Certainly the scene was meant to be emotional, and even without the buildup it’s one of the more effective low points in any game. But did they specifically design the missions to keep you away from the lighthouse? Did they make a conscious decision to make sure that the most interesting things to do were in the city, or on the outskirts of Hillys, instead of at the lighthouse?

In other words, were they trying to steer me away from Jade’s home to deepen the impact of that scene?

I’ve never seen Ancel write anything on that subject, and in fact I’ve never seen the question asked in an interview. Short of having the opportunity to ask him myself I’m not sure the question will ever be answered. Certainly, I’m not the only one who found that scene especially heart-wrenching!

But therein lies the critical takeaway from Beyond Good and Evil: in the end, designer intent doesn’t really matter. What’s important is what the individual player takes away from the experience, and what our medium can do to enhance those feelings. We don’t have to engineer choices into our games, with a flow chart indicating how each one will affect the player’s alignment, mission choices, NPC reactions, whatever. If you give the players the ability to make a decision as simple as where to go at any given moment, you’ve already created the potential for emotional impact far surpassing what’s achievable in a static medium like movies or literature. The ability to have a truly individual experience is one area where games have the clear upper hand.

In the end, Beyond Good and Evil is important to me not just because of my memories of the game itself, but because it’s one of the two games that convinced me to pursue game design as a career. Up until then I’d been a gamer for a long time, and had been interested in the critical aspects of games for quite a while, but I hadn’t entertained the idea that creating them was something I wanted to do.

After that second lighthouse attack in Beyond Good and Evil, it was hard to imagine wanting to do anything else.

This post is part of a series of articles on the games that have shaped me as a designer. They tend to be long, focus less on hardcore design analysis and are honestly closer to love letters than anything else. It probably goes without saying, but each of these articles will be full of spoilers. For other posts in this series, check out Games I Love.

Game development has always been about metrics. Even the simplest games needed a basic set of variables around which the mechanics could be balanced. How fast does the character move? How high can he jump? How fast do the projectiles fly? As games have become more complex and teams larger, metrics have taken on an even greater importance. The game programmers want to know the player’s stats so that AI can be properly balanced; the level designers need to know how far the player can jump so that they can arrange achievable paths through their block-outs; modelers might need to know the player’s crouching height so that they can design usable cover objects.

Although one can arrive at these values via experimentation (and playtesting should always be used to double-check), it’s best to think critically about the details surrounding the metric you want to lay down.

Take the relatively common problem of horizontal jump distances in first person shooters, for example. Say you’re working in an engine where the game units are equivalent to 1 centimeter and you’d like the player to be able to comfortably jump 4 meter gaps. Your first guess would probably be to set the player’s jump distance at 400 units. If you’re forward-thinking, you might make it 412 or so to provide a little bit of forgiveness.

Your first test of this setting is likely to be quite surprising: you’ll almost certainly find yourself falling well short of clearing your 400 unit gap. Why would this be the case? Let’s take a look at some screenshots from Valve’s Half-Life 2: Episode Two and see if we can spot the problem.

Note the location of the highlighted edge…

In this screenshot you can just barely see the edge of the ledge I want to jump from (highlighted in red.) Once this edge leaves the bottom of my field of view (FOV) it is essentially out of mind. Since I can’t see it anymore, my brain will reflexively start my jump right before or just after that moment. However, if we look at this next screenshot, you’ll see that in reality I’m quite far away from the edge!

The reason for this is straightforward: the height of the player’s camera (eye-height) and the vertical FOV angle create a predictable blind spot in front of the player character’s feet. Depending on these two values, the point at which the edge disappears from the player’s view can be surprisingly far from the actual edge. In my experience, this additional distance can easily be as much as 70% of the camera height with a wide-screen FOV of 70 degrees. The blind distance increases as the FOV angle decreases, which makes sense since narrowing the FOV is how one accomplishes zooming (such as with a sniper rifle.)
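The relationship is simple enough to compute directly. Here’s a quick Python sketch (the function name and the sample numbers are mine, purely illustrative – plug in your own game’s values): the bottom edge of the view frustum sits at (pitch + half the vertical FOV) below horizontal, and the blind spot ends where that ray hits the ground.

```python
import math

def blind_spot_distance(eye_height, vertical_fov_deg, pitch_deg=0.0):
    """Horizontal distance from the player's feet to the nearest
    visible ground point. pitch_deg > 0 means looking down."""
    # The bottom edge of the view frustum is angled
    # (pitch + vfov / 2) degrees below horizontal.
    angle = math.radians(pitch_deg + vertical_fov_deg / 2.0)
    if angle <= 0.0:
        # Bottom of the frustum is at or above horizontal:
        # the ground in front of the feet is never visible.
        return math.inf
    return eye_height / math.tan(angle)

# Hypothetical values: 160 unit eye height (1.6 m at 1 unit = 1 cm),
# 70 degree vertical FOV.
print(blind_spot_distance(160, 70))        # level view: ~229 units
print(blind_spot_distance(160, 70, 30))    # looking 30 deg down: ~75 units
print(blind_spot_distance(160, 70, -10))   # looking 10 deg up: ~343 units
```

Note how narrowing the FOV makes things worse, just as described above: dropping the vertical FOV from 70 to 45 degrees at a level view pushes the blind spot from roughly 229 units out to roughly 386.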

The edge has totally disappeared with the FOV reduced to 70.

The edge actually looks much further away with the FOV increased to 90.

Notice how this issue can also be affected by the player’s view pitch.

Looking up slightly, the edge disappears once again.

Looking down, the edge is both in view and appears further away again.

Some players will naturally look down as they approach a jump, which means they’ll be able to get much closer to the edge before losing sight of it. A player who keeps their view level will lose sight of the edge sooner, and players who happen to be looking up will find themselves with significantly less usable distance! Below is a quick diagram I whipped up to show the relevant values that determine this player blind spot.

The blind spot is the distance from the player to the edge.

With this information in mind, in order to create a newbie-friendly jump we need to make our jump distance quite a bit longer than the actual gap we want to clear – using our previous example, something in the neighborhood of 500 units is safer. The exact value you want will depend on your game’s FOV, your player’s eye height and your valid pitch range (i.e. how far up or down you expect the player to be looking.)
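To make that adjustment concrete, here’s a rough Python sketch of the calculation (all of the numbers are hypothetical): add the worst-case blind spot to the gap you want cleared, plus a little padding for reaction time.

```python
import math

def newbie_jump_distance(gap_width, eye_height, vertical_fov_deg,
                         min_pitch_deg, padding=12.0):
    """Jump distance that clears gap_width even for a player who
    jumps the instant the edge leaves their view.

    min_pitch_deg is the shallowest downward pitch you expect a
    player to use while lining up the jump (smaller = worse case).
    """
    # Blind spot in front of the feet at that worst-case pitch.
    angle = math.radians(min_pitch_deg + vertical_fov_deg / 2.0)
    blind_spot = eye_height / math.tan(angle)
    return gap_width + blind_spot + padding

# 400 unit gap, 160 unit eye height, 70 degree vertical FOV, players
# assumed to look at least 25 degrees down when approaching an edge:
print(newbie_jump_distance(400, 160, 70, 25))  # ~504 units
```

That lands in the same neighborhood as the 500 unit figure above; relax the pitch assumption toward a level view and the required distance grows quickly.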

There are other factors to consider as well: the speed of your player; the minimum reaction time of a typical gamer; the size (in units) and type (axis-aligned bounding box, capsule, etc.) of the player’s collision. However, these have a much smaller overall effect on the end result.

Of course, this information leads to another metric: our un-jumpable gap size. Since advanced players will learn to wait out the blind spot (rather than jumping immediately) we’ll need to make sure those gaps are at least 512 units wide. Longer is better, where possible, since it will be visually clear to the player which gaps he can jump and which ones he can’t. You can also use this data to create more difficult jumps with less padding; however, this can easily create confusion among your more casual players.