One thing you learn after a few years of professional creative work is this: money loves certainty. Not just any kind of certainty, mind you. Not the kind we produce in science labs through decades of careful experiments; not the kind that comes down from the mountain on the lips of old philosophers. It’s the feeling, the sensation of certainty: that wild glow behind a gambling addict’s eyes that compels them to bet their savings on roulette.

This should not be especially surprising; what is creative commerce, after all, if not an exercise in risk? We gamble the money of our financiers (and, often, our own livelihoods) on outcomes no honest person could predict. Will we complete our project on schedule, if indeed we complete it at all? How many people will like the thing we make; how many will pay money for it, tell their friends about it, come back to us for more? How many people will even hear that it exists?

If you are a creative executive—someone tasked with determining how much money to spend on which idea—you must attempt to predict these very outcomes, honestly or otherwise. You sit on a pile of money your bosses ordered you to spend very carefully. Your job, as we’ve established, is to gamble it; yet if you said so out loud you’d be tossed from the building and replaced with whoever had the good sense not to speak. Instead you must convince many rooms full of high-powered biz people (most of whom privately want to strangle each other) that something, anything is a sure thing: that the money they entrust to you will turn like magic into more money, and later into more money still. This is all money ever wants to hear.

What follows is a story about the ways creative people can and cannot be certain. It’s a story of gamblers, crafters, hucksters and true believers. Mostly it’s a story about the age-old gulf between artists and business folk, which only seems to widen as we cram ourselves into smaller and smaller rooms. If we read this story together, and we read it with sympathy, I think we can make it across.

In university I’d spend lazy afternoons picking through videogame-related internet forums. It made me feel connected to a community of fellow enthusiasts: helped me digest the latest industry gossip, and develop with grave diligence all my best Videogame Opinions.

People there would often bemoan the presence of ‘bugs’ in the games they purchased. ‘How can the developers get away with this garbage?’ someone would exclaim in mock bewilderment. ‘Do they not bug test their games at all? It’s inexcusable! It’s as if they spent all their time on the graphics, which are shitty anyway,’ and so on. It’s become an undercurrent in videogameland: a wellspring of populist outrage, fit to spice up any review, Let’s Play or livestream.

Back then I would have typed up a furious, factual defence of the game developer I hoped one day to become: first filling up the little ‘quick reply’ box, then copying-and-pasting my growing manuscript into a full-blown Word document. Today, however, I’ve learned these posters’ questions were a sort of rhetorical Trojan Horse. In debating whether some otherwise-perfect game experience has been marred by a shifty behind-the-scenes computer programmer, I’d already accepted two bad assumptions: first that the ‘perfect game experience’ can objectively exist, second that I might purchase that experience in a store. And though the conversation always cloaked itself in fact—this particular game developer, that particular variety of computer glitch—it was never really about facts. Instead it was about feelings, and about status. It was about persuading people that they’d lost something (that, in fact, someone had stolen it from them) when in truth they’d never had it to begin with. It was about all the things advertisers manipulate when they transmute ‘wants’ into ‘needs’.

To understand why videogames contain so many bugs—and why people find this so upsetting—it helps to think about the gradual extermination of all life on the planet Earth.

This is part 1 in a hopefully-ongoing series about videogame programming techniques I’ve acquired over the past several years. Broadly, it’s intended to help you solve common problems you encounter in your daily game development life. Yet rather than merely throwing a GitHub or Asset Store link at you, my goal is to present a comprehensive programming style: how to think about the problem, why to think about it this way, and ultimately how to attack similar problems with valour and grace.

I once read a Gamasutra post describing something called ‘The Curtain Problem’. Here’s the basic idea: you must program a big Broadway-style stage curtain for the videogame you’re creating. You’ll use this curtain firstly as a loading screen: closing it whenever it’s time to unload the previous scene, then reopening it once the new scene has loaded. Yet your level designer is musing about reusing this curtain for dramatic effect at various points during the middle of gameplay. (Your antagonist, it turns out, is an underground R&B idol called ‘The Bleaknd’ who changes outfits often; it would be cool if each outfit swap caused the curtain to close and reopen around him.)

Had you brought this problem to an entry-level programming class, you’d inevitably find yourself staring at a projector slide with the following text on it:

```csharp
public class Stage
{
    public bool IsCurtainClosed = false;
}
```

The very first thing universities teach us about object-oriented programming is that this variety of public member access is, for some reason, sinful. Yet it’s difficult to explain to a novice programmer precisely why this is the case. In the land of C# and Unity3d, the first thing your professor would do is show you the ole’ getter & setter pattern:

```csharp
public class Stage
{
    public bool IsCurtainClosed
    {
        get
        {
            return _IsCurtainClosed;
        }
        set
        {
            _IsCurtainClosed = value;
        }
    }

    bool _IsCurtainClosed = false;
}
```

It’s apparent to the student that this code snippet is functionally identical to the previous one, except that it’s many lines longer and, therefore, much harder to write correctly on the back of the handwritten exam your professor is preparing to give you. They explain that this technique entitles you to a fresh and steamy mound of OOP shit™, which smells strongly of a thing called ‘encapsulation’ (best remember this one for your first job interview). The two of you soon arrive at something like:
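A minimal sketch of that encapsulated version (the CurtainController class and its Open/Close methods are my own illustrative guesses at the shape described below, not code from the original post):

```csharp
public class Stage
{
    public void OpenCurtain()  { _CurtainController.Open();  }
    public void CloseCurtain() { _CurtainController.Close(); }

    // Private: nothing outside Stage knows CurtainController exists,
    // so only Stage can feed it commands.
    CurtainController _CurtainController = new CurtainController();
}

public class CurtainController
{
    public bool IsClosed { get; private set; }

    public void Open()  { IsClosed = false; } // kick off the opening animation here
    public void Close() { IsClosed = true;  } // kick off the closing animation here
}
```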

This snippet, unlike the previous one, rises above complete pointlessness. It’s sort of neat that you can tell ‘Stage’ to open or close the curtain without needing to know what CurtainController is or what it’s doing. It’s sort of helpful that CurtainController is inaccessible from outside the Stage, and therefore has only one thing feeding it commands. You could probably reuse CurtainController somewhere else—perhaps as part of some other class—though you’ll probably never need to.

Inevitably, the very next problem you’ll encounter shall concern the sharing of control. What if your antagonist is frenemies with a local rap superstar (notorious, it turns out, for ‘runnin through the Styx with his ghosts’)? What if both characters need to open and close the curtain at arbitrary intervals? What if there were four characters, or six, or twenty?
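One common way to share control like this (my own sketch, not a solution from the original post; CurtainController and its SetClosed method are illustrative assumptions) is to reference-count the requests: each character holds a ‘closed’ request rather than toggling the curtain directly, and the curtain reopens only once nobody is holding one.

```csharp
using System.Collections.Generic;

public class Stage
{
    // The curtain stays closed while at least one character holds a request,
    // so twenty characters can share it without clobbering each other's state.
    public void RequestCurtainClosed(object requester)
    {
        _Requesters.Add(requester);
        _CurtainController.SetClosed(_Requesters.Count > 0);
    }

    public void ReleaseCurtainRequest(object requester)
    {
        _Requesters.Remove(requester);
        _CurtainController.SetClosed(_Requesters.Count > 0);
    }

    readonly HashSet<object> _Requesters = new HashSet<object>();
    readonly CurtainController _CurtainController = new CurtainController();
}
```

With this shape, The Bleaknd and his frenemy can each close and release the curtain at arbitrary intervals without either one yanking it open mid-costume-change.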

Anyone who reads a lot of criticism must learn to deal with friends who accuse them of ‘over-analyzing’ stuff. Granted, there are people in the world who’d love to hear me expound on the hidden meanings behind The Shining’s conspicuously-placed Calumet Baking Powder cans; many others, however, would find it tedious. I think it’s polite to read the room a bit before opening my mouth.

The Shining is one of those ‘complicated’ films, full of dark corners in which to build dense essays and hour-long YouTube analyses. I like complicated films! But I also appreciate those ‘CliffsNotes’ kinds of films: equally-great works that place their themes right out in the open. You rarely need an essay for those ones. Sometimes all it takes is one sentence. For example: I think the sled in Citizen Kane represents the protagonist’s childhood innocence. It’s not the most exciting thesis, but it’s concise, and it’s relatable, and that’s nice!

In the land of videogame criticism—which is my country of origin—we tend to get the worst of both worlds. While Kane spends its runtime showing us snowglobes and sleds, the time I’ve spent with Fallout 4 consists mostly of digging medical syringes out of garbage cans so I can use them to magically heal bullet wounds. It’s hard to decide what I think the garbage cans represent. I’m pretty sure it’s not my character’s childhood innocence.

The videogame industry would hate for you to ‘over-analyze’ these elements. Its spokespeople would have us believe that said elements don’t mean anything. They’d claim these elements exist simply to gratify: to be fun and exciting, rather than enlightening in any particular way. Playing a videogame, by this mode of thinking, is essentially like receiving an impersonal blowjob from your computer. You’re there to be stimulated; you don’t much care for the company.

In The Friend Zone is a Twine game by me that you can play (for free!) on itch.io.

My work on Friend Zone began with the game’s ending: a sort of prolonged joke riffing on a parable called “Before The Law” by a writer named Franz Kafka. Here are the parable’s opening lines:

Before The Law stands a doorkeeper. A man from the country comes to this doorkeeper, and requests admittance to The Law. But the doorkeeper says that he can’t grant him admittance now. The man thinks it over and then asks if he’ll be allowed to enter later. “It’s possible,” says the doorkeeper, “but not now.”

The man might overpower the doorkeeper if he wanted to, but behind this doorkeeper is another; behind that one is yet another, and so on. Each, if you believe what the doorkeeper says, is more powerful than the last. The man tries for years to talk his way in. He begs, he pleads; he bribes the doorkeeper with everything he owns. Nothing works.

Eventually the man is old and dying, and still he has not seen The Law. Then, as his death approaches, blinding light shoots from the doorway. He experiences an epiphany. All his thoughts and memories coalesce into a single shining question, which he puts forth to the doorkeeper: “Everyone strives to reach The Law,” says the man. “How does it happen, then, that in all these years no one but me has requested admittance?”

The doorkeeper tells him that no one else could have passed through this door. This door was made only for him.

I think the parable is about mistaking the subjective for the universal. The man imagined The Law within his own mind, so vividly that he mistook it for something outside himself: something tangible, something real. He further mistook it for something after which everyone strove, when in truth only he could strive after that which only he had imagined. The man desired something to seek, and not to feel alone in seeking it; and so, like a dog chasing its own tail, this man came to chase The Law.

My joke—what would become the conclusion of my Twine game—plays off the very same mistakes, though replacing “The Law” with “The Sex”. Before The Sex sits a casual acquaintance. A man from Reddit comes to this acquaintance and asks to gain admission to The Sex. He believes that as a man he must pursue some universal ideal of manhood: that this is his purpose and birthright, sought by him and all men like him. In truth he is more like a dog; though he hopes chasing tail will bring meaning to his life, the only tail he really chases is his own.

A year and a half ago I was commissioned to write a piece for a cool new alternative games publication called Arcade Review. I was halfway down a rhetorical spiral that kicked off with my Problem Attic piece; a followup, called Cult of the Peacock, became the first thing I’d ever written to garner more than ten thousand readers. The AR piece was supposed to conclude my little games crit trilogy, solving once and for all the problem of ‘form’, ‘content’ and videogames! (lol.)

I made it five thousand words in before realizing I would never hit my deadline. I then decided to split the thing in two, sending the first part to AR and resolving to work the rest out later. I called this piece “Form and its Discontents”. It goes something like this:

At great cost it is possible to draw players along a trail of breadcrumbs through the labyrinthine structure of a videogame. Yet what beauty will they find in there with their eyes fixed so firmly upon the ground before them? What will they think of you when they step past the final crumb and look up, at last, to discover nothing but an unskippable twenty minute credits screen?

If you read me very often, you might know what became of part two. Yet part one has now become available—free as in gratis—on AR’s website! It’s about how a film form is ‘exposed’ while game form is ‘subterranean’: how popular attitudes around spoilers and consumption do not apply consistently to both media, forcing game critics to approach each one differently. You can find the piece here, alongside all manner of truly excellent words about videogames:

I was very surprised, upon visiting Uncharted developer Naughty Dog’s new downtown office, to discover that the walls were all stained yellow. It’s a bold choice, coming from a developer known for its games’ exceptional polish. Their Production Lead, who identified themself only as K, smiled oddly as they led me down the tunnel to what they called the studio’s ‘testing apparatus’. Uncharted 7 will make use of an exciting new haptic technology, they explained, to really immerse you in the vibrant world of Nathan Drake. When they first showed me the apparatus, I admit I was skeptical; the upfront cost of 270 motorized syringes would price most gamers out of the market, never mind the ongoing expense of fluid cartridges. I asked K if there would be an option to play the final game WITHOUT strapping into the needle machine, but their PR person told me there are no details about that at this time.

Uncharted 7 will be the first Naughty Dog game to run at a solid 500 frames per second. They can do this, K explained, because the game contains no computer graphics whatsoever. Instead the forthcoming adventure will be rendered directly onto the player’s back using the aforementioned motorized syringe array! That’s right: every second, 500 needles will inject differently-mixed pain fluids beneath the surface of the player’s skin, communicating all the action directly via their central nervous system. K said there is far less latency than a traditional computer setup, but I was in shock for most of it so I can’t confirm this firsthand.

It was a little weird, in retrospect, that there was no sign above the studio’s front entrance. And that K’s PR person had talons instead of a face. Oh well. Uncharted 7 is coming to all needle-based gaming platforms on the Eve of Dar’ghul’s Awakening, which I assume is like Fall 2018? Stay tuned for our obligatory 5-star review and, hopefully, some much-needed skin grafts.

The philosopher Slavoj Zizek is distressingly fond of a rhetorical trick we’ll call Confirmation By Negation. The idea is that by searching carefully through seemingly contradictory notions we arrive at some fundamental truth. You tell me you want to drink this latte, but what if the opposite were true!? You may want to hold the latte, to sip from the latte, but never would you want to finish it! Never would you desire to grow fat from its milk and sugar, or to suffer the negative effects of its caffeine! In this notion of the dairy-free, sugar-free, caffeine-free latte we discover the truest ideals of postmodernity! And so on.

So it is with many of today’s open world videogames. They seek to let you do anything, which of course means there is nothing to do! Their attempts to regulate your rise to power flatten them to an exercise in exchanging meaningless junk for the means to accrue more meaningless junk, and their deluge of tiny consequences for all your tiny actions of course renders them consequence-free. Their unique storyworlds—immersion in which is supposed to be the entire point of playing—somehow become completely external to them, since of course every ‘open world’ from Middle Earth to Mad Max manages to be exactly the same. You say you want an open world, but my god, don’t you actually want the opposite!?

We can view these games as a sequence of empty gestures designed to mediate our experience of the dreadful Other who lurks on the opposite side of their ending cinematics. We climb many instances of essentially the same tower to reveal wide swathes of essentially the same territory, in which we perform essentially the same regimen of busywork. We do this so that many instances of the same Orwellian super-presence will pass ‘control’ over this territory from itself unto us. In Assassin’s Creed we transact with the evil and mysterious Templars; in Batman it’s the evil and mysterious Arkham Knight. In The Elder Scrolls: Oblivion, perhaps most tellingly of all, we transact with hell itself. The game opens scores of its Oblivion Gates—all of which are essentially the same Oblivion Gate—and invites us to close them one at a time. By investing many hours of our labour, we sew shut the bursting seams of eternity! What better way to relieve ourselves of boredom?

What is boredom, if not fear of our inevitable death?

Within each of these games we pilot the same walking contradiction: something that is at once us, and not us. Where our real world body cannot have very much of what it wants, our open world body can have everything that is available, which of course is many instances of the same thing. But you see, it is also more! Confirmation By Negation tells us that ‘possession’ is predicated both on what we have and what we can’t have. When there is nothing in the universe we can’t have, the idea of ‘possession’ does not meaningfully exist; there is, in fact, nothing to possess. Thus when Fable II’s Orwellian super-presence permits us to transact from it marriage with every character in the world (each of whom is essentially the same character and thus a non-character), we discover little point in marrying anyone in particular. We discover the very essence of the open world system: a machine permitting us to learn, by manipulating it, precisely how and why manipulating it can have no effect.

The ‘open world’ is a mirror through which we can view, indirectly, the abject emptiness lying beyond our realm of experience. Within its reams upon reams of collectible things, we recognize and find comfort in the absence of any particular thing; we enact the ritual of collecting junk until there is no more junk to collect, at which time we discover triumphantly that we have succeeded in gathering the pure, distilled essence of nothing. We hold it in our hands, we add it to our gamer score, then we reach for a second world that is in truth another instance of the first. Here is Zizek’s dairy-free, sugar-free, caffeine-free, never-ending latte! Drink as long as you please, for your stomach can never fill.

The phrase ‘Flash is dead!’ isn’t so much a declaration of fact as it is an expression of one’s political alignment. To discover the nature of this alignment we need only ask ourselves: if indeed Flash were dead—if, somehow, a medium for creative expression were capable of experiencing death—who would we say had killed it? Was it Steve Jobs in the hardware biz with the iPhone? Or was it the front-end web person in tech services, scheming to fix the broken relationship between HTML and CSS? Was it Google Chrome, whose commitment to ‘openness’ has predictably come to preclude any software its parent company can’t manipulate? Or was it JavaScript, that mangy nightmare of a programming language whose hunger will consume the world?

Should we rejoice in our ever-impending freedom from all of Flash’s dreadful security problems (to be replaced, one assumes, by every other platform’s dreadful security problems)? Or the tremendous memory management and performance benefits made possible by… uh… cross-compiling to JavaScript? Shall we shower ourselves in the splendour of CSS3 on Chrome, then shower ourselves in the splendour of CSS3 on Firefox, then shower ourselves in the splendour of CSS3 on IE 10.0 and up, then media query into a set of stylesheets that works on more than fifty percent of recent android phones?

As I said, it’s very much a question of politics.

What gets lost in the ideological shuffle, though, is how wonderful a programming language Flash’s ActionScript 3 is. It’s both powerful and flexible, which is nice; yet beyond that, AS3 is fun. Where Java is verbose, consistent and largely insufferable, AS3 gives you getters and setters to break up the monotony. Where C# gives you a giant kitchen sink in which to deploy any programming pattern ever conceived by humankind, AS3 lends itself to a more particular, off-beat style of code.

Most centrally, AS3 provides all “The Good Parts” of JavaScript while at every turn being worlds better than JavaScript. You of course get all the whacky closure-based nonsense you’d find in JS and any other function-y language; yet AS3 provides syntax for strongly-typed variables, permitting software devs to write honest-to-goodness APIs with coherent, half-readable method signatures! You get IDEs capable of auto-completion, tool-assisted-refactoring and reference lookups! You get the gift of sanity! Do you know what sort of creature writes software in Notepad++ with naught but some crummy syntax highlighting plugin? A WILD ANIMAL. Possibly a rabid one.

Then when it comes time to program some ambiguously-typed, data-structure-melting atrocity, it’s as simple as casting all your variables to * and hacking away the night. I must concede that kitchen-sink languages like C# present all sorts of interesting rope with which to hang yourself in these situations—generics, operator overloads, reflection, occult contracts and so on—yet I’m not certain a jungle of generics has ever proven much more comprehensible than a few AS3 dictionaries packed full of mystery meat. In neither case is anyone going to understand anything you just wrote two months from now.

Speaking of dictionaries: AS3’s dictionaries are obtuse and absurd and absolutely marvellous. For some reason you use ‘for(x in dict)’ loops to iterate through keys but ‘for each(x in dict)’ loops to iterate through values. Anything can be a key, and anything can be a value; strict typing is not permitted, so the language almost begs you to do something cool/weird. Astonishingly, dictionaries can use weak-referenced keys! In fact, any AS3 event dispatcher can use weak-referenced callbacks! Not even C#, that notorious syntactic sugar thief, is able to do that!
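To illustrate (a small sketch of my own; the Sprite is just an arbitrary example of an object key):

```actionscript
import flash.display.Sprite;
import flash.utils.Dictionary;

var dict:Dictionary = new Dictionary(true); // 'true' requests weak-referenced keys

var sprite:Sprite = new Sprite();
dict[sprite] = "anything at all"; // any object as key, any value

for (var key:* in dict)        // for..in walks the KEYS
    trace(key);                // [object Sprite]

for each (var value:* in dict) // for each..in walks the VALUES
    trace(value);              // anything at all
```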

There is a sublime quality to Flash’s outrageous RAM and CPU overhead that doesn’t quite come across until you’ve used it to program games. I once built a Flash game that streams hand-painted full-resolution environment art from the internet and loads all of it into memory at once:

I once programmed a dynamic 2d lighting engine that renders circular gradients into a bitmap every frame, blitting it back across the screen as an overlay:

I wrote all this using Flash’s excellent 2d graphics API, which treats me like a human being by asking for shapes, fills and gradients rather than vertices, triangle maps and my first-born’s blood. I never had to explain the foreign notion of a ‘line segment’ to some illiterate thousand-threaded polygon pump, futzing with triangle fans and clipper libraries and “programmable” shaders. The libraries I used were entirely CPU-based—which is to say slow as hell—and I still got away with murder.

To use my favourite Bogost-ism, I find a certain squalid grace in hogging 2 gigabytes of RAM and 3 gigahertz of computing power just to draw a 2d videogame. It speaks to a promise fulfilled: a world where we can make sprawling, beautiful, outrageous software without fear of over- or under-utilizing the hardware against which we labour. A world free of scarcity, in which computers serve programmers rather than the other way around. A world without limits.

Yet in the shadow of Flash’s ever-impending death—an era of innovation for its own sake, leaving everything behind amidst our fumble for the bleeding edge—that world cannot exist. The hardware will always be too slow; the software will always be half-broken and impliable. This is not cause for celebration; if anything, it’s cause for sorrow. There is value in the old and the venerable. There is culture. There is expressiveness. There is joy. We’d be wise to treat it with respect.

Spotted Elk, a man known later in life as Chief Big Foot, lived in a place we now call South Dakota from ~1825 until December 1890. Though he was a notable figure in life, today the United States remembers him mostly for his death. He remains famous there, by sight if not by name, because a Chicago newspaper published a photograph of his corpse lying bent up in the snows of Wounded Knee Creek following the massacre that bears its name. Archivists preserved this photograph through the intervening century so that it now adorns many a textbook and webpage. When I look at it I’m tempted to see a sensational primary document depicting the death throes of the Lakota people, the Great Sioux Nation to which they belonged and, in a larger sense, the final defeat of First Nations civilization at the hands of European expansionism.

But then I consider the ways in which this image is deceptive. As I understand it the photographers posed Spotted Elk’s body posthumously, propping him up for the camera before they took the shot. I wonder whether they shovelled that snow onto his thigh to connote the passing of the Sioux into the land, never to be seen again. I wonder if they retched at the sight of the bullet wound in his neck while working to hide it from view; or perhaps they did not flinch, having already grown accustomed to articulating dead bodies. The most famous version of the photograph, the one you see above, crops out the United States soldiers milling about in the aftermath of the massacre; it excludes the hundreds of dead Sioux surrounding Spotted Elk in the snow, and the mass grave into which the soldiers shovelled their bodies. It frames the victim rather than his attackers, suggesting he was taken by some disembodied force instead of a soldier’s Winchester rifle. The serene expression on his face makes him seem like some spirit vanishing quietly from the world rather than a sixty-five year old man who was just shredded to pieces in a hail of gunfire. The photograph does not reveal the fact that Spotted Elk died right in front of his twelve year old grandson; it can only hint at the fact that the child miraculously escaped.

What we see in this photograph is not the Sioux as they truly were in 1890. It is the Sioux as Chicago wanted them to be: Tragically and conveniently eradicated. Wounded Knee was to become the curtain call for what whites dubbed “the Indian Wars”; the mighty engine of Manifest Destiny was to be at last decommissioned, victorious in its quest to colonize the continent, and the times of trouble were supposed to be over. In the 1870s, when white settlers still surged into Sioux territory, newspapers might have painted Sioux as wanton rapists and pillagers. Yet as their capacity for resistance gradually waned and they ceased to pose a threat, a gentler image came into fashion: That of the noble savage, a piteous being driven to extinction not by human beings but by the spirit of ‘historical progress’. In truth, of course, the Sioux were never wanton rapists and pillagers nor did they ever go extinct. They are still here, no matter how hard the US government tries to ignore them, and Wounded Knee Creek has remained a battleground (both literally and figuratively) for over a century. Spotted Elk’s family survives to this very day; you can find some of them on Twitter.

I came to study the Wounded Knee Massacre via the unlikeliest of tangents: I heard about it in Bioshock Infinite, a big-budget videogame in which the racist inhabitants of a floating city named Columbia present an astonishingly ‘1870s’ view of the event:

Here we see Wounded Knee as a Disneyland exhibit, were the theme park run (as Columbia is) by the unrepentant perpetrators of the massacre. Inside we find more photographs; more painstakingly articulated corpses, their photographers hoping to sell us a story about the people who killed Spotted Elk that day at the creek. The photographs are nested within one another, each showing conflicting perspectives. On one level we see the perpetrators hoping to erase their guilt from history; on another we see the creators of Infinite seeking to highlight this attempt at historical revisionism. It speaks once again to how images mutate over time. In Chicago 1890 they mourned the noble savage; in Boston 2013 we reconsider who is most guilty of savagery.

The Bioshock franchise purports to tell us ghost stories in a multiverse with three constants: There’s always “a Man, a Lighthouse and a City”. These, however, are not the constants that interest me. When I look at Infinite I see hundreds upon hundreds of photographs, along with the corpses those photos depict. Among these number cardboard Sioux warriors, as we’ve discussed; beside them are cardboard rebels from the Boxers of North China; lastly there are American frontier people, personified by residents of Columbia and the various incarnations of ‘1870s Doomguy’ Booker DeWitt. Recently we’ve all gotten the chance to photograph the corpse of Irrational Games itself, which closed its doors in the aftermath of Infinite’s perilous production and complicated reception. (This article constitutes another such photograph.) I’ve come to believe that in the histories we write about our world there are only two salient constants: There’s always a corpse, and there’s always a camera.

What follows, then, is a ghost story from me. It’s about what happened to the Boxers and the Sioux, whose stories bear an uncanny resemblance to one another despite their separation by an ocean. It’s about what happened to the spirit of Columbia, who was a real world national myth before she became a fictional sky city. Lastly it’s about what always happens to men like Booker DeWitt: The stories they steal from their victims, the messianic cults they fashion for themselves and their ultimate fate lying dead in the very same grave they dug for their enemies. This story begins, once again, in the Dakotas.