Breaking The Wheel
https://www.breakingthewheel.com

Breaking The Wheel is about making games better. There are too many horror stories of game development hell, crunch, and lay-offs. I want to change that.
Guest Post: On The Subject of Trade-Offs
https://www.breakingthewheel.com/guest-post-subject-trade-offs/
Mon, 31 Jul 2017

My friends at Black Shell Media were kind enough to host another of my scribblings, this time on the ever-present and ever-important notion of trade-offs: how to think about them, traps to avoid when dealing with them, and why it’s so important to know yourself when faced with them. Click here to read on: On The Subject of Trade-Offs


Conflict of Interest: The Fancy Mess of Scrum, Part 3
https://www.breakingthewheel.com/conflict-of-interestthe-fancy-mess-of-scrum-part-3/
Mon, 26 Jun 2017

In Part 1 and Part 2 of this series, I talked about the functional issues of scrum. In this post, I want to talk about the larger, economic problem with scrum. Namely, what was once an idea designed to support other industries has become an industry unto itself. And with that comes what economists would call a “conflict of interest.”

By Reading This Post, You Will Learn:

How the idea of scrum has become the institution of scrum

Why scrum is as much about marketing as it is about productivity

Why scrum is an “entrenched interest”

The definition of a “conflict of interest” and why the institution of scrum fits that definition

The Idea of Scrum Versus the Institution of Scrum

The Crow, Brandon Lee’s final movie, is so early 90’s it hurts. The soundtrack alone is a relic of a bygone era. It’s not a great movie, but I still have an indelible love for it. At the start of the final act, the villainous Top Dollar (played to a T by the ominously voiced Michael Wincott) has gathered his minions for a meeting. He desires a change of pace for their annual criminal endeavor, Devil’s Night. A pivot, if you will. “A man has an idea,” he proclaims. “The idea attracts others, like-minded. The idea expands. The idea becomes the institution.”

That’s how I feel about scrum.

Scrum started as an idea – a response to the challenges of software development, created by real professionals in a commercial setting. It was a good idea – a means to an end. That idea attracted other developers who also wanted to improve their processes. And then, much as Bruce Lee found with martial arts pedagogy, somewhere along the way the means became the ends. The objective of having an efficient, effective process was superseded by the end goal of having an authentic scrum framework.

The idea of scrum has become, quite literally, the institution of scrum. And I mean the literal version of “literally”.

Haters Gonna Hate (But I’m Not a Hater)

I am not anti-scrum. It was an essential starting point for me as a project and process manager. It offered a means for me to provide value for employers and clients. And it’s a great foundation for effective production and for teams looking to improve their modus operandi.

That said, our goal should always be to increase efficiency, not to strictly adhere to dogmatic execution of a pre-defined framework.

A Note

This article assumes a basic knowledge of the scrum framework. Both describing and critiquing the framework in the same post would make for an excessively high word count.

There’s Nothing Like Marketing to Ruin a Good Time

Scrum is now undeniably, empirically, unequivocally an industry unto itself. There are scrum associations, scrum certifications, scrum conventions, scrum consultants†, scrum coaches, and scrum trainers. What was a means to ship products became a product unto itself. And while I don’t begrudge any of the people involved their livelihoods, this also creates a real problem of incentives.

Scrum Is No Longer a Solution – It’s a Product

First of all, there’s the issue of marketing. I have an MBA with a dual concentration in marketing and entrepreneurship. Which, essentially, means I have a degree in marketing and highly specialized marketing. What’s more, I earned that degree from Northwestern University’s Kellogg School of Management, renowned as one of the best, if not the best, marketing schools in the world.

Point being: I know marketing when I see it.

Marketing has three essential functions: convince you that you have a problem, convince you to purchase a solution to that problem, and then convince you to increase your consumption of said solution.

Now let’s look at scrum. We have a clear problem: product development is hard. Scrum provides an effective solution to that problem in the form of training and certification. Okay fair enough. Now, for the third piece: consumption.

Do me a favor: go look at the Scrum Alliance website’s certification page. There are three tiers of certifications, each with multiple varieties of certificates. Each, in turn, carries its own training and certification costs, and many require regular re-certification (again, for a cost) and participation in ongoing “scrum education units” (also at a cost).

The Conflict of Interests at the Heart of Scrum

I’m not arguing that these certifications are worthless or fraudulent. But scrum has become a commercial enterprise unto itself. Many people now make their living not in product development but in selling some aspect of scrum. Those people depend on the existence and popularity of scrum as an industry to put food on the table. And that, in turn, makes scrum an entrenched interest.

Economists refer to this phenomenon as a conflict of interest: “multiple interests, financial or otherwise, one of which could possibly corrupt the motivation or decision-making of that individual or organization” (source: Wikipedia). The people who make their livelihoods propagating scrum are nominally (and sincerely) working to improve product development. But they also need scrum to remain the premier development framework, or their livelihoods dry up.

A Thought Exercise

Now, let’s say an empirically, verifiably better production framework came along. Like, let’s just imagine that there was no contest – this new framework wiped the floor with scrum. Further, let’s imagine that this new framework is gaining popularity – it’s a viable contender to scrum.

Do you think the collective entrenched interest of scrum would just fold up its tent and roll on out? Would all of the many people who make their livings from scrum gladly call it a day and hit the unemployment line?

Granted, some forward-thinking individuals would evolve with the times. But a large contingent of people would be incentivized to fight this new framework tooth and nail in order to protect the institution of scrum, and they would do so. They would fund research studies shitting all over this new framework. They would sponsor articles and opinion pieces in industry magazines and blogs calling it a sham or snake oil. And they would seek to ostracize, repudiate, and denigrate any figureheads associated with it.

So What Do We Do Instead?

In all seriousness, the last thing we need is to replace one compromised framework with another. I’m not here to say to everyone, “Let’s toss out scrum and go with kanban!” (although I do think that would be a good start).

So my recommendation would be not to focus on frameworks as an end goal at all. Again, established frameworks like scrum are good foundations for production, but they are also problematic because they are provided as “one size fits all” and because they will, almost inevitably, lead to associations and entrenched interests down the line.

Instead, we should focus on our mindset. I don’t advocate for scrum anymore. Nor kanban. Instead I’m an advocate for lean production. Lean is less a framework than a philosophy: waste should be eliminated. Comprehensively, systematically, and ruthlessly. We need to analyze our activities from the perspective of value-adding and non-value-adding. We need to gather, analyze, and optimize the metrics behind our processes, not just at the macro, sprint or story level, but at the micro, step-by-step level. Rather than rigidly applying frameworks, we should rigidly apply the scientific method.

By all means, take good ideas from scrum, or kanban, or extreme programming. Use them as the starting point, the petri dish for your operations. But then synthesize them into something new and optimal for your company, project, and culture.

Further Reading If You Enjoyed This Post

The Fancy Mess of Scrum

Now again, to reiterate, I am not anti-scrum. If my choices are between seat-of-my-pants management, waterfall, and scrum, I’ll take scrum every time. But scrum is a means to an end, not an end in and of itself. And in talking with practitioners, it sometimes feels like we’ve forgotten that. “That’s not scrum” is a meaningless, vapid rebuttal to any process change suggestion, and yet I hear it invoked as if it were a decisive argument.

Indeed, there are practitioners out there trying to move the framework forward (story mapping is a great example). But that progress is still done under the umbrella of scrum, which means it is subservient to the institution.

Bruce Lee summed up his disdain for the stagnancy of martial arts by describing it as a fancy mess. And that’s the danger I see ahead for scrum. The goal should be to maximize productivity and reduce cycle times, not to strictly adhere to dogma.

As Top Dollar said at the end of his speech, “The idea has become the institution. Time to move along.”

Indeed.

Key Takeaways

Scrum has mutated from a framework into an institution, and thus has become an entrenched interest

The institution of scrum shows the telltale signs of a marketing-led organization: selling a solution to a problem and working to increase consumption of said solution

The number of people and groups that make their living from propagating scrum creates an entrenched interest

This entrenched interest, in turn, creates a significant conflict of interest for people who make their livings from scrum

While there’s nothing wrong with using scrum as a starting point for effective production, the goal of effective management should be to eliminate waste and increase productivity, not adhere to dogma

If You Enjoyed This Post, Please Share It

† Full disclosure: “scrum consultant” is my role at one of my current clients. “Hypocrisy!” you cry, but the truth is I haven’t been on a scrum team at the client for over a year. I’ve made sure that the teams I’m on are using continuous flow processes, and I’ve been a consistent advocate for moving away from scrum.

The Higher Order Consequences of Sprints: The Fancy Mess of Scrum, Part 2
https://www.breakingthewheel.com/higher-order-consequences-sprints/
Mon, 19 Jun 2017

In Part 1 of “The Fancy Mess of Scrum”, I talked about the flawed intuition behind sprints: how they batch work, obfuscate inefficiencies, and are superfluous in terms of extrinsic motivation. In this post I want to delve deeper into the higher-order negative externalities that sprints spawn – the consequences of the consequences.

By Reading This Post, You Will Learn:

Why scrum is a mismatch for game development from the word go due to art asset pipelines

How the need to allow time for QA and product owner review creates significant churn

The problem with having developers move ahead on stories in future sprints

How limiting developer throughput extends the flow time of feature development

Haters Gonna Hate (But I’m Not a Hater)

I am not anti-scrum. It was an essential starting point for me as a project and process manager. It offered a means for me to provide value for employers and clients. And it’s a great foundation for effective production and for teams looking to improve their modus operandi.

That said, our goal should always be to increase efficiency, not to strictly adhere to dogmatic execution of a pre-defined framework.

A Note

This article assumes a basic knowledge of the scrum framework. Both describing and critiquing the framework in the same post would make for an excessively high word count.

Batching Has Some Nasty Higher-Order Effects

The batching nature of sprints creates what my friends across the pond would call “knock-on effects” – unintended, negative consequences.

Problems Right Off the Bat

Let’s start with the simplest one: scrum is not neatly compatible with art asset creation. Character models, animation sets, and other asset creation pipelines (generally) do not cleanly match sprint cadences. So, if you try to pigeonhole art assets into a sprint cadence, you’re going to have team members who perpetually either can’t commit or blow their commitments.

Sure, you can make the tail wag the dog. You can mutilate asset pipelines so that they line up with a sprint cadence, irrespective of the actual time the various processes take. But that’s just imbecilic: you’re forcing reality to match the process rather than the other (and proper) way around.

So, right off the bat, sprints and game development have compatibility issues.

Churn

Moving on, let’s say you have a two week, ten working day sprint. Awesome! Your devs have ten days to reach their commitment.

But, wait – you can’t just give the product owner a sprint’s worth of work at 5:00 PM on the last day of a sprint and expect her to verify all of it (or assume it’s all defect free, for that matter). She needs at least a day or two to go through the stories thoroughly, among all her other responsibilities. So, in order to facilitate that review, the work needs to be ready a couple days in advance of the end of the sprint.

Okay, fine. So your 10-day development cycle is really an 8-day development. Big whoop. But hang on: if you run QA passes as part of your sprint (which you should), then QA also needs to check the work to ensure it’s defect free. That takes time as well. So your 8-day development cycle is now down to a 5- or 6-day cycle.

This dynamic creates two third-order effects.

Third Order Effect, #1

First, instead of having one deadline (the end of the sprint), you end up with two or three: the end of the sprint, the deadline for product owner review at t-minus two days, and the deadline for QA review at t-minus five days. And, in my experience, those micro-deadlines create significant churn.

All of your developers rush to get all of their work slammed in for QA review right before the QA deadline. Naturally, the rush to get the work in carries with it a lot of defects, which makes for a hectic QA review/bug fixing process. Then there’s a similar slam of work right before the product owner review deadline.

It’s like doing arm curls with a water-filled weight. Rather than the smooth, continuous motion you’d have with a standard dumbbell, you get a lurching arc with its own destructive momentum.

Third Order Effect, #2

Second, developers who finish early will be tempted to get a head start on stories slated for future sprints. A 5-day story has a 10-day development cycle in a two-week sprint. As discussed in Part 1, that’s already bad enough (a 50% flow time efficiency). And if you start that work early, that 10-day cycle becomes an 11-, 12-, or 13-day cycle. Or worse. Your flow time efficiency takes a nose dive.
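The flow-time-efficiency arithmetic here is simple enough to sketch directly (a minimal illustration of the numbers in the text, nothing more):

```python
def flow_time_efficiency(touch_time_days, flow_time_days):
    """Fraction of a story's total flow time spent doing actual work."""
    return touch_time_days / flow_time_days

# A 5-day story verified at the end of a two-week (10-working-day) sprint:
print(flow_time_efficiency(5, 10))             # 0.5 -> 50%

# The same story started early, stretching its cycle to 13 days:
print(round(flow_time_efficiency(5, 13), 2))   # 0.38 -> ~38%
```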

What’s more, getting a head start on work, in turn, carries a fourth-order effect. If your developers are perpetually getting ahead of the next sprint, then it follows that QA and the product owner will accrue a backlog of stories to review. In as few as two sprints (I’ve seen it happen), that backlog can get so big that reviewing those log-jammed stories consumes an entire sprint’s worth of bandwidth for QA and the product owner. And then what do you have your devs do? Take a sprint-long smoke break, or get even further ahead of the product owner?


The Pantry Effect

Now, again, there is a counterargument: ideally, the work in a sprint gets finished, tested, and verified at an even, ideal burndown rate. But the raison d’être of the sprint is to provide little deadlines along the way to keep production moving. And if there’s one universal truth about deadlines, it’s that work compresses toward the due date. It’s just human nature.

Psychologists refer to this dynamic as “the pantry effect”: the more you have of something, the less self-regulation and discretion you demonstrate in its consumption. Lots of food in the pantry? Chow down. Lots of time to get work done? Kick back.

But let’s say you’re so effective at keeping people moving, or your team is so superbly disciplined, that work proceeds at a steady clip – either of which is entirely possible. Well, if your team does move in a continuous, harmonious cadence, then a sprint production model is entirely superfluous. If your team is moving at a constant, sustainable velocity, then you don’t need the intermittent deadlines sprints provide. And so you should, once again, simply focus on minimizing the cycle times for each step of your development process.

But even if that’s the case, you still need to reserve time for QA and product owner review. Even if the review work is evenly distributed across the sprint, it places an upper bound on the amount of work developers can pursue. And that still drags down your flow time efficiency.

The Math of a Constricted Throughput

Okay, so let’s assume your developers have a specific throughput of stories (R). For any given group of stories (I), it takes time T to process the work (T = I/R, per Little’s Law). If you artificially restrict R by limiting what developers can process, and hold the amount of work (I) constant, then, mathematically speaking, T must increase.

Example: Your developers could, absent sprint constraints, process 20 story points of scope in a two-week sprint. Your product owner grooms at a sufficient pace to always keep 40 points of scope in the backlog on average. That means the average flow time for individual stories, from when they enter the backlog to when they are processed, is two sprints, or four weeks (40/20 = 2). This is your theoretical flow time.

Now, let’s say you constrict your developers’ bandwidth to 16 points per two-week sprint in order to facilitate product owner verification. If your product owner keeps grooming at the same rate and maintains 40 points of work, then the average flow time protracts to 2.5 sprints, or five weeks (40/16 = 2.5). Your flow time efficiency has dropped to 80% (2/2.5 = 80%).
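The Little’s Law arithmetic above can be sketched as a toy calculation (the numbers are the illustrative ones from the example, not a real planning tool):

```python
def flow_time_sprints(backlog_points, throughput_points_per_sprint):
    """Little's Law: T = I / R, measured here in sprints."""
    return backlog_points / throughput_points_per_sprint

# Unconstrained: 20 points per two-week sprint, 40 points in the backlog.
baseline = flow_time_sprints(40, 20)       # 2.0 sprints = 4 weeks

# Constrained to 16 points per sprint to leave room for review.
constrained = flow_time_sprints(40, 16)    # 2.5 sprints = 5 weeks

efficiency = baseline / constrained        # 0.8 -> 80% flow time efficiency
```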

Having My Cake and Eating It Too

If you’ve made it this far, you might be confused. I seem to be arguing both for and against restricting developer throughput. For, because it prevents developers from working ahead and creating an ever-present backlog of verification work for the product owner (thus extending flow times). And against, because it makes your flow time efficiency take a dump (thus extending flow times).

So which is it?

The answer, as one of my favorite professors liked to say in grad school, is “yes”.

This perplexing, damned-if-you-do, damned-if-you-don’t dynamic is exactly why I hate sprints. The creation of arbitrary micro-deadlines has nasty repercussions for efficiency, all in the name of creating urgency. We’re using sprint deadlines to keep developers moving while simultaneously creating a managerial infrastructure that slows down flow times.

As the old saying goes, we’re cutting off our noses to spite our faces.

Key Takeaways

Art assets don’t map neatly against sprint calendars, which means that scrum already has a lack of alignment with game development

Sprints need to account for the time required for QA and product owners to review stories

That requirement creates significant churn within sprints

It also means that developers will perpetually be under-utilized or they’ll get ahead of the next sprint; either option increases flow times

A Comprehensive Guide to Indie Game Pre-Launch Campaigns
https://www.breakingthewheel.com/comprehensive-guide-indie-game-pre-launch-campaigns/
Mon, 12 Jun 2017

In a day and age where new titles hit the market on a daily basis, being able to stand out from the crowd is super important. In 2016, 4,207 games launched on Steam. Steam doesn’t let you launch games on weekends, so that’s approximately 16 games per day. How do you differentiate yourself from the 15 other games launching at the same time as yours?

Note from Justin: The following is a guest post from Jennifer Mendez of Black Shell Media. The folks at Black Shell are friends and comrades of Breaking the Wheel. So when co-founder Raghav Mathur asked if I’d be up for hosting this, I jumped at the chance. There’s a lot of useful, actionable material for anyone looking to kick off an indie marketing campaign, so dig in!

One step in the right direction is having a solid pre-launch marketing campaign to drum up some hype before your game even hits the digital (or physical) shelves. Building up a community of early supporters—who believe in your game enough to take a chance and get involved early—is incredibly powerful. These early backers are your biggest fans and will provide you with invaluable feedback and engagement. Having a successful pre-order campaign is a surefire way to have an even more successful launch, especially in the AAA sector.

In this article we’re going to break down what exactly a pre-launch/pre-order campaign looks like, what it can help you do and how to effectively run one as an indie developer.

Accomplishing Goals

One major reason studios push pre-launch campaigns is to encourage pre-orders of their game. Pre-orders are chances for players to claim a copy of a video game prior to release. While they can’t play it until release day, they can rest safely knowing their copy will be held in the store for pick-up or pre-loaded on their PC, ready to go for when the game becomes publicly available.

The beauty of pre-launch campaigns is that they help enormously with word-of-mouth marketing and advertising. Any player who wears a shirt of your game becomes a walking advertisement for it. Whether they’re getting gas, groceries, or iced coffee, they are helping promote the game. Plus, being an early adopter is often highly valued in hardcore gamer communities, and any player who buys your game early on will definitely be telling all their friends. It’s a win-win situation—your early buyers get exclusive items and the knowledge that they’ll be the first to know about new content, and you get to start building a passionate community around your game.

But before setting out on a pre-launch campaign, it’s important to consider your studio’s needs. What is the goal of the campaign? Apart from the obvious marketing and advertising, campaigns can meet individual studio goals, like gaining more public awareness for the studio as a whole, improving community engagement between the developers and the players, and increasing social media followers for expanded outreach moving forward.

Depending on the studio’s needs, the campaign can be molded in a variety of ways. For instance, a studio in need of more public awareness for the company itself should opt out of creating things like stickers or a world map of the fictional game world. Instead, they should try running a social media campaign with targeted hashtags and valuable information about the studio. What makes them so important, and why should people care? Links and eye-catching graphics also help to stand out on outlets like Twitter, where walls of text usually get overlooked. A great way to build your studio’s presence is by doing an AMA on Reddit or a Q&A session on Twitter/Facebook. Consider doing a Facebook Live stream and answer audience questions in real time.

If your goal is to move pre-order units, then you should make sure to highlight the incentive for buying the game early. For example, No Man’s Sky offered a special ship to those who pre-purchased the game and made sure to highlight this ship and other pre-order features in all of their marketing materials.

Keep Your Numbers in Mind

Something to consider when creating a pre-launch campaign is the balance of investment and profit. It’s important to draw up some numbers so you don’t risk losing money at the end of it all. What are the goals for direct sales (after eCommerce provider expenses), other sales (after publisher/distributor expenses) and total sales? What about quarterly download and sales goals for direct distribution? How do pre-order sales fit into this? Keeping conversion rate, download and unit goals in mind is crucial for any campaign.

Additionally, since many pre-launch campaigns focus on drumming up hype via social media and other communities like Discord, consider your non-financial goals and how you can achieve a positive return on investment (ROI) in every channel. Set some goals for follower counts, Discord community members, number of people entering giveaways, etc.

Make sure the goal of every single marketing action or project you undertake is clear—are you trying to sell copies, gain more followers, or attract Discord community members? Keep all of this in mind and make sure the end result ties in to the means of getting there. During a pre-launch campaign, if you are accepting pre-orders and also trying to build a following, make sure you measure ROI on those two things separately, and choose which of the two goals to focus on.
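As a rough sketch of the break-even arithmetic this implies (all numbers here are hypothetical placeholders, not recommendations):

```python
import math

# Hypothetical figures -- substitute your own costs, prices, and store cut.
campaign_cost = 2000.00    # merch production, ads, vendor fees
preorder_price = 19.99     # price per pre-order unit
store_cut = 0.30           # eCommerce / distributor share of each sale

# Revenue you actually keep per pre-order unit.
net_per_unit = preorder_price * (1 - store_cut)

# Pre-orders needed just to cover the campaign's cost.
break_even_units = math.ceil(campaign_cost / net_per_unit)
```

Anything below the break-even count means the campaign’s ROI has to come from the non-financial goals (followers, community members) instead.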

Choosing the Right KPIs

Here’s something else to consider: key performance indicators, better known as KPIs. These are metrics that are chosen to evaluate how successfully a certain channel’s marketing activities are performing. For example, the number of social media followers you have is a KPI, as is your total sales amount. If you’re doing a pre-launch campaign but you aren’t accepting pre-orders, the KPI for you to measure your success might be Twitter followers. If you’re accepting pre-orders, it will likely be the number of pre-order units you can ship.

Nathan Lovato, a writer for Game Analytics, put it wisely with this example:

“You may be looking to target new players with your next game. Before the game is released, one key performance indicator would be your subscriber count’s growth on social networks. As a company, you may want to increase your revenue to create bigger and better games. Two KPIs to track in that case could be the amount of new weekly customers and how much each customer spends on your products on average. Those are streams of data each employee in your company can understand.”

When it comes to bringing in new fans, driving highly targeted traffic to your pages is very important. Once you have more traffic, incorporate tools that can help optimize it and maximize performance. For instance, Google Analytics is arguably the most popular analytics tool available. It breaks down individual regions, including states, provinces and cities. Combined with its simple, easy-to-use interface, its information comes across crystal clear. However, users new to tracking tools might not agree. Another example is the Xsolla Tracking Tool. Since it is deeply integrated with other Xsolla services and the game itself, developers can see many valuable metrics automatically. What’s more, they’re able to track performance not only from links, but also from distributable keys and promotional codes.

It might sound like a lot to track, but stick to tracking methods that everyone on the team can understand. The objective here is to provide an in-depth view based on specific questions: Who is playing the game? How long do they play in each session? Where are my followers talking about my game? By having valuable information like this, pre-launch campaigns can shed light on who to target moving forward, post-launch. They can even shed light on the overall success of a game, the potential for virality, and the average player base your game studio tends to target.

The Makings of a Promising Campaign

Unfortunately, guaranteeing that you have a return on investment and accomplish all of your promotional and marketing goals is impossible. However, you can work your hardest—and plan wisely—to help increase the chances of success.

For instance, in the AAA sector especially, pre-launch campaigns start with planning what to sell alongside a pre-order, how long production will take, and finding the right vendors to facilitate those tasks. Merchandise must be attractive enough to be in demand, while priced to offer good value to customers.

At the center of it all, a landing page and website from which to sell and measure metrics is a must. From these sites, you can create bundles, promote the game, and give players the chance to pre-order. Start promoting your pre-launch marketing materials and offerings as soon as you have something solid to show off, such as bundles or pre-order keys. Create plenty of promotional materials that highlight the unique bonuses players get for pre-ordering. Show off some of the features of the full game and tease players with upcoming information.

Use tools like PayPal or Xsolla’s Pay2Play to sell games directly from your website to increase profit margins and streamline the process. Both offer easy-to-use widgets. PayPal has a powerful API and a lot of customization options if you want to integrate its features deeply into your website—for example, you could set up a subscription service where users are automatically billed monthly for access to certain content. Xsolla’s Pay2Play solution encompasses a wide variety of internationally used payment methods and is much more accessible to international users. You can also use Pay2Play to generate download keys for your game, streamlining the process even further. The Humble Widget has similar features but is a little more basic, and lets you easily sell DRM-free copies of your game. It can also provide Steam keys to early buyers after the game goes live on Steam.

Downloadable content (DLC) is often on developers’ minds when they start looking at their launch strategy. If you plan on shipping DLC, it’s important that you don’t mislead players. Use the studio’s blog to explain exactly what players are getting—DLC pre-orders with little to no detail provided to players are a notorious AAA misstep. If you’re going to create awesome bonus content, it’s important that the work isn’t seen as a simple cash grab. Explain your plans and talk to your community so that everyone is in the loop and feels that their voice is being heard.

Influencers are also becoming more and more important in the modern gaming market. YouTubers are often contracted by studios to make sponsored content prior to a game’s release, so that their followers get excited about the game and pre-order it, or at least keep it on their radar to purchase at launch. However, these sponsored videos with large YouTubers can cost well over $100,000, and for indie studios that is often a bit out of reach. In that case, target the most relevant YouTubers and streamers who play games in your niche and send them a review copy and presskit a few days prior to launch. If they decide to play your game you will end up with a lot of free exposure.

Successful Pre-Launch Campaigns

There are many examples of successful pre-launch campaigns, especially in the AAA sector. You should definitely study what others have done and how well it worked out for them before designing your own campaign.

For instance, the Bioshock 2 campaign involved the creation of a narrative-led “Something in the Sea” viral site, which according to Gamespot was aimed at the “core community of rabid fans.” It also involved meticulously placing 8 “wine” bottles, based on a fictional drink in the game, around the globe. They were meant to look as if they’d washed ashore on beaches, but each one featured a hype-inducing poster inside of it. This became a scavenger hunt, as the viral site hinted at the locations of the bottles, encouraging fans to track them down and get some posters. All of this work wasn’t in vain either. In its first week of release, the game became the best-selling Xbox 360 game in both the UK and North America.

Another campaign that went well and used more traditional techniques was Grand Theft Auto 4’s. Rockstar used Steam to give out a free copy of GTA: Vice City to anyone who pre-ordered Grand Theft Auto 4. They used guerrilla and viral marketing to drip feed information to the press. They used their fansites and fan community as a catapult for marketing, whenever they weren’t using TV or billboard advertising. And unsurprisingly, it sold 6 million copies during its first week alone. Granted, GTA has an enormous fan base and is arguably one of the biggest IPs in gaming history. However, they managed to make the most of this existing community very effectively.

Of course, these are AAA games. Most tend to have a following already, and at the very least, a big studio name behind them. Even new IPs have a fair chance at successful pre-launch campaigns. So, what about their indie counterparts?

Can Indies Do Similar Pre-Launch Campaigns?

Mastery of the basics can lead to some pretty colorful campaigns.

Obviously, your average game development studio can’t afford the type of marketing that Rockstar or other large studios can. It’s extensive and expensive and there’s not enough manpower to make it happen. The good news is that indie developers can do similar campaigns for a fraction of the cost.

Starting with the basics is important: a trailer, screenshots, a press release, a proper landing page, and a blog. That also means blogging regularly in the lead-up to launch, as well as building a community. You of course have to do the basics of any good marketing campaign—social media, community engagement, etc.—but what about larger campaign ideas?

For starters, a spot at a convention like PAX East is a great launching pad. The exposure gained at these conventions often comes from press interviews as well as from the players who sign up for your mailing list or are generally interested. If you already have a large following, try a virtual scavenger hunt, or host a local event in your city (if you have a following there). At the very least, using fansites and the fan community, GTA-style, isn’t out of reach. Set up a subreddit or Discord server for your community and talk to them regularly.

Just like Rockstar gave out a free copy of Vice City to people who pre-ordered GTA4, consider giving out copies of your previous games (assuming this isn’t your first title) along with your game in the early or pre-launch stages. This can help provide a frame of reference for your potential to be successful as a developer, as well as generally drum up hype and get people excited.

There are several indie game campaigns to refer back to as resources, or even just inspiration. For instance, Austin, TX-based Devolver Digital is known for their support and release of Hotline Miami, the neon, top-down game from Dennaton. For their pre-launch campaign, Devolver opened a phone line in Miami, Florida so people could call and leave voice messages. After the game launched, the studio made a trailer using these fan-created voice recordings.

Additionally, the game’s cover art was created by Swedish painter Niklas Åkerblad, originally the character designer and animator at Shortfuse Games. The artwork, combined with the music for the game, was used in marketing but has since taken on a life of its own. The electronica/synthwave soundtrack alone has a mixtape aesthetic (picture Drive meets Cocaine Cowboys), and became so huge that Laced Records produced the collector’s edition vinyl for it. If your game has awesome features like a custom soundtrack or beautiful visual aesthetic, try tapping into other relevant tangential niches and getting exposure there. There is a lot of overlap between these communities.

Further Reading If You Enjoyed This Post

Takeaways

Pre-launch campaigns are valuable for the success of a game. They are great for establishing a solid community before the game goes live, and help drive sales upon launch.

Creating a successful pre-launch campaign does require effort, but it should be entertaining every step of the way. Explaining your DLC plans to players builds engagement while hyping up the game. Resources like blogs, social media, and YouTubers are invaluable in game marketing. It very much boils down to giving people something, whether it be information, a t-shirt, exclusive footage, or, in the case of reliable content creators, a Steam key. When you give to your community and interact with them, they give back and stay loyal to you.

If you enjoyed this article, please go ahead and share it on social media! We love feedback and making new friends here at Black Shell Media, so don’t hesitate to reach out!

]]>https://www.breakingthewheel.com/comprehensive-guide-indie-game-pre-launch-campaigns/feed/01444The Flawed Logic of Sprints: The Fancy Mess of Scrum, Part 1https://www.breakingthewheel.com/flawed-logic-sprints/
https://www.breakingthewheel.com/flawed-logic-sprints/#commentsMon, 05 Jun 2017 15:00:52 +0000https://www.breakingthewheel.com/?p=1409Back in the heady days of 2010, I was a newly minted scrum master, fresh off my training seminar. I was excited by scrum’s potential, but I also took care to maintain some agnosticism. I always told people that scrum was the best production framework I’d seen, but that I would happily kick it to the curb as soon as I found something better. With several more years of experience under my belt, I’ve come to the conclusion that there are, in fact, better ways of managing development. And with that understanding came the further realization that I want to leave scrum behind. By Reading This Post, You Will Learn: Why the sprint cadence is problematic from an operations standpoint A better approach to measuring productivity Why the intuition underlying sprints is flawed Bruce Lee and the Fancy Mess of Martial Arts When he developed his own martial art system, Jeet Kune Do*,[...]

]]>Back in the heady days of 2010, I was a newly minted scrum master, fresh off my training seminar. I was excited by scrum’s potential, but I also took care to maintain some agnosticism. I always told people that scrum was the best production framework I’d seen, but that I would happily kick it to the curb as soon as I found something better. With several more years of experience under my belt, I’ve come to the conclusion that there are, in fact, better ways of managing development. And with that understanding came the further realization that I want to leave scrum behind.

By Reading This Post, You Will Learn:

Why the sprint cadence is problematic from an operations standpoint

A better approach to measuring productivity

Why the intuition underlying sprints is flawed

Bruce Lee and the Fancy Mess of Martial Arts

When he developed his own martial art system, Jeet Kune Do*, Bruce Lee said he was motivated by what he deemed “the fancy mess of martial arts.” In Lee’s opinion, the major branches of martial arts – karate, kung fu, tae kwon do, and the like – had solidified what should have been fluid: combat.

Lee, no stranger to fist fights himself, knew from experience that real world combat was messy and unpredictable. And yet, martial arts training had become sterile: obsessed with fixed positions and sequential move sets. Martial arts students weren’t learning moves as a means of self-defense. They were learning moves as ends unto themselves. The tail, in Lee’s opinion, was firmly wagging the dog.

Lee’s view of the stagnation of martial arts inspired me to turn a critical eye on my own realm of expertise: scrum. And it turns out that scrum, for all its popularity and effectiveness, has significant structural issues. Some are logistical, some organizational. But all have brought me to a single conclusion: scrum is only a starting point for effective project management, not the be-all end-all.°

Haters Gonna Hate (But I’m Not a Hater)

I am not anti-scrum. It was an essential starting point for me as a project and process manager, and a means for me to provide value to employers and clients. It’s a great foundation both for effective production and for teams looking to improve their modus operandi.

That said, our goal should always be to increase efficiency, not to strictly adhere to dogmatic execution of a pre-defined framework.

A Note

This article assumes a basic knowledge of the scrum framework. Both describing and critiquing the framework in the same post would make for an excessively high word count.

By Design, Scrum Batches Work into Sprints

One of the key elements of scrum is the sprint: a defined time box during which a team completes a given commitment of user stories. The intuition of the sprint makes sense: give team members small, incremental deadlines in order to space out work, and provide built-in points to review progress and re-calibrate priorities.

But batching is a bad practice. I cover the math in detail in my posts about kanban and heijunka, but the short version is that batching extends cycle times. For instance, say you have a user story whose total running time was 5 days, from a dev picking it up, through coding and QA, to product owner verification and closure. But it’s in a two-week, ten-working-day sprint, so it isn’t considered complete until the product owner reviews it on the last day of the sprint. What should have taken 5 days took 10. In other words, your flow time efficiency is 5/10, or 50% – the story spent half of its production life span not doing anything.
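The efficiency figure in that example is just active time divided by total elapsed time. As a quick sketch:

```python
def flow_efficiency(active_days, elapsed_days):
    """Fraction of a story's life span spent in active work rather than waiting."""
    return active_days / elapsed_days

# The story from the text: 5 days of real work inside a 10-working-day sprint.
print(flow_efficiency(5, 10))  # 0.5 -- half the story's life span was waiting
```

Anything below 1.0 is time the story spent sitting in a queue rather than moving toward done.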

Well Then Review the Story Earlier!

Now, of course, the counter-argument here is that the product owner can simply verify the story as soon as it’s ready. And that’s absolutely true. But if your team is using velocity (points closed per sprint) as the primary unit of measurement, then as long as the story is closed within that sprint time box, the inefficiency will not be apparent. The team met its commitment, so further inspection is not required.

To put it another way, if the primary measure of a team’s effectiveness is velocity, you are obfuscating efficiency issues by aggregating those points in multi-week clumps. To put it still another way, if you base your ongoing sprint commitments on the velocities of prior sprints, you are letting inefficiencies fester. You are comparing inefficient apples to inefficient apples.

Yes, a good scrum master should constantly be pushing his/her team to improve its velocity. But velocity at the sprint and even the story level aggregates the impacts of multiple factors. And the devil, as they say, is in the details.

What You Should Do Instead

Utilize a continuous flow process: product owner(s) groom stories (with input and feedback from the team), then add and prioritize them in the backlog. Available developers grab the story at the top of the backlog, execute it, and repeat. Managers evaluate performance by measuring the cycle time (how long a story spends in a given state) or the point velocity at each step.

For example, break up the entire life span of a story into logical activities:

Grooming

Ready for Development

In Progress

QA Review

Product Owner Review

Closed

Then measure the individual speeds of those activities. For example, if you wanted to analyze the Grooming activity:

Calculate the cycle time: the average time to groom a story is 5 days

Or measure the velocity: 40 points of scope move from “Grooming” to “Ready for Development” per week
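If your tracker can export timestamped state transitions per story, the cycle-time measurement falls straight out of the data. The sketch below computes average cycle time per activity; the export format and the story histories are hypothetical:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical tracker export: story id -> ordered (state, date entered) pairs
transitions = {
    "STORY-1": [("Grooming", date(2017, 6, 1)),
                ("Ready for Development", date(2017, 6, 6)),
                ("In Progress", date(2017, 6, 7)),
                ("Closed", date(2017, 6, 12))],
    "STORY-2": [("Grooming", date(2017, 6, 2)),
                ("Ready for Development", date(2017, 6, 5)),
                ("In Progress", date(2017, 6, 9)),
                ("Closed", date(2017, 6, 10))],
}

# Time spent in each state = the gap until the story's next transition
durations = defaultdict(list)
for history in transitions.values():
    for (state, entered), (_, left) in zip(history, history[1:]):
        durations[state].append((left - entered).days)

for state, days in durations.items():
    print(f"{state}: avg cycle time {mean(days):.1f} days")
```

The same transition data, grouped by week and weighted by story points instead of averaged by story, would give you the per-step point velocity.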

Resources That Informed And Influenced This Post

If you have ad blockers turned on, you may not see the above links. Breaking The Wheel is part of Amazon’s affiliate program. If you purchase products using the above links, the blog will receive a portion of the sale (at no cost to you).

But What about the Notion of Sprint Commitments?

First off, let’s review the rationale behind sprint commitments.

Rationale #1: Provide Accountability for the Team to Process a Given Amount of Work in a Given Amount of Time

This rationale is fair enough. But let’s take a discerning view of the efficacy of that accountability. Knowingly or otherwise, by using a commitment model, you are attempting to leverage what psychologists call “self-consistency bias”: people prefer to act in accordance with their established view of themselves. So, if the team establishes a view that it will get a given amount of work done within a sprint, it will be more likely to complete that work so as not to violate its own collective sense of self-consistency.

But is a sprint goal really necessary to trigger that bias? You can just as easily leverage the same bias by getting the team to commit to a given velocity over time (or average velocity), or by constantly comparing the current velocity to recent velocities (eg, the past two weeks versus the two weeks before that).

Rationale #2: Provide a Deadline

Have you ever wondered where the term “deadline” comes from? The etymology is actually pretty grim. And literal. The American Civil War suffered no shortage of horror. But even in that context, the Andersonville prisoner of war camp stands out as especially atrocious. And that camp featured a line past which any prisoner would be shot on sight. Guess what they called it?

Even without that background, a deadline carries an implicit threat: get your work done by this date, or else. Or else a bad grade, or delayed payment, or penalty fees, etc, etc, etc.

But, here’s the problem with threats: if a threat isn’t carried out, further threats are meaningless. Your bluff has been called.

So, what happens if a sprint commitment is violated? Well, you can go the straight-up nuclear option of an abnormal termination and revert all changes in the sprint. But show me a dev studio, cash-strapped and cranking towards its next milestone payment, that’s really going to bite the bullet on that one. An abnormal termination is one of those ideas that makes sense in an academic, behavioral-economics sense, but in actuality it’s cutting off your nose to spite your face.

There are other options, of course. But my point is this: while deadlines can provide intrinsic motivation in the form of the self-consistency bias, they are only useful as extrinsic motivation if there is a distinct, painful, and certain consequence for failing to meet them. If you are not willing to drop that hammer consistently, the deadline tiger loses its teeth.

And if that is the case, then, again, why is a sprint commitment superior to a target velocity commitment?

Rationale #3: Upward Management

Commitments also serve as a check against management running roughshod over a team’s focus and bandwidth. The team elects to work on the things it can reasonably deliver in a particular window of time (which is as it should be), and management commits in turn to not screw around with the team’s priorities for that window (which is also as it should be).

Again, perfectly logical, but there are some implied problems here as well. First off, the need for a commitment device from managers is a damning allegation against those managers. The fact that we need some sort of buffer to control the impulses of product owners (and we often do) speaks to a severe lack of discipline on their part. It also implies a lack of understanding on their part of how damaging randomization is to developers’ productivity. And I’ve witnessed both failures multiple times.

The above isn’t something a rank-and-file scrum master or developer can solve single-handedly. But sprint commitments only treat the symptom (randomization). As managers or scrum masters, we always need to focus on the root causes and find ways to alleviate them.

We’re tying our own hands

And in implementing this fix for a symptom, we also constrain our ability to respond to change (which, I’m told, is more important than following a plan). Especially in these days of multiplayer, micro-transactions, and live services, the needs of the organization might be more urgent than the next sprint boundary will accommodate.

Instead of agreeing not to change the stories in a sprint, you can run a continuous flow process. Then establish the rule that the product owner can’t tamper with any work that’s already in progress, but can re-prioritize the backlog at any time. Team members simply pull whatever is at the top of the backlog when they are ready for more work.

This is the process I’ve used for a year and a half, and it works incredibly well. Product owners feel like they have increased flexibility to tweak priorities, but developers are still able to focus on one story at a time without fear of randomization.

Further Reading If You Enjoyed This Post

But Wait There’s More

I’m not quite done grousing about sprints. Beyond the flawed mechanism of the sprint itself, there are negative second-, third-, and even fourth-order effects that you should consider. And that is what I’ll be addressing in my next post. Stay tuned.

Key Takeaways

The sprint-based nature of scrum creates multiple operational issues

Cycle times are arbitrarily extended, reducing flow time efficiency

The various rationales for sprints are based on flawed premises

A sprint-based system is not the only way to satisfy the tenets of agile

]]>https://www.breakingthewheel.com/flawed-logic-sprints/feed/21409GDC17 Feedback and Responseshttps://www.breakingthewheel.com/gdc17-feedback-responses/
https://www.breakingthewheel.com/gdc17-feedback-responses/#respondThu, 18 May 2017 15:00:45 +0000https://www.breakingthewheel.com/?p=1401GDC 2017 was my first GDC ever. So, I figured “Why not be an asshole about it?” and signed up to give two presentations. 6’ish months later I found myself at GDC, sweating bullets and shitting bricks. I should also mention that the longest presentation I’d ever given was about 10 minutes, and had signed up for a total of 90 minutes of speaking time. Anyhoo, both presentations went well and nobody died. And then, a month and half’ish later, my compiled speaker feedback arrived. It was largely positive. But, of course, there were a few people (4 in each session, based on the reviews) who took umbrage with ol’ Justy. And some of the negative comments bothered me. Not because people disagreed with me (that’s to be expected, after all) but because I couldn’t respond. But then I realized that not only could I respond (having a blog and[...]

]]>GDC 2017 was my first GDC ever. So, I figured “Why not be an asshole about it?” and signed up to give two presentations. 6’ish months later I found myself at GDC, sweating bullets and shitting bricks. I should also mention that the longest presentation I’d ever given was about 10 minutes, yet I’d signed up for a total of 90 minutes of speaking time. Anyhoo, both presentations went well and nobody died. And then, a month and a half’ish later, my compiled speaker feedback arrived. It was largely positive. But, of course, there were a few people (4 in each session, based on the reviews) who took umbrage with ol’ Justy. And some of the negative comments bothered me. Not because people disagreed with me (that’s to be expected, after all) but because I couldn’t respond. But then I realized that not only could I respond (having a blog and all), there was an opportunity to discuss and engage some dissenting opinions. So, here you go.

Criticism For My First Presentation: Strategic Design, Or: Why Dark Souls Is The Ikea Of Games

I based my first presentation around the post “Strategic Design: Why Dark Souls is the Ikea of Game Development“. If you have read that, then you get the gist of the session. If you haven’t, the talk focused on how to use an understanding of your target audience to focus design and spending decisions, using the famous (in business circles) Competitive Advantage framework proposed by Michael Porter.

Critic #1: “Useful presentation, but a bit obvious for experienced game developers. Still, novice developers will definitely find it full of sensible advice and insight into gamedev.”

I take Critic #1’s point, but I think he/she oversimplified my thesis. Absent any other comments or context from Critic #1, it seems that he/she was saying that experienced developers are already well aware of the need to make trade-offs and focus designs. Fair enough, but the meat of the session was the framework for making those decisions for maximum effect.

I could be totally off base on this one. Maybe Critic #1 totally followed me and found the application of Porter’s Competitive Advantage framework to be a blinding flash of the obvious. But that doesn’t quite hold water for me either. If that were true, we’d be seeing less homogeneity in design, fewer fast follows, less scope creep. And way fewer shooters. Few businesses in non-gamedev industries properly or consistently apply Porter’s framework, so I have trouble believing that our industry has a lock on it.

Perhaps I’m laying the crimes and missteps of ineffective publisher marketing departments at the feet of devs, but the standard MO tends to be “Think up a design, attempt to execute, get forced into trade-offs by budget or time constraints, and make the best of it.” What I proposed was a framework whereby we can make those trade-offs proactively rather than reactively.

Critic #2: “Meh.”

Alright there, edge lord.

Criticism For My Second Presentation: Better Development Though Science

I culled my second presentation from various chunks of the “Game Planning With Science!” series. Specifically, parts 1 and 2 (to establish some process flow fundamentals) and the sections dealing with lean production. I leaned heavily on the famous Toyota Production System as a model for eliminating waste in sequential processes.

Critic #3: “It’s 2017, and we are still seeing talks about straight lifts of other industries’ process frameworks, it seems.”

Putting the snark aside, I’m not sure if this person got upset because a) he/she’d rather see frameworks unique to games, or b) the notion of needing a framework from another industry is so antiquated/tiresome/regressive as to be ridiculous.

Either way, I don’t understand the gripe. Good ideas (and I’m speaking about the Toyota Production System, not my presentation) are good ideas, regardless of origin. Proven good ideas, like TPS, even more so. Having completed business school, I can tell you, definitively, that EVERY industry combs every other industry for good ideas. This is why there is a market for Harvard Business Review case studies.

So why should the management of game development be above such exploration? Why would we want to be the one industry that only looks internally for new ideas? Especially when we have such an abysmal aggregate track record for effective management? Absent any other feedback or context from Critic #3, this just seems like a pretentiously silly line of reasoning.

I would also push back against the notion that I proposed a “straight lift” of Toyota’s system. That’s an unfair characterization. I dissected TPS and explained how those activities map or compare to activities that we already use in game development (eg, user stories, automated testing, kanban). Then I explained how we can use those existing activities in concert with an eye to reducing waste.

If Critic #3 took some form of specific umbrage with the applicability of lean production to games, he/she didn’t elucidate. That is a conversation I would be interested to have.

Critic #4*: “The closing caveat of ‘discovery is necessary’ is all well and good, but depending on the project, discovery can account for 20% to 60% of the overall effort. A presentation of methodology that’s frankly hostile to prototyping concepts, should go into further detail on that topic.”

Some context: as part of the session submission process, you get an assigned mentor from the GDC board. As my mentor and I worked through the presentation, it became clear to me that I needed to distinguish between activities that you can systematize (what I called “process”) and those that you can’t effectively systematize because they are unknown (which I called “discovery”).

This line of thinking was the seed for the “Preface to Game Planning With Science”. In a nutshell, the outcome of discovery is hard to predict because it is highly variable. But if we can systematize and streamline our processes – the things that we know how to do – we create more buffer to absorb discovery’s variance.

Now turning to the criticism, I get where this person is coming from with regard to the notion of operation science concepts being “hostile” to prototyping. I heard similar feedback when I was doing trial runs of the presentation ahead of the conference. There’s an inherent tension between systematizing process flows and the persistent need to build out and test gameplay ideas before committing to them. Between the need to eliminate waste and the fact that we will try things that simply won’t work. Between art and science. And that’s a fair criticism or counter-point to make.

Buuuuuuuuut…

However, I would rebut by saying that this argument is throwing the baby out with the bathwater. “Hostile” is a strong word. The notions of taking risks and being disciplined in our actions are not mutually exclusive. A person can both invest in a risky start-up and maintain discipline in her spending habits. There is a world of difference between spending time on a prototype that might not work and being laissez faire in its pursuit, a distinction between risk and waste.

For instance, are you clear on the objective of the prototype? Does the prototype explore a hypothesis that is both well-defined AND disprovable? Are you investing only the minimum amount of time and energy necessary to verify or disprove that hypothesis? Will this hypothesis, even if proven true, provide value to the end user? Are you keeping the fast-n-dirty, ad hoc prototyping code quarantined from your production repository?

In other words, even in situations with unknown outcomes, a disciplined, rigorous approach is still applicable. In fact, discipline might be even more vital when lost in those exploratory weeds.

As I mentioned in my closing thoughts during the presentation, the goal of lean production isn’t to curtail experimentation or exploration. The goal is to facilitate MORE of it by eliminating waste: reduce the cycle time of experiments in order to run more of them in a given period.

Now, I would be remiss if I didn’t address a subtext in this comment: the notion that I did not adequately explain the prior four paragraphs in my session. And if not, then that’s on me. But, holy shit balls, I had a lot of ground to cover with this stuff!

Further Reading If You Enjoyed This Post

Summary

So, there you have it. There are my rebuttals to two reasonable people and two snark barons. Putting the douchiness of some session comments aside, I do think it’s important to welcome and engage dissenting opinions, particularly when those opinions might reflect the feelings of other readers who stumble onto Breaking The Wheel. And formulating my responses to these comments is a useful exercise in clarifying both my own thoughts and how I communicate them.

So, thanks for the fodder you four! Even if you never read this post, you still helped me. Well, except for you, Critic #2. You’re just an asshole.

If You Enjoyed This Post, Please Share It!

*This very well might have been part of Critic #3’s feedback. GDC sends you feedback in one large blob of text, so it’s impossible to separate one note from the next. This comment was decidedly more constructive and snark-free, so I read it as coming from a different person.

]]>https://www.breakingthewheel.com/gdc17-feedback-responses/feed/01401Bottlenecks and Hindsight: Why Auteurs Make Horrible Economistshttps://www.breakingthewheel.com/bottlenecks-hindsight-auteurs-make-horrible-economists/
https://www.breakingthewheel.com/bottlenecks-hindsight-auteurs-make-horrible-economists/#respondWed, 10 May 2017 15:00:18 +0000https://www.breakingthewheel.com/?p=1394

]]>This post is about an empirical issue: the economic cost of being an auteur. When I originally posted this entry on Gamasutra back in 2014 it was not without its detractors. David Jaffe even dropped a line on it, saying he thought it was neat, while simultaneously implying that I was full of shit. Nonetheless, in retrospect, I still feel this idea is worth considering in an industry like ours, one that consists of both public personas and massive-team-based endeavors.

By Reading This Post, You’ll Learn

Why more than your artistic reputation is on the line

What marginal personal, marginal external, and marginal social costs are

What an externality is

The definition of survivor bias

Why having all of your creative eggs in one basket is a massive risk

Stories about auteur-led projects are rife with anecdotes about decision bottlenecks and wasted work. But so what? If the end result is great, who cares how inefficient the production was? Why should we restrict the creative process of game designers with decidedly non-artistic concepts like budgets or ROI? Well, if your studio is just you then you shouldn’t. Make the art that feels right to you and hold onto it until it is absolutely the best thing you can make.

But, if you are going to spend someone else’s money on development – on a contractual basis, as an employee, or even when using Kickstarter – you have an obligation to be responsible with that money. And if you are going to hire someone to help you make a game/movie/product, you are responsible for that person’s livelihood and, in many cases, the well-being of his or her family. And in those scenarios, budgets and ROI are supremely important.

If you are going to spend someone else’s money on game production, you have an obligation to be responsible with that money. And if you are going to hire someone to help you make a product, you are responsible for that person’s livelihood.

Bottlenecks Destroy Value: A Simplistic Economic Example

Imagine a bridge that sits on a major commuter route. The amount of time (in minutes) it takes each car to cross the bridge is equal to the number of cars on the bridge. So if 20 cars are trying to cross the bridge at once, it takes each of them 20 minutes.

Now, imagine that you and your coworker both commute over that bridge every day. You can carpool or you can drive separately. If there are already 38 cars on the bridge, it doesn’t really matter to you whether you carpool or drive separately, because the difference in your commute time would only be a minute. The marginal personal cost (MPC) you incur by driving yourself is only 1 minute greater than if you were to carpool (40 minutes vs. 39). And, really, who cares about +/- 1 minute?

The other 38 drivers, that’s who. In addition to your MPC, you incur a marginal external cost (MEC) on them. If you and your friend carpool, you both increase the commute time of the 38 other drivers by 1 minute. In other words, your action consumes 38 minutes of someone else’s time. Further, by deciding to drive separately, you consume an additional 39 minutes of someone else’s time.

In mathematical terms, every car that someone adds to the bridge incurs an MPC of n and an MEC of n-1, for a total marginal social cost (MSC) of 2n-1. And that assumes that every car only carries a single occupant. If the other cars have additional passengers, the MEC and MSC go up.
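The bridge arithmetic can be sketched in code (hypothetical helper names; this assumes one occupant per car, as the example above does):

```python
def marginal_personal_cost(n):
    """Extra minutes the n-th driver spends crossing: the bridge now holds n cars."""
    return n

def marginal_external_cost(n):
    """Minutes the n-th driver imposes on everyone else: +1 minute for each of the other n-1 cars."""
    return n - 1

def marginal_social_cost(n):
    """Total minutes of commute time the n-th car adds: MPC + MEC = 2n - 1."""
    return marginal_personal_cost(n) + marginal_external_cost(n)

# The 40th car (you driving separately instead of carpooling):
print(marginal_personal_cost(40))   # 40 minutes of your own time
print(marginal_external_cost(40))   # 39 minutes of everyone else's time
print(marginal_social_cost(40))     # 79 minutes in total
```

As the formulas show, the external cost grows with traffic: the busier the bridge, the more of other people’s time each additional car consumes.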

Free Markets Aren’t Free: Meet The Externality

This is what economists call an externality: the economic impact your actions have on those around you. Externalities can be positive (you repaint your house and the value of your neighbor’s house increases as a result) or negative (you paint your house black with pink polka dots and the value of your neighbor’s house decreases).

And when you incur an economic cost against someone else, you are destroying value. And if you incur X cost against Y people, the total value you destroy is X*Y.

That concept is ludicrously simple. You might even feel insulted by the fact that I bothered to spell out an equation that basic. But I want to drive home the point that a short delay across several people can add up to a massive waste of time. And the real world often overlooks this dynamic.

Decision Bottlenecks Are Just As Effective At Destroying Value

Let’s apply this same principle to a game production setting. You are an auteur and you want to make sure everything that goes into the game meets your full approval before it’s actually integrated into the build. In other words, you are the channel through which all decisions flow. And you’re a real stickler for details: you want to go through everything with a fine-tooth comb.

So, let’s assume that, on average, it takes you 30 minutes to review a potential submission for a feature or an art asset. And you can only effectively review one thing at a time. Five people need you to review their work before they can submit and move on or start addressing your feedback. No big deal. It’s going to take you 2.5 hours to get through it, but that’s your job as the creative lead. You’re being productive and things are totally awesome.

Except they aren’t. The marginal personal cost for each task you review is only 30 minutes. But you’re exacting a massive external cost on your team. Remember: these guys can’t move on to something else until you’ve approved their work. So the first person in your queue loses 30 minutes of productivity. The second person loses an hour (waiting to meet with you and then meeting with you). So on and so forth, to a grand total of 7.5 hours of time someone has spent waiting for or sitting in a meeting with you.

Your bottle-necking just consumed almost a full day of productivity across those five people (or around half a day if you’re crunching).
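A minimal sketch of that running total (hypothetical helper name; it assumes, as above, that each person waits through every review ahead of theirs plus their own 30-minute review):

```python
def queue_cost_hours(queue_length, review_minutes=30):
    """Total hours the team spends waiting for, or sitting in, review meetings.

    Person i in the queue loses i * review_minutes: the i-1 reviews ahead
    of theirs, plus their own.
    """
    return sum(i * review_minutes for i in range(1, queue_length + 1)) / 60

print(queue_cost_hours(5))   # 7.5 hours for the five-person queue above
print(queue_cost_hours(10))  # 27.5 hours for a ten-person queue
```

Note that the cost grows quadratically with queue length, which is why the ten-person figure in the next section is nearly four times the five-person one, not double.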

Now, Scale That Externality

What if your typical daily queue is more like 10 people? For every day of work, you’re now losing 27.5 hours of productivity. What if your queue is 5 people, but each of those folks has an average of 3 other team members waiting for direction from them? In that case, you’re eating up 30 hours. Every day, you are destroying more than a day’s worth of productivity.

Now, let’s extrapolate:

So, if you’re bottlenecking like a maniac, and your project goes on for 3 years (the low end for many of the highest-profile auteurs in the industry), and you have a five-person queue at any given time, you destroy 234.38 days of productivity. Not working days – 24-hour days. You’ve wasted an aggregate total of 234.38 FULL DAYS of someone’s life.
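A quick back-of-the-envelope check of that figure – assuming roughly 250 working days per year, which is my inference, since the text only states the 3-year span and the 234.38-day result:

```python
DAILY_WASTE_HOURS = 7.5       # from the five-person queue example above
WORKING_DAYS_PER_YEAR = 250   # assumption: ~50 five-day weeks
PROJECT_YEARS = 3

total_hours = DAILY_WASTE_HOURS * WORKING_DAYS_PER_YEAR * PROJECT_YEARS
full_days_destroyed = total_hours / 24  # 24-hour days, not working days

print(round(full_days_destroyed, 2))  # 234.38
```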

Certainly, this example is simplistic. Professionals will find ways to be productive whenever humanly possible, and people generally have more than one thing to work on. I’d be surprised if even the most controlling of control-freak auteurs really bottlenecked every decision that badly.

But, if you want to be an effective manager and leader, these are the sorts of death-by-a-thousand-cuts time sinks you need to be aware of. You don’t get centralized decision-making for free.

Down with Auteurs? Of Course Not

I’d be ignorant to say that auteurism is all downside. Clearly there is value in having a single vision drive a project. Some of the best characters and series have emerged from auteurs. My point isn’t that auteurs are terrible, destructive people. But, if you are going to take or endorse the auteur route, you need to be sure you understand the trade-off you are making. The more control one person maintains, the greater the marginal external cost to the rest of the team.

And let’s be fair: the opposite risks apply to distributed decision-making. People move faster, but it can be harder to maintain project coherence. Any development strategy carries trade-offs in time, resources, or quality. You can’t have your cake and eat it too. My point is not to say that auteurism is invalid, but to point out its risks when pursued recklessly.

People often cite Miyamoto’s famous quote, “A delayed game is eventually good, a bad game is bad forever.” There’s a logical fallacy in that statement. Yes, on an infinite timeline a delayed game will eventually be good, much as those proverbial monkeys will eventually type Shakespeare. But, on a finite timeline, the sunk costs of a delayed game can become financially irredeemable, especially in the very finite window after launch that publishers care about. And that’s where the auteur theory breaks down.

The Unified Musk Field Theory of Elon Musk1

One need not look very far to find examples of wildly successful control freaks in history. Steve Jobs is an obvious one. Elon Musk another. Surely, their success is a feather in the cap of the auteur theory.

Not so.

There are a few things to keep in mind when it comes to looking at these success stories. First off, SpaceX and Tesla are TERRIBLE examples to use for small businesses. Why? Because Musk was already a god-damned multi-millionaire when he started the former and took over the latter. In other words, he could financially muscle through the extravagances and operational issues that his control-freak, perfectionist nature created. And even then, he and both companies almost went broke doing it.

Survivor Bias

But he makes amazing products, so he got results, right? Yes, HE got results. He’s also an eidetic genius who only needs to sleep 6 hours a night, is consistently willing to bet his entire fortune on his own abilities, and can afford to hire an army of assistants and nannies to help manage his personal life.

And even then, Musk almost lost SpaceX and Tesla in the same month. Both companies narrowly survived by getting a round of funding days before running out of cash. He even had a handshake deal with Larry Page for Google to buy Tesla. If that eleventh-hour funding had not come through – if the gambles and bluffs he made to secure that funding had failed – he would have been exactly what the haters at Valleywag thought he was: a modern-day PT Barnum.

In other words, if it wasn’t for luck, the world wouldn’t revere Musk the way it does.

Now, I’m not saying that you aren’t a genius, just that you can’t base your managerial model on one hyper-successful unicorn. You also need to base your analysis on the people who went down in flames.

If you base your calculus on the Musks, Jobses, and Kojimas of the world, then you’re only seeing the people who actually survived the brutal filter of commerce. You’re falling for something called survivor bias.

Abraham Wald and Bomber Analysis

During World War II, the Navy did an analysis of the most common places returning aircraft had been hit by enemy fire. The analysts then made the logical conclusion: add more armor to those locations. Fortunately for the bomber crews, a statistician named Abraham Wald suggested something counter-intuitive: put heavier armor everywhere else.

Why would he put armor in the areas where the bombers weren’t damaged? Because Wald could see the forest for the trees. If all the planes that returned had bullet holes in the same places, then it stood to reason that the bombers could sustain damage in those areas without crashing. On the other hand, if few or no bombers returned with damage in the other areas, then it also stood to reason that damage to those areas was catastrophic.

In other words, if you only look at the surviving planes – if you cherry-pick the data – then the (completely wrong) conclusion is that planes only get hit in certain areas. On the other hand, if you take the entire data set into account (including losses), you have a more accurate picture of where the points of failure are. Planes take damage everywhere; some areas are simply more critical than others.

The Successor Problem

The other trick with the auteur theory, or the Steve Jobs-style central-idea-man method of leadership, is that it creates a problem when the idea man or woman exits the stage. If one person is the sole source of ideas – if one brain is the fountainhead – then what happens when that person leaves?

Has Apple done anything exciting since Tim Cook took over? By his own admission, SpaceX would not survive if Musk left the picture. If Chris Roberts decides he’s done with Star Citizen, how many people will still pay upwards of $1,000 for limited-edition spaceships?

In short, if all of your eggs are in one very smart basket, what happens when that basket leaves? Or, alternatively, what happens when that idea person stops putting out top-tier work? (See: George Lucas)

Everybody Loves a Bad Idea When It Works

Here’s the trick with genius: it’s only apparent with the benefit of hindsight.

Peter Molyneux famously took 10 Amiga computers from Commodore International when its representatives mistook his company, Taurus, for a networking software company called TORUS. That legendary story of video game entrepreneurship helped launch an industry luminary.

But it’s only a great story, and Molyneux only looks like a daring genius, because it worked. If he had been caught deliberately misrepresenting himself and his company, or even been punished in a civil or criminal court, he would have looked like an asshole.

Auteurs Are Only As Good As Their Last Game

The same is true of auteurism: you are only as much of a creative genius as your last game was a critical and commercial success. If your game fails, you can go from a genius with exacting standards to an out-of-touch, high-maintenance, pretentious artiste just as fast as Polygon or Destructoid can post an exposé about it.

As much as this industry – professionals, journalists, and fans alike – loves and adores its heroes, it loves schadenfreude even more. We revel in the bloodshed of a fall from grace like Lisa Bonet in Angel Heart. If you’re the creative figurehead for a project, you are also the avatar of its failures. Nobody takes any of the claims Molyneux makes about his upcoming games seriously anymore. Denis Dyack has become a pariah. Ken Levine’s stock has taken a serious dip.

The Slow Burn Of Genius

The time wasted by decision bottlenecks is expensive and the opportunity costs begin to skyrocket.2 Pair that with the reputation for exacting standards and scrapping/replacing existing content that is common for auteurs, and it’s not a surprise that many of them regularly take 4 or 5 years between games. That magnitude of sunk cost is hard to recover. Making games is already a high-risk endeavor, and indulging such a large need for creative control is gambling the fortunes of the publisher, studio, and employees on the convictions of one person.

And that cost carries disastrous consequences when auteur projects fail. To reiterate my previous statement, as a studio head, manager, or lead you have a responsibility to the people who invest in your project. If you don’t deliver on that responsibility, your creative prowess will only carry you so far. Silicon Knights is gone. Junction Point Studios is gone. Irrational Games is gone.

Careers Go Down The Toilet With Failed Games

Putting aside the damage done to the reputations of people like Levine, Dyack, and Warren Spector, their teams were negatively impacted as well. Employees bear their own opportunity costs. They forwent other employment opportunities that might have provided more stability. The employment options they might have after a studio closure may not be as profitable as those they turned down to work at your studio in the first place.

And to reiterate my previous statement again, as a studio-head, manager, or lead you have a responsibility to avoid disrupting your team’s lives and those of their families. Your vision is important, certainly. But is it MORE important than someone’s family?

Business is business. Some amount of failure is inevitable. Every employee in every company bears risk. But, if you want to be an auteur, keep the economic costs you incur in mind. The career you impact might not be your own.

Key Takeaways

Bottlenecking decision-making to grant one person creative control creates significant, compounding costs in terms of time

“Survivor bias” is the logical fallacy of basing your analysis only on individuals that succeeded, and ignoring those that failed

Having one central idea person creates massive problems for companies when that person departs or loses his/her edge

High-risk moves are only genius in hindsight; nobody looks smart when they do something risky and fail

If you are the auteur, you will own the failures as much as the successes; make sure you appreciate that trade-off

1With all apologies to Ashlee Vance for stealing a chapter title from his biography of Elon Musk

2An opportunity cost is the return available from the best alternative use of your resources. This is the key difference between accounting profit and economic profit. Your accounting profit is your cash-in minus your cash-out. Your economic profit is your accounting profit minus your opportunity cost/s. For example, a venture capitalist can invest in you or in a bond that yields a 10% return. If she invests in you, and you provide a 7% return, you have destroyed value for her even though you turned a profit: she would have made more money with the bond.

In the auteur example above, if your resources are employees who are sitting around waiting for you to give them feedback, the best alternative use for those resources would be to have them actively produce code/content/etc. Whether the value destroyed by inefficient use of resources is offset by the value created by a potentially superior product that sells better is probably too abstract a question to answer definitively. The important takeaway is that it’s not just accounting profit that you should worry about.
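The venture-capital example from the footnote, as a quick sketch (illustrative numbers: a $1MM investment and the 7% vs. 10% returns described above):

```python
investment = 1_000_000
your_return_rate = 0.07  # the return your venture actually delivers
bond_rate = 0.10         # the investor's best alternative use of the money

accounting_profit = investment * your_return_rate       # cash-in minus cash-out
opportunity_cost = investment * bond_rate               # return forgone on the bond
economic_profit = accounting_profit - opportunity_cost  # negative: value destroyed

print(accounting_profit)  # 70000.0 -- a profit on paper...
print(economic_profit)    # -30000.0 -- ...but a loss versus the alternative
```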

If You Enjoyed This Post, Please Share It!

]]>https://www.breakingthewheel.com/bottlenecks-hindsight-auteurs-make-horrible-economists/feed/01394Root Cause Analysis: The Five Whys – Game Planning With Science! Part 15https://www.breakingthewheel.com/root-cause-analysis-five-whys/
https://www.breakingthewheel.com/root-cause-analysis-five-whys/#respondMon, 24 Apr 2017 15:00:08 +0000https://www.breakingthewheel.com/?p=1367

]]>On January 28th, 1986, seven astronauts boarded the Challenger for its tenth launch into space. Its previous missions had included the first space walk and, at various times, the first American woman, African-American, Canadian and Dutchman in space. 73 seconds after lift-off, the Challenger broke apart, killing the entire crew. Why? Because the fuel tank exploded. So, the solution, of course, is to send the next shuttle up with a fuel tank that doesn’t explode, right?

Only if you assume that the exploding fuel tank was a 100% isolated incident, completely unrelated to any other events. If that sounds fishy to you, it should. And this is where root cause analysis comes into play, a practice colloquially known as “the five whys”.

By Reading This Post, You Will Learn:

What a “root cause analysis” is

How root cause analysis can help you identify and resolve the systemic issues that caused an acute, incidental problem

How to conduct a root cause analysis using “the five whys” format

What Is A Root Cause Analysis?

A root cause analysis assumes that any critical failure is not the result of a single event, but of a chain of events starting with a root cause. You can think of each of these contributing factors as the branches of a tree. A tree of EVIL. Naturally, you want to destroy this tree, it being evil and all. But, if you only address the acute issue (like a fuel tank exploding), you are only cutting off one branch. Its preceding branches will all live on to spawn other catastrophes. And killing any plant means getting at its roots.

And you get to the roots by asking “Why?”. A lot. Why did this critical issue occur? Because of Event A. Well, let’s fix Event A, but why did that occur? Event B. Then let’s sort out Event B, but in the meantime, why did that event occur?

A root cause analysis isn’t scientific in the sense of being supported by experimentation and peer-reviewed studies. But I’m including it in “Game Planning With Science!” for two important reasons. First, it is a form of rigorous analysis, which is the foundation of all credible science. Second, it’s very much a lean way of thinking: you are trying to eliminate waste and failure not topically, but at the source. You are not just seeking to treat the symptom, but the disease itself.

Case In Point: The Challenger

Here’s an example of a real-world root cause analysis:

Why did the Challenger’s external fuel tank explode?

Answer: the booster rockets warped during liftoff and started leaking flames and propellant which, in turn, burned a hole in the side of the fuel tank

Solution: redesign the booster joints so they can’t warp and leak

Why did the boosters warp and leak?

Answer: the cold overnight temperatures had stiffened the O-rings sealing the booster segments, and the seals failed

Solution: use better O-rings in future missions

Why were the O-rings subjected to temperatures they couldn’t handle?

Answer: the launch was approved for temperatures below those for which Challenger had been certified to fly

Solution: redefine mission criteria to prevent launches in weather conditions for which the spacecraft in question have not previously been tested/certified

Why did NASA approve the launch during untested weather conditions?

Answer: internal NASA politics bypassed safety protocols and forced a launch over the objections of the engineers

Solution: increase transparency and accountability for observing established protocols

Implications Of The Five Whys

There are three crucial observations to make about a root cause analysis.

First, every “why?” results in a distinct issue (and implied fix) AND a path for further discovery. You are trimming the problematic branch (and all the other branches that might spring from that node), and then following the branch further along to the next deepest node.

Second, it’s crucial to observe how different the first “why?” is from the last. The first why is specific, while the last is systemic. The first is a technical problem – the barn door after the horse has left. The last is organizational – why did we make the bad decisions that led to leaving the barn door open?

Third, consider the potential for further catastrophic failures if you had only fixed the boosters. Or even if you had stopped with using better O-rings in future missions. The root cause of inadequate launch safety criteria and reckless politicking at NASA would still linger to spawn future problems. By way of example, consider the (admittedly simplistic) flow chart below. Even assuming that any point of failure had, in the worst-case analysis, only two potential points of further failure (an optimistic assessment to say the least), you’re still looking at 15 possible negative outcome scenarios in addition to the actual Challenger disaster.

In other words, if the politics at NASA hadn’t forced a launch in unfavorable weather conditions, none of the other failures would have occurred, and the Challenger would not have exploded. This is not to say that there aren’t other root causes that could result in the Challenger’s demise, but this specific loss of life would not have happened.

The Five Whys In Game Development

Root cause analysis can be applied to any ex post facto investigation of an operational failure. For instance, let’s say you own a game studio and are running an online multiplayer game. The server goes down for an afternoon, costing your company an estimated $10k in lost micro-transactions. You dig through Git and find the offending submission, submitted by one Bobby McBork. Now you can begin and end your follow-up on the issue with reprimanding and/or firing Mr. McBork. Or, you can take a more holistic view and try to identify the sequence of events that led to the server failure.

Why did the server go down?

Answer: Bobby merged an untested submission directly into the production code base

Why did Bobby merge untested code into production?

Answer: Jimmy directed him to

Solution: establish a new protocol that NO submissions should be merged into the production code base unless they are designed to fix an ongoing, red alert issue in live service; make sure Jimmy understands the new protocol

Why did Jimmy direct Bobby to merge to production instead of the QA testing server?

Answer: the QA server had been down all week

Solution: bring the QA server back online

Why was the QA testing server down all week?

Answer: because no one took the time to fix it

Solution: assign an engineer to bring the QA server back online

Why didn’t anyone take the time to fix the server?

Answer: no single person was responsible for fixing it, thus no one prioritized the work against his/her own to-dos*

Solution: establish a protocol for dealing with QA server failures and assign first responder/s for such incidents
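If you want to capture a chain like the one above in a postmortem log, one lightweight representation is a list of (why, answer, solution) entries. This is a sketch with a hypothetical layout, not a prescribed format:

```python
# Each link in the chain records the question asked, the finding, and the fix.
five_whys = [
    ("Why did Jimmy direct Bobby to merge to production instead of the QA server?",
     "The QA server had been down all week.",
     "Bring the QA server back online."),
    ("Why was the QA testing server down all week?",
     "No one took the time to fix it.",
     "Assign an engineer to bring the QA server back online."),
    ("Why didn't anyone take the time to fix the server?",
     "No single person was responsible, so no one prioritized the work.",
     "Establish a protocol for QA server failures and assign first responders."),
]

def format_rca(chain):
    """Render the chain for a postmortem; the last entry is the root cause."""
    lines = []
    for depth, (why, answer, solution) in enumerate(chain, start=1):
        lines.append(f"{depth}. {why}\n   Answer: {answer}\n   Fix: {solution}")
    lines.append(f"Root cause: {chain[-1][1]}")
    return "\n".join(lines)

print(format_rca(five_whys))
```

The structure enforces the discipline of the exercise: every why must carry both an answer and a fix, and the last answer in the list is, by construction, your current best candidate for the root cause.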

There’s Always A Catch

The above illustrates a catch in the Five Whys protocol. As a manager, if you ask why enough times, the path may very well lead up to your own door. What appears to be the failure of an individual (poor Bobby McBork) was in fact a failure on your part to establish a clear protocol both for deployments and for handling server failures.

That’s not an argument against root cause analysis, mind you. It’s a recommendation that you need to check your ego at the door if you want to fix systemic, root cause issues rather than topical ones.

Does It Have To Be Five? No More, No Less?

As Emerson said, “A foolish consistency is the hobgoblin of little minds.” Don’t ask why five times just for the sake of asking it. And don’t stop at five if there is further discovery to be had. If you have reached the root cause, stop. If you haven’t, continue. The number five is simply a rule of thumb.

Summary

Even the leanest, most efficient process will experience errors and failures. A mistake is a learning experience. Repetitions of that mistake are waste. So, if you want to avoid recurrences of the same issue, don’t just identify the cause. Find and correct the source. The savings, in terms of waste and lost time, will be orders of magnitude greater.

Key Takeaways

Root cause analysis is the process of moving through the chain of events behind a specific, acute problem in order to identify the underlying systemic cause/s

The typical format for root cause analysis is the five whys – literally asking why five times

When using the five whys, solve each contributing issue you come across as you move toward the root cause

Five whys is a rule of thumb, but root cause analysis doesn’t LITERALLY have to consist of exactly five whys

The point is to try to find the underlying systemic problem that facilitated the acute issue

If you can find the problem in 3 or 4 whys, fine; and if it takes 7, so be it

If You Enjoyed This Post, Please Share It!

*This is an example of what psychologists call “diffusion of responsibility”. It’s a cognitive bias best summarized as “if everyone is responsible, then no one is responsible”. The most extreme example is Kitty Genovese, who was murdered in the courtyard of her apartment building while, as the story is usually told, her neighbors watched and did nothing to help. Their inaction was not rooted in indifference; they all assumed that, surely, someone else would intervene or call the police. In my experience as a manager and consultant, management types drastically underestimate the impact of this bias.

]]>https://www.breakingthewheel.com/root-cause-analysis-five-whys/feed/01367Heijunka: Why Batching Is Not Your Friend – Game Planning With Science, Part 14https://www.breakingthewheel.com/heijunka-batching-not-friend/
https://www.breakingthewheel.com/heijunka-batching-not-friend/#commentsTue, 18 Apr 2017 18:16:55 +0000https://www.breakingthewheel.com/?p=1356Previously on “Game Planning With Science!”: Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7 Part 8 Part 9 Part 10 Part 11 Part 12 Part 13 By Reading This Post, You Will Learn: Why batching to avoid switching costs is a problematic solution The cost of batching Why the traditional production->alpha->beta->certification model essentially batches your entire game The[...]

]]>A commonly held belief is that it’s best to batch work – to handle similar tasks in large, consolidated chunks. The notion makes intuitive sense. It allows you to focus on one activity at a time and avoid so-called switching costs of switching activities. But as with so many other instances of unverified intuition, this particular notion is flat-out wrong. Batching may avoid switching costs, but it greatly protracts flow time, which, in the long run, can end up being far more expensive. Which is why the Toyota Production System introduced the concept of heijunka – “leveling”.

By Reading This Post, You Will Learn:

Why batching to avoid switching costs is a problematic solution

The cost of batching

Why the traditional production->alpha->beta->certification model essentially batches your entire game

Why it’s important to run QA along the way and focus on minimizing the cost of QA passes

Understanding The Impact Of Batching On Efficiency

Let’s start with physical products as we work to wrap our heads around this topic. Imagine you have a factory that manufactures two items, a Red product and a Blue product. Each item requires two manufacturing activities, Step 1 and Step 2, which can each process one unit at a time. Adjusting each activity from processing Red to processing Blue, or vice versa, imposes a switching cost in terms of time and money.

In order to minimize your exposure to that switching cost, you batch process the items. You run 100 units of Red through Step 1 and then prep Step 1 for Blue units. You then start to run all 100 Red units through Step 2 to finish them while moving 100 Blue units through Step 1. Next, you prep Step 2 for processing Blue units, and reset Step 1 for Red. And on and on, ad infinitum:

Except there’s one really big problem you’re overlooking. By batching in this way, you are maintaining a perpetual amount of inventory in the system (the purple line):
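To make the flow-time math concrete, here’s a toy simulation (a Python sketch with illustrative numbers, not a model of any real factory): each step handles one unit per time tick, all raw units are available at t = 0, and a transfer batch must clear Step 1 entirely before any of it can move to Step 2. Note that the switching cost is set to zero here – batching protracts flow time even when the thing it’s supposed to save you from is free.

```python
# Toy model of the two-step line. A transfer batch must finish Step 1
# completely before any of it can move on to Step 2.

def finish_times(batch_size: int, n_items: int = 100, step_time: float = 1.0):
    """Return the completion time of each item (all raw items ready at t = 0)."""
    step1_free = 0.0   # when Step 1 finishes its current work
    step2_free = 0.0   # when Step 2 finishes its current work
    done = []
    for start in range(0, n_items, batch_size):
        size = min(batch_size, n_items - start)
        step1_free += size * step_time     # Step 1 works through the batch
        t = max(step1_free, step2_free)    # the batch waits if Step 2 is busy
        for _ in range(size):
            t += step_time                 # Step 2 finishes one unit per tick
            done.append(t)
        step2_free = t
    return done

batched = finish_times(batch_size=100)  # one big 100-unit batch
piece = finish_times(batch_size=1)      # one-piece flow
print(sum(batched) / len(batched))      # 150.5 ticks of average flow time
print(sum(piece) / len(piece))          # 51.5 ticks -- ~3x faster through the line
```

Per-unit touch time is only 2 ticks in both cases; the one-piece average is above 2 only because all 100 raw units arrive at once and queue in front of Step 1.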

So What?

There are two major problems here.

First, think back to the kanban post. The more stuff in the system, the longer the queues get in front of activities. And the longer the queues, the longer the actual flow time. This, in turn, pushes your actual flow time further and further away from your theoretical flow time. Which, consequently, means your flow time efficiency goes right down the crapper.
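This is Little’s Law in action: average flow time equals average work-in-progress divided by average throughput. A quick sketch with made-up numbers:

```python
# Little's Law: average flow time = average WIP / average throughput.
# Hypothetical team: 30 stories in flight, finishing 5 per week.

def average_flow_time(wip: float, throughput: float) -> float:
    return wip / throughput

print(average_flow_time(30, 5))  # 6.0 weeks per story, on average
print(average_flow_time(10, 5))  # 2.0 weeks once WIP is cut to 10
```

Same throughput, a third of the WIP, a third of the flow time – the only lever moved is how much stuff is in the system.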

But there’s another reason that’s a little more subtle. And potentially more expensive.

Businesses operations run on something called “working capital” – this is the amount of money sunk into actually running a company from day to day. And the more money tied up in working capital, the less money you have to invest in other opportunities or return to shareholders.

And one of the largest sources of working capital is inventory, both for manufacturing (components for building sell-able inventory) and retail (inventory on the shelves).

Examples Always Help

Here’s a practical example. Let’s say you own a company that manufactures widgets, and that company’s cost of capital* is 10%. Through inefficiencies, you have $10MM of excess inventory in the production line. This is inventory you don’t need in order to maintain your current throughput. It’s just bloat that you could eliminate if you were more efficient. And at that cost of capital and that magnitude of bloat, you have an opportunity cost of $1MM per year.

In other words, if you could streamline your operations and free up that $10MM of working capital, you could apply it to other investments and reasonably expect to make $1MM of profit a year. This is why there is an entire field of study called “supply chain management” and why huge corporations spend millions of dollars forecasting demand. They want to have the minimum amount of inventory on hand to reasonably satisfy demand, and not a cent more.
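Using the post’s numbers, the arithmetic is a one-liner:

```python
# The post's numbers: $10MM of excess inventory at a 10% cost of capital.

def opportunity_cost(excess_working_capital: float, cost_of_capital: float) -> float:
    """Yearly return forgone by leaving capital tied up in inventory."""
    return excess_working_capital * cost_of_capital

yearly = opportunity_cost(10_000_000, 0.10)
print(f"${yearly:,.0f} per year")  # $1,000,000 per year
```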

Enter The Toyota Concept Of Heijunka (平準化)

If batching occurs because we want to avoid switching costs, then it follows that, to eliminate the need for batching, we should focus on reducing switching costs. This is what effective operations managers focus on: the switching costs. Your goal should be to have switching costs so low that you can cost-effectively have batch sizes of a single unit.

Toyota refers to this notion as heijunka (literally, “leveling”) – putting an emphasis on keeping all work-in-progress inventory at a minimum level by avoiding batches. It doesn’t want to make 100 red Camrys, then 100 blue RAV4s, and then 100 black 4Runners. It wants to make 1 red Camry, then 1 blue RAV4, then 1 black 4Runner.

No queues. No batches. Every component piece spends the absolute minimum amount of time on the factory floor. Actual flow time moves towards theoretical.
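A leveled sequence is easy to sketch: instead of emitting each product in one long run, round-robin across the demand. (A Python sketch; the product names are just the examples above.)

```python
# Round-robin "leveling": interleave products one unit at a time
# instead of running each product as one big batch.

def heijunka_sequence(demand: dict[str, int]):
    """Yield products one at a time; `demand` maps product -> units needed."""
    remaining = dict(demand)
    while any(count > 0 for count in remaining.values()):
        for product in remaining:
            if remaining[product] > 0:
                yield product
                remaining[product] -= 1

print(list(heijunka_sequence({"Camry": 2, "RAV4": 1, "4Runner": 1})))
# ['Camry', 'RAV4', '4Runner', 'Camry']
```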

Resources That Informed And Influenced This Post


The Overlap With Game Development

Great, Justin. Factories. Blah, blah, blah. Your point?

Here’s where this becomes relevant to game dev: we batch like crazy people. But whether the problem is apparent depends on how you apply the word “done”.

In the game industry, we generally use the term “done” or “complete” with regard to a feature or user story to mean that we’ve coded that feature or user story and merged it into the build. Under that regime, there doesn’t seem to be any obvious batching. We code one feature, submit it, and then code the next. No worries.

However, if we change the definition of “done” from “coded and merged” to “ready to ship” the problem becomes more apparent.

If a feature isn’t done until it’s ready to ship – ie, until it’s been run through the QA wringer – then our typical Production → Alpha → Beta → Certification sequence indicates a massive operational problem. If you are waiting until the end of your production schedule to perform dedicated QA testing and fixing, you don’t have “batches”. You just have a batch. One.

YOUR ENTIRE GAME:

Not pictured: the day one patch

We code a game’s worth of features (the yellow line), and accrue defects (the red line) at some multiple. Since the features have defects (and thus aren’t ready to ship) they reside as work-in-progress inventory. Then we go through the madness of post-production: we struggle to un-fuck our buggy house of cards until finally we throw our hands up and say “SHIP IT!”.

And, from an operations science perspective, that is pure, unadulterated lunacy.

The Consequence Of The Alpha/Beta/Cert Mentality

There are a couple of problems that a late QA cycle creates.

First: your flow time efficiency doesn’t just go down the tubes. You essentially negate it. By leaving QA until the end of the project, your actual flow time is so far removed from your theoretical flow time that your efficiency ratio is effectively zero.

The second critical issue is that leaving QA until the end of the project pays no heed to the time value of fixes. Bug fixes are cheapest when they are implemented as soon as possible. While leaving them to linger doesn’t guarantee that each and every defect will increase in scope, you do squander your ability to sort out issues before they fester.

What’s To Be Done?

Move from a production/alpha/beta/cert mentality to a build→fix, build→fix, build→fix cadence. Make QA testing and hardening part of the definition of done, and part of the sequence of feature development. QA testing shouldn’t be the last stage of production; it should be the final step of development for every feature. Ideally, you want a single QA pass for every feature submission.

Then, focus on making the cost per QA pass (both in terms of money and time) as low as possible. As in the manufacturing example, you want to eliminate or minimize the switching cost of transitioning a feature from dev to QA.

You Want Us To Slow Down?!!

Any grizzled veterans reading this may balk at the notion of slowing down production to allow for parallel QA testing. Fair enough.

Except, I’m not advocating that you slow down. I’m advocating that you consolidate the work.

Rather than doing 80% of the story (the development) during production and the remaining 20% (the QA) during alpha or beta, consolidate the work. Put in 100% of the effort to get a feature ready to ship in one pass. Defragment your production process the same way you would defragment a PC hard drive.

The Impact Of Heijunka On Muda

Heijunka impacts two types of muda: excess work in progress (we’re pushing features through QA sooner so they are “done-done” faster) and work queues (we’re eliminating the months-long backlog of hardening work that accrues when teams defer QA to the end of production).

Where Do We Go From Here?

So, at this point in our journey through lean, we’ve taken the time to carefully spec out feature requests to eliminate the potential for human error (poka-yoke). We’re using kanban pull-based production to minimize flow time. We’ve put the robots to work on our behalf with jidoka-style autonomation. We’re using disciplined QA processes to reduce muda. And we are leveraging the concept of heijunka to avoid batching. We have all the tools in place to run an efficient, lean development cycle.

Key Takeaways

Batching avoids switching costs, but it also increases flow times and opportunity costs, which is problematic

The goal of effective operations is, therefore, to minimize those switching costs

The traditional game development model of Production → Alpha → Beta → Certification is particularly problematic because it, in essence, batches the entire game in one large QA pass

The goal should instead be a continuous Build → Fix cadence, with an emphasis on minimizing the cost and time per QA pass

If You Enjoyed This Post, Please Share It!

*Cost of capital is one of the primary driving forces of decision making for businesses. In very simple terms, it refers to the average return a company can expect to make on its investments (the cost of equity) and the average interest rate of its loans (the cost of debt). So, if you look at XYZ, Inc., which has subsidiaries A, B, and C, if it has a cost-of-capital of 10%, it makes a 10% yearly return on every dollar it puts into those subsidiaries. XYZ is also considering spinning up subsidiary D. In order to justify taking dollars away from A, B, and C, D needs to have a forecasted return of at least 10%, or it is literally not worth XYZ’s time. This is why cost of capital is also referred to as a “hurdle rate” – the proposed project has to clear the target return rate to be worth pursuing, like a hurdler jumping over a gate.

]]>https://www.breakingthewheel.com/heijunka-batching-not-friend/feed/11356The Muda of Defects, Or: Finding Freedom Through Discipline – Game Planning With Science! Part 13https://www.breakingthewheel.com/muda-defects-finding-freedom-discipline/
https://www.breakingthewheel.com/muda-defects-finding-freedom-discipline/#respondMon, 10 Apr 2017 15:00:08 +0000https://www.breakingthewheel.com/?p=1345One of the more interesting characters I’ve encountered in my wanderings through the internet is a man by the name of Jocko Willink. He’s the author of Extreme Ownership and a business consultant. Oh, and an ex-Navy SEAL and a black-belt in jiu-jitsu. So, the man knows a thing or two about getting shit done under arduous circumstances. And his personal mantra is “Discipline equals freedom.” And the more I study operation science and the more I learn about software development, the more I see his point. So, in this post, I’m going to walk you through a multi-step process for testing code and how a little QA discipline can avail a lot of freedom. Previously on “Game Planning With Science!”: Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7 Part 8 Part 9 Part 10 Part 11 Part 12 By Reading This Post, You Will Learn: Why discipline provides freedom Why its cheaper to fix[...]

]]>One of the more interesting characters I’ve encountered in my wanderings through the internet is a man by the name of Jocko Willink. He’s the author of Extreme Ownership and a business consultant. Oh, and an ex-Navy SEAL and a black-belt in jiu-jitsu. So, the man knows a thing or two about getting shit done under arduous circumstances. And his personal mantra is “Discipline equals freedom.” And the more I study operations science and the more I learn about software development, the more I see his point. So, in this post, I’m going to walk you through a multi-step process for testing code and how a little QA discipline can avail a lot of freedom.

By Reading This Post, You Will Learn:

What Does He Mean By “Discipline Equals Freedom”?

Simple: if you’re disciplined in how you handle your responsibilities, you will have more freedom to roll with what life throws at you. If you are disciplined about eating a healthy diet and getting regular exercise, you will have the physical freedom inherent in good health (injuries and disease notwithstanding). If you are disciplined about studying consistently throughout the school term, you will have the freedom to sleep the night before your final exam, because you won’t need to cram. And, if you have the discipline to follow processes that eliminate waste, you will have the freedom to spend more time creating value for gamers rather than fighting fires.

That is the mindset you need to have when thinking about defects in your game code.

The Muda of Defects

In the intro to lean, I described the seven forms of muda and how they map to game development. From an end-user perspective (and, in most cases, a developer perspective), defects are the most painful. And defects break down into two broad categories: bugs and missed acceptance criteria. But whatever form they take, defects are best fixed when it is cheapest: AS SOON AS POSSIBLE. I cover this topic in more detail in my post on the time value of fixes, but, in short, a fix today is more valuable than that same fix in the future.

First, because you don’t know if it will actually be the same fix in the future. The longer it lingers, the more code gets built around it, and the greater the chance the scope of the fix will increase. And that risk bears a cost. Second, if the resulting fix does increase in scope as a result of lingering in the repository, you also bear an opportunity cost. The extra time required to resolve the issue if you defer it is time you could have spent creating more content.

So, a truly lean approach to QA would be focused on finding defects as soon as possible, using a defined and disciplined process.

A Lean Approach To QA

A lean approach to QA breaks down into six stages, with each stage justifying the next (eg, it’s not worth performing Stage 3 until the submission has passed Stages 1 & 2).

Stage 1: Buddy Tests

The first step of a lean QA process is a buddy test. Quite simply, when one developer is ready to submit his changes to the repository, he grabs another team member – another dev, a QA member, a producer, anyone – and asks her to check his work locally on his machine. This is essentially a smoke test: is there anything glaringly wrong with the potential submission? Have all of the acceptance criteria been met?

Stage 1 Goal: Ensure that the submission meets a basic level of quality and catch any glaring issues before the submission goes any further

If submission passes the buddy test, move to Stage 2

If it fails, address outstanding issues, then repeat Stage 1

Stage 2: Automated Testing

Before the developer requests that the submission gets merged into the build, he runs the relevant suite of unit, integration, regression, and functional testing (depending on the nature of the submission). The goal here is as described in the post on jidoka: leverage the speed and precision of automated testing to perform brute-force validation testing on the submission.

Stage 2 Goal: Let the machines catch any issues they can before you spend further valuable dev hours on QA’ing the submission.
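As a sketch of what a piece of a Stage 2 suite looks like, here’s a minimal unit test using Python’s built-in unittest. (`apply_damage` is a hypothetical gameplay function standing in for whatever the submission touched – the shape of the test is the point, not the specific check.)

```python
# Minimal automated check in the Stage 2 spirit: let the machine
# brute-force the obvious validation before a human spends time on it.
import unittest

def apply_damage(health: int, damage: int) -> int:
    """Hypothetical function under test: health should never drop below zero."""
    return max(0, health - damage)

class ApplyDamageTests(unittest.TestCase):
    def test_normal_hit(self):
        self.assertEqual(apply_damage(100, 30), 70)

    def test_health_never_goes_negative(self):
        self.assertEqual(apply_damage(10, 50), 0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDamageTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

A real suite would run hundreds of these per submission – the per-run cost is effectively zero, which is exactly why this stage comes before any human review.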

Stage 3: Peer Review

Before the developer’s merge request is approved, another team member reviews the changes in the repository. This is similar to a buddy test, except that a) it should be performed intra-disciplinarily (designers should review design submissions, engineers should review code submissions) and b) the review is at a deeper level of inspection. You’re actually reviewing code, scripts, Unity scenes, etc.

Stage 3 Goal: Let domain experts double check submissions to catch any technical issues or risks before they can contaminate the build

If submission passes peer review testing, submission is approved and merged into the next build; proceed to Stage 4

If it fails, reviewing team member identifies issues to fix, developer resolves issues and repeats Stages 2 & 3

Stage 4: Continuous Integration/Build Verification

The continuous integration process merges the code with its next build push. If the build fails, the CI machine notifies the team, so they can identify and fix any and all offending submissions.

Stage 4 Goal: Ensure the most recent submissions result in a stable build before moving forward

If submission passes continuous integration, proceed to Stage 5

If it fails, identify and correct the offending submission then repeat stages 2 and 4 (you can skip Stage 3 unless the necessary correction is large)
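A Stage 4 gate can be as simple as: run the build command, and if it breaks, tell the team. A hedged Python sketch – the build command and `notify_team` hook are placeholders (assumptions), to be swapped for your engine’s real build invocation and your chat/email integration:

```python
# Minimal continuous-integration gate: run the build, report failures.
import subprocess

def notify_team(log_tail: str) -> None:
    # Placeholder: post to chat, email the team list, flash a red light...
    print("BUILD BROKEN - last output:\n" + log_tail)

def run_build(build_cmd: list[str]) -> bool:
    """Return True if the build command exited cleanly."""
    result = subprocess.run(build_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        notify_team(result.stderr or result.stdout)
        return False
    return True
```

Hook something like this to your repository’s post-merge trigger so every push gets verified while the offending submission is still fresh in everyone’s mind.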

Stage 5: Manual QA Testing

QA manually tests the submission according to its own testing plan and scenarios. The objective here is to find the kinds of obscure defects that automation scripts would have trouble finding. QA should also test against the designated acceptance criteria.

Stage 5 Goal: Maintain a high confidence level that any code that goes to a lead for review is defect-free. Remember that your leads’ time is generally in the highest demand, and thus is the most valuable. Therefore, it behooves you to not consume any of that time until you have the highest confidence that the submission is ready for review.

If QA signs off on the submission, it goes to the lead or other person who needs to sign-off on the work

If QA finds defects, the relevant developer fixes them and repeats stages 2 through 4 (again, you can skip Stage 3 unless the fix is notably large or risky)

Stage 6: Final Review

The person who requested the feature/user story reviews the submission from an end user perspective. Does it satisfy all of the acceptance criteria and technical requirements? Are there any lingering defects that the previous stages missed?

Stage 6 Goal: Verify that the submission satisfies the design, content, and/or technical needs for which it was specified.

If the reviewer is satisfied, the feature/user story/submission is considered complete

If the reviewer finds lingering defects, developer addresses them and repeats Stages 2 and 5 (and Stage 3 if the fix is large)

If the reviewer finds that the submission is defect free (both in terms of bugs and missing acceptance criteria) but the feature, as spec’ed, doesn’t answer the original need:

Decide whether to overhaul the current submission, or

Scrap the submission, re-write the original request and start over
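The failure-handling rules above can be captured as plain data. This is my own summary of the stages as a sketch, not canonical lean machinery – it maps each stage to the stages a fix must repeat (with Stage 3 skipped unless the fix is large or risky):

```python
# The six QA stages and their repeat-on-failure rules, as data.

STAGES = {
    1: "Buddy test",
    2: "Automated testing",
    3: "Peer review",
    4: "Continuous integration",
    5: "Manual QA",
    6: "Final review",
}

# On a failure found at stage N, which stages does the fix repeat?
ON_FAILURE = {
    1: [1],
    2: [2],          # implied: fix, then re-run the automated suite
    3: [2, 3],
    4: [2, 4],
    5: [2, 4, 5],
    6: [2, 5],
}

def replay_stages(failed_at: int, large_fix: bool = False) -> list[str]:
    """Stages a fix must pass again, adding peer review for large fixes."""
    stages = list(ON_FAILURE[failed_at])
    if large_fix and failed_at in (4, 5, 6) and 3 not in stages:
        stages = sorted(stages + [3])
    return [STAGES[s] for s in stages]

print(replay_stages(4))                  # ['Automated testing', 'Continuous integration']
print(replay_stages(6, large_fix=True))  # ['Automated testing', 'Peer review', 'Manual QA']
```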

Resources That Informed And Influenced This Post


A Lean Approach To QA?!

Here’s the trick with calling this stuff “lean”. Some of you might read the term “lean” and balk at a 6-stage process. Because many of us, when we hear the term lean, think of Mick Jagger or David Bowie. We think of someone slender – without a lot of meat on his/her bones. But I like the bodybuilder definition of lean, meaning “no fat”. So, when I think “lean”, I think Jean-Claude Van Damme in Bloodsport. Beefy as all get-out, but with a minimum of fat.

Lean is not about reducing work. It’s about reducing waste. And this beefy, disciplined approach to QA can do just that.

Think about how the stages are arranged:

Ensure you have something that is not obviously broken and has addressed all of the acceptance criteria before you waste any time testing it

Find and resolve all of the issues automation can find before you consume any dev hours with testing

Perform due diligence to ensure a submission is technically sound before you merge it into a build

Verify that the submission does not hose a build before you have QA take a look

Ensure that the submission is defect free before you ask a lead to look at it

Verify that the submission suits both the intention of the original feature request and the needs of the game before you let it linger in the repository

But A Process Like This Takes Lots Of Time!

It can. But, going back to Scylla and Charybdis yet again, don’t just consider the costs, consider the savings. You expend a known amount of effort now to avoid an unpredictable, potentially far larger disaster later.

Further, you have a measurable cost of time, which means you have an improvable cost (thanks, Peter Drucker!).

In other words, rather than stressing about how long a single QA-pass per submission takes, put your effort into reducing the time per QA cycle. Basically, a rigorous QA process is your insurance against build-melting disaster later. And the faster you can move submissions through the cycle, the lower the cost of your insurance.
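Here’s the insurance math in miniature. Every number below is an illustrative assumption, not data – the point is the shape of the comparison:

```python
# Known cost of a per-submission QA pass vs expected cost of letting
# a defect linger until the end of the project.

def expected_deferred_cost(p_escape: float, cost_if_escaped: float) -> float:
    """Expected per-submission cost of skipping the immediate QA pass."""
    return p_escape * cost_if_escaped

qa_pass_cost = 2.0                             # dev-hours per submission (assumed)
deferred = expected_deferred_cost(0.25, 16.0)  # 25% chance of a 16-hour fix later
print(deferred)                 # 4.0 expected hours
print(qa_pass_cost < deferred)  # True -- the "insurance" is the cheaper bet here
```

And note which variable you control: every hour shaved off the QA cycle lowers `qa_pass_cost`, making the insurance cheaper still.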

Practical Ways To Reduce QA Cycle Times

Write Better User Stories/Feature Specs

The clearer you can be about acceptance criteria and technical requirements for each story, the easier it will be to buddy test, peer review, and manually test the resulting submission.

Write Smaller User Stories/Feature Specs

The smaller the story, the less time it takes to develop and test it. This particularly impacts the time to buddy test and the time to run a manual QA test. This means shorter cycle times and more rapid iteration on the build overall.

Give Your QA Team Members Bandwidth To Develop Testing Plans For Each Pending Story

If they can create testing plans ahead of time (eg, while the developer executes the story), then they can hit the ground running when the story moves to Stage 5.

Use Proper Task Tracking Software

Purpose-built services like JIRA and Hansoft make it easy to track user stories, from the basic stuff (owner and status) to the technical (build numbers, environment deployments, etc). The easier it is to maintain situational awareness, the easier it is to keep stories moving. Further, proper task tracking software makes it simple to track the average time user stories spend in each status.
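If your tracker can export status transitions (story, status, timestamp), computing the average time spent in each status is only a few lines. A sketch, assuming a hypothetical export layout rather than any specific JIRA or Hansoft format:

```python
# Average hours a story spends in each status, from a transition log.
from collections import defaultdict
from datetime import datetime

def time_in_status(transitions):
    """transitions: (story_id, status, entered_at) tuples, in order per story."""
    by_story = defaultdict(list)
    for story, status, entered_at in transitions:
        by_story[story].append((status, entered_at))
    per_status = defaultdict(list)
    for events in by_story.values():
        # A status lasts from when it was entered to when the next one began.
        for (status, start), (_next_status, end) in zip(events, events[1:]):
            per_status[status].append((end - start).total_seconds() / 3600)
    return {status: sum(hours) / len(hours) for status, hours in per_status.items()}

log = [
    ("BTW-42", "In Dev", datetime(2017, 4, 10, 9, 0)),
    ("BTW-42", "In QA",  datetime(2017, 4, 11, 9, 0)),
    ("BTW-42", "Done",   datetime(2017, 4, 11, 13, 0)),
]
print(time_in_status(log))  # {'In Dev': 24.0, 'In QA': 4.0}
```

Once that number exists per status, you know exactly where submissions stall – which is where your cycle-time reduction effort should go.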

Optimize Your Build Machine

Faster builds mean less time waiting for builds.

Increase The Rate Of Continuous Integration

If you are pushing builds more often, you will make stories available to QA with less lead time (ie, less time waiting for the next build), and troubleshooting efforts on broken builds will be more efficient (fewer new submissions per build means fewer stones to overturn).

Prioritize the QA Process

Give team members the bandwidth they need to perform proper buddy tests and peer reviews. Better yet, establish a culture where such supporting efforts are as important as feature development. And don’t bog down QA team members with other work. Think of the effort as an investment rather than a cost.

Further Reading If You Enjoyed This Post

Where Do We Go From Here?

Some grizzled vets might be asking “What’s wrong with doing QA at the end of the project? What’s wrong with saving the last few months of production for ironing out the defects?”

A lot.

First, recall the time value of fixes. The longer you leave these issues in the database, the more expensive they become. The second reason is a little more esoteric – namely, if you leave all of your defects to the end, you are “batching” the work in the extreme. And batching is bad. To understand why, we need to delve into our next topic: heijunka.

Key Takeaways

Maintaining disciplined processes can free up time over the length of a project by stopping major production issues before they occur