With GDC happening this past week, I’ve been inspired to revisit and revise my financial modeling spreadsheet and share my approach here. Based on data from a variety of slideshows, articles, interviews and blogs, I’ll summarize the key metrics that go into the model, and show the sensitivity analysis of which variables move the meter the most. Let’s see how the numbers add up! (Note: this will be a multi-post series.)

The General Flow

Of all the analysis I’ve read online, I really like Kontagent’s approach the best. They’re a company that provides social game analytics tools (http://www.kontagent.com/), and with the breadth of clients they service, their aggregated view of ‘typical’ data points seems the most useful (compared to any numbers quoted by a single developer, which would be biased by their own games).

The presentation they deliver at conferences like GDC has a really nice workflow for how customer acquisition, retention and spending connect together. (You can find their previous presentations on SlideShare; the slides have gotten much tighter over time, and now include a great infographic of the flow too.)

The model I’ve built out uses the following concepts and variables for forecasting:

Player Populations: I’ve divided the players into three populations, who can then have different variable values assigned to them. These are New Players, One Week Players, and Retained Players.

New Players: players who have only just been acquired, but have not yet had any attrition/retention assigned to them.

One Week Players: the population pool resulting from attrition of the New Player pool. In some versions of the model, I’ve extended this group into Weeks 2, 3, and 4 before transitioning the population to Retained Players. Whether or not you think that’s overkill may depend on how long you think it takes for a player to become firmly entrenched in your game and whether their virality and monetization behavior changes sufficiently during this transition period.

Retained Players: the population pool of One Week Players who continue playing the game long term. Rather than create individual population cohorts for every single week and track them until they reach zero population, I think it’s simpler to put all Retained Players into the same population pool. Then you can apply a single ongoing attrition rate to them, and the same ongoing virality and monetization behavior.

Customer Acquisition (free): players acquired by cross-promoting from your other games, for example. Note: this isn’t the same as viral acquisition (see below); it’s something you’re doing intentionally, but it still doesn’t cost you anything out-of-pocket.

Attrition after 1 week: how many of your New Players (paid, free or viral) leave after their first week of play.

Ongoing Attrition of Retained Players: how many of your Retained Players leave each week.

Virality of One Week Players: for each One Week Player in the population, how many New Players are generated from viral invitations.

Virality of Retained Players: for each member of the Retained Player population, how many New Players are generated from viral invitations.

Virality of New Players: my assumption here is that only New Players who decide to keep playing the game (becoming One Week Players) are going to bother sending out invites. It’s easy enough to add virality behavior to the New Player population, though.

Engagement: for each population group, assign Daily Active Users (DAU) as a % of Monthly Active Users (MAU). This takes the overall population of a group and represents how many of them are actually playing daily. This is important because spending data is often measured in relation to DAU.

Player Spending: for each population group, how much real money they spend per day, measured per DAU. This is clearly an aggregate number that blends the Whales, who spend hundreds or thousands of dollars, with the Minnows, who might spend a couple of dollars over a whole month.

Operations & Content Cost: Some generalized number representing costs for such things as: servers, customer service, community, new content design, programming, etc. One simple way to have this scale with the size of the game is to measure it as a % of Revenue or as a % of Population, and then apply a minimum spend underneath it.
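Pulled together, the variables above can be sketched as a simple week-by-week simulation of the three population pools. This is a minimal illustration of the flow, not the author’s actual spreadsheet; every numeric value is a placeholder, not a recommendation.

```python
# Minimal week-by-week sketch of the population flow described above.
# All parameter values are illustrative placeholders.

def simulate_week(new, one_week, retained, p):
    """Advance the three pools by one week; return (new, one_week, retained, revenue)."""
    # Virality: One Week and Retained players generate new players via invites.
    viral_new = one_week * p["virality_1wk"] + retained * p["virality_ret"]

    # Pool transitions: Retained shrinks by ongoing attrition and absorbs the
    # One Week pool; New -> One Week after first-week attrition.
    next_retained = retained * (1 - p["attrition_ret"]) + one_week
    next_one_week = new * (1 - p["attrition_1wk"])
    next_new = p["paid_acq"] + p["free_acq"] + viral_new

    # Spending is quoted per DAU, so scale each pool by its DAU/MAU
    # engagement ratio before applying daily spend.
    dau = (next_new * p["dau_mau_new"]
           + next_one_week * p["dau_mau_1wk"]
           + next_retained * p["dau_mau_ret"])
    revenue = dau * p["spend_per_dau"] * 7  # 7 days in the week

    return next_new, next_one_week, next_retained, revenue

params = {
    "paid_acq": 5000, "free_acq": 2500,
    "attrition_1wk": 0.45, "attrition_ret": 0.10,
    "virality_1wk": 0.15, "virality_ret": 0.05,
    "dau_mau_new": 0.50, "dau_mau_1wk": 0.30, "dau_mau_ret": 0.20,
    "spend_per_dau": 0.05,
}

new, one_week, retained = 7500.0, 0.0, 0.0
for week in range(12):
    new, one_week, retained, rev = simulate_week(new, one_week, retained, params)
    print(f"week {week + 1:2d}: pop {new + one_week + retained:9.0f}  rev ${rev:8.0f}")
```

Subtracting acquisition spend and the operations/content cost from each week’s revenue then gives the bottom line the later sensitivity analysis is about.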

Next Week

So that’s an overview of the variables at play in the model. Next week, I’ll look at some of the values for these variables that are being publicly discussed on the interwebs, and run a sensitivity analysis of which variables affect bottom-line revenue the most. (source: plotluckgames)

Financial Forecasting for Social Games (pt 2)

By Brian Poel

Last week, I looked at the variables that go into my social games financial forecasting model, introduced a great slideshow from Kontagent on the subject, and also provided some additional research links. This week, I’ll dig into some of the variables a bit more and provide ranges for those metrics, with supporting links where appropriate.

Mature Games vs. New Games

Although I love Kontagent’s Acquisition-Retention-Monetization model, the numbers quoted in their slideshow are a bit discouraging when you plug them into the model. Then I found a video from a GDC panel discussion (hosted by Kontagent) that gives some real-world context to the numbers. These developers point out that Kontagent’s numbers reflect a mature game, but that many of these metrics are much different at the very beginning of a game launch. I’ll reference this observation several times in the rest of this blog post.

One key question to address then is where to draw the line between “New” and “Mature”, and the gradations in between. It could be a measure of time or based on thresholds of player population. It really depends on the nature of your game, your expected stable population, and your marketing budget to get there. A broad-appeal game may be aiming for 10s to 100s of millions, while a niche game may be happy with a highly invested several million players. And how much money do you have for Facebook ads?

I recommend a dual approach here. First, break your game lifecycle into 3 phases (early, middle, late) and assign a length of time to each (e.g. 1 month, 2 months, and the remaining 9 months of a year-long forecast). Then assign different values for the key metrics depending on which phase you’re in (either gradually shift values toward the new goal or switch at the dividing line).
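In a spreadsheet this is just a lookup column, but the idea can be sketched in a few lines. The phase lengths and the attrition values below are illustrative placeholders following the 1-month / 2-month / rest-of-year split suggested above.

```python
# Sketch of the dual approach: three lifecycle phases, each with its own
# value for a metric, switching at the dividing line between phases.
# Phase lengths (in weeks) and metric values are illustrative placeholders.

PHASES = [("early", 4), ("middle", 8), ("late", 40)]  # ~1 mo, ~2 mo, rest of year

def phase_for_week(week):
    """Return the lifecycle phase a given 1-indexed week falls into."""
    elapsed = 0
    for name, length in PHASES:
        elapsed += length
        if week <= elapsed:
            return name
    return PHASES[-1][0]  # past the forecast horizon: stay in the last phase

# Hard switch at the dividing line (the alternative is blending gradually):
attrition_1wk = {"early": 0.45, "middle": 0.60, "late": 0.80}

for w in (3, 10, 30):
    print(f"week {w:2d}: {phase_for_week(w)} phase, "
          f"first-week attrition {attrition_1wk[phase_for_week(w)]:.0%}")
```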

Importance of Value Ranges for the Key Metrics

When modeling, I find it invaluable to define expected ranges for each key variable rather than a single set value, establishing baselines for “Poor”, “Middling” and “Good” performance in each metric. This way, you’re being realistic about how far above average you’ll need to be in how many metrics in order to be successful, and, if you’re below average in some metrics, how exceptional you’ll need to be in the rest.

Customer Acquisition (paid):

* Kontagent’s presentation quotes a $1 to $3 cost of acquisition. Based on other discussions, this seems high and likely represents the cost for mature games.

* In this article from InsideSocial, Hussein Fazal, CEO of AdParlor, puts forward that your cost of acquisition increases over time: your initial spending will be more efficient because you’re getting high click-throughs from the low-hanging fruit.

* In the GDC panel discussion (at the 23:50 minute mark) the developers back up AdParlor’s theories: they discuss how much easier it is at the beginning to attract early adopters to the game, and how critical it is to be successful with the retention of this population.

* Young vs Mature: consider assigning the Good value for the New phase and shift towards Poor in the later phases.

* Variable Ranges:

–Poor: $3

–Middling: $1

–Good: $0.50

Customer Acquisition (free):

* If you have more than one game, cross-promoting to your own players is a no-brainer. Because players often juggle several games at once, you shouldn’t be afraid of cannibalizing one game for another. And if a player is losing steam on one of your games, reinvesting them into a new game of yours is better than losing them.

* As a new developer, with only one game, your only real choice is partnering with link exchange services or with other developers/publishers. Applifier is a good example (http://www.applifier.com) of an easily accessible (and free) cross-promotion.

* In the GDC panel (at the 26:30 minute mark), the developers discuss cross-promotion. Interestingly (at 29:35), they discuss how the best target is often the game(s) that represent your direct competition, as they’re the most qualified players and both developers can come out ahead on such a ‘barter’ arrangement.

* In this SocialTimes article about Applifier, the author quotes an example of 1.5mm clicks for a “best performing app”. I suggest backing into a weekly rate of new customers by stretching this number over some length of time (3 months covers the early phases of the lifecycle), chopping it into weeks, and applying some low/middle/good conversion rates.

* Young vs Mature: limiting the effect of this free promotion to the early phases of growth is, I think, a more realistic expectation of how long you can keep pulling water from this particular well.

* Variable Ranges:

–Poor: 0.5mm > 10% CTR > 50% install, over 3 months = 2.5k per week

–Middling: 1.0mm > 15% CTR > 60% install, over 3 months = 9k per week

–Good: 1.5mm > 20% CTR > 70% install, over 3 months = 21k per week

–You can always ‘mix and match’ these values for more gradation between values, but this gives you a range of worst-case to best-case.
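The arithmetic behind those ranges is volume × CTR × install rate, spread across the promotion window. The quoted per-week figures work out exactly if the “3 months” is treated as roughly 10 effective weeks, which is the assumption this sketch makes; the function and parameter names are mine, not from the article.

```python
# Backing into a weekly rate of free installs from a total promo-volume
# figure. The ranges above reproduce exactly when "3 months" is treated
# as roughly 10 effective weeks (an assumption, not from the source).

def weekly_installs(promo_volume, ctr, install_rate, weeks=10):
    """Promo volume -> clicks (via CTR) -> installs, spread across `weeks`."""
    return promo_volume * ctr * install_rate / weeks

print(weekly_installs(0.5e6, 0.10, 0.50))  # Poor:     2500.0 per week
print(weekly_installs(1.0e6, 0.15, 0.60))  # Middling: 9000.0 per week
print(weekly_installs(1.5e6, 0.20, 0.70))  # Good:    21000.0 per week
```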

Attrition after 1 week:

* Kontagent’s presentation puts this at 85% to 90%. Ouch.

* As mentioned above, in the GDC panel (at the 35:45 minute mark), the developers point out that this kind of attrition is for mature games, and that at initial launch, if you don’t see Day 1 Retention of better than 50%, you’ve likely missed the mark. Of course that’s just Day 1 (not Week 1) retention, but the point is still valid. During the Early phase of the game lifecycle, you can expect (and need) retention to be much higher.

* Young vs Mature: consider either applying the Good attrition to the Early phase and the Poor to the Mature phase, or creating 3 ranges of Poor, Middling and Good for each of the phases. I prefer the latter, but the model just gets more fiddly the more variables there are to tweak.

* Note: this variable is one of the most critical factors driving overall success of the model. If your game is a leaky bucket that loses players faster than you can pour in new ones, it’s time to do some major redesigning or cut your losses. Spending more on customer acquisition than the Lifetime Value of your players is a losing game.
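The leaky-bucket warning can be made concrete with a quick lifetime-value check. This is a simplified sketch with placeholder inputs: it values a player already in the Retained pool, so in practice you’d also discount by the first-week retention funnel before comparing against cost per install.

```python
# Quick sanity check on the leaky-bucket warning: if cost of acquisition
# exceeds a player's expected lifetime value, growth is a losing game.
# All inputs are illustrative placeholders.

def lifetime_value(weekly_attrition, dau_mau, spend_per_dau):
    """Expected revenue from one retained player.

    With a constant weekly attrition rate, expected lifetime is
    1 / attrition weeks (a geometric decay), i.e. 10% weekly attrition
    means an average lifetime of 10 weeks."""
    expected_weeks = 1 / weekly_attrition
    return expected_weeks * 7 * dau_mau * spend_per_dau

ltv = lifetime_value(weekly_attrition=0.10, dau_mau=0.20, spend_per_dau=0.05)
cac = 1.00  # the Middling paid-acquisition cost from the ranges above

print(f"LTV ${ltv:.2f} vs CAC ${cac:.2f} -> "
      f"{'sustainable' if ltv > cac else 'losing game'}")
```

With these particular placeholder numbers the check fails, which is exactly the scenario the note above warns about: either retention, engagement or spending has to improve, or acquisition has to get cheaper.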

* Young Phase (1 month) Variable Ranges:

–Poor: 60% attrition

–Middling: 45% attrition

–Good: 30% attrition

* Middle Phase (months 2 and 3) Variable Ranges:

–Poor: 75% attrition

–Middling: 60% attrition

–Good: 45% attrition

* Mature Phase Variable Ranges:

–Poor: 90% attrition

–Middling: 80% attrition

–Good: 70% attrition

Ongoing Attrition of Retained Players:

* In my model, rather than track every population cohort individually, I take all of the New Players who survive attrition into the One Week pool, move them into a single pool of Retained Players, and then apply a static ongoing attrition rate to that population.

* In this blog post from InsideSocialGames, Tim Trefren, from the analytics company Mixpanel, shows that after the initial population drop-off, most games have similar ongoing attrition, ranging from 20% down to just 5%. This also serves to highlight how critical initial retention rates are, if later weeks settle into the same range regardless.
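That 5–20% range translates directly into average player lifetimes, since a constant weekly attrition rate decays the Retained pool geometrically. A small sketch, with the range values plugged in:

```python
# With a constant weekly attrition rate r, the retained pool decays
# geometrically: the average retained lifetime is 1/r weeks.

def remaining_after(weeks, weekly_attrition):
    """Fraction of the retained pool still playing after `weeks` weeks."""
    return (1 - weekly_attrition) ** weeks

for r in (0.05, 0.10, 0.20):
    print(f"attrition {r:.0%}: avg lifetime {1 / r:4.0f} weeks, "
          f"{remaining_after(26, r):.1%} of the pool left after 6 months")
```

The spread is dramatic: 5% weekly attrition gives an average lifetime of 20 weeks, while 20% gives only 5, which is why this single variable moves long-run revenue so much.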