Getting the most out of your data often requires you to look at both ends of the analytics spectrum. Calq supports aggregating data over billions of users, but also allows drilling down into individual user timelines.

To make the latter easier we have redesigned our user profile page. New features now include:

A quick activity summary to judge how recently a user has been active.

View all custom properties you have set for a user in a single place.

Full historical timeline including the ability to examine the properties of each event that was recorded.

The updated page has been rolled out to all Calq accounts today.

We are also now allowing full access to user activity streams via our API. Previously this was an internal feature only. You can now query customer streams using the following AQL:

SELECT * FROM $stream
WHERE $actor = 'actor_id' [ AND optional_filters ]
PERIOD from TO to LIMIT 100
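For example, to pull up to 100 events from a single user's stream for September 2014, the placeholders can be filled in like this (the actor id shown is hypothetical):

```
SELECT * FROM $stream
WHERE $actor = 'user_12345'
PERIOD '2014-09-01 00:00:00' TO '2014-09-30 23:59:59' LIMIT 100
```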

Our engineers have been spending a lot of time recently making our query engine perform faster. We have made some major optimisations to the way Calq handles our more popular queries. We have also increased our hardware capacity. The result? Almost all queries should now complete in under 1 second, with a few more complicated queries completing in under 10 seconds.

A major design goal at Calq has always been to provide "real time" analytics. Not only do we make your data available via our reports as soon as we have received it, but we also want those queries to be as fast as possible. We want you to be able to explore your data and not have to wait to try things out. This latest update provides lightning fast queries to achieve exactly that.

The hardest part of integrating with an action based analytics service, such as Calq, is actually defining the actions that you want to measure.

Cataloguing actions

Here at Calq we do a lot of data modelling for clients and have developed a process that works well for us. We start by focusing on actions and user journeys rather than the business questions the platform is to answer. We normally find that our customers think of a few questions up front, and then add more and more as they become familiar with Calq's functionality. Calq can only report on data which has already been measured, and this is why it’s normally better to start from an action perspective rather than a questions perspective. If everything that a user can do is being measured from day 1, then Calq can normally make the required reports to answer business questions as they arise.

For a website, we would look at each page in turn and make a list of the possible actions that a user can perform on that page. We do this independently of user journeys (though it’s good to keep them in mind just to make sure you cover all pages).

Keep in mind that in Calq any of the actions you record can also take additional custom data. This custom data is often where you get the most value so be sure to include anything that is relevant. For example, for a “Message Sent” action, you might want to also include, the number of recipients, the length of the message, and how long the author spent composing it.
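As a sketch, a "Message Sent" action with those custom properties might be recorded with a payload like the following (the property names here are hypothetical; use whatever names fit your own model):

```
{
    "action_name": "Message Sent",
    "properties": {
        "Recipients": 3,
        "MessageLength": 280,
        "ComposeSeconds": 42
    }
}
```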

You can look at user journeys once you have compiled a complete list of actions. The purpose here is just to map out the possible journeys within your app and ensure you have the needed actions for each part of the journey. We normally find we have overlooked a couple of actions the first time through. Mapping out the user flows explicitly also gives you a list of user flows to convert into funnels later on.

Business questions

With a thorough action list in hand, the final stage is to gather some initial business intelligence questions. The business questions you will want to report on typically change depending on the department that needs the data (though in a smaller company those lines blur as everyone overlaps).

Management normally want top-line overviews and have a fixed set of KPIs they want insight into (new users, total users, number of sales etc). Marketing teams often want to know how their acquisition channels are performing – a channel may convert well, but you need to check that those users are actually valuable within your app by looking at their actions. Lastly, product teams (often the same as the development team in a small business) want to know how the product is being used so they can iterate and improve on it.

Some simple examples to get you going:

Management

How many new visitors do we get per day/week/month?

Average sale value per day/week/month

Number of sales in last day/week/month

Sales & Marketing

What is the retention (how often a user comes back) for a user acquired through channel X?

Which campaigns are most likely to convert into sales?

What are our users doing before they pay for the first time?

Product

What % of our users use feature X?

We just updated feature Y. Does that make it better?

The goal is to get a list (it doesn’t have to be big) of initial business questions that each team wants answered, and check that against your list of actions (i.e. do we have appropriate actions and data to answer this question?). It’s quite typical to find you have all the actions mapped out but have not included sufficient custom properties to build a report.

We are always happy to take a look at any model you have made to see if we notice anything. Just reach out to us.


Calq allows you to run queries against your data using an SQL-like language called the Action Query Language (AQL). This has been designed to feel like SQL so developers can learn it easily.

AQL is used to write advanced queries within Calq where the query builder UI is not expressive enough for your needs, or simply when you are more comfortable writing AQL code than using the UI.

AQL is also used "under the hood" by the Calq reporting interface to query your data - almost everything shown in Calq is shown using AQL in combination with the Query API. This means you can write AQL queries to consume data within your own dashboards and admin systems.

Writing AQL within Calq

Specific reports within Calq have a UI option to open the "Advanced Editor". This allows you to write your own AQL. If you have a query (or part of a query) already built then this will be used as a base.

AQL allows you to treat actions as if they were SQL tables, and properties as if they were columns of those tables. The general syntax for an AQL SELECT query for time series data is:
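Pieced together from the examples that follow, the general shape is (square brackets denote optional clauses):

```
SELECT expression FROM action
[ WHERE condition ]
PERIOD from TO to
[ GROUP BY interval ]
[ LIMIT n ]
```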

The query is used to fetch data from a single action. The name of the action is specified by the action placeholder. If the name contains spaces or keywords the action will need to be surrounded with double quotes (e.g. "My Action"). This is the same way you would query a table in most databases.

The optional WHERE condition allows the result set to be filtered. The condition is similar to SQL and can include AND and OR operations. Property names in a WHERE clause should again be double quoted if they contain spaces or are a keyword (e.g. "My Property").

Note that when using the Advanced Editor you do not specify the PERIOD and GROUP BY line as that is handled automatically by the data picker. You only specify these when using the query API.

Example: Query the number of "Registration" actions, per day, for September 2014.

SELECT COUNT(*) FROM "Registration" PERIOD '2014-09-01 00:00:00' TO '2014-09-30 23:59:59' GROUP BY DAY

Example: Query the number of Registration actions, per hour, for September 2014, where the $utm_source parameter started with the string "Foo_" or "Bar_"

SELECT COUNT(*) FROM "Registration"
WHERE "$utm_source" LIKE 'Foo_%' OR "$utm_source" LIKE 'Bar_%'
PERIOD '2014-09-01 00:00:00' TO '2014-09-30 23:59:59' GROUP BY HOUR

It is possible to write other types of query using AQL including conversion funnels (SELECT FOR FUNNEL ...) and retention (SELECT FOR RETENTION ...). For further information on all the supported query types see the [AQL documentation](https://calq.io/docs/query/aql).

Optimising in app purchases (IAPs)

Typically the goal of a mobile game is either brand awareness or to drive revenue. Ancient Blocks is a commercial offering and revenue is the primary goal.

The game has an in game currency called "Gems" which can be spent on boosting the effects of in game power ups. Using a power up during a level will also cost a gem each time. Players can slowly accrue gems by playing. Alternatively a player can also buy additional gems in bulk using real world payments.

The goal is to increase the average life time value (LTV) of a player. This is done by converting more players into paying customers, making those customers pay more often, and increasing the value of each purchase made.

Some of the metrics we want to measure are:

Which user journey to the IAP screen gives the best conversions?

The number of players that look at the IAP options but do not go on to make a purchase.

The number of players that try to make a purchase but fail.

Which items are the most popular?

The cost brackets of the most popular items.

The percentage of customers that go on to make a repeat purchase.

The customer sources (e.g. ad campaigns) that generate the most valuable customers.

Implementation

Most of the required metrics can be achieved with just 4 actions.

Monetization.IAP - When a player actually buys something with real world cash using in-app purchasing (i.e. buying new gems, not spending gems).

Monetization.FailedIAP - A player tried to make a purchase but the transaction did not complete. Some extra information is normally given back by the store provider (whether that be iTunes, Google Play etc) to indicate the reason.

Monetization.Shop - The player opened the shop screen. It's important to know how players reached the shop screen. If a particular action (such as an in-game prompt) generates the most sales, then you will want to trigger that prompt more often (and probably refine its presentation).

Monetization.Spend - The player spent gems in the shop to buy something. This is needed to map between real world currency and popular items within the game (as they are priced in gems).

Action

Properties

Monetization.IAP

ProductId - The number / id of the product or bundle being purchased.

MaxLevel - The highest level the user has reached in the game when making this purchase.

ScreenReferrer - Identifies the screen / prompt / point of entry that eventually triggered this purchase.

$sale_value (added by trackSale(...)) - The value of this sale in real world currency.

$sale_currency (added by trackSale(...)) - The 3 letter code of the real world currency being used (e.g. USD).

Monetization.FailedIAP

ProductId - The number / id of the product or bundle that failed to be purchased.

Response - A response code from the payment provider (if given).

Message - A message from the payment provider (if given).

Monetization.Shop

Screen - Which shop screen this was (such as the main shop, the IAP shop etc).

ScreenReferrer - Identifies the screen / prompt / point of entry that resulted in the shop being displayed.

Monetization.Spend

ProductId - The number / id of the item being spent on.

Type - The type of spend this is (Item Upgrade, Cooldown, Lives, etc).

Gems - The number of gems (in game currency) being spent.

MaxLevel - The highest level the user has reached in the game when making this purchase.

ScreenReferrer - Identifies the screen / prompt / point of entry that eventually triggered this purchase.
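Putting the Monetization.IAP properties together, a recorded purchase might look something like this (the values shown are hypothetical; $sale_value and $sale_currency are added automatically when using trackSale(...)):

```
{
    "action_name": "Monetization.IAP",
    "properties": {
        "ProductId": 12,
        "MaxLevel": 7,
        "ScreenReferrer": "LevelFailedPrompt",
        "$sale_value": 4.99,
        "$sale_currency": "USD"
    }
}
```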

In addition to these properties, Ancient Blocks tracks a range of global properties (set with setGlobalProperty(...)) detailing how each player was acquired (which campaign, which source etc).

Analysis

A great deal of insight can be made using the actions defined above.

IAP conversions

One of the most important metrics is the conversion rate for the in game store, i.e. how many people viewing the store go and make a purchase with real world currency.

In a typical freemium game of this style, around 2% of players will actually make a purchase. However, the store to purchase conversion rate is typically much lower as the store is often triggered many times in a game session. If a game is particularly aggressive at funnelling players towards the store screen then the conversion rate could be even lower - and yet still be a good conversion rate for that game.

To measure this in Ancient Blocks a simple funnel is used with the following actions:

Monetization.Shop (with the Screen property set to "Main") - the player opened the main shop screen.

Monetization.Shop (with the Screen property set to "IAP") - the player opened the IAP shop (the shop that sells Gems for real world money).

As you can see, the conversion rate in Ancient Blocks is 1.36%. This is lower than expected and is a good indicator that the whole process needs refining. As the authors of Ancient Blocks modify the store page and the flow, they can revisit this conversion funnel to see if the changes were positive.

IAP failures

It's useful to keep an eye on the failure rates of attempted IAPs. This can easily be measured using the Monetization.FailedIAP action from earlier.

It's good to look at why payments are failing so you can try to do something about it - though a lot of the time it's out of the developer's control. Sharp changes in IAP failure rates can also indicate problems with payment gateways, API changes, or even attempts at fraud. In each of these cases you would want to act proactively.

The reasons given for failure vary between payment providers (whether that's a mobile provider such as Google Play or the App Store, or an online payment provider such as Paddle). Depending on your provider you will get more or less granular data to act upon.

Comparing IAPs across customer acquisition sources

Most businesses measure the conversion effectiveness of acquisition campaigns (e.g. the number of impressions vs the number of people that downloaded the game). Using Calq this can be taken further to show the acquisition sources that actually went on to make the most purchases (or spend the most money etc).

Using the Monetization.IAP or Monetization.Spend actions as appropriate, Calq can chart the data based on the referral data set with setGlobalProperty(...). Remember that you may have more players from one source than another, which could bias the results; the query should be normalised by the total number of players per source.

The results indicate which customer sources are spending more, and this data should be factored in to any acquisition budgets.

Final summary

This 3 part example study is meant as a starting point to build upon. Each game is going to be slightly different and it will make sense to measure different events. The live version of Ancient Blocks actually measures many more data points than this.

Key take away points:

The ultimate goal is to improve core KPIs (retention, engagement, and LTV), but to do this you often need to drill down and measure many smaller game components.

Metrics are often linked. Improving one metric will normally affect another and vice versa.

Propose, test, measure, and repeat. Always add refinements or new features to your product, and measure their impact each time. If a change works, refine it further. If it doesn't, rethink or remove it. Don't be afraid to kill features that are not adding value!

Measure everything! You will likely want to answer even more questions of your product later but you will need the data there to answer these questions.


Just a quick entry to say that the team over at segment.io have completed their Calq integration! Calq is now available as an option within Segment if you have the Startup plan or higher.

Segment is a great tool that allows you to easily integrate with a whole range of platforms without writing code for each one. Just integrate Segment once, and turn on and off services as you need them. To borrow a quote from Segment:

"Send your data to over 100 tools with the flick of a switch."

Head on over to their integrations page for the full list of compatible services - including Calq!


Earlier last week, a vulnerability was discovered in the widely distributed program bash. It was made public yesterday as CVE-2014-6271, but is more commonly known by its colloquial name of "Shellshock".

The bash program is a common shell for evaluating and executing commands from user input and other programs. The vulnerability potentially affects all systems using bash. This includes most Unix derivatives such as Linux and OSX. It is also important to consider that many embedded systems, including internet infrastructure such as routers, could additionally be affected.

Calq's platform was largely immune to the vulnerability. Only a small number of isolated systems are running the affected bash program, and these have additional layers of protection in place that made them even less vulnerable.

We have no evidence to suggest a breach has taken place. However, due to the nature of the attack, a lack of evidence is not conclusive. This is true of nearly all affected businesses, not just Calq. Given the additional layers of security we have in place, it is very unlikely that Calq was affected by this vulnerability, though as a precaution we have taken proactive steps to maintain the security of the data we store.

This is an on-going process as additional information about the vulnerability is discovered. Our engineers will continue to be proactive in protecting against this vulnerability. If you require further information please contact our support team.

Game balance metrics

It's important that a game is correctly balanced. If it's too easy then players will get bored. If it's too hard then players will leave in frustration instead.

The initial metrics we want to record in this example are:

The percentage of players that finish the first level.

The percentage of players that finish the first 5 levels.

The percentage of players that quit without finishing a level.

The number of times a player replays a level before passing the level.

The average time spent playing each level.

The number of "power ups" that a player uses to pass each level.

The number of blocks a player swipes to pass each level.

The number of launches (block explosions) that a player triggers to pass each level.

Implementation

As Ancient Blocks is a reasonably simple game we can get a lot of data from just 3 actions: Gameplay.Start for when a player starts a new level, Gameplay.Finish for when a player finishes playing a level (whether or not they managed to pass it), and Gameplay.PowerUp for when a player uses one of the special "power up" abilities whilst playing a level.

Action

Properties

Gameplay.Start

Level - The number of the level being played (e.g. level 7).

Difficulty - The current difficulty setting of the level being played.

Gameplay.Finish

Level - The number of the level being played (e.g. level 7).

Difficulty - The difficulty setting of the level that was just finished.

Duration - The duration (in seconds) the player took to finish this level.

Success - Whether or not the user passed the level (true) or if they were defeated (false).

PowerUps - The number of times a special power up ability was used.

Blocks - The number of blocks the player moved during this level.

Launches - The number of times a player triggered a launch during this level.

Gameplay.PowerUp

Id - The numeric id of the power up that was used.

Level - The number of the level being played (e.g. level 7)

Difficulty - The difficulty setting of the level being played.

After - The amount of time (in seconds) into the level the user was when they used a power up.
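Pulling the Gameplay.Finish properties together, a completed level might be recorded with a payload like this (all values hypothetical):

```
{
    "action_name": "Gameplay.Finish",
    "properties": {
        "Level": 3,
        "Difficulty": "Normal",
        "Duration": 95,
        "Success": true,
        "PowerUps": 1,
        "Blocks": 34,
        "Launches": 6
    }
}
```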

Analysis

Using just the 3 simple actions defined above it is possible to do a range of in depth analysis on player behaviour and game balance.

Initial progression

One of the first things to analyse within Calq is successful progression through the early levels. This is a good indicator of whether or not the first few levels are well balanced, and whether players really understood the tutorial that showed them how to play.

This is done by creating a conversion funnel in Calq that describes the user journey through the first 10 levels (or more if we want). The funnel will need 10 steps, one for each of the first 10 levels. The action to be analysed is Gameplay.Finish.

Each step will need a filter: first on the Level property, to restrict the step to the correct level, and second on the Success property, to only include plays that passed the level.

There will normally be natural drop off, as not all players will want to progress further into the game. However, if certain levels experience a significantly larger drop off than expected then those levels are good candidates to be rebalanced. It could be that the level is too hard, that it's less enjoyable, or even that the player doesn't understand what they need to do.

Level completion rates

Player progression doesn't always provide the full picture. It's also good to look at how many times each level is being played compared to how many times it is actually passed.

Taking Ancient Blocks' 3rd level as an example: we can query the number of times the level has been played and break it down into successes and failures.

To do this in Calq we can use the Gameplay.Finish again, and apply a filter to only show the 3rd level. By grouping the results based on the Success property and showing a pie chart we can quickly see the failure rates for this level.
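Using only the AQL constructs covered earlier, the same success and failure counts could be fetched with a pair of queries along these lines (a sketch; the boolean literal syntax is an assumption, and a PERIOD clause would also be needed when going via the Query API):

```
SELECT COUNT(*) FROM "Gameplay.Finish" WHERE "Level" = 3 AND "Success" = true

SELECT COUNT(*) FROM "Gameplay.Finish" WHERE "Level" = 3 AND "Success" = false
```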

The designers of Ancient Blocks were targeting a success rate of 75% on the 3rd level. Our results show it's slightly too hard and needs a little tweaking.

Aborted sessions

Another metric which is incredibly useful for measuring early play is the number of people that start a level but don't actually finish it - i.e. quit. This is especially useful to measure for the first level after the tutorial has finished. If players are just quitting then they either don't like the game, or they are getting frustrated after the tutorial.

We can create a short conversion funnel within Calq to measure this, using the Gameplay.Start action, the Tutorial Step action from last week (so we can account for people that dropped off before the tutorial was even finished), and the Gameplay.Finish action.

The results show that 64.9% of players who finished the tutorial (the result between the 2nd and 3rd steps) went on to finish the level. This means 35.1% of players quit the game in that gap. This is a metric for the Ancient Blocks designers to iterate on and improve.

Over the next few weeks we are going to be publishing some example analytics cases and show where Calq has been used to provide the necessary insight.

This week's example discusses some key metrics for a mobile game. Our example game, "Ancient Blocks", is actually available on the App Store if you want to see the game in full. This example is meant to be a starting point, it is not meant to be an exhaustive list of everything a mobile game should measure.

User LTVs - what is the lifetime value of a player (typically measured over various cohorts, gender, location, acquiring ad campaign etc).

DARPU - daily average revenue per user, i.e. the amount of revenue generated per active player per day.

ARPPU - average revenue per paying user, a related measurement to LTV but it only counts the subset of users that are actually paying.

There will also be a selection of game specific KPIs. These will give insight on isolated parts of the game so that they can be improved. The ultimate goal is improving the high-level KPIs by improving as many game areas as possible.

Retention

As mentioned in our previous article on retention, player retention is a critical indicator. Arguably it's even more important to measure retention than revenue. If you have great retention but poor user life-time values (LTV) then you can normally refine and improve the latter. The opposite is not true. It's much harder to monetise an application with low retention rates.

When the game is iterated upon (either by adding/removing features, or adjusting existing ones) the retention can be checked to see if the changes had a positive impact.

Active user base

The DAU/WAU/MAU measurements are industry standard measurements showing the size of your active user base. From here it's easy to spot if your audience is growing, shrinking, or flat.

Active user measurements need to be analysed with the additional context of the retention report. Your user base will appear flat if you are gaining lots of new users but losing existing users (churn) at the same rate. If that is the case, time may be better spent retaining existing users rather than investing in new ones.

Game specific KPIs

In addition to the common KPIs each game will have additional metrics which are specific to the product in question. This could include data on player progression through the game (such as levels), game mechanics and balance metrics, viral and sharing loops etc. Most user journeys (paths of interaction that a user can take in your application, such as a menu to start a new game) will also be measured so they can be iterated on and optimised.

For Ancient Blocks game specific metrics include:

Player progression:

Which levels are being completed.

Whether players are replaying on a harder difficulty.

Level difficulty:

How many attempts it takes to finish a level.

How long is spent within a level.

How many power ups a player uses before completing a level.

In game currency:

When does a user spend in game currency?

What do they spend it on?

What does a player normally do before they make a purchase?

In-game tutorial

A typical component of most successful mobile games is an interactive tutorial that teaches new players how to play. This is often the first impression a user gets of your game and as a result it needs to be extremely well refined. With a bad tutorial your D1 retention will be poor.

Ancient Blocks has a simple 10 step tutorial that shows the user how to play (by dragging blocks vertically until they are aligned).

Goals

The data collected about the tutorial needs to show any areas which could be improved. Typically these are areas where users are getting stuck, or taking too long.

Identify any sticking points within the tutorial (points where users get stuck).

Iteratively improve these tutorial steps to increase the conversion rate (the percentage of players that get to the end successfully).

Metrics

In order to improve the tutorial a set of tutorial specific metrics should be defined. For Ancient Blocks the key metrics we need are:

The percentages of players that make it through each tutorial step.

The percentage of players that actually finish the tutorial.

The amount of time spent on each step.

The percentage of players that go on to play the level after the tutorial.

Implementation

Tracking tutorial steps is straightforward with Calq. Ancient Blocks uses a single action called Tutorial Step. This action includes a custom property called Step to indicate which tutorial step the user is on (0 indicates the first step). We also want to track how long a user spends on each step (in seconds), so we include a property called Duration.

Action

Properties

Tutorial Step

Step - The current tutorial step (0 for start, 1, 2, 3 ... etc).

Duration - The duration (in seconds) the user took to complete the step.

```
{
    "action_name": "Tutorial Step",
    "properties": {
        "Step": 2
    }
}
```

Analysis

Analysing the tutorial data within Calq is very easy. Most of the metrics can be found by creating a simple conversion funnel, with one funnel step for each tutorial stage.

![](/content/images/2014/Aug/GameExample-Tutorial-Funnel.png)

The completed funnel query shows the conversion rate of the entire tutorial on a step by step basis. From here it is very easy to see which steps "lose" the most users.

![](/content/images/2014/Aug/GameExample-Tutorial-Funnel2.png)

As you can see from the results: step 4 has a conversion rate of around 97% compared to 99% for the other steps. This step would be a good candidate to improve. Even though it's only a 1 percentage point difference, that still means around $1k in lost revenue just on that step. Per month! For a popular game the difference would be much larger.

[Part 2 continues next week](http://blog.calq.io/example-case-analytics-for-a-mobile-game-ancient-blocks-part-2/), looking at metrics on game balance and player progression.


Retention is often the single most important measurement of any web or mobile application. It's arguably even more important than measuring revenue. If you have great retention but poor user life-time values (LTV) then you can normally refine and improve the latter. The opposite is not true. It's very hard to "fix" an application with low retention rates from the outset.

But what exactly is retention? If a user likes your application they will come back and use it again. Congratulations - you retained a user! The retention rate is the percentage of users that come back 1 day later (D1), 7 days later (D7), 14 days later (D14), and so on.

Measuring retention

As with all reports in Calq, the Retention Grid can work with your custom action data. Cohorts can be set, filters can be applied, and follow-up actions can be processed. Want to know the retention rate for users that a) installed your app from a specific ad campaign, then b) went on to view a product 1, and 7 days later? No problem.

This makes it easy to segment your audience and measure retention when A/B testing your application, as well as making it simple to measure retention across custom cohorts.

Active users

Another great tool used for monitoring your application's retention is Calq's Active Users report. This shows the number of users that have used your application in the last 1, 7, and 30 days.

As before, this report can be filtered to specific actions and custom properties.

Conclusion

Know your retention rates and keep a close eye on them. Comparing your values to industry averages will let you know where you stand, but ultimately your goal should be to always improve your rates. Do this through constant iteration on your product and monitoring of the results (preferably with Calq - but we're biased!).

]]>

Calq's conversion funnels are often used to report on critical user journeys within an application. They allow you to define a set of actions that a user takes to reach a goal. Simple examples include site registration, uploading a new product, posting a comment etc.

Take a real world example. One of our customers, Paddle, uses a funnel to measure how many of their new developer registrations actually go on to upload a product. Uploading a product is a multi-step process and not everyone makes it to the end.

Calq's funnel report quickly shows the % conversion, a useful metric for optimisation in its own right. But what do you do with the users that started the funnel but failed to finish? Reach out to them!

The small "Export Users..." button on the funnel results page lets you easily export all customers that entered the funnel, along with how far they got.

Calq can also include any of the custom data you have provided about your customers. This normally includes their name, email address, company, and perhaps a phone number, but can be any data you have associated with your users.