How Great UX and A/B Testing Can Drive User Acquisition - and App Store Optimization

by Tom Farrell, 13 May 2014

If there’s one thing that any app business likes, it’s a five-star review. I probably don’t need to explain why that’s the case, but in the spirit of completeness, here’s how I see it:

High average ratings mean more downloads. When browsing a list of search results it’s human nature to check the ratings, and even users who land directly on your app’s download page will take your rating into account before deciding to commit.

Better ratings mean better rankings. The specifics vary from store to store - but the principle remains the same: if your app is rated well by users, Apple or its equivalent is going to give you a boost up the charts. And we all know what that means.

So how do you get good ratings? Well, firstly and most obviously: by making a great app! That sounds desperately obvious, but it’s worth pointing out that unless you’re delivering something of real value, ratings are going to be hard to come by. And of course great user experience is part of that - but it’s not the focus of this piece.

How To Ask The Question

It might be more profitable to acknowledge that some users will love your app, and some will love it a little less. The question then becomes “how do I ensure the former give me a rating, and the latter do not?” And there are a number of ways to make that happen.

This piece was partly inspired by my experience with Hipmunk, a really rather slick travel app that - as it happens - I would be happy to give 5 stars to. But what really impressed me was their take on encouraging me to rate their app, and specifically the two step process shown below:

Hipmunk uses a smart ‘two-phase’ system to maximize its ratings

By initially asking the user either to give feedback or to confirm they love the app, Hipmunk ensures that users who love the app self-select. Only at that point is the second screen shown, and only to those users who have already confirmed that they ‘love’ Hipmunk. It’s an almost foolproof way to ensure that your biggest fans are telling the world about you - and that your critics are delivering feedback directly to you (which is itself desirable).
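That self-selection flow can be captured in a small decision function. This is a minimal sketch in Python - the function name, the `PromptOutcome` values, and the two boolean inputs are hypothetical stand-ins for the two screens, not Hipmunk’s actual implementation:

```python
from enum import Enum

class PromptOutcome(Enum):
    RATED = "rated"        # user reached the store rating screen
    FEEDBACK = "feedback"  # user was routed to a private feedback form
    DISMISSED = "dismissed"

def two_phase_prompt(loves_app: bool, will_rate: bool) -> PromptOutcome:
    """Screen 1 asks 'Do you love the app?'; only users who say yes
    ever see screen 2, the actual request to rate."""
    if not loves_app:
        # Critics never reach the store; their comments come to you instead.
        return PromptOutcome.FEEDBACK
    if will_rate:
        return PromptOutcome.RATED
    return PromptOutcome.DISMISSED
```

The key property is that no path leads an unhappy user to the store's rating page.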

It’s also worth making the point that the invitation to comment on my experience was only offered during the fifth or sixth session. Too soon, and you’re unlikely to have generated the ‘superfans’ you need for good ratings, or the experiences that lead to useful feedback. Too late, and the initial experience is no longer fresh in the mind, and the audience size is reduced.
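That gating logic - prompt once, and only inside a session window - might look like the following sketch. The `min_session` and `max_session` values here are assumptions (exactly the kind of numbers you would A/B test), not figures from Hipmunk:

```python
def should_show_prompt(session_count: int, already_prompted: bool,
                       min_session: int = 5, max_session: int = 8) -> bool:
    """Show the rating flow at most once, and only while the user is
    engaged enough to be a fan but the experience is still fresh."""
    if already_prompted:
        return False
    return min_session <= session_count <= max_session
```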

The timing felt perfect to me, and it has no doubt been perfected by A/B testing exactly when the first message should be shown, and what that message should say. Hipmunk has clearly understood that you don’t simply ask users to rate the app at a random moment, with an open-ended question. Your rating is far, far too important for that kind of laissez-faire approach. You carefully isolate those users who love you, using an A/B-tested in-app message, and then pop the question.
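A common way to run that kind of test is to assign each user a stable variant - of the message copy, or of the session at which it appears - so they see the same treatment every session. This is a generic sketch of deterministic hash bucketing, not anything Hipmunk has described publicly; `ab_variant`, the experiment name, and the variant labels are all hypothetical:

```python
import hashlib

def ab_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user into one variant, so the same
    user always sees the same prompt wording and timing."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because assignment depends only on the user ID and experiment name, no per-user state needs to be stored to keep the test consistent.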

By way of comparison, take the following from Gogobot (hey, it’s my job to wander around in travel apps). I don't doubt that this message is also A/B tested to ensure it is as effective as possible, and it’s certainly a little more sophisticated than the boilerplate ‘please rate our app’ text:

The wording is misleading, of course: you cannot mandate that a user give 5 stars, or any other number. Gogobot has almost certainly found, however, that users unwilling to give 5 stars are less likely to hit the button that asks them to do just that. It’s probably effective enough, although it should also be noted that you can never know what rating was actually given, so tests can only be measured on a combination of click-through and the aggregate effect on average rating.
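Since the star value itself is invisible, the measurable signal per variant reduces to click-through plus the later movement in the store’s average rating. A minimal sketch of the click-through half, with made-up numbers:

```python
def click_through_rate(shown: int, clicked: int) -> float:
    """Fraction of users who tapped through to the store after
    seeing the prompt. The rating they then gave is unknowable."""
    return clicked / shown if shown else 0.0

# Hypothetical experiment results for two prompt wordings:
ctr_a = click_through_rate(shown=1000, clicked=120)  # variant A: 12%
ctr_b = click_through_rate(shown=1000, clicked=150)  # variant B: 15%
```

The winning variant is the one whose higher click-through is not accompanied by a drop in average rating over the same period.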

However, the key first ‘filtering’ step is missing. Perhaps it’s no coincidence that Hipmunk is rated 4.5 stars - and Gogobot just 4.