How to A/B test churn models in win-back campaigns

Churn models identify customers who are at-risk or lost so that marketers can win them back or reward loyal customers.

In the context of win-back campaigns, churn models are counterintuitive to test. You cannot judge how good a churn model is by how much money the win-back campaign brings in. If two groups of customers are given the same offer, the group containing more active customers will produce more sales per email sent.

A better churn model should make fewer sales per email than a worse one, assuming the offer and everything else is equal. This is because a churn model is designed to identify customers who are inactive or less likely to buy.

A good churn model identifies how at-risk each customer is more accurately, which avoids losing margin to unnecessary discounts and protects brand equity. You can then send different incentives to customers based on their level of churn risk to maximize sales per email.

How can we A/B test churn models?

Let's say we want to compare the following two churn models:

Customers who have not bought in the last 12 months

Customers with a predicted churn probability > 0.5
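The two definitions above can be sketched as simple predicate functions. This is a minimal illustration, not the article's implementation: the `Customer` fields (`last_purchase`, `churn_score`) and the 365-day cutoff are assumptions.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical customer record; field names are assumptions for illustration.
@dataclass
class Customer:
    last_purchase: date   # date of the customer's most recent purchase
    churn_score: float    # output of a predictive churn model, in [0, 1]

def churned_by_recency(customer: Customer, today: date) -> bool:
    """Model 1: the customer has not bought in the last 12 months."""
    return (today - customer.last_purchase).days > 365

def churned_by_score(customer: Customer) -> bool:
    """Model 2: predicted churn probability > 0.5."""
    return customer.churn_score > 0.5
```

A customer can be flagged by one model, both, or neither, which is what makes the comparison below possible.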

We look at the customers that fall under only one of the two churn definitions. To illustrate this, refer to the chart below, where each dot represents a customer and the x-axis and y-axis represent the two churn models, respectively. We are interested in the customers in Group A and Group C, each flagged by only one of the two models. We ignore Group B for the A/B test because both models agree those customers are at-risk, so it cannot separate them.
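The grouping above can be sketched as a small helper. Since the chart is not reproduced here, the mapping of Group A to the first model and Group C to the second is an assumption made for illustration.

```python
from typing import Optional

def assign_group(flagged_by_model1: bool, flagged_by_model2: bool) -> Optional[str]:
    """Partition a customer by which churn model(s) flag them.

    Assumed mapping: Group A = flagged only by model 1,
    Group C = flagged only by model 2, Group B = flagged by both.
    Customers flagged by neither model are excluded (None).
    """
    if flagged_by_model1 and not flagged_by_model2:
        return "A"
    if flagged_by_model2 and not flagged_by_model1:
        return "C"
    if flagged_by_model1 and flagged_by_model2:
        return "B"  # both models agree; excluded from the A/B test
    return None
```

Only customers assigned to "A" or "C" enter the test; "B" and unflagged customers are dropped.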

We run an A/B test that sends the same win-back incentive to customers in Group A and customers in Group C. The group that responds better contains more active customers, which means the model that flagged that group is less accurate at identifying at-risk customers.
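Comparing the two groups comes down to comparing their response (conversion) rates. A minimal sketch, assuming we count conversions per group and use a standard two-proportion z-statistic; the function name and inputs are hypothetical:

```python
from math import sqrt

def response_rate_z(conv_a: int, n_a: int, conv_c: int, n_c: int) -> float:
    """Two-proportion z-statistic comparing Group A's and Group C's
    response rates. A significantly higher rate in one group suggests
    it contains more still-active customers, i.e. the model that
    flagged that group is worse at isolating truly at-risk customers."""
    p_a = conv_a / n_a                              # Group A response rate
    p_c = conv_c / n_c                              # Group C response rate
    p = (conv_a + conv_c) / (n_a + n_c)             # pooled response rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_c))    # pooled standard error
    return (p_a - p_c) / se

# Example: 50/1000 responders in Group A vs. 30/1000 in Group C.
z = response_rate_z(50, 1000, 30, 1000)
```

A z-statistic beyond roughly ±1.96 indicates a difference unlikely to be chance at the 5% level; here a large positive z would count against the model that flagged Group A.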