Automation Center - Testing Programs

The Automation Center includes a testing function that lets you make sure the customer journey you have designed works as expected. You can start testing a program as soon as all its paths are validated, and stop the test whenever you want. When you stop a test, the program reverts to the In design state.

The standard test function applies only to programs that have not yet been launched. After launch, programs can be tested using the A/B Splitter node.

Testing programs before launch

Once you launch a program, your options for changing it become limited. Testing beforehand is therefore an important step for identifying weak points and improving them before you go live: you can stop the test, edit the program and test again as many times as you like.

The testing feature is designed to check whether the program behaves as expected. It should not be used to test content, nor to evaluate whether the program has the desired effect on the contacts that pass through it.

In other words, your testing KPIs are functional rather than strategic. What you should be checking is whether you receive the messages you expect from the program, at the right time and after the right trigger.

The test segment

When you test a program you must select a test segment. You can then use contacts in the segment to interact with the program as you require. Only email addresses in this test segment will be processed by the program during the test.

You can now use either the entire segment or individual contacts in it to test your programs by triggering the entry criteria. A test segment may contain no more than 50 contacts; if it contains more, the extra contacts will not be processed by the program.

We recommend creating a test segment from contacts within your own organization (e.g. a segment where the email address ends in @yourcompany.com).
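
As an illustration, here is a minimal Python sketch of how you might pick such internal contacts from an exported contact list. The contact records and the "email" field name are hypothetical, and this runs outside the Automation Center itself.

    # Pick internal test contacts by email domain from an exported list.
    # The contact records and the "email" field name are hypothetical.
    contacts = [
        {"email": "anna@yourcompany.com"},
        {"email": "ben@customer-domain.org"},
        {"email": "chris@yourcompany.com"},
    ]

    test_segment = [c for c in contacts if c["email"].endswith("@yourcompany.com")]

    # The program processes at most 50 contacts during a test,
    # so keep the segment within that limit.
    assert len(test_segment) <= 50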

How does testing affect reporting?

Contacts passing through the program during the test do not show up in the program summary. On the other hand, all messages sent, opened and clicked will show up in the respective email reporting pages. This is because your program may rely on this data, so we need to make sure that it is recorded even during the test.

As long as you test with a small segment, your test responses should not have any significant impact on program reporting once the program has been live for a few days.

Testing a program

To test a program, proceed as follows:

1. Click Program is in design to display your program options.

2. Click Test and choose your test segment, then click Start test.

At this point your program is validated. Any errors that would prevent the program from being activated are displayed, and you must correct them before you can continue.

3. Once your program is in the In testing state, you can begin testing it by using the contacts in the test segment to interact with the program as normal customers would.

For example, if your program starts with a Form node, you would register a test contact using that form and then check that the contact receives the messages you expect.

All Wait nodes are disabled for testing, so that your test contacts pass straight through them. Otherwise the program will behave as you have set it up.

Testing programs after launch

Once a program has been launched, you can only A/B test individual paths (see below). Any other changes could fundamentally alter the nature of the program and make before-and-after comparisons meaningless. They could also adversely affect the experience of contacts already inside the program.

If you really want to test major changes to a program, you can copy it, modify and test the copy, and switch over to the copy once you are happy with the results. Contacts who have already entered the original program will proceed through it to the end, but new contacts will enter the new version.

If you don’t want to leave behind contacts already in a program (e.g. waiting in a Wait node), your best option is to pause the program, make the changes you want, then copy the program and test the copy. Once you are happy with the result, you can resume the original program with its new workflow.

The A/B Splitter node

The A/B Splitter node is a great way to test minor improvements in your program, or to test multiple emails against each other. In this way you can continually experiment with new ideas and keep optimizing your strategies and improving your customer journey.

You decide how big your test groups are, and how big your control group is, by assigning a percentage to each path of the splitter node.

For example, you could test two variations of an email with 10% of the launch list each, while the remaining 80% receive the original version.

When you feel that you have tested enough and want to choose one path over the others, increase the preferred path to 100% and reduce the others to 0%. All future contacts will then receive the preferred version.

Before you make your final decision, you should consider one final time whether the results are statistically meaningful; a quick way to check this is sketched after the list below. The key questions to bear in mind are:

Was the sample group large enough?

Are the differences between the various paths really significant (i.e. would you get the same result 19 times in 20 similar tests)?
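
To make the second question concrete, here is a minimal sketch of a two-proportion z-test you could run on the results of two paths. The send and open counts are hypothetical, and |z| >= 1.96 corresponds to the 95% ("19 times out of 20") confidence level.

    # A two-proportion z-test comparing the open rates of two paths.
    # All counts below are hypothetical examples.
    from math import sqrt

    def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
        """Return the z-score for the difference between two open rates."""
        rate_a = opens_a / sends_a
        rate_b = opens_b / sends_b
        # Pooled rate under the null hypothesis that both paths perform equally.
        pooled = (opens_a + opens_b) / (sends_a + sends_b)
        se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
        return (rate_a - rate_b) / se

    # Path A: 220 opens from 1,000 sends; path B: 180 opens from 1,000 sends.
    z = two_proportion_z(220, 1000, 180, 1000)
    print(f"z = {z:.2f}, significant at 95%: {abs(z) >= 1.96}")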

About how we assign contacts to paths

You might notice at first that the number of contacts passing through each splitter path does not exactly correspond to its configured percentage. You’ll be happy to know that there is a very good reason for this.

For statistical methods to work well, we need to make sure that we eliminate any effects that could skew the results. For batch emails this is easy – we simply divide the launch list randomly between the paths. With an Automation Center program it is a bit more complicated, since contacts are passing through one by one, and we do not know beforehand how many contacts will pass through the nodes before the test ends.

Because of this, the only way we can make sure that we don’t skew the results is by randomly assigning each individual contact to one of the paths according to their relative probability. And probability being what it is, it takes a while before the distribution begins to settle down into a stable pattern. It may take several thousand contacts to pass through before the differences become too small to notice. So be patient, wait until your test is stable, and rest happy in the knowledge that your A/B tests are scientifically valid.
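
The following minimal sketch illustrates the idea (it is not the actual Automation Center implementation): each contact is assigned to a path independently at random according to the configured weights, and the realized shares only settle near a 10/10/80 split after many contacts have passed through.

    # Per-contact weighted path assignment, illustrated with the 10/10/80
    # split from the example above. This is an illustration only, not the
    # actual Automation Center implementation.
    import random
    from collections import Counter

    paths = ["variation A", "variation B", "original"]
    weights = [10, 10, 80]  # percentages configured on the splitter node

    counts = Counter(
        random.choices(paths, weights=weights)[0] for _ in range(10_000)
    )
    for path in paths:
        # With few contacts these shares can deviate noticeably from the
        # configured percentages; over thousands of contacts they converge.
        print(f"{path}: {counts[path] / 10_000:.1%}")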