AB testing is a powerful technique that
lets you gather metrics about different
versions of a feature: it basically
consists of displaying a number of
different variations of it to your
users and tracking the results to see
which variation performs better.

An example? In an e-commerce system,
you usually have an “Add to cart” button:
have you ever thought about the impact that
single sentence has on your customers?
Which would sound better, “Add to cart”
or “Buy now”, for example? Copywriters
aside, you want data to tell you that!

This is why AB testing is important:
you serve different versions of something,
and track the results to improve the
experience users have with your
application: Google, for example, benchmarked
40 different shades of blue
to find out how the clickthrough rate
would be affected.

At Namshi we
decided to ease AB testing by creating a
very simple library that lets you generate
and manage tests in an easy and practical
way: that’s how Namshi/AB
was born.

Installation

You can install the library via Composer,
as it’s available on Packagist.

Then include it, specifying a major and
minor version, in your composer.json:

```json
"namshi/ab": "1.0.*"
```

Creating and running tests

The library is very small, and it comes bundled with
two classes, Test and Container: as you can probably
guess, the first is a representation of an AB test and
the second serves as a convenient container for all of your
test instances.
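The original example did not survive here, but the idea behind the odds can be shown with a minimal, self-contained sketch of the weighted pick that a getVariation()-style method performs (pickVariation is an illustration, not the library’s actual code):

```php
<?php

// Illustrative stand-in for a getVariation()-style weighted pick;
// NOT the library's actual implementation, just the idea: each
// variation's probability is its odds divided by the total
// (here 2/3 for default.css and 1/3 for new.css).
function pickVariation(array $variations)
{
    $total = array_sum($variations);
    $hit   = mt_rand(1, $total);

    foreach ($variations as $variation => $odds) {
        $hit -= $odds;

        if ($hit <= 0) {
            return $variation;
        }
    }
}

echo pickVariation(array('default.css' => 2, 'new.css' => 1));
```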

getVariation() will calculate the variation
(default.css or new.css) according to the
odds of each variation (66% for the first one,
33% for the second one) and return a string
representing it.

Persisting the variations through an entire session

Of course, you want to display variations but be
consistent with each user, so that if a user gets
a variation, they keep getting the same variation
throughout their entire session: to do so, just calculate
a random integer (the seed), store it in the session and pass it to
each test:

```php
<?php

session_start();

if (!isset($_SESSION['seed_for_example_test'])) {
    $_SESSION['seed_for_example_test'] = mt_rand();
}

$test = new Test('example', array(
    'a' => 1,
    'b' => 1,
));

$test->setSeed($_SESSION['seed_for_example_test']);

// as long as the seed doesn't change,
// getVariation() will always return
// the same variation
$test->getVariation();
```
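Why this works can be sketched without the library: seeding PHP’s PRNG makes the “random” pick reproducible. seededPick below is a hypothetical helper mirroring the idea, not part of Namshi/AB:

```php
<?php

// Self-contained sketch of why a stored seed keeps the variation
// stable: seeding the PRNG makes the pick reproducible. This mirrors
// the idea, not the library's internals.
function seededPick($seed, array $variations)
{
    mt_srand($seed);
    $names = array_keys($variations);

    return $names[mt_rand(0, count($names) - 1)];
}

$seed   = 42;
$first  = seededPick($seed, array('a' => 1, 'b' => 1));
$second = seededPick($seed, array('a' => 1, 'b' => 1));

var_dump($first === $second); // bool(true): same seed, same variation
```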

Soon you will realize that having a per-test seed
is not efficient at all; that’s why you can create
a global seed and pass it to the container: from that
seed, the container will take care of generating a seed
for each test:

```php
<?php

session_start();

if (!isset($_SESSION['seed'])) {
    $_SESSION['seed'] = mt_rand();
}

// pass the seed into the constructor
$abContainer = new Container(array(
    new Test('greet', array(
        'Hey dude!' => 1,
        'Welcome'   => 1,
    )),
    new Test('background-color', array(
        'yellow' => 1,
        'white'  => 1,
    )),
), $_SESSION['seed']);

// or with a setter
$abContainer->setSeed($_SESSION['seed']);
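One plausible way such a derivation could work (the library’s actual mechanism may differ; perTestSeed is a hypothetical helper) is to mix the global seed with each test’s name, so every test gets a seed that is stable for a given user but different from the other tests’:

```php
<?php

// Hypothetical sketch of deriving a stable per-test seed from a
// single global seed; the library's actual derivation may differ.
function perTestSeed($globalSeed, $testName)
{
    return crc32($testName . ':' . $globalSeed);
}

$globalSeed = 12345;

// stable for a given (seed, test) pair, different across tests
echo perTestSeed($globalSeed, 'greet'), "\n";
echo perTestSeed($globalSeed, 'background-color'), "\n";
```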

Disabling the tests

Sometimes you might want to disable tests
for different purposes, for example if
the user agent visiting the page is a bot:
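The code for this example did not survive here; its gist can be sketched as follows (isBot and its regular expression are illustrative, and the disable() call shown in the comment is an assumption about the library’s API; check its README):

```php
<?php

// Sketch of the kind of check such an example performs; isBot and
// its regular expression are illustrative, not part of the library.
function isBot($userAgent)
{
    return (bool) preg_match('/bot|crawl|spider|slurp/i', $userAgent);
}

// With a test at hand you would then disable it for bots,
// e.g. (assuming the library lets you disable a test):
//
//     if (isBot($_SERVER['HTTP_USER_AGENT'])) {
//         $test->disable();
//     }

var_dump(isBot('Googlebot/2.1'));     // bool(true)
var_dump(isBot('Mozilla/5.0 (X11)')); // bool(false)
```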

Of course, never write an application like this ;-)
it serves just as an example.

Additional features

We tried to extensively cover the available features of
the library in its README,
so I will just sum them up here:

the container implements the ArrayAccess interface, so you can
retrieve tests as if they were stored in an array ($abContainer['my_test'])

since AB tests are really useful only when you track
the results, we added a tracking name that you can specify
for each test: your test might be
called add_to_cart_text, but in your tracking tool you
have to reference the test by the tracking tool’s ID, which
might be a very cryptic string (e.g. test_id_4njktn4t4tjjnn4on)

you can also add an array of parameters to each test and retrieve
them later on: once you track the test’s
result, you might want to send additional data together with the
tracking name, the variation and the result
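The tracking-name idea above can be illustrated with plain PHP (the map and trackResult are hypothetical, not the library’s API):

```php
<?php

// Hypothetical illustration of the tracking-name idea: your code uses
// a readable test name, while the tracking tool wants its opaque ID.
$trackingNames = array(
    'add_to_cart_text' => 'test_id_4njktn4t4tjjnn4on',
);

function trackResult(array $map, $testName, $variation)
{
    // here you would send the tracking ID and variation to your tool
    return $map[$testName] . ':' . $variation;
}

echo trackResult($trackingNames, 'add_to_cart_text', 'Buy now');
```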

Why not choose an existing library

Of course we checked out what the market was
offering, but weren’t able to find a
good, general-purpose library for
generating AB tests:

jm/ab-bundle
is unfortunately coupled with Symfony2 and Twig, so
you can’t really call it a stack-free library: even though
we love Symfony2, not all of our services run on
it, and we don’t want to force a technology just to
get a feature

FOSS

The library is available on
GitHub: please let
us know if you
would like to see something different, or have a suggestion
of any kind; even better, feel free to open
a pull request if we screwed anything up!