My Quotes in Instapaper: Monday, Jun. 24

Make your customers awesome: "Like" us on Facebook so we can tell you just how radical our company is? No. Just no. Instead, said Fishburne, make your customer radical, like Betabrand does. The SF-based challenger clothing brand runs a "model citizen" program that takes submissions from customers outfitted in the clothier's wares and runs winning images on its home page. This gives customers bragging rights. Remember, said Fishburne: it doesn't matter how awesome your product, your presentation, or your post is. Your awesome thing only matters to the extent that it helps your user be a little more awesome.

He recounted his time at cleaning supply outfit Method when the company wanted to attend a trade show, but the booth costs were prohibitive. A colleague in the supply chain department had the genius idea of instead getting an 18-wheeler and parking it at the event's entrance. Sure, it meant receiving parking tickets all day long, but those were cheaper than the booth costs, and with this option, event attendees couldn't miss Method.

It's not about the brand: when all a brand does is talk about itself, it's going to very quickly turn off its audience, said Fishburne, pointing to a Nike effort that did everything but. In 2009, the company partnered with Livestrong during the Tour de France on a campaign that used Chalkbot, a machine that stenciled messages on roadways. The public was invited to submit 40-character messages of cancer support by text, web banner, or the Nike Livestrong site. Each message would be sent to the Chalkbot, printed on the race course, instantly photographed, tagged with GPS coordinates, and then emailed to the person who sent it.

Preferably, keep in mind that when analyzing an A/B-test you will also want to draw conclusions from segmented data. In that case it is advisable to exclude the returning visitors from the first few days from this data set. This ensures that people who had already started the dialogue shortly before the A/B-test do not have too big a change to deal with. It won't be the first time that the variations only start performing better consistently after a couple of days!

Some of the visitors have already visited the website before the A/B-test, and after the start of the test they will still get to see the original version. They will respond differently from comparable visitors who see a different variation after the test starts. You could exclude these visitors by targeting only new visitors, but of course this is not a realistic situation: returning visitors will always be there.
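The exclusion described above can be sketched in a few lines. This is a minimal, hypothetical example: the record fields (`visitor_id`, `first_seen`, `converted`) and the cut-off logic are illustrative assumptions, not a specific tool's API.

```python
from datetime import date

# Hypothetical A/B-test start date.
TEST_START = date(2024, 6, 1)

# Illustrative visit records; in practice these come from your analytics export.
visits = [
    {"visitor_id": "a", "first_seen": date(2024, 5, 20), "converted": True},
    {"visitor_id": "b", "first_seen": date(2024, 6, 2), "converted": False},
    {"visitor_id": "c", "first_seen": date(2024, 6, 3), "converted": True},
]

# Keep only visitors whose first contact happened after the test started,
# so returning visitors who knew the old dialogue don't skew segmented data.
new_visitors = [v for v in visits if v["first_seen"] >= TEST_START]

print(len(new_visitors))  # 2 -- visitor "a" is excluded as a returning visitor
```

The same filter can of course be expressed as a segment in most analytics tools; the point is that the cut happens on the visitor's first-seen date, not on the session date.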

You do have to remember that this also means that a significant share of the visitors have probably seen two or more variations in the dialogue, from the first contact until the conversion. When a visitor within this set has seen a vastly different variation, the result will probably differ from the case where the variations look similar.

Finally, the nightmare of every (A/B-test) web analyst: the impossibility of cross-browser tracking without a central login system, which is also the reason why an increasing number of relevant websites require you to log in. Your visitor doesn't always convert in the first session. As soon as you start following this unique visitor, you will see that most conversions don't happen during the visitor's first contact, despite what analytics tools appear to report.

Your A/B-test doesn't only show the results of one variation against the original. Comparisons among the variations themselves can also yield significant insights. When you run two variations next to the original page, you might not be able to declare a winner (variation A) or a loser (variation B) against the original, yet the two variations can still differ significantly from each other, because the difference in behavior is large enough. These are also learnings that you want to share in your organization, and they provide good input for a follow-up test.
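Whether two variations differ significantly from each other can be checked with a standard two-proportion z-test. A minimal sketch, using only the standard library; the counts below are made up for illustration:

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variations convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: variation A vs variation B (not vs the original).
z, p = two_proportion_z(120, 2000, 80, 2000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at the 5% level
```

Running the same comparison pairwise between all variations, rather than only against the original, is how the "A beats B even though neither beats the original" insight surfaces.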

2. Only focusing on conversions Because you're focused on behavior and how to affect it, you should not only be looking at the final conversion. This is even more true when your A/B-test page sits at the start of the dialogue between your website and the visitor. Besides the potential conversion differences, the variations will show a lot of difference in clicking behavior, exit behavior, time on page, etc. You will almost always change the dialogue with a new (content) variation. The last step to the final conversion, however, is a threshold you can't improve just by slightly adjusting the start of the dialogue. A/B-test learnings should serve the attempts that have been made to have a better dialogue, and teach you about their impact. Learning from an A/B-test without a significant increase in conversion can turn out to be even more valuable long-term than an A/B-test with a significant increase in conversion.
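Looking beyond the final conversion can be as simple as computing several behavioral rates per variation side by side. A minimal sketch; all counts and metric names here are illustrative assumptions:

```python
# Hypothetical per-variation counts from an A/B-test report.
variations = {
    "original": {"visits": 1000, "clicks": 300, "exits": 450, "conversions": 30},
    "variant_a": {"visits": 1000, "clicks": 380, "exits": 390, "conversions": 33},
}

# Derive click, exit, and conversion rates so dialogue changes are visible
# even when the conversion difference alone is too small to call.
rates = {
    name: {
        "click_rate": m["clicks"] / m["visits"],
        "exit_rate": m["exits"] / m["visits"],
        "conversion_rate": m["conversions"] / m["visits"],
    }
    for name, m in variations.items()
}

for name, r in rates.items():
    print(name, {k: f"{v:.1%}" for k, v in r.items()})
```

In this made-up example the conversion lift is tiny, but the click and exit rates show that the dialogue itself clearly changed, which is exactly the learning the paragraph above is after.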

In deciding where you want to test, you should already have decided whom you want to test and why: what do you want to accomplish? Basically, you've seen unwanted behavior by a group of people, caused by ineffective communication, and you want to improve this. By going into detail when choosing your target audience and the page to be tested, you prevent and solve these 4 problems regarding the test population, which largely come down to unevenly divided test-variation groups.

The common denominator of impure test-population problems is the fact that not every group of visitors visits the page you are testing with the same intention. Your total population consists of a number of sub-populations that you also want to divide evenly over the test variations, which is actually the job of the A/B-test software.

Wrong motivation for purchase: some visitors have an external motivation to purchase (as regular fans also do, to a certain extent). Well known are the affiliate programs that reward you with points when you can show that you've purchased a product or service from a certain party. At that moment the motivation lies in the reward they receive, and the tested variations will (notwithstanding usability issues) have almost no effect on it. This group of people is often too small to be divided evenly among the tested variations, so you will also need to remove them from the data set you want to analyze.

Many requests: not all analytics solutions are capable of simply showing a report of unique visitors that converted (instead of a report of all conversions). When you thoroughly analyze groups of visitors in combination with service provider, browser, platform, etc., you will regularly see that some combinations of these details produce a lot of conversions per unique visitor. Aside from a few regular fans, these are often external call centers that fill in a contact form on behalf of a caller. Because of one of the basic principles of A/B-testing, that a visitor always sees the same variation, an eager call-center agent (or just a regular fan) can cause too many conversions on the tested variation. So you will want to remove these from the A/B-test analysis as well.
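When the analytics tool can't report conversions per unique visitor directly, the same check can be done on a raw conversion log. A minimal sketch; the segment keys, visitor IDs, and the 1.5 cut-off are all illustrative assumptions to be tuned per data set:

```python
from collections import defaultdict

# Hypothetical conversion log: (service_provider, browser, visitor_id).
conversions = [
    ("CallCenterISP", "IE11", "v1"), ("CallCenterISP", "IE11", "v1"),
    ("CallCenterISP", "IE11", "v1"), ("CallCenterISP", "IE11", "v2"),
    ("HomeISP", "Chrome", "v3"), ("HomeISP", "Firefox", "v4"),
]

totals = defaultdict(int)       # conversions per (provider, browser) segment
uniques = defaultdict(set)      # unique converting visitors per segment
for provider, browser, visitor in conversions:
    key = (provider, browser)
    totals[key] += 1
    uniques[key].add(visitor)

# Flag segments whose conversions-per-unique-visitor ratio looks inflated,
# e.g. a call center submitting forms on behalf of many callers.
THRESHOLD = 1.5  # illustrative cut-off
suspicious = {k for k in totals if totals[k] / len(uniques[k]) > THRESHOLD}
print(suspicious)  # {('CallCenterISP', 'IE11')}: 4 conversions from 2 visitors
```

The flagged segments are then removed from the data set before the A/B-test analysis, for the reason given above: those visitors always see the same variation, so their repeated conversions pile up on one side of the test.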