8 A/B Split Tests That Made Shocking Discoveries

If your goal is to squeeze the maximum amount of profit from your business, then you want to boost your conversions. A great way to boost conversions is to split test and find out what works best.

However, split testing isn’t always black and white.

Every business is unique, and sometimes common marketing wisdom can fail. Here are 8 A/B split tests that had either shocking results from simple changes or results that defy common marketing knowledge:

1. 400% Conversion Boost by Removing Security Badge

Web designer Bradley Spencer wanted to test the power of security-assuring images and badges in the sidebar of a coupon site he was working on. He wanted to see if there was a link between the security badge and the number of people who clicked on the coupon button.

So, he removed the security badge and experienced some surprising results.

This is the site before removing the security badge:

This is the site after removing the security badge:

The results?

In just a few days, he experienced a whopping 400% increase in conversions. Four times more people clicked on the coupon link.

Are you using any security-assuring badges on your site? If so, split test a page without them and see what happens. You might end up with surprising results yourself.

3. Over 38% Rise in Conversions from Tweaking Button Copy

When trying to optimize conversions, it’s easy to pay attention to the flashy details that everyone talks about – headlines, subheads, images, and copy.

But, what about the small details that often get overlooked?

For instance, what about button copy?

While it seems like a small thing to focus on when compared with other potential changes, tweaking your button copy can have some awesome results.

When Michael Aagaard wanted to increase conversions for a client, he tested an interesting hypothesis: the more value you convey, the higher your conversions will be, even in something as small as a button.

So, he changed the button copy of the control from “order information” to “get information,” thereby focusing more on what the customer receives instead of what the customer has to do.

This simple change resulted in a 38.26% rise in conversions.

Aagaard also claims to have achieved conversion lifts of anywhere from 5% to 200% by “simply tweaking words.”

The above case study shows how important it is to convey benefits to your customers, even through something as small as your button copy.

To optimize your button copy, think of what your customers want out of your offer. What exactly are they looking for when they click that button? Then, proceed from there.

It’s easy to fall into the trap of thinking that a smooth, good-looking design will boost business. But, that isn’t always true. Which of the landing pages above do you think converted better?

If you guessed version B, you’re right.

Even though version A seems more open and creative with its mini speech bubbles, it was no match for the straight-to-the-point forwardness of B. The three solid bullet points make the benefits easy to read, hook the viewer’s visual attention, and quickly convey value.

The lesson here?

Try not to be too artsy with your design; keep it simple. Make things as visually easy as possible, and focus on your prospect.

6. 304% Boost in Conversions by Moving the CTA

Recently, studies have shown that content placed above the fold attracts 80% of a consumer’s attention. So, it makes sense for marketers to use this as a reason to place their value propositions and CTAs above the fold.

(If you’re not sure what “above the fold” means, it’s the part of a website or landing page that is visible without having to scroll down.)

But following this best practice doesn't always produce optimal results.

Check out the images below:

This is the result of a test in which Michael Aagaard cranked up conversions a staggering 304% simply by moving the CTA below the fold. Aagaard attributes this to a direct correlation between the complexity of a product and the best location for the CTA on a landing page: the more complex the offer, the more persuasion a visitor needs to read before being asked to act.

7. 100% Boost in Leads by Showing Price on Landing Page

SafeSoft Solutions is an ecommerce site that develops products for customer contact centers. They wanted to test the effect that showing price on the landing page would have on conversions.

So, they ran a split test: one page with the price boldly displayed, and one without.

Now, most people would guess that the page with no indication of price would win. But, that's what makes this test interesting.

The page with the price prominently displayed caused a 100% increase in lead generation.

The reason?

Sometimes people think that if a company doesn’t show pricing, the product they’re selling must be expensive. Visitors fear that means they’ll have to talk to a salesman, or worse, endure a sales pitch to get information.

This test reveals that showing price can actually work in your favor. It removes doubts and questions and clearly displays what the prospect can expect to pay.

8. 102% Rise in Opt-ins from Removing Social Proof

A popular marketing assumption is that people look for the approval of others before biting the bullet on a buying decision. This assumption turned displaying social proof into a best practice.

However, when Derek Halpern split tested his sidebar opt-in form for diythemes.com, the results were surprising. Contrary to best practices, social proof hindered conversions.

Halpern found that by removing social proof, sidebar opt-in form conversions increased by 102.2%.

Why did social proof hurt conversion rates?

According to Halpern, it either served as a distraction and complicated the process, or the social proof numbers simply weren't big enough. The first theory makes more sense. When you look at the opt-in forms, the two with social proof seem to demand more energy to interact with, don't you think?

Again, this is another reason for you to split test any methods you’ve based on best practices.

Conclusion

The above split tests have pretty interesting outcomes. They show that just because something is a best practice doesn’t mean it’s best for you. So, never stop split testing to find out what works best for your business.

Have you run any split tests that delivered odd results in your business? Tell us about them in the comments below.

The Strategic Approach to Split Testing

Contrary to popular belief, not every split test is a great split test.

The first thing many people do when they plan to run a split test is decide what they’re going to test. This is a common “tactical” approach that starts with specifics and can, no doubt, produce wins. But, it also can cause people to accidentally split test useless factors that have no effect on overall conversion rates.

In fact, previous experiments have taught us that more than 71% of self-created, assumption-driven split tests will not increase conversions, and may even reduce them.

With such low odds of success, the overall results of assumption-based split tests are often minor improvements or flaky numbers that leave people scratching their heads.

Huge variations are possible in the quality of split test results.
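Because so many tests produce flaky results, it's worth checking whether a measured lift could be plain noise before acting on it. Here is a minimal sketch of a two-proportion z-test in TypeScript; the function and the sample numbers are illustrative, not from any tool mentioned in this post:

```typescript
// Two-proportion z-test: is variant B's conversion rate genuinely
// different from A's, or within the range of random variation?
function zTestTwoProportions(
  convA: number, visitorsA: number,
  convB: number, visitorsB: number
): { z: number; significantAt95: boolean } {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  // Pooled rate under the null hypothesis that A and B convert identically
  const pPool = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  // |z| > 1.96 corresponds to p < 0.05 on a two-tailed test
  return { z, significantAt95: Math.abs(z) > 1.96 };
}

// Example: 120 conversions from 4,000 visitors vs. 150 from 4,000
console.log(zTestTwoProportions(120, 4000, 150, 4000));
// z is about 1.86, so this apparent 25% lift is not yet significant at 95%
```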

The SplitGen Process – Introduction

Below is a “time tested” split testing system that we've named the “SplitGen” process. It is a strategic method for developing and generating a list of potential split tests that can produce big wins instead of incremental improvements.

You can adopt this process if you want to turn assumption style results into more solid, reliable wins. For best results, compare (or split test!) the following SplitGen process against any tactical assumption style of split testing.

You'll find that the SplitGen process produces big wins far more consistently.

The Tactical “Assumption” Approach

From experience, we know that most split testing is done with a very tactical approach. A tactic is something specific, like changing the color of a “buy now” button or adding a “satisfaction guaranteed” logo.

A tactical approach means the team/person in charge of running the split test starts out with a very rigid idea of what will be split tested and which exact tactics will be used.

They start out with a team brainstorming session or, even worse, sit alone and predict which factors will affect conversion rates. Then, they use these predictions and self-deductions to build a list of things to test. Or, they approach the split test with their own bag of tricks, and try to paint every split test with the same brush.

This is the equivalent of taking a questionable recipe and running a taste test with a single person. Then, on that single person’s approval, taking the recipe and producing it on a mass scale and expecting everyone to like it. Even if the single taste tester is a chef, this method of food production is rarely done in the commercial world. This method is all based on one person’s opinion, and it is bad business sense.

Yet, when it comes to split testing, the common practice is to start out with a previously created list of tactical split tests, and then apply those split tests to a website, etc.

This happens because of the Endowment Effect, which tells us that we value our own ideas more than the ideas of others. In addition, we tend to seek only validation (not criticism) of our ideas. So, a fair analysis of the list of split tests never happens; then, when the tests are run, we find only minor improvements.

But, even worse, the minor improvements are seen as an overall success, so the weakest method of split testing continues on and on. This leads to achieving small wins when major wins could be produced.

So, don’t start a split test with a preconceived idea of what needs split testing. Instead, start with a strategic overview of how to conduct a “big win” split test.

The Strategic Approach

In order to find enduring “consistent” big wins, split testing needs to be approached strategically. Strategy is a long-term overall picture. It does not contain any rigid details like button color, etc.

Skillfully planning your split test formula, developing a methodical approach, researching your factors, and then executing efficiently are all part of the SplitGen process. It's a strategy for producing big wins.

The reason people focus more on tactics is that they're much more obvious. Strategy, by comparison, takes time and effort.

But, strategy always works to produce larger wins. If you put in the work to be as strategic as possible, you’ll produce much bigger wins on a much more consistent basis.

The best way to be strategic is to use a split testing “process” – an actual system that acts as an overall strategy. When you run that system, its job is to deliver the tactical factors you need to split test in order to produce big wins that will make a real difference.

The SplitGen Process – Implementation

The SplitGen process is refined from the system used by top-level companies that split test in the real world to improve manufacturing and sales processes across multiple industries.

In that world, you may be working in an industry you have no experience with at all, and where a website isn't being used to sell anything. Often, there are no button colors, headlines, or CTAs to test against each other.

In such an open and seemingly “harsh” environment, there’s no way you can approach the split test with a preconceived bag of tricks or a previously created list of factors and constraints to test.

If you're a tactical split tester, you'll have no idea where to start. But, if you use the strategic method of split testing, you'll know how to proceed right away.

We can take this advanced method of performing split tests and modify it for online use. When applied to a sales website, this strategic approach will consistently produce much bigger wins than a tactical approach alone.

The strategic approach explained below has been adjusted so that it can be applied to sales websites. It produces consistent big wins.

Here’s how to use the SplitGen process for a strategic approach to split testing:

Phase 1: Diagnose

You don’t go to your doctor and say, “Hey doc, I need a heart transplant, let’s get this done!” Neither does your doctor look at you as soon as you walk in and tell you what you need. The first thing the doctor does is ask questions and diagnose your condition.

Approach split tests in the same way. You shouldn’t be brainstorming any split testing ideas at this time.

In fact, studies have found that HiPPOs (the highest-paid people in an organization) and top-level executives are just as bad as other employees at predicting which factors will increase conversions. Successful suggestions aren't limited to any specific level of employee; history has proved that anyone can suggest a winning factor.

It’s the same with websites.

You need to watch how people use your website. What slows them down? What gets in their way? What stops them from making a purchase? Are there any technical issues? Does your site look the same way across multiple browsers? Is there a coding error on your website? Is something confusing your users?

You need to know exactly where the hold-ups are.

If you’re not able to watch a group of people using your website, a service like UserTesting.com can be helpful. Another great tool for observing how visitors interact with your website is Crazy Egg.

Phase 2: Quantify

You need to make sure you understand exactly how you define success. Are you trying to increase the number of email opt-ins? Are you trying to increase the conversion rate of people who move from page four to page five? Are you trying to lower the cost of each new customer? Are you trying to maximize your earnings per visitor?

Whatever you’re trying to achieve, you need to know what specifically it is and how you will measure it. You also need to make sure that your method of measuring these objectives is extremely accurate.
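One way to enforce that precision is to write the goal down as structured data before any design work starts. This is a hypothetical sketch of such a definition, not part of any product mentioned here:

```typescript
// A test isn't ready to run until every field here has a concrete answer.
interface TestGoal {
  name: string;
  primaryMetric: "optIns" | "stepConversion" | "costPerCustomer" | "earningsPerVisitor";
  howMeasured: string;            // the exact event or report that defines success
  minimumDetectableLift: number;  // e.g., 0.10 for a 10% relative lift
  decisionDate: Date;             // when you commit to calling the test
}

const goal: TestGoal = {
  name: "Homepage email opt-ins",
  primaryMetric: "optIns",
  howMeasured: "Count of 'Signed up for newsletter' events per unique visitor",
  minimumDetectableLift: 0.10,
  decisionDate: new Date("2015-06-01"),
};
```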

Phase 3: Build a Representative Sales Funnel

Create a visual representation of your entire sales funnel, and seek to understand these three important things:

What happens to a visitor before they arrive on your website?

What process does the visitor go through while buying on your website?

What happens to the visitor after they have bought on your website?

Don’t create this as a text file. It’s very easy to skip this part, but this is not the time to be lazy.

You need to have a visual representation of every single step named above. Think about the individual actions that your visitor takes, and draw them out as a path.

This will give you a great overview of the business.

You should understand how a visitor first learned of your website. Did they see an ad or were they a referral? What did the ad promise them? Was that promise kept when they arrived on your website? Where was the ad displayed? What is a visitor from that traffic source like? For example, Google users tend to be more technically minded compared with Bing users.

Also, think about the keywords that bring people to your site. What sort of visitors are they attracting? What do visitors want to do on your site? Is the majority of traffic on your website made up of first time or repeat visitors? Is there a specific traffic source that converts very well? If so, you can build a custom landing page for visitors from that source.

This information will come in handy later on as well.

So, try to get as much information about your exact sales funnel as possible, and build a visual representation of it.
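If you want numbers to go with the drawing, a simple model of the funnel makes the drop-off points obvious. The step names and counts below are made up for illustration; replace them with your own analytics data:

```typescript
interface FunnelStep {
  name: string;
  visitors: number;
}

// Hypothetical funnel data
const funnel: FunnelStep[] = [
  { name: "Saw ad",         visitors: 50000 },
  { name: "Landed on site", visitors: 8000 },
  { name: "Viewed product", visitors: 3200 },
  { name: "Added to cart",  visitors: 900 },
  { name: "Purchased",      visitors: 270 },
];

// Print the conversion rate between consecutive steps;
// the biggest drop is often the best place to hunt for a big win.
for (let i = 1; i < funnel.length; i++) {
  const rate = (100 * funnel[i].visitors / funnel[i - 1].visitors).toFixed(1);
  console.log(`${funnel[i - 1].name} -> ${funnel[i].name}: ${rate}%`);
}
```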

The aim here is to look for potential areas of your business where you can make simple changes that will potentially produce big wins.

You also should be looking for areas where you can increase revenue. For example, if your website sells shoes, you may be able to replace your current “thank you” page with an additional purchase offer for a matching belt. Or, if you sell luxury car wax, you can ask buyers if they want you to send them a reminder to buy more wax in 60 days (when their bottle starts to run low).

Phase 4: Gather Competitive Intelligence

Take a look at your competitors and their businesses. What do they do successfully? Have they been running a specific ad for a long period of time? If so, try to go through their sales process via that ad. If the ad has been appearing for a long time, you can bet it is producing good sales for your competitor. Try to figure out how and why.

You also need to understand what’s being said about you and your competitors online. What do people like? Why do they do business with one company instead of another? You need to have this information and use it in your business.

It’s just as important to look at the complaints online, too. What don’t people like about your competitors? What don’t they like about you? You should eliminate all the reasons for these complaints from your company.

This also is a great place to find ideas for improvement and new features to add to your service. For example, if your business is a luxury hair salon and consumers are complaining that luxury hair salons do not offer head massages, you can include head massages with your service for an additional price. This will help you drive up overall profitability while serving a market need that you didn’t even know existed.

Phase 5: Discover Buyer Friction and Skepticism

A good copywriter always does their research. One of the key pieces of information to understand is what friction a buyer faces and what stops them from buying. What is the buyer skeptical about?

These issues are then acknowledged in the sales copy to remove buyer friction and overcome any skepticism.

For example, there is a very famous Ogilvy stock and bond ad. David Ogilvy understood that common buyer friction was caused by the belief that the stock and bond business is complicated.

He overcame the skepticism at the very beginning of the ad with this text:

“Some plain talk about a simple business that often sounds complicated.” This ad went on to build Merrill Lynch.

Although it sounds simple, understanding buyer friction online can be difficult. The key is to gather substantial feedback from your visitors. Then, use the information you collect to build your website and sales copy so that it naturally overcomes visitor skepticism.

You can use Qualaroo, SurveyMonkey, and Kampyle to get visitor feedback on your website, without intruding too much on your visitor experience.

Phase 6: Shine as Brightly as You Can

Almost every company has hidden “proof” assets that it never shows its visitors. You need to find these and make sure your visitors know about them. You're basically showing proof that you are:

An authority

Credible

Expert in your industry

Trusted and Respected

You can do this by showing publications you’ve appeared in and awards you’ve won, naming previous well known clients (with their permission of course), showing case studies of your results, using testimonials and reviews, etc.

Many people think this is about additional credibility. However, it’s more about the extra believability you can provide.

As an example, a chiropractor client had a basic, standard website. He did not mention that he was one of just nine chiropractic craniopaths in the country. He also did not mention that it takes one year longer to become a chiropractic craniopath in his country than it does to become a physician.

These facts instantly make the chiropractor look like more of an expert, yet he was failing to pass along the information.

These are just a few ways you can prove you are a good company to do business with.

Another common mistake is not showcasing other products you sell. For example, one client sold a solution to prevent industrial heating systems from becoming clogged. They also sold a product to remove any current buildup inside the heating system, but hardly any of their clients knew about it.

When they added the “buildup remover” to their sales system and told buyers about it, the additional sales almost doubled their profits on that specific product line.

If you’re going to shine brightly and sell everything you can, you need to make sure your customers are fully educated about who you are and what you sell.

Phase 7: Focus on Roars

At this stage in the system, you should have a whole lot of ideas that were generated from the previous research steps.

But, before we design our new webpages, we need to be careful not to make the mistake of focusing on “squeaks” in order to try to produce the fastest wins. We should focus on “roars.”

The Squeaks

A squeak is an element on a webpage that can boost conversion rates by only small amounts. For example, changing the background color of your website will produce only small incremental improvements.

We don’t want minor improvements. We want big wins.

The great Gary Halbert once noted that the addition of extra order forms in his direct mail packages boosted response rate. Then, adding another order form also slightly lifted responses again, and so on. But, these are minor wins. They are squeaks.

If you have a 1% response rate and need a 2% response rate to break even on your front end, adding extra order forms won't produce the big win you need. (Besides, we have tested this ourselves and never seen it increase results.)

So, don’t aim for minor wins. Stay away from the squeaks.

The Roars

A roar is the opposite of a squeak. A roar is a specific factor on a webpage that has the potential to boost conversion rates by 100% or more.

For example, switching to a higher quality traffic source, drastically changing the headline of a page, adding upsells/continuity offers, giving premium bonus gifts, and switching the offer to a free trial are roars that can make big differences in your response rates.

In one case, an educational item was being sold for $139. We set up a split test with the $139 price as the control, plus variations at $159 and $169.

The response on the $139 and $159 prices was exactly the same. So, the new $159 sales price allowed us to make an extra $20 (14%) on every sale.

(The $169 price cut response in half. So, adding an extra $10 to the $159 sales price meant that we lost 50% of all orders, which, in turn, meant a lot less profit overall.)
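The profit math behind that decision is worth making explicit. Assuming, purely for illustration, a 1% baseline response rate and equal traffic per variant, revenue per visitor tells the whole story:

```typescript
// Revenue per visitor = price x response rate.
// Response at $139 and $159 was identical; $169 cut response in half.
const baselineResponse = 0.01; // assumed 1% response, for illustration only

const variants = [
  { price: 139, response: baselineResponse },
  { price: 159, response: baselineResponse },     // same response, $20 more per sale
  { price: 169, response: baselineResponse / 2 }, // half the orders
];

for (const v of variants) {
  console.log(`$${v.price}: $${(v.price * v.response).toFixed(2)} per visitor`);
}
// $139 earns $1.39 per visitor, $159 earns $1.59, and $169 only about $0.85,
// so the $159 price is the clear winner.
```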

It’s important to make sure that any changes you make contain as many roars and “big picture” changes as possible.

This way, you'll be running real split tests that mean something. You will see improvements much more quickly, and most of your tests will reach statistical significance, which is much more rewarding.

Another thing to remember is to focus on simple, fast, and cost free changes. Don’t switch to an entirely new CMS that will take a long time to implement and cost a lot of money. Start with something simpler.

Look to your previous research as well. For example, your research may uncover that a certain need was not being met by your current products. You can introduce an existing product that meets that need into the sales funnel and probably see drastic increases in conversion.

Phase 8: Think Outside the Box

When running your split tests, try hard to look beyond the first ideas that come to mind. Definitely look at non-page factors you can play with. (Phase 3 will help you with this, too.)

As an example, in one mail campaign, sales response was boosted by more than 50% without any changes to the actual sales letter; only the envelope was changed. That is the difference between 100 sales and more than 150 sales from the exact same sales copy.

Thinking like this will put you light years ahead of your competitors. It’s something lots of split testers don’t do, so it will give you the competitive edge you need when trying to stay ahead of the pack.

In business, all these advantages add up.

Phase 9: Design Your New Webpage

At this stage, we’ve done our research and eliminated all the assumptions. Now, it’s finally time to design the new webpage we’ll be testing.

If you've stuck to being methodical so far, the chances of success are high. You also shouldn't have any problems designing the new page, because the previous research steps will tell you exactly what is needed on it.

Conclusion

As you can see, this system is a well-refined process. When setting up any split test, don’t be afraid to aim high. Try to avoid starting out with a list of split tests. Don’t jump in and start tweaking button colors when they may not mean a thing.

Instead, use the process above, and it will help you steamroll the competition if you’re ever going head to head with a tactical split tester (which will be more often than not).

Good luck!

About the Author: Michael Maven is an author, speaker, coach, and business growth expert. In addition to creating systems and founding businesses, he has grown sales for Amazon, IBM, eBay, and 888.com. See detailed case studies at the Carter & Kingsley website.

iOS: A/B Testing, Dealing with the App Store, and Moving Fast

Lately, I've been meeting with founders and CTOs regarding the challenges of A/B testing on iOS, and I've found that I'm repeating myself a lot.

Releasing frequent updates and running tests on iOS aren’t as easy as on the web: you have to push to the App Store and wait for your users to update before they even receive the experiment.

The Problem

The issues regarding testing on iOS mainly involve the following:

The App Store’s slow review process.

Users not updating frequently enough to always be on the latest version (which is also a problem if you’re multi-platform).

Testing features across multiple platforms and being in sync.

I believe these are just small setbacks and shouldn’t stop you from moving fast and testing your hypotheses.

The Solution

At Frank & Oak, we try to follow a system to work around these problems and move fast. We've been refining the system for a while. When we started using it, we had only one developer, a part-time designer, minimal back-end resources, and a product manager (me!) working on the product. So, it should be possible to implement with any team size.

The system consists of building everything as a test and releasing every 2-3 weeks. Here it is:

Build Everything as a Test

Early on, we made the decision that everything we build should be a test (with the obvious exception of bug fixes).

We instituted this rule after mistakes we made re-launching our website in early 2013. That launch dramatically decreased our signup and purchase rates; but, because it was not run as a test, we couldn't tell why performance dropped. We were blind.

The move to running everything as a test is definitely extreme on our end. But, since we’ve put this system in place, we know how we’ve improved (or failed to improve) each metric we track. Better yet, we can attribute changes in metrics to specific features we’ve released. For example, I know that the conversion rate on our iOS apps has increased by about 60-70% since last year, and I can easily list all the features and improvements we’ve built to make that happen.

Generally speaking, I recommend this for almost every product. Everything you build alters the experience for your users, which could have drastic effects on the performance of your app and sales of your product, positively or negatively.

This also enables us to push features to iOS first. If web or Android are behind, we can make sure that multi-platform users don't see those features until every platform is ready. Being multi-platform shouldn't slow you down.
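Here is a minimal sketch of that gating idea in TypeScript. The shape of the config is an assumption for illustration, not Frank & Oak's actual system:

```typescript
// Every feature ships behind an experiment flag, with a per-platform
// readiness switch so multi-platform users never see a half-released feature.
type Platform = "ios" | "android" | "web";

interface Experiment {
  name: string;
  rolloutPercent: number;             // share of users who get the variant (0-100)
  readyOn: Record<Platform, boolean>; // which platforms have shipped the feature
}

function isEnabled(
  exp: Experiment,
  userBucket: number,        // stable 0-99 bucket derived from the user ID
  userPlatforms: Platform[]  // every platform this user is active on
): boolean {
  // Hold the feature back until every platform the user touches is ready
  const allReady = userPlatforms.every((p) => exp.readyOn[p]);
  return allReady && userBucket < exp.rolloutPercent;
}

const newCheckout: Experiment = {
  name: "one-page-checkout",
  rolloutPercent: 50,
  readyOn: { ios: true, android: false, web: true },
};

// A user active on both iOS and Android keeps the old checkout until Android ships
console.log(isEnabled(newCheckout, 12, ["ios", "android"])); // false
```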

Release Every 2-3 Weeks

We try to push a build to the App Store every 2-3 weeks. This has caused a few interesting side-effects for us:

1. It has decreased review time for the App Store.

On average, our app is reviewed and published in 1-3 days. We’ve noticed that the more regular and consistent our releases have become, the quicker the review process has been.

This could be due to Apple prioritizing apps differently when they’re following a schedule, or the review team getting used to our app, or some other factor that we haven’t thought about.

In any case, this has enabled us to keep pushing releases at the same rate and not worry too much about how long it will take to get on the store.

2. It has trained our users to update the app quickly.

More than 90% of our active users download the latest update within 2-4 days. This has been a great side effect of frequent updates, giving users a nudge to open the app and keep up with the changes.

We make sure to notify our users that a new version of the app is available for them by displaying an overlay in the app. It has definitely helped accelerate the adoption of the latest versions.

3. It has trained us to simplify features and provide a better user experience.

This approach has forced us to be proactive with designing upcoming features, as well as prioritizing back-end resources, to make sure we have everything ready when we start building for the next release.

The benefits outweigh the challenges that come with releasing so frequently and make us better at focusing on the right things to build.

Conclusion

By testing every feature, you can measure and understand the impact of everything you’ve built. This will keep the focus on improving the experience for your users and moving the business forward as a whole.

You can also move fast by organizing your team and training your users to get used to the speed. However, such processes and philosophies will not work unless you’re truly committed to them throughout your organization.

We’ve spent a lot of time building a culture of experimentation, to the point that every person involved thinks about the metric each project is working to move, be it the developers, the designers, or the customer service agents.

I encourage you to do the same.

About the Author: Nima Gardideh is the Product Manager, Mobile, at Frank and Oak. You can follow him @ngardideh or subscribe to his mailing list for more posts about mobile, experimentation, and product management here.

How to Launch Website Tests Successfully

Recently, an unmanned NASA rocket burst into flames just seconds after liftoff. The disaster perfectly illustrates how easily things can go wrong when you’re dealing with a complex system.

It certainly holds true at my own job where I often test website changes with KISSmetrics.

In many ways, a website test is like launching a probe into deep space. There are many points where human error can enter into the equation. And, each test is expensive to implement when you consider the opportunity cost of developers, project managers, analysts, and designers.

It's worth taking the time to ensure each test is carefully vetted because, as with deep space exploration, once you launch a test, there is no way to get it back.

My team has spent the last quarter prepping for a major homepage refresh. Before releasing it to 100% of our visitors, I will need to show that the new homepage is outperforming our current version. In the meantime, using my experiences on this project, I have put together a brief guide of pitfalls to avoid and steps to take in order to prevent your tests from blowing up in your face.

Fail Forward

Everyone makes mistakes. The trick is to not make the same mistake twice and, where possible, to learn from the mistakes of others. Below, I’ve listed some common missteps in descending order of severity:

Your approach lacks a cohesive strategy.

Are you running tests without a clear question in mind? Why? If you haven’t written down the questions you are trying to answer and involved at least one other person in the process for a sanity check, you lack a cohesive strategy. You will undoubtedly fail.

Your website implementation has errors.

If you put your event ID tag in the parent div of a button you want to track instead of the link, that’s an implementation error. If you forget to put tags on both versions of a page, that’s an implementation error. Be careful when making coding changes. Check them twice, and then check them again.
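For instance, attaching the tracking call to the link itself, rather than to its container, keeps you from recording clicks that never reach the button. A generic sketch, where `track` is a stand-in for whatever analytics call you actually use:

```typescript
declare function track(eventName: string): void; // stand-in for your analytics library

// Wrong: a click anywhere inside the wrapper div fires the event,
// even when the user never clicks the actual link.
document.querySelector("div.cta-wrapper")
  ?.addEventListener("click", () => track("Clicked signup"));

// Right: the event fires only when the link itself is clicked.
document.querySelector("a.cta-button")
  ?.addEventListener("click", () => track("Clicked signup"));
```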

You failed to consider scale.

This is an error of omission. Are there additional metrics worth capturing that would benefit stakeholders? Minor additions to the design of your test can require marginal effort and have a big impact.

You implemented your events incorrectly.

KISSmetrics is the most forgiving aspect of the implementation. If your event name conflicts with something you are already measuring, you can always change it on the fly. If you use regex logic in your “Visit the page,” you will have only heartache; but, never fear, you can change that, too. KISSmetrics has its own proprietary way to match multiple URLs. I have been told regex support is forthcoming. The point is that changing the KISSmetrics settings on the fly is like pushing a software update to deep space. Just be sure to make a note of when you started collecting good data.

Identify Mission Goals

Imagine a NASA scientist sitting in front of the House Subcommittee on Space:

Representative: “So, before we approve this budget, we would like to know what impact this deep space probe will have on the body of scientific knowledge. What is your mission?”

NASA scientist: “To explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.”

Representative: “…”

NASA scientist: “Sorry, I’ve always wanted to say that in an official capacity. Seriously, though, we aren’t really sure. I guess our mission is to collect space dust and stuff.”

If you don’t plan out your strategy with clear goals, you are no better than a NASA scientist who wastes public money and a good Star Trek reference. Fortunately, these four easy steps can help you do better:

1. Collect a wish list from key stakeholders.

Most website testing initiatives grow out of a desire to better understand user behavior and optimize the number of users taking a desired action. Make sure you understand the goals of your tests and how they may differ from department to department. For example, for our homepage tests, marketing wanted to improve registrations, while UX wanted to know which parts of the design were most engaging.

2. Ask if the scope is feasible.

You might not be able to answer every question with the same test. Prioritize the questions that have the biggest impact on the company and limit event measurement to those that make sense together.

3. Work your way backward from the desired results.

Start your test with the end goal in mind. Once I created the slides of what I wanted to show, I worked my way back to the reports I would need to run. With the report structures firmly established, I could think about what I should be measuring and how I would do so.

4. Visualize your approach.

Mapping out your testing approach will help you communicate with stakeholders. I use Lucidchart to create flowcharts and collaborate with team members on tweaking the details of implementation before we make our first code change. A map is especially helpful with complex flows. Our homepage test introduced three new persona-specific landing pages, so it was hard to keep track of all the moving parts. Having a map of the flow helped us keep track of everything and made any holes in our logic immediately apparent.

Get Ready for Launch

You have your mission goals firmly established, so let’s execute on the details of pre-launch preparation.

1. Establish clear protocols.

For my tests, I put together a spreadsheet that laid out every event, where to place it, and which reporting goal it helped us accomplish. This spreadsheet wasn’t made just for developers and QA. It also will help me remember the purpose of each event when I have to start crunching the numbers in about three months, which is when we’ll have enough data to make a final decision.
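That spreadsheet can also be mirrored in code so the protocol survives alongside the implementation. A hypothetical shape, with made-up selectors and names:

```typescript
// One row per event: what fires, where it lives, and which report needs it.
interface EventSpec {
  eventName: string;     // the exact name that will appear in reports
  selector: string;      // where the tag is placed on the page
  reportingGoal: string; // the question this event helps answer
}

const homepageTestEvents: EventSpec[] = [
  { eventName: "Viewed homepage B",  selector: "body.variant-b", reportingGoal: "Assign visitors to a variant" },
  { eventName: "Clicked signup CTA", selector: "a#signup-cta",   reportingGoal: "Registration conversion rate" },
];
```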

2. Identify the launch vehicle.

KISSmetrics isn’t going to serve up the A/B experience, so figure out what you are going to use and make sure you know how to use it. We use either Google Analytics Experiments or an in-house solution, depending on whether we want different URLs for the A and B versions.
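If you go the in-house route, the core of the launch vehicle is just a deterministic assignment, so a returning visitor always sees the same version. A minimal sketch:

```typescript
// Deterministically bucket a visitor ID into A or B. Hashing the ID
// (rather than randomizing on every pageview) guarantees a returning
// visitor always gets the same experience.
function assignVariant(visitorId: string): "A" | "B" {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // simple 32-bit rolling hash
  }
  return Math.abs(hash) % 100 < 50 ? "A" : "B"; // 50/50 split
}

console.log(assignVariant("visitor-8f3a")); // stable across visits
```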

3. Diagnostics check.

As part of our release schedule, we push a sprint’s worth of code to staging for a week of QA testing. When the pages with the KISSmetrics tags are pushed to staging, I can implement the A/B test and have QA go through the test experience to make sure events are firing properly.

4. Countdown to launch.

Assuming everything looks good, I then wait for DevOps to push staging to the production servers. Your own deployment may be different.

5. Launch test.

Congratulations, you did it! Now wait for that data to roll in. You’ll be increasing your company’s profitability at warp speed.

The Lucidchart homepage test is currently live, so feel free to check it out. Bonus points for anyone who finds the Easter egg! If you’d like a deep dive into our specific implementation, you can read about that on our tech blog.

About the Author: Brad Hanks is the Director of Marketing at Lucidchart. You can follow him on Twitter @iambradhanks.

How to Optimize Your Website Messaging to Increase Conversions

Have you been to a website recently where you read every word and absorbed every image on the homepage, but you still weren’t sure what the business actually does? Those companies have spent a great deal of time developing a beautiful website and probably even more time on the product or service. But, they have overlooked a crucial element that can make the difference between a lead and a bounced visitor – the messaging.

Considering all the time you put into your business and website, it would be disastrous if no one understood what your business actually offers. Your website messaging, including text and images, lies at the core of how you communicate the benefits of your business. Clearly communicating the value a business creates is something all business owners and website owners must get right.

Your Website Visitors Are Impatient

Your business solves a problem and creates value for your target audience. But, first-time visitors don’t know that yet, and they are quick to judge. Tony Haile, CEO of Chartbeat, says an average reader will stay on your page for just 15 seconds. This is a very small window for you to convince them that your product/service meets their needs.

Website aesthetics are important and should not be ignored, but tweaking website design elements to increase clicks will go only so far. A green or red CTA button won’t make a difference if people don’t understand in those first moments how your business creates value for them.

This article will look at the components of successful website messaging and how you can apply (and test) them on your website to bring in more leads for your business. To demonstrate just how powerful the right messaging can be, I will share A/B test results in which changes to just a few lines of text on our website boosted conversions by up to 27.3%.

How to Get Your Messaging Right

Communicating your message clearly isn’t always as easy as it sounds. First, it’s a common mistake to believe your visitors see your website the way you do and already know the benefits of your product/service. They don’t, and educating them is dependent upon your website messaging.

Second, many website owners fall into the trap of throwing around buzz words that sound great but actually tell little about what the business provides.

Eugene Schwartz was a legendary copywriter whose material is still highly applicable to marketing and business today. He encourages marketers to enter the conversation prospects are already having in their own heads. To get your messaging right and capture your visitors' interest and trust, you must answer the questions running through the minds of your target audience.

There are three important components of website messaging that we will cover below:

What is your business about?

What makes you different?

Have you set the right expectations?

Three Components of Website Messaging

1. What is your business about?

It’s important to quickly communicate the value your business creates for your target market. Although this sounds obvious, there are many websites that overlook this.

Think about your homepage elevator pitch text. When you meet someone for the first time and describe your business to them, do you use those same words (shown on your homepage)? Do the new acquaintances understand, or do you need to tell them more so they really get it? If they need further details, then perhaps your homepage text is not clear enough.

Here’s a fun social experiment: Ask some people who are not familiar with your business to read your homepage. Don’t let them click through to different pages. (If they need to click through your site to understand your business, then that is a problem.) After they have read your homepage, ask them to explain to you what your business is about. This can be a funny task, but their responses may shock you.

People don’t know your business like you do. To get them up to speed in a matter of seconds, you have to be crystal clear.

Let’s take a look at Zuora, which has a nice website, but its homepage text makes it a little difficult to understand the value it creates.

The Zuora website has a very nice, modern, and cool-looking homepage with some very eye-catching moving elements. It’s a great design. Unfortunately, it’s not clear from the homepage what they actually do. The elevator pitch informs visitors that it has something to do with the way people buy.

Scrolling down the homepage, we learn that this product is targeted at subscription-based businesses. This means that I personally fit in their target audience, so this homepage should really have struck a chord with me. But, based purely on the homepage, it’s not crystal clear how they create value.

As I said, I like the design of the Zuora website and think they have done a great job there. But, I think they can better explain to new visitors the value they create for SaaS businesses by improving a few areas of their homepage text.

Next, let’s look at Clarity, which has great messaging in their homepage elevator pitch heading: “On Demand Business Advice for Entrepreneurs.”

That’s a great heading to open with, and it paints a clear picture. The first sentence under the heading (“Clarity is a marketplace that connects entrepreneurs with top advisors…”) really makes it very clear what the business does.

Next is Kinnek. Simply based on what you see in the screenshot of their homepage, you learn that this website provides businesses with a better way to order supplies, giving you a very clear impression of the value they create for their customers.

Present a Visual of What Your Business Does

It’s not always easy to clearly communicate what your business is about in a short, concise elevator pitch. The good news is that we are all visual learners, and “a picture is worth a thousand words.” You can use an image on your homepage to help visitors perceive the value your business creates.

However, you must be careful with this. The wrong picture will communicate the wrong words. Your homepage image should fit your elevator pitch and support what your business does.

Now, if we move back to the previous example, Kinnek, we see a woman picking up packages in a warehouse. The warehouse setting is very suitable when someone is thinking about ordering supplies for their business.

Dropbox has a changing image on their homepage. The same screen content is displayed on a laptop, tablet, and phone (shown below). Then, the screen content updates across all the devices. This depicts how you can access your files anywhere, and it reinforces the short (but strong) title on the right-hand side: “Your stuff, anywhere.”

When you can clearly communicate to a new visitor what your business does in the first few seconds, you answer the most important question in their mind.

2. What makes you different?

We work in a competitive world where, regardless of how unique we think our product is, there will be a similar service out there. If you don’t communicate how you are different, your visitors may assume you’re the same as the rest, and bounce. Try to prominently highlight the strengths of your business or product on your homepage.

With your website and messaging, your goal is not to convince your visitors to buy something, but to reveal their needs and highlight your product as the solution. Think about your target audience, their problems, and their professional interests. Which of your unique selling points (USPs) will appeal to them the most?

Not only does Xero have a great elevator pitch heading (“online accounting software for your small business”) which instantly tells us what their business does, they have listed the top five reasons for choosing Xero. Each point highlights a strength of their service that they believe will appeal to their target audience. This is a very strong way to communicate to someone from their target audience why they should be interested in signing up for Xero.

Shippo is an example of a business which prominently displays the three benefits of its service on its homepage. In fact, other than the elevator pitch, a header, and a footer, there is almost nothing else on the homepage. They really want to highlight these three benefits to all their visitors, and they do a good job of it. You instinctively read the text and quickly learn that Shippo (1) has an easy shipping process; (2) integrates with many services; and (3) has simple, cheap pricing. Within a few seconds and without clicking anywhere else, all visitors have a clear understanding of Shippo’s strengths.

3. Have you set the right expectations?

Web psychologist Nathalie Nahai explains that “one of the biggest barriers to gaining new clients is lack of trust” and that the context of your website messaging can influence your online credibility.

Not only should your website messaging communicate the value you create and what differentiates you, it also plays an important role in establishing credibility and converting visitors into leads.

Unfortunately, many websites lure visitors in with clever messaging, promising the world, but they fail to deliver. This makes the average visitor skeptical that you are the “Real McCoy.” Therefore, your messaging must set the right expectations and reassure visitors that you will deliver on your promises.

Let’s start with Huddle and how they communicate openly and put the visitor at ease:

The Huddle sign-up page is a simple and effective page that sets the expectations for what will take place after signing up: new users will get to see Huddle in action. The CTA button text, “Next,” tells a visitor there will be a second step in the sign-up process. If that button said “Get Started” and then asked for more personal information, this would have gone against the visitors’ expectations, and Huddle would have lost credibility. It’s also clear that Huddle won’t require new users to provide credit card details, which makes signing up for a new service more tempting.

Another good example is Campaign Monitor which, with just a couple of lines of text above the sign-up form, does a great job of assuring visitors they can test the full service without paying a cent:

There is an abundance of services out there with “free trials”; however, many come with limited functionality. New visitors interested in testing Campaign Monitor know they get access to all features and won’t be charged until they send their first email campaign. Also, as opposed to Huddle’s sign-up process, Campaign Monitor uses “Create my account” as its CTA button, which tells visitors there is no second sign-up page and therefore no credit card required.

When asking visitors for personal information, you will gain their trust by being open and up-front about the process you are asking them to engage in, whether it's a download, a sign-up form, a purchase, or a contact form.

How to Test Your Website Messaging

You may now be thinking of a few areas of your messaging that you can optimize to better communicate your business and boost conversions. One thing that must be stressed is, just like with any design or layout changes you make to your website, you should test and measure the results of changes to your messaging. You can accomplish this via A/B testing.

I recently ran several A/B tests on our homepage and a separate landing page where the results demonstrate the importance of good, clear website messaging. These experiments address the three components of good messaging covered above in this article.

Not only is website messaging crucial, it is also thankfully the easiest thing I have ever A/B tested as part of our CRO efforts. This is because it involved changing only a few lines of text and one image. In fact, the A/B tests in question each took less than 30 minutes to set up and start. Later, once I had the results, it took me even less time to implement the changes live on our website.
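Mechanically, a messaging test really can be that small: pick a variant, swap a few strings. The sketch below is generic; the element IDs and copy are made up, and `assignVariant` stands for any stable bucketing helper like the one sketched earlier in this feed:

```typescript
declare function assignVariant(visitorId: string): "A" | "B"; // any stable bucketing helper

const copy = {
  A: { headline: "Online Meetings Made Easy",    cta: "Create Free Account" },
  B: { headline: "Share Your Screen in Seconds", cta: "Create My Free Account" },
};

function getVisitorId(): string {
  // Persist a random ID so the same visitor always lands in the same variant
  const existing = localStorage.getItem("visitorId");
  if (existing) return existing;
  const id = Math.random().toString(36).slice(2);
  localStorage.setItem("visitorId", id);
  return id;
}

const variant = assignVariant(getVisitorId());
document.querySelector("h1#pitch")!.textContent = copy[variant].headline;
document.querySelector("a#signup-cta")!.textContent = copy[variant].cta;
```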

Test 1: What is your business about?

Our most significant improvement in the area of what our business does came from changing and testing our homepage elevator pitch image.

Prior to the test, the image was of a man sitting at a desk, wearing a headset, and working away at his computer. While this was suitable to our target market, it was also very generic and could be applied to many software solutions. We needed an image that would better communicate the value we create – screen sharing for online meetings.

We went with the image you see below. Simply by better depicting what our product does and the value our business creates, we improved our messaging and our visitors responded positively. We changed nothing else in this A/B test, and our conversion rate improved by 18.6%.

Test 2: Differentiate your business

You have probably heard this statement about A/B testing: “Choose and test one variable at a time.” This is great in theory and will allow you to pinpoint the exact changes that lead to positive results. (It worked for changing our homepage image.) However, for the text of your messaging, changing and testing one line at a time can lead to “inconclusive” results.

To obtain significant and conclusive results, my suggestion is to look at your homepage (or other landing page) messaging as a whole and optimize several different pieces of text on the page at once to better communicate your message.

Similar to Xero and Shippo from the examples above, I wanted to better communicate our strengths. In three bullet points, I summarized three benefits of our service that would appeal to our visitors and target audience. We also changed our sign-up form heading and CTA button text. This was to be more open and set the expectations. We wanted to assure visitors that there are no hidden steps in our sign-up process, and we wanted our CTA to better communicate what will happen when they click it.

The result of these changes was another 18.0% improvement in our homepage conversion rate.

Think about what you truly offer that stands out. What is it about your product or service that turns leads into customers? Why do your customers recommend you to others? Why do they buy your service year after year? The answers to those questions should be front and center on your homepage.

For the record, we tested these individual text changes separately, but the results were nowhere near as positive as changing all those areas together. We found the same in the next test below, when we tested several text changes at once.

Test 3: Set expectations and establish credibility

Being open and honest is important to gain your visitor’s trust, and I felt that one of our landing pages was not doing a good job of setting expectations and reassuring our visitors. We ran an A/B test on the landing page in question and changed the CTA text and header (“Create Free Account”) to match the changes on our homepage. We also added a small line of text above the sign-up form: “No credit card or further details required. Just fill out the form below.”

The result was a 23.5% increase in conversions!

We then improved this even further (shown below). We changed the sign-up form header one more time so that it really told the visitor what they would receive by registering, “Experience all the features,” which is in line with the Campaign Monitor messaging from the example above. We also changed the small line of text just above the first field and replaced the “No credit card or further details required.” part with “No obligations. No risk.” which we felt was even stronger. And, we changed the CTA button text to “Create My Free Account.” Remember Eugene Schwartz’s advice to enter into the conversation your prospect is having in their own head.

We boosted conversions by another 27.3%!

Summary

By better communicating the value your business creates and how you differ from the competition, while setting and delivering on expectations, you can fully explain your business to visitors and boost conversions.

Have you made similar optimizations to your website messaging? Did you achieve similar results? Please share your findings below.

About the Author: Andrew Donnelly is the Online Marketing Manager at Mikogo. He manages the Mikogo websites and is responsible for coordinating all product marketing projects, including content creation, communications, CRO, social media, and press relations. Follow Andrew on Twitter: @mktad.

17 Testing Tools for Mobile UX

Below, you will find 17 tools that will help you test various features of your app and obtain real-time feedback.

The tools will help you discover where your users are struggling and, thus, how to improve your app.

Userlytics

Best feature – Recruiting couldn't be any simpler. You can send an invitation through your database, post it on social media sites, or use any third party for this. There's also a participant panel at Userlytics that you can use.

Features

Userlytics’s Mobile App User Experience Testing lets you observe usability of apps on iOS. The tool just provides you with the raw session video file. It has no data on the items clicked, movements, or gestures. Akin to UserTesting.com, it has an ever-growing panel of users who are already equipped with a PC, microphone, and webcam.

When setting up tests, you can set the tasks you want the users to complete and also set the time duration for the test.

Another option is to include a survey at the end of the test that the test takers have to answer.

Works on – iOS / iPhone / iPad.

Pricing – Userlytics works on a freemium model. A free account gets 7 credits each month to perform tests, plus a one-time sign-up bonus of 33 credits, for 40 credits in your first month.

A mobile usability test costs 18 credits per participant, so a test with 10 participants would cost 180 credits.

Applause

Best feature – The targeted group of survey participants who are handpicked to test your app. What separates Applause from other such services is that you can have a consultation with an expert at Applause, who then chooses the ideal participants based on that consultation.

With Applause’s team of usability experts, your app receives the feedback it needs to reach high usability. Detailed reports, consultations, and even mock survey questions are all part of the package.

There are over 100,000 professional testers to assist you in the quest.

Works on – iOS / iPhone / iPad / Android.

Pricing – The pricing is not shown on their website, but they do have a pricing estimator through which you can calculate your costs.

Best feature – The ability to see exactly what users are doing on your app. With session recordings, touch heatmaps, and analytics, you have a window into the mind of the user. This can quickly reveal usability problems with the app.

Features

Appsee’s in-app mobile analytics platform tracks each and every interaction of the user with your app.

What makes it different from Userlytics is that it shows you what your actual app users are doing. Unlike Userlytics’s test panel (where people are paid to test your app), at Appsee, a few lines of code put you in the shoes of your customers.

With a paid panel of testers, you need to specify what they have to do. Seeing what a user does with the app is an entirely different experience because each person can use it differently, even in ways you wouldn’t think of.

Here are the features of Appsee:

User Recordings: The User Recordings feature captures every screen, tap, swipe, and action in a session. You can decide how many user interactions you want to record and segment by demographic and even specific screens (e.g., checkout screens to see what causes cart abandonment).

Touch Heatmaps: All touch gestures – swipe, tap, and pinch – are aggregated into a single touch heatmap. It shows which parts of the screen users spend most of their time on, so you can place your CTAs optimally.

Real-time In-app Analytics: In-app analytics show you how users interact with an app’s screens. This feature can help you discover which screens have a higher quit rate, which screens are causing confusion, and the top actions by users.

In addition, Appsee can send you crash reports and reports on unresponsive gestures, and it can help you set up a proper conversion funnel.

Works on – iOS / iPhone / iPad.

Pricing – Appsee offers a fully functional free trial. However, the pricing plans aren’t disclosed on the website. For pricing, you need to contact them.

Best feature – In-app campaigns. With these, you can run extremely targeted campaigns – campaigns that can be targeted right down to an individual user and that are customizable and testable. It’s marketing automation at its best. Segmentation goes along with this feature, letting you build both small and large segments of users with specific traits.

Features

Swrve calls itself a complete app marketing solution, but its most powerful and unique feature is its A/B testing tool. Swrve was initially developed for game developers, but there’s no reason you can’t use it, too. As a marketing tool, it offers many features; for our purposes, we’ll focus on three of the most powerful:

A/B Testing: Swrve can handle any number of variants while conducting an A/B test. The control group can be limited to any size you want. If you want 10% of your users to see the variation, that can be done in a few clicks.

Segmentation: It lets you build target groups based on several criteria. User demographics, age group, paying user, device used, gender, and users playing a particular level are just a few examples of those you can choose from. A/B testing can be combined with user segmentation, too.

Analytics: It offers easy-to-understand charts for all Key Performance Indicators (KPIs), such as revenue, conversions, Daily Active Users (DAU), Monthly Active Users (MAU), and retention rates.

Works on – iOS / iPhone / iPad / Android.

Pricing – Swrve offers a fully functional free trial which supports up to 10,000 monthly active users (MAU). The Flex plan supports 20,000 MAU and is priced at $200 per month. The professional edition supports 250,000 MAU and starts at $2,500 per month. If your app’s appetite is even bigger, they have custom plans, too.

You can create a remote usability test (e.g., tasks and questions) to see how people actually use an app, and invite participants via social media to perform the tasks. You will get a report with the task completion rate, time spent per task, most common success page, most common fail page, most common first click, and common navigation paths.

Works on – Web based / iPad.

Pricing – The pricing is on a per-project basis, and each project costs $350. You also can buy a license for $1,900 to $9,000, depending on your needs.

Best feature – Its A/B testing platform that lets you show different variants to different users to discover which variant performs the best.

Features

Much like other A/B testing platforms for mobile, Arise offers a native iOS and Android A/B testing platform. It integrates into your app via an SDK and lets you test an unlimited number of features and variations. The dashboard shows how conversions have changed, for better or worse, over time.

It helps you track goals, like in-app purchases, new account creations, and improved CTR on ads.

You decide what user ratio sees the variation and at which stage.

Works on – iOS / iPhone / iPad / Android.

Pricing – Arise operates on a Freemium model with the basic version starting at $0. You can run A/B tests on one app with up to 100 monthly active users (MAU).

The professional plan is €299/month and supports one app and MAU up to 10,000.

The business plan is priced at €649/month and supports 2 apps and MAU of 100,000.

Best feature – This is one of the very few companies that offers usability testing as a managed service, so you can rest assured you will get a fully customized engagement that is in tune with your needs.

Features

They identify problems with your mobile app by testing the key journeys users take through it. You also can use the service to analyze the app’s cross-platform behavior and its loading times on various networks.

Mobile usability testing is one of the services SimpleUsability offers.


With 500+ mobile devices, Keynote features one of the most exhaustive lists of devices ever for testing apps.

Features

With Keynote, you have two options for app testing. The first is a cloud-based platform, Device Anywhere, and the second is a mobile testing environment, Keynote MI.

Keynote’s testing facility consists of a network of over 2,000 interconnected mobile phones and tablets called Device Anywhere. Unlike simulated platforms that offer only limited functionality, live phones on different carriers worldwide are the ultimate solution for testing apps. Screenshots of all tests are captured and backed up so that you can review them whenever you want.

The second option, Keynote MI, supports device emulation in a WebKit browser. You can record a script on one device and run it on any other, and the library includes over 2,200 ready-made scripts for testing mobile apps.

Works on – iOS / iPhone / iPad / Android.

Pricing – Keynote offers a trial that supports a dozen devices. Prices for the PRO plan and Enterprise Plan are not shown on the website, but the former has a fully functional trial and the latter comes with a demo.

Best feature – UserZoom’s greatest feature could be that it provides you with people who test apps in their natural environment. You can see users’ facial expressions as they work through each feature, the time they take on tasks, and a lot more. With no cue cards or artificial environment, this could be one of the best tools for usability testing.

Features

UserZoom provides a remote unmoderated testing environment for testing mobile apps. Unlike artificial environments created using a panel of people in a lab, UserZoom provides a natural testing environment similar to Userlytics (discussed earlier) where participants test the mobile apps in their own homes with their own devices.

After building the study design via UserZoom, you can invite participants via social media, email, or from a panel provider. Tasks are given to them and their feedback is recorded. You can see real-time data, including success ratios, time on task, clickstreams, heatmaps, video, audio, facial expressions, mouse movements, responses to the questionnaires, etc., which can be accessed via the Analytics dashboard and exported to PowerPoint, Word, Excel, or SPSS.

Best feature – Apperian’s app distribution makes users an active part of the feedback system by letting them provide feedback through app ratings and comments. This kind of crowdsourcing helps users propose improvements to existing apps or ideas for new ones, and it drives down the cost of collecting feedback.

Features

Apperian can be used for mobile app testing. Its app distribution takes care of sending the app’s link via SMS, email, and chat to a wide audience. This testing community consists of testers outside the company circle, such as consumers and contractors, allowing you to quickly test variations of the same app to see which one is most popular.

Best feature – One of the very few physical usability testing kits for testing apps. You can set up your own panel of participants with Mr. Tappy.

Features

Mr. Tappy provides insight into what users are doing with your app. It was originally designed as a filming rig for usability testing on the iPad. Now, it serves as a recording kit that captures user interactions with any mobile device from the user’s point of view.

It’s a hardware rig made of aluminum, and the entire thing can be assembled by hand-tightening its nuts; no tools are required.

Works on – iPhone / iPad / Android.

Pricing – The Mr. Tappy kit costs $295, which includes worldwide shipping.

Best feature – It captures user app interactions in their natural environment. The video playback at 60 frames per second is very detailed and clear.

Features

Lookback records users’ voices and gestures, along with their screens, as they use your app. Its essence is capturing genuine customer experiences: users are not in a laboratory, constrained by tools and wires. Video of the user comes from the device’s front camera alongside a screen recording, so you can see the reactions on their faces and understand whether they are struggling with the app.

What sets Lookback apart is that not only does it capture pure experiences, it does so in style. At 60 frames per second, Lookback’s recordings are far smoother than those of competitors that capture as little as one frame per second.

Works on – iOS 8.

Pricing – According to their website, Lookback is completely free while in beta.

Apptimize provides a reliable A/B testing platform for apps. You add a few lines of code to your app and, voilà, you get an interface where you can run experiments and push changes.

Multiple experiments can be run with Apptimize.

Works on – iOS / iPhone / iPad / Android.

Pricing – It works on a Freemium model. The basic version that can support up to 25,000 monthly users is free. The standard version costs $300 per month, and the Professional plan is priced at $1,000 per month, both of which come with a free trial. If you have more than a million customers, then the price is $5,000 per month (free trial available).

Leanplum provides flexible A/B testing for mobile apps, with no coding whatsoever required to use it. What makes Leanplum stand out is that you can test and push changes without waiting for App Store approval.

Leanplum also offers powerful segmentation, and users can be segmented based on demographics, behavior, and device type. You can set custom parameters to segment users.

Vessel is an A/B testing tool that helps product managers run deep-layered A/B tests on their mobile apps. You can monitor how variations perform even while tests are running, with zero downtime, and send winning versions out to users alongside your app-store releases. Session times are monitored, so you can see how much time users spend in your app.

You can import all the data and analyze it with SQL or in your CRM. The larger analytics package allows in-browser editing, too.

Works on – iOS / iPhone / iPad / Android.

Pricing – The basic version is billed at $150/month and comes with a 30-day free trial, supports up to 100k monthly active users (MAU), and includes unlimited testing and multivariate testing.

The optimal version is billed at $650/month and comes with a 30-day free trial, supports up to 500k MAU, and includes unlimited testing, multivariate testing, segmentation, and 90-day retention.

The enterprise version also has a free trial, but you need to contact them with your requirements first.

There you have it!

How A/B Testing Works (for Non-Mathematicians)
http://blog.kissmetrics.com/how-ab-testing-works/

A/B testing is a great way to determine which variation of a marketing message will improve conversion rates (and therefore likely improve sales and revenue).

Many of you use A/B testing already, but you may need some help understanding what all the results mean. My goal here is to explain the numbers associated with A/B testing without getting bogged down in mathematical equations and technical explanations.

A/B testing results are usually given in fancy mathematical and statistical terms, but the meanings behind the numbers are actually quite simple. Understanding the core concepts is the important part. Let the calculators and software do the rest!

Sampling and Statistical Significance

The first concept to discuss is sampling and sample size. Whether the results of a test are useful depends heavily on the number of measurements collected. Each conversion measurement from an A/B test is a sample, and the act of collecting these measurements is called sampling.

Let’s suppose you own a fast food restaurant and would like to know if people prefer French fries or onion rings. (If you are already in business, you probably know the answer from sales of each.) Let’s pretend you are not in business yet and want to estimate which will sell more, so you can pre-order your stock of each accordingly.

Now, suppose you conduct a survey of random people in the town where the restaurant will be located, and you ask them which they prefer. If you ask only three people total, and two say they like onion rings better, would you feel confident that two-thirds of all customers will prefer onion rings, and then order inventory proportionately? Probably not.

As you collect more measurements (or samples, and in this case, ask more people), statistically the results stabilize and get closer to representing the results you will actually see in practice. This applies just as much to website and marketing strategy changes as it does to French fries and onion rings.

The goal is to make sure you collect enough data points to confidently make predictions or changes based on the results. While the math behind determining the appropriate number of samples required for significance is a bit technical, there are many calculators and software applications available to help. For example, evanmiller.org has a free tool you can start using right now.
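If you're curious what such a calculator is doing under the hood, here is a minimal sketch in Python of the standard two-proportion sample-size formula (assuming a two-sided 5% significance level and 80% power, the defaults most calculators use; the example rates are hypothetical):

from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    # z-scores for the chosen significance level and statistical power
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for a two-sided 5% test
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    return ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
             + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
            / (p1 - p2) ** 2)

# e.g., to reliably detect a lift from a 10% to a 12% conversion rate:
print(round(sample_size_per_variation(0.10, 0.12)))   # roughly 3,841 visitors per variation

The takeaway: the smaller the lift you want to detect, the more visitors each variation needs.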

Confidence Intervals

It is likely that you have seen a confidence interval, which is a measure of the reliability of an estimate, typically written in the following form: 20.0% ± 2.0%.

Let’s suppose you performed the French fries-versus-onion-rings survey with enough people to ensure statistical significance, which you determined by using your trusty statistical calculator or software tool. (Note that the sample population (demographics, etc.) matters as well, but we will omit that discussion for simplicity.)

Let’s say the results indicated 20% of those surveyed preferred onion rings. Now, notice the ± 2.0% part of the confidence interval. This is called the margin of error, and it gives the upper and lower bounds on the estimate of people who prefer onion rings. It measures how far the estimate is expected to deviate from the true average over many repeated experiments.

Going back to the 2% margin of error, subtracting 2% from 20% gives us 18%. Adding 2% to 20% gives us 22%. Therefore, we can confidently conclude that between 18-22% of people prefer onion rings. The smaller the margin of error, the more confident we can be in our estimation of the average result.

Assuming a good sample population and size, this tells us that if we somehow repeated the survey many times across, for example, everyone in the United States, 95% of the resulting intervals would capture the true share of onion ring fans, which lies somewhere between 18-22%. In other words, we can be quite confident that 18-22% of the people in the U.S. prefer onion rings over French fries.

Therefore, if we are placing an order to stock our restaurant, we may want to make sure that 22% of our onion rings-and-French-fries inventory is onion rings, and the rest is French fries (i.e., 78%). Then, it would be very unlikely we would run out of either, assuming the total stock is enough for the amount of time between orders.
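Here is a minimal sketch in Python of where a figure like “20.0% ± 2.0%” comes from, using the standard normal-approximation interval for a proportion (the survey size of 1,537 is a hypothetical value chosen so the margin works out to about 2%):

from statistics import NormalDist

def confidence_interval(p, n, confidence=0.95):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # 1.96 for 95% confidence
    margin = z * (p * (1 - p) / n) ** 0.5                # the "± x%" part
    return p - margin, p + margin

low, high = confidence_interval(p=0.20, n=1537)
print(f"{low:.1%} to {high:.1%}")   # 18.0% to 22.0%

Notice that the margin shrinks as n grows, which is exactly why collecting more samples makes the estimate more reliable.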

Confidence Intervals in A/B Testing

Applying this to the A/B testing of a website change would lead to the same type of conclusion, although we would need to compare the confidence intervals from both test A and test B in order to come to a meaningful conclusion about the results.

So, now, let’s suppose we put a fancy new “Buy Now” button on our web page and are hopeful it will lead to increased conversions. We run A/B tests using our current button as the control and our fancy new button as the test variation.

After running the numbers through our A/B testing software, we are told the confidence intervals are 10.0% ± 1.5% for our control variation (test A) and 20.0% ± 2.5% for our test variation (test B).

Expressing each of these as a range tells us it is extremely likely that 8.5-11.5% of the visitors to our control version of the web page will convert, while 17.5-22.5% of the visitors to our test variation page will convert. Even though each confidence interval is now viewed as a range, clearly there is no overlap of the two ranges.

Our fancy new “Buy Now” button seems to have increased our conversion rate significantly! Again, assuming an appropriate sampling population and sample size, we can be very confident at this point that our new button will increase our conversion rate.

How Big Is the Difference?

In the example above, the difference was an obvious improvement, but by how much? Let’s forget about the margin of error portion of the confidence interval for a minute and just look at the average conversion percentage for each test.

A 10-point increase seems like a really great improvement, but quoting it alone is misleading since it is only the absolute difference between the two rates. What we really need to look at is that difference compared with the control variation’s rate.

We know the difference between the two rates is 10 points and the control variation’s rate is 10%, so if we take the ratio (i.e., divide the difference by the control rate), we have 10% / 10% = 1.0 = 100%, and we realize this was a 100% relative improvement.

In other words, we increased our conversions with our new button by 100%, which effectively means that we doubled them! Wow! We must really know what we’re doing, and that was quite an awesome button we added!

Realistically, we may see something more like the following. Test A’s confidence interval is 13.84 ± 0.22% and test B’s is 15.02 ± 0.27%. Doing the same sort of comparison gives us 15.02% – 13.84% = 1.18 percentage points, the absolute increase in conversions for the test variation.

Now, looking at the ratio, 1.18% / 13.84% = 8.5%, we see we increased our conversions by 8.5% relative to the control, even though the absolute increase was only 1.18 points. That is still a pretty significant improvement. Wouldn’t you be happy to increase your conversions by eight and a half percent? I would!

It is worth keeping in mind that relative percentages are usually better indicators of change than absolute values. Saying the conversion rate increased by 8.5% sounds a lot better, and is more meaningful, than saying it rose by 1.18 percentage points.
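A minimal sketch of the absolute-versus-relative arithmetic above, using the same example rates from the text:

control_rate = 0.1384     # test A's average conversion rate
variation_rate = 0.1502   # test B's average conversion rate

absolute_lift = variation_rate - control_rate   # 0.0118 -> 1.18 points
relative_lift = absolute_lift / control_rate    # 0.0853 -> 8.5%

print(f"absolute: {absolute_lift:.2%}  relative: {relative_lift:.1%}")
# absolute: 1.18%  relative: 8.5%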

Overlap of Confidence Intervals

One thing to watch out for is overlap of the confidence intervals from the A and B tests. Suppose that test A has a confidence interval of 10-20% for conversion rates, and test B has a confidence interval of 15-25%. (These numbers are obviously contrived to keep things simple.)

Notice that the overlap of the two confidence intervals is 5%, and it is located in the range between 15-20%. Given this information, it is very difficult to be sure the variation tested in B is actually a significant improvement.

Explaining this further, an overlap like this between A/B confidence intervals usually indicates either that the difference between the variations is not statistically significant or that not enough measurements (i.e., samples) were taken.

If you feel confident that enough samples were collected based on your trusty calculator to determine sample size, then you may want to rethink your variation test and try something else that could have a bigger impact on conversion rates. Ideally, and preferably, you can find variations that result in conversion rate confidence intervals that do not overlap with the control test.
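The overlap check itself is simple enough to sketch in a few lines of Python, using the contrived intervals from the example above:

def overlaps(a, b):
    # two intervals overlap when the larger of the two lows sits
    # below the smaller of the two highs
    return max(a[0], b[0]) < min(a[1], b[1])

test_a = (0.10, 0.20)   # test A's confidence interval
test_b = (0.15, 0.25)   # test B's confidence interval
print(overlaps(test_a, test_b))   # True -> the result is inconclusive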

Summary

A/B testing is certainly a technique based on statistical methods and analysis. That said, you do not need to be a statistician to understand the concepts involved or the results given to you by your favorite A/B testing framework.

Sure, you could learn the mathematical equations used to calculate statistics and metrics surrounding your test, but in the end, you are likely much more concerned with what the results mean to you and how they can guide you to make targeted changes in your marketing or product.

We have discussed a variety of concepts and statistical terms associated with A/B testing, and some of the resulting quantities that can be used to make decisions. Understanding the concepts presented here is the first step toward making great decisions based on A/B testing results. The next step is ensuring that the tests are carried out properly and with enough sampling to provide results you can have confidence in when making important decisions.

Online Tools and Resources

Here are some links to tools that will help you with your A/B tests, such as the A/B Significance Test Calculator on getdatadriven.com.

Conversion rate optimization (CRO) is one of the most important aspects of your digital marketing strategy because conversion rate is the only measurable metric that actually correlates with ROI.

Even if a customer “conversion” on your website is something other than a purchase (such as a newsletter signup), the rules of CRO still apply.

Unfortunately, when it comes to implementing a CRO plan, you can get completely lost in a sea of online resources that tell you to do things like change the colors of your buttons, add social proof, shorten your web copy, include gamification… Stop the madness!

Before jumping into tactical fixes, there is only one thing you need to do to optimize conversion rates on your website, and that is what today’s blog post is all about – A/B testing.

While a leap of faith worked for Bruce Springsteen in 1992, it won’t bring you success in the future. So, rather than take a leap of faith on a set of tactics, use web analytics to get a ton of insight based on real-time user feedback. The data can be used to optimize any area of your website based on the real-life behaviors of real-life customers. What could be better?

Of course, you may already have a hunch about what your users prefer and how they consume content, and that brings us to…

It is tempting to make assumptions about your audience based on things like age, gender, location, or income. Resist the temptation when possible! There was a time when customer profiling was the best way (the only way) to target customers; and, yes, it still has its place in marketing.

However, in the digital era we have so many more options! No longer do we have to rely on segmentation to deliver hyper-personalized experiences. We now have the ability to leverage every digital touchpoint as an opportunity to learn about our customers’ preferences on a one-to-one basis.

A/B Testing Rule #2: Always establish a baseline.

Increasing conversion rates is your immediate goal, and if you’re like me, you’re in a hurry. But, before jumping into a high stakes A/B test (or even a low stakes A/B test), it is important to budget time up front to establish a current baseline to measure against. If you don’t know what your current conversion rate is, how will you know if your future tests are successful? (More on that in Rule #5.)

A/B Testing Rule #3: Just because it worked for someone else, does not mean it will work for you.

If CRO were a repeatable process that worked the same way for every website every time, there would be no need for testing at all. Marketers would know the way all e-commerce websites perform, and everyone would follow the same rules.

Unfortunately, this is not the case (and a world full of sameness would be rather boring anyway), which is why you must perform A/B testing on your own unique content with your own unique audience. Sure, you can borrow ideas from other CRO-ers, but don’t expect the same results.

For example, let’s say Company ABC sells shoelaces and Company XYZ sells enterprise software applications. Clearly, the buying cycle looks completely different for these two companies, even if they have customers in common. Company ABC may find that changing its primary call-to-action (CTA) button to green instead of red increases sales by 75%. But, it is not likely that Company XYZ would experience similar results.

A/B Testing Rule #4: Test one thing at a time.

This one is pretty self-explanatory but worth mentioning because it’s important. When performing A/B testing on your website, test one variable at a time so that the results can be attributed cleanly at the end. If you change your headline at the same time you change your navigation, how will you know which of the two variables drove the change in conversions?

Pro tip: If you run a headline test, be sure your test headline works with the rest of your digital touchpoints throughout the sales funnel. Consistency builds credibility.

A/B Testing Rule #5: Do not call a “winner” until statistical confidence is reached.

In A/B testing, statistical confidence refers to the likelihood that the same results can be expected if the same test is run again in the future. In other words, it tells you how confident you can be of the results of your test.

For example, let’s say you perform an A/B test on your shopping cart page where “A” is the use of radio buttons and “B” is the use of dropdown menus. Let’s also say that “B” produces a 75% lift in conversion rate. Obviously, B is the winner, right?

Not necessarily. There are three more facts to consider:

Sample size: Using the example above, if your sample size is 4 people, that 75% means only 3 people preferred dropdown menus. Sure, it’s a good start, but the likelihood of the result holding up across a sample of 1,000 is extremely low; therefore, this test result has a low confidence level.

Percentage: The accuracy of your A/B test results also depends on your margin of error. If, in a sample size of 500, 99% of customers convert when shown dropdown menus, you can be fairly certain that your margin of error is low. If, on the other hand, only 51% of customers convert when shown dropdown menus vs. 49% who are shown radio buttons, random chance leaves you with a larger margin of error, and you should continue running the test until a higher confidence level is reached (the quick sketch after this list shows just how inconclusive a 51/49 split is).

Population size: If the size of your entire audience is 250,000 and your sample size is 25, again, this will yield a test result with a low confidence level. To calculate your recommended sample size, check out Raosoft’s Sample Size Calculator.
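To see how these three facts play out in practice, here is a minimal sketch of a standard two-proportion z-test in Python (the visitor counts are hypothetical, chosen to match the 51% vs. 49% example above):

from statistics import NormalDist

def two_sided_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

# 49% vs. 51% with 500 visitors per variation:
print(two_sided_p_value(245, 500, 255, 500))   # ~0.53, nowhere near significant

A p-value around 0.53 means a split like that could easily happen by random chance, so you’d keep the test running.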

A/B Testing Rule #6: Walk before you run.

This proverb is true in many aspects of business, and A/B testing is no exception. Because customer perceptions and expectations keep evolving, CRO has always been, and always will be, a moving target. You will make mistakes. You will learn from your mistakes. With practice, you will become an A/B testing master.

A/B Testing Rule #7: Get a second opinion. Or a third. Or a fourth.

User testing has never been more important, nor has it ever been easier! Even if you do not have the luxury of a User Experience (UX) Department on hand, there are a number of free and low-cost services that offer usability testing on the fly, such as:

Peek User Testing: Peek is a super easy and quick way to gather qualitative feedback on your website.

The pros: Feedback is generally unbiased, detailed, and free!

The cons: It doesn’t always make sense to test an interface outside of its intended audience. Also, it is difficult to gather a large quantity of feedback using this method due to its time-consuming nature.

Amazon Mechanical Turk: Amazon Mechanical Turk allows you to gather feedback from thousands of real, live people in a short period of time through the use of quantitative research methods such as surveys.

The pros: Generally inexpensive, scalable, and quantitative, and you can pre-select qualifying criteria for your testers.

The cons: This is generally performed via a survey engine, which can introduce artificial filters.

Surveys certainly have their place in marketing, but realize they may not always provide honest feedback the way behavioral feedback captured via your web analytics can. This is because surveys introduce human biases in a way that raw behavioral data does not.

For example, imagine that you are in a hurry to print out important documents on your way to a meeting and, 3 pages into your print job, you find that the ink cartridge needs to be changed. Now, what if I ask you how you would handle this particular situation?

Before reading any further, please pause and think about your honest answer.

You probably said you would change the ink cartridge and continue printing your documents. If this were a survey, I would accept that as your answer.

In a user-testing environment, though, I would note that you kicked the printer 4 times, cleared a paper jam, and hit the cancel button 7 times; and then you changed the ink cartridge. While sorting your documents, you spilled coffee all over your shirt, got frustrated, and had to re-schedule your meeting.

In the survey setting, you didn’t outright lie about what you would do in the situation. You did change the ink cartridge, after all. But in the survey setting, I would have missed out on all the extra behavioral data that happened before and after.

A/B Testing Rule #9: Clearly define your success metric.

Never lose sight of your ultimate success metric. CRO is about conversions. It is not about open rates, click-through rates, tweets, shares, or pins. Unless, of course, tweeting and pinning is the “conversion” on your website. In that case, go crazy with it.

The bottom line: have a goal in mind and optimize your content around that goal. Everything else is a key performance indicator (KPI).

A/B Testing Rule #10: Don’t test whispers.

This saying dates back to the days of direct mail, and it still holds true for online marketing. Avoid testing minuscule elements that have little chance of driving significant change. Use your common sense, trust your intuition, and focus on high-impact tests. For a list of 485 real-life test ideas, check out Which Test Won.

Bonus Hacks

CRO is not just about getting more people to click your buttons. It’s about delivering the right content to the right audience and encouraging them to click the right buttons at the right time. If you’ve A/B tested your entire website, optimized based on the data, and your conversion rates are still lower than you’d like them to be, perhaps you are measuring the wrong set of metrics.

For example, let’s say you own a gourmet cupcakery and your website has a 2% conversion rate. In this example, a customer placing an order for cupcakes is the “conversion.” Here are a few questions to ask yourself:

Is the 2% conversion rate based on all web traffic, or is the 2% conversion rate based on only those who click through to the “How to Order” page?

What are the traffic sources of those who click through to the “How to Order” page?

What are the traffic sources of those with the highest bounce rates?

What are the behavior patterns of those who ultimately convert? Did they watch a video? Browse your photo gallery? Read customer testimonials?

Last and most important question: how can I use this data to better qualify prospects?

When you slice and dice your web analytics in this fashion, a couple of things may happen:

You may find that your conversion rate is better than you had originally estimated.

In any case, this exercise will help you prioritize your A/B testing calendar.
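As an illustration, here is a minimal sketch of this kind of slicing with pandas, assuming a hypothetical visit log (every column name and value here is invented for illustration):

import pandas as pd

visits = pd.DataFrame({
    "source":         ["organic", "email", "ads", "email", "organic"],
    "saw_order_page": [True, True, False, True, False],
    "converted":      [False, True, False, True, False],
})

# overall conversion rate vs. the rate among visitors who reached
# the "How to Order" page
print(visits["converted"].mean())                                 # 0.40
print(visits.loc[visits["saw_order_page"], "converted"].mean())   # ~0.67

# conversion rate broken down by traffic source
print(visits.groupby("source")["converted"].mean())

Even a toy breakdown like this makes it obvious which questions your analytics can answer once the data is segmented.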

A Final Word

Outside of basic functionality like site speed and mobile optimization, there is no single truth or secret sauce to CRO. The only way to know for sure what works with your audience is to run a set of A/B tests and then be willing to implement changes based on the data.

About the Author: Nicki Powers is a Digital Marketing Strategist located in Saint Louis, Missouri, who loves to engage customers and drive sales through the use of emerging technologies. You can follow her on Twitter here: @nicki_powers.

Website Testing Mistakes That Can Damage Your Business – Part 2
http://blog.kissmetrics.com/website-testing-mistakes-2/

Last week we looked at the first two website testing mistakes and talked about a few practical tips to avoid them. Here’s a recap:

Mistake #1: Optimizing for maximizing conversions at the expense of your promise.

Mistake #2: Making it hard for your users to get to your “must have experience”.

Today we are going to talk about the final two mistakes. Watch the video below to learn about them.

5 Psychological Principles of High Converting Websites (+ 20 Case Studies)
http://blog.kissmetrics.com/psychological-principles-converting-website/

You know the feeling when you pour your heart and soul into a promising new A/B test, only to have it flop like an entrepreneur’s first startup?

I certainly do.

Last year, I ran all product and brand marketing for an edutech company. For the first few months, things were amazing. I improved the CTA button, optimized the headline, and removed distracting links. The conversion rate just kept growing.

In fact, our conversion rate doubled in less than a year. Then, suddenly, the floodgates closed. I kept running tests, but for months nothing worked. At best, I could eke out a small, barely significant 3% bump.

That’s when I realized I needed a way to predict which experiments could have the largest effect before I ran them.

Studying behavioral psychology, I started evaluating and ranking potential tests based on informed predictions, and my experiments again began returning significant boosts in conversion.

By applying the following five psychological principles to your testing plans, you can uncover the big gains, too.

1. Law of Pithiness

One of Gestalt psychology’s central principles is the Law of Prägnanz. This law (literally, the “law of pithiness” in German) says that we tend to order our experiences in a symmetrical, simple manner.

We prefer things that are clear and orderly, and we’re afraid of complex, complicated ideas or designs. Instinctually, we know that simple things are less likely to hold unpleasant surprises.

This is why we’re terrified of credit card terms of service, and we embrace simple return policies.

Now, how can we leverage this principle to improve our conversion rates?

Case Study #1: The Sims 3 (128% increase)

Despite owning one of the best-selling computer game franchises ever, the makers of The Sims 3 knew there was still opportunity to improve conversions.

Ultimately, Variation D won (offering a free town after registration), but all six of the simplified variations improved conversions by at least 43%.

So, if you’ve got dozens of amazing features, congratulations! But don’t throw all of them at your potential customers.

Just choose the one that is most powerful.

Case Study #2: Device Magic (35% increase)

Mobile business company Device Magic had built a beautiful homepage filled with lots of bullet points, options for organizations and developers, and even an introductory video explaining how everything worked.

Concerned that it might be too technical and complex, they decided to test a simpler homepage. Free of the video, the bullet points, and even the developer and organization options, this test variation simply featured a short value proposition and a set of clean image sliders.

The new design made it easier for new visitors to understand the product (with none of the jargon), and click-throughs to the signup page shot up by 35%.

Case Study #3: Highrise (37.5% increase)

When 37signals wanted to improve the conversion rate for their popular Highrise CRM software, they decided to test a few dramatic changes.

In this test, they compared their original design with a long form, copy-heavy design. What do you think stands out most about these two designs?

The original design on the left is filled with lots of images, arrows, and headlines. Your attention is directed to twelve different places at once. It’s complicated.

The new long form design couldn’t be more different. The headline grabs my attention before I follow the single arrow to read the exciting details. It’s simple.

Make life easy for your customers by designing your pages with clear hierarchy, and focus on one target action per page.

Case Study #4: Daily Burn (20.45% increase)

Daily Burn (formerly Gyminee) designed a beautiful homepage complete with screenshots, visualized features, and even a fancy live calorie ticker.

Since the entire company at the time was just two founders, they didn’t have the staff necessary to do any big, complicated redesigns.

Over two tests, they removed various parts of the page (including the latest blog post, the live calorie ticker, and the extended footer), gradually focusing the page more on one essential element: the sign-up button.

Each of these two simplification split tests improved conversions by an average of 20.45%.

What unnecessary elements on your website distract visitors from your main goal?

Case Study #5: DesignBoost (13% increase)

When DesignBoost launched, they knew they had lots of A/B tests ahead, and they wanted to start with the tests that could give them the largest gains.

(Side note: While general principles like simple design hold steady across websites, tactics like page length or specific trigger words bring dramatically different results for different websites and audiences. That’s why we test them.)

Just as they had hypothesized, the shorter page did boost signups by 13%.

But here’s the really interesting part: with large changes like the deletion of most of your landing page, you can’t really attribute the results to one factor. Sure, the landing page is shorter, but it also lost the fourth headline, the comparison chart, and the third product image.

Macro changes can achieve large results, but micro changes likely will contribute more and better knowledge for your overall strategy.

2. Law of Past Experience

Also known as the concept of mental models, the law of past experience finds that our previous experiences contribute to our interpretation of current experiences.

This law is a little trickier than most for two reasons. First, past experiences are highly personal, so what may influence one person might have no effect on the next. Second, past experiences actually hold a weaker influence over our perception than most other psychological laws, so it can be overridden fairly easily.

Still, many past experiences – such as the notion that chairs are for sitting – are both universal and powerful.

Case Study #6: Fab.com (49% increase)

A major e-commerce retailer, Fab.com knew that even a small increase in click-through rate (CTR) on their “add to cart” button would significantly impact their bottom line.

Complete with an expandable product image and a fashionable cart icon for the “add to cart” button, the starting layout was actually quite well designed, but they knew they still had room for improvement.

The variations essentially modified two elements:

Manufacturer attribution (“by Blu Dot x Fab” and “by Qualy”)

Call to action (“Add To Cart” and “+Cart”)

Variation 2 (with the extra descriptive text and a slightly modified CTA) led to a 15% increase. Modest, but still statistically significant.

Variation 1, however, was the real shocker. Simply adding descriptive text and creating a button with the words “Add To Cart” increased clicks by fully 49%!

What could have caused such a large increase?

Actually, it’s quite simple. While the fashionable shopping cart icon is clever and cute, it’s also confusing. Based on a couple of decades of online shopping, we expect the checkout button to say “add to cart” (go look at Amazon!).

Breaking from this mental model may well be a design improvement, but it also confuses potential customers.

Before we get into the interesting details, just compare those two designs from a distance. What stands out?

To me (and this is where past experience is personal), the original design bears a striking resemblance to a Google form. The redesign looks like a normal checkout page.

When most people see a survey design on a checkout page, they think it is insecure and unprofessional, not the kind of place where you can comfortably enter your credit card.

In the redesign, this company matched dozens of mental model triggers:

Another column (fixes the survey look)

More security badges (increases trust)

Instant chat (answers last minute questions)

Phone number (increases trust and answers questions)

As with most tests, this can’t be cleanly attributed to just one psychological principle – it’s a blended mix of dozens – but two primary principles cover most of the changes. First, the overall design plus individual elements (like the credit card images and the terms of service checkbox) all conform to the average customer’s past experience. Second, the McAfee badge, the BBB approval, and all the other third-party seals help improve trust.

Case Study #8: AMD (3600% increase)

As a computer parts manufacturer, AMD doesn’t use its website for consumer sales. Instead, they’ve designed their website to build publicity and support customers.

They had a set of social sharing buttons on their website, but AMD’s marketing team began to wonder whether they might be able to increase social sharing by moving or changing the buttons.

As a frequent internet user, you unconsciously expect to find sharing buttons in certain places. For instance, you might be less likely to use sharing icons in a website’s footer than you would floating buttons in the sidebar.

To isolate the optimal presentation of social sharing buttons on their website, AMD tested different variations. Specifically, they tested for placement (left, bottom, and right) and share icon (icon + link, small chicklet, and large chicklet).

The winning version (left with large chicklet) performed about 37x better than the original footer configuration.

How did such a minor change trigger this dramatic effect?

First, the new icons and placement meet user expectations better. Since many sites use a similar configuration (like the Digg Digg plugin for WordPress), we as internet users are used to sharing with these icons.

Second, floating left icons simply achieve higher visibility. Many people never even scroll far enough to see icons hiding in the footer.

Finally, placing these icons on the left side of the page coincides with the “F” shaped eye movements of the average internet user. (Side note: If you’re in a country that reads right to left, you’ll probably be better off with a right side placement.)

Case Study #9: Veeam Software (161% increase)

What if it were possible to significantly improve conversions by simply changing two words?

Veeam Software wanted to increase their conversion rate, but they didn’t just race into changing random elements on their page and hope for a small boost. Instead, Veeam took the smart route and thoroughly studied their users before testing.

Using Qualaroo (an amazing live survey platform), they asked their website visitors a simple question: “What other information would you like to see on this page?”

Looking at the results, they noticed many users were asking for pricing.

But wait! Veeam already had a “request a quote” link, so why would people be asking for pricing?

Realizing the importance of using the customer’s language, Veeam Software decided to run a very simple test: they changed the words “request a quote” to “request pricing.”

Changing those two words increased CTR to the sales contact page by 161%!

So, what’s the science behind this improvement?

Your customers aren’t blank slates; as they read your site, they have specific questions and interests.

Some might want to learn more about you; they will look for the word “about” in the header or footer. Others might want to contact you; they’ll look for the word “contact” or “email” in the header or footer. Still others will want to sign up; they’ll look for words like “sign up” and “pricing.”

Just as the most powerful martial arts moves are often the most subtle, many of the most successful A/B tests you’ll run won’t be the flashy, complicated site overhauls.

Backed by solid research and behavioral science, you will discover large conversion boosts in some of the smallest website changes.

3. Principle of Cost/Benefit Analysis

Elliot Shmukler (of LinkedIn and Wealthfront) once said that all growth can be boiled down to three primary levers:

Increase exposure (reach more people)

Decrease friction (make it easier to take the target action)

Increase incentive (create a better benefit)

The principle of cost/benefit analysis explores the interaction between the last two of those levers. Human behavior is heavily influenced by the relationship between an action’s perceived benefit (downloading an ebook) and its perceived cost (entering an email address).

This is why people will gladly give you an email address to get a useful ebook, but they’re unlikely to fill out a 60-page survey for that same ebook.

So, how can the principle of cost/benefit analysis be applied in website design?

One study found 67% of online shopping carts are abandoned, and the Official Vancouver 2010 Olympic Store was no exception.

Looking for ways to improve the checkout conversion rate, the Olympic store decided to focus on the second growth lever – decreasing friction.

The original checkout process featured four separate pages: sign in / create account, shipping information, billing information, and confirmation.

The new checkout process featured one page to purchase, with a second page after purchase encouraging account creation.

Both checkout processes required the same shipping and billing information, but the psychological impact of putting it all on one page reduced the mental friction. You could argue that this isn’t a “real” decrease in friction since all the same information is required, but mental friction is just as real as actual friction, and the results reflect this.

In addition to cutting the mental friction, they also brilliantly moved the account creation prompt to the end of the checkout process. Doing this increases purchases since the account creation option isn’t distracting the customer prior to purchase, and it also likely increases account creation since, after you’ve already entered your information during checkout, creating an account just takes a single click.

Combined, these two changes increased conversion rates by a solid 21.8% by decreasing friction, thus improving the cost/benefit ratio.

Case Study #11: Meebox (121% increase)

Many A/B tests focus on buttons, copy, and design, but what about an A/B test of the business model?

Meebox, a web hosting company in Denmark, wanted to increase revenue, so they decided to test their entire pricing structure.

The discounted variation saw a 121% increase in revenue: a 51% increase in conversion rate compounded with a 46% increase in average order value (1.51 × 1.46 ≈ 2.21).

Why did this work so well?

The discount slashes the risk a customer takes in purchasing, thus making the benefits of joining look even better.

And because more people join for the discount, Meebox will have a larger customer base for future upsells and recurring revenue. In general, it is much easier to get a current customer to buy more than to convince a new customer to make their first purchase.

Case Study #12: Soocial (28% increase)

Soocial, a former contact management company, acquired many of their new customers through a large sign-up button on their homepage.

Since they knew improvements to the top of the growth funnel often have the largest impact early in a company’s life, the founders decided to test a simple change to the sign-up button.

The initial version simply had a large button reading “Sign up now!” The variation didn’t change the button text, but added two words next to it: “It’s free!”

By improving the benefit side of the cost/benefit ratio, these two simple words increased click-through rate (CTR) by 28%.

“Free” is a powerful word, but you will need to be careful of two things if you try it on your product:

If the product isn’t actually free (or if just a tiny part is), you’ll create a lot of frustrated customers, and maybe attract the FTC.

“Free” can often imply “cheap.” If your value proposition is built around quality, using the word “free” might bring you the wrong type of customer.

Case Study #13: Expedia ($12M increase)

If you make a field optional, it shouldn’t hurt your conversion rate, right?

On the contrary, Expedia found that even optional fields contribute to friction.

In 2010, Expedia decided to test two variants of their billing address fields. The original option required the user’s name and billing address, with an optional field for company name. The variation just required the user’s name and billing address, without any optional fields.

This one simple change – removing an optional field – increased revenue by $12,000,000.

So why did it work?

Even though the field was optional, it still contributed to friction. It made the form look longer, and every user had to read it and decide whether to answer.

Also, because many users aren’t used to seeing a “company name” field in a billing address form, this field clashed with their mental models, triggering even more friction.

4. Fitts’s Law

We all know that page load time dramatically affects conversion rates, but what about the time required to take a desired action?

Fitts’s Law proposes that the time required to move your mouse to a target area (like a sign-up button) is a function of (1) the distance to the target and (2) the size of the target.
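For the curious, the most common modern statement of the law (the Shannon formulation used in HCI research) makes this relationship explicit, where T is movement time, D is the distance to the target, W is the target’s width, and a and b are empirically fitted constants:

$$ T = a + b \log_2\!\left(\frac{D}{W} + 1\right) $$

Doubling a button’s width, or halving its distance from the cursor, lowers the log term and therefore the expected time to click.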

In general, this means you can increase CTR to a desired action by making the target large (i.e., a button rather than text) and placing it near the expected mouse location (i.e., across a multi-page form, buttons should be placed in the same position to minimize mouse movement).

Inversely, you can decrease undesired actions, such as cancellations, by using a small target (text link) at a distance from the starting mouse position (near the bottom of a page).

If you’ve ever used WordPress, their UX follows Fitts’s Law admirably. Frequent actions like “Publish” use large buttons, while less frequent actions like “Move to Trash” use smaller text links.

Case Study #14: Hyundai (62% increase)

The car manufacturer Hyundai created a website to generate test drive and brochure requests in the Netherlands, but they had a problem: very few people signed up. So they tested three changes:

SEO-friendly text (if it didn’t hurt conversion rates, they could use it to increase traffic)

A larger image (more graphically appealing)

Two large CTA buttons

Prior to the test, a website user could book a test drive or order a brochure (the two target actions) only by clicking a small text link in the left sidebar.

In the new variation, users got two large CTA buttons above the fold.

Combined, these changes resulted in a 62% increase in total test drive and brochure requests with a 208% increase in CTR.

(Side note: This is an example of proper multivariate split testing. Rather than simply making three changes and never knowing where to attribute the increased conversions, they ran eight different variations, combining one or more of these changes. In the end, the variation with all three changes drove the highest conversions.)

While the larger buttons follow one aspect of Fitts’s Law, the distance to the buttons is still fairly large. I would suggest that in the next test, they try moving the CTA buttons to the top of the page (and probably removing the social sharing icons entirely).

The starting unsubscribe rate was already startlingly low (195 unsubscribes out of 578,994 recipients!), but when they applied Fitts’s Law by making the unsubscribe link smaller (“here” instead of “Unsubscribe”), the unsubscribe rate dropped another 22%.

Aside from Fitts’s Law, the decreased unsubscribes in this variation also can be attributed to an inverse application of the Law of Past Experience. When a person thinks about unsubscribing from an email list, their brain automatically starts scanning for a link labeled “Unsubscribe,” just as it looks for a “Contact” link on a website. By changing their text to “If you’d like to unsubscribe from these messages, click here,” they increased the mental friction required to unsubscribe.

Sometimes, you actually can use the same psychological principles to decrease actions you don’t want your users to take.

Case Study #16: SAP (32.5% increase)

The original page looked like a normal corporate website – fairly formal with lots of text. In fact, the download link also was text.

To improve this design, they tested a couple of variations that followed conversion best practices while still meeting the corporate branding requirements.

The winning design featured a number of important changes (including removing distractions and adding a second CTA at the bottom of the page), but one in particular stands out: the download button is clearly differentiated from the rest of the page.

Instead of forcing potential customers to find and click a small hyperlink, this new variation invited downloads with a large, easily located button.

The result? Trial downloads increased by 32.5%.

Interestingly though, bigger isn’t always better. In my experience with A/B testing, larger buttons definitely drive higher CTRs, but eventually you reach a tipping point where the button looks like a box and people start getting confused.

Case Study #17: The Vineyard (32% increase)

So, A/B testing is great for all these internet companies, but what about normal offline businesses?

The Vineyard, a luxury hotel near London, decided to find out by running a test on their website. For this particular test, they focused on variations that offered a high chance of improving their conversion from website visitor to booked guest.

The original page featured a large beautiful photo of a room with copy telling the story of their hotel. Then, at the very bottom of the page, they added a small link to “book online.”

The test variation left everything the same, but simply added a red “Book Online” button at the top of the page.

Adding that one button increased CTR by 32%.

Because this is one of those rare case studies where they changed only one element, it’s possible to attribute the increase directly to Fitts's Law.

In this case, they moved the CTA to the top of the page (shortening the distance to the target), and they used a button instead of a text link (increasing the size of the target).
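For reference, Fitts's Law models the time to hit a target as a function of the distance to it (D) and its width (W); in the common Shannon formulation, T = a + b * log2(D/W + 1). A minimal sketch with hypothetical pixel values shows why this test's two changes pull in the same direction:

from math import log2

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts's Law: ID = log2(D/W + 1), measured in bits.
    # Predicted movement time grows linearly with this index (T = a + b * ID).
    return log2(distance / width + 1)

# Hypothetical numbers: a small "book online" text link far down the page
# versus a large button near the top of the viewport (values in pixels).
print(index_of_difficulty(distance=900, width=12))   # ~6.2 bits: small, distant link
print(index_of_difficulty(distance=250, width=120))  # ~1.6 bits: large, nearby button

Fewer bits of difficulty means faster, likelier clicks; the winning variation improved both terms at once.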

A/B testing can be applied to great effect even on smaller websites. You just have to be more selective in choosing high potential tests.

5. Facial Recognition

As humans, we subconsciously watch for other humans. When we come across a human face on a website, we (1) immediately look at it and (2) assess the emotions it shows. None of this is intentional; we might not even realize we’re doing it.

So, now, your visitors will notice if you put a face on your website, but will it improve conversions?

Human faces can increase conversions in two ways:

1. Attracting Attention

Since faces grab attention better than nearly anything else, you can use them to direct your visitors’ focus to the key elements on a page. Even better, if you use a face that’s looking at your CTA, most visitors will follow the person’s gaze to see what they’re looking at.

In the 1960s, a group of psychologists ran a study on human behavior in which they had different-sized groups of people stand on public sidewalks and stare up at nothing with rapt focus. A single person looking up drew almost no attention (4% of passers-by joined in), but a crowd of 15 gazing up together drew nearly 40% of passers-by.

2. Conveying Emotion

We’re all experts at reading human emotion, and the emotion we find displayed on faces subtly influences our feelings about a website. If a person looks genuinely happy or sad, we’re likely to feel similarly, but be wary of stock photos. Overly exaggerated emotions likely will be off-putting and seem fake.

Studies have found that most sharing is driven by positive emotions (amusement, interest, surprise, etc.). On the other hand, hormones released by empathetic sadness may increase donations. Choose an emotion that will best affect the CTA on your particular page.

Case Study #18: Highrise (102.5% increase)

Remember Highrise from earlier? After their first test landed a solid 37.5% increase in signups, they didn’t rest on their laurels.

The control in this experiment was their original design: a fairly normal, busy website with lots of details and distractions.

In the test variation for this experiment, they condensed everything to three primary elements: a large screenshot, brief sales copy, and a large background image of a smiling customer.

Not only did this new variation still have the Law of Pithiness on its side, but the smiling, genuine face immediately grabbed the attention of new visitors while simultaneously making them happy.

This variation increased conversions 102.5% vs. the original baseline.

After seeing the success of this test, Highrise went on to test a few other faces in the same design. Although some produced small improvements of around 5%, they found that the specific face mattered little: putting any customer’s photo on the website produced essentially the same result.

Case Study #19: Medalia Art (95.4% increase)

Boutique online art shop Medalia Art listed some of their artists’ profiles on the homepage. Then, new visitors could click through to an individual artist’s page, where they could browse paintings and purchase.

Because many of the conversions occurred over the phone, Medalia simply wanted to increase the CTR from homepage to artist page (though they could have used something like CallRail to track phone conversions).

Human faces draw attention, so in a situation like this, where the photo itself links to the next step, faces can produce amazing results: featuring the artists lifted homepage click-throughs by the 95.4% noted above.

Also, people never just buy the product; they buy the story. Using artists’ faces on their homepage helped Medalia start telling the story immediately.

Case Study #20: Harrington Movers (45.45% increase)

Remember when I mentioned that human faces in stock photos aren’t as effective as authentic people?

Harrington Movers originally used a stock photo of a smiling couple holding boxes. Looking for opportunities to boost their conversions, they decided to test two variations of this image: (1) replacing it with a photo of their own moving crew, and (2) swapping it for a photo of their company moving truck. Both authentic photos beat the stock image, with the winning variation lifting conversions by 45.45%.

So, again, using photos of human faces can increase your conversion rate, but don’t assume all photos are created equal. We’re pretty good at spotting stock images these days, and we often find them inauthentic.

But, while the variation with the photo of the crew makes sense, why would a photo of a moving truck in front of a house also increase conversions?

I would speculate that this photo helps for two reasons:

It builds trust – the company logo on the side assures customers that this company actually exists.

It helps customers visualize – “that could be my house getting packed into that truck!”

Conclusion: Use Behavioral Psychology to Guide Tests, but Test Everything

Behavioral psychology “laws” are really just useful theories that have worked thus far. But that doesn’t mean they’re flawless. They also can overlap: while you are applying one law, you may be simultaneously (and inadvertently) violating another.

As a result, psychological principles provide amazing guidance in discovering and planning powerful A/B tests, but you still need to run the tests to make sure your particular implementation works.

Sometimes videos boost conversion, often faces help, and occasionally strange button colors do the trick. Always, the results vary from website to website.

About the Author: Nate Desmond enjoys experimenting with the best ways to find and engage amazing customers. He shares many of his favorite growth learnings on his blog. Subscribe to his email list to get a free ebook sharing the exact content marketing tactics he’s used to drive over half a million visits to his blog.

10 Things I Learned From Taking 100 Usability Tests
http://blog.kissmetrics.com/100-usability-tests/
Tue, 08 Jul 2014

Usability testing is a technique to evaluate a product or service by testing it with users. The users work on tasks while observers take notes, listen, and learn.

After taking more than 100 usability tests (I took them all on Usability Hub), I’ve noticed some interesting things about how people are using usability tests today.

So, what do people do with usability tests now in 2014? Have we learned to avoid the pitfalls?

1. People Test Color Choices for Their Website

What do you want me to do here exactly?

The goal of a usability test is to find the critical problems that prevent people from completing tasks. Unfortunately, this test doesn’t clearly state a task that I need to complete. It asks which background color is the nicest.

This is more of a brand identity question that helps answer things like “Does the color match what the website is about? Does the color go with the emotion the product is trying to invoke?” I believe topics like this really should be defined by a creative director who oversees the brand identity, or the designer should work more closely with the client to determine if the colors represent the brand values.

2. People Make Tests Too Difficult for Users to Complete

It’s really hard to read anything here.

This is a very straightforward click test to measure where I will click. However, since I can’t easily read any of the selections, it’s hard for me to correctly click where I want. I think if the screenshot were more zoomed in, I’d be able to correctly provide the tester with the information they want. I know what I’m supposed to do, but there’s just not any good way for me to see what I’m clicking.

3. People Test Which Logos Are Preferred

Not sure if this counts as a usability test…

Is this logo presenting a critical problem internally? What does the business need to learn about the logo? I think the problem I have with the logo choices in usability tests is that I don’t understand what could be learned that would affect the outcome of the design. Here, I’m asked what I prefer, which I think is highly subjective.

Do you remember the Gap logo fiasco? Logos have long-term impact on a brand’s identity. I think random usability testing on a logo presents risks because the results will be meaningless if the business or client does not know whether a logo is a good representation of their values.

For example, if I’m shown an architectural firm here, I couldn’t care less which logo I pick because I may just prefer colors over large text. But the impact of my decision reaches deep into the brand identity of this company, even though I may never interact with them. That’s too great a risk to put in the hands of a random test demographic. If a company cares about its brand identity, it should choose its logo with care, gathering input from its partners, customers, and other stakeholders, rather than letting a random demographic on Usability Hub decide.

4. People Crowdsource Logo Design from Scratch

This shouldn’t be a usability test. It can really endanger your business by soliciting logo design from random people not in your target demographic.

This one was surprising. I didn’t expect to see a usability click test to select which design I liked better as a logo and whether the letters should be lower case or upper case.

As I mentioned in the previous example, it’s more appropriate to take these considerations to prospective customers or clients of the company, not random people. Come up with a few logo ideas and show them to a small representative group of potential customers to see how they react. What you care about is how your potential or existing customers or clients react to your logo.

5. People Should Test What’s Best for Their Business

I’ve worked in customer support for over 4 years and have seen variations of this screen many times. In this case, I don’t think a usability test is the right way to measure customer feedback about a transaction or store experience. That’s because the critical business metrics shouldn’t be determined by a usability test. What they really should be asking is why they want to show this screen at all.

For example, let’s say the Yes/No screen wins the click test because it’s simpler and quicker for the user to select. Then, whenever the business needs to figure out what to improve, all they can see is how many customers responded Yes or No to the survey with no other qualitative information. The Yes/No screen doesn’t help the business improve because it doesn’t provide information on what it is that needs improvement.

6. People Test Options That Are Nearly Identical

In the test below, I didn’t notice the difference until I looked at it twice.

Can you see the difference?

The left Filters column has a gray fill color. In the right variation, the gray fill extends along the top of the right column. It’s very subtle. I don’t find the difference “obvious” at all, even though that’s what the test asked me to pick. I was expecting a whole different layout that would make those interactions more obvious.

Tests like these often leave me frustrated when neither choice is obvious for the user. There should be a more dramatic difference in design between the choices.

7. People Use Usability Tests When They Should Use A/B Tests

This click test changes only one element – the copy on the green call-to-action buttons. I argue that this should be A/B tested in order to come up with even better results than a usability test.

Why?

With an A/B test, you can test each variant’s performance instead of waiting for a usability test to be completed. You also can test with real potential customers instead of random usability test takers who may never see the product.

8. People Don’t Set Up Usability Tests Correctly

If this is an improvement, I don’t see how…

First, framing the situation is really important so a user knows what the situation is. You can accomplish this with an “Imagine that you are [doing something] [in a location/state of mind/occasion]” statement.

Example: Imagine that you are shopping for a dress for a wedding.

Second, stating a task is important so a user knows what they should be doing. For a five-second test, generally the user is asked to recall what stood out or if they could tell what the company does.

Example: Could you tell what services or products this company offers?

For a click test, the task is related to what a designer thinks a person should click on.

Example: Click where you would find the sale section.

In the above test, there is no framing of a situation, nor is there a task stated. It’s really hard for a user to judge which form is easier to use if we don’t know what we’re supposed to be doing in the first place.

Aside from showing two identical images (so there’s really nothing to choose from), the setup of this test also failed to define what the test was supposed to accomplish. This doesn’t help a business.

9. People Don’t Select the Right Test to Use

This question in a five-second test really should have been a separate click test.

I took a five-second test where I stared at a website, but the subsequent questions asked me where I would click.

A five-second test helps you fine tune your designs by analyzing the most prominent elements of your design. It also tests first impressions and how easy your design is to understand.

A click test is used for placement/layout and helps determine if people can do what you are asking them to do.

The above test should have been separated into a five-second test and a click test. Because the question about where to click was lumped in with the five-second test, I wasn’t able to accurately describe where to click: I could no longer see the image, nor did I remember exactly where the element was.

10. People Love Testing Headline Copy

I saw a lot of headline copy tests, more than I would have expected. They were all five-second tests, so I wonder if they were made by the same person. When a test had more than two headlines, it was hard to read them all before the five-second timer expired.

If these headlines were destined for Google AdWords links, I would just run them all and see which performs best, both at generating clicks and at leading visitors to read additional content once they arrive.

If these were being tested for a content site, I would assume an editor would have the role of deciding which headline to run. If you want to learn how to write great headlines, head on over to Copyblogger.

11. (Bonus) People Test to Choose Domain Names

I’m not sure how to explain this one, especially for something as important as a Belgian embassy.

Takeaways

About one in every three usability tests I took had issues or questionable reasoning behind it. I think we’ve learned quite a bit since Jared Spool’s 2005 article, but successful usability testing still seems far from perfect.

Make sure you know why you’re testing and that you’re testing with the right audience. More sensitive items like logos should be put in front of a targeted demographic rather than a random demographic like Usability Hub. Setting up the right test is just as important as what you will do with the test results. Also, make sure the tasks you design are in line with what you want to learn.

Usability tests are great for identifying problems. Use them to find existing problems with your product or service. Just make sure to test again after you implement solutions to see if you’ve solved the problem.

Happy testing!

Have you done any usability tests yourself? What did you test? I’d love to hear about them and what you learned.

About the Author: Chuck Liu is on the KISSmetrics Product team and loves to cook in his spare time. Find him on Twitter @chuckjliu and Quora.

19 Obvious A/B Tests You Should Run on Your Website
http://blog.kissmetrics.com/19-obvious-ab-tests/
Thu, 19 Jun 2014

Conversion rate optimization isn’t an easy game to play, especially if you’re a new kid on the block. There are some great resources to help you, though, like MecLabs and MarketingSherpa.

The real problem with CRO is in knowing how to start and what to test. This post covers the latter.

But, first, there is one thing to keep in mind: testing every random aspect of your website can often be counter-productive. You can blow time and money on software, workers, and consultants, testing things that won’t increase your website revenue enough to justify the tests in the first place.

So, if one of the following tests makes sense for your specific business, go ahead and run it. If not, try another one.

Typography

Typography affects conversions in a major way, but casually testing every Google font won’t get you anywhere. There are a few broader aspects of typography to test before getting specific with typefaces.

1. Serif vs. Sans Serif

Serif typefaces have small flourishes at the ends of their strokes and varying stroke widths (for example, Times New Roman). Sans serif typefaces are just the opposite: plain, with consistent stroke widths (like Arial).

Web Designer Depot recommends using sans serif, but interestingly, Georgia (a serif typeface) is by far the most popular typeface on the web.

Try both varieties to see which works best for your website.

As per a WDD infographic, sans serifs are best for the web, and serifs for print.

2. Colors

For your blog, your long-form copy, and most of the text on your website, always go with black (dark) text on a white (light) background. It’s a traditional color scheme our eyes are accustomed to.

For your calls to action and other smaller, more impactful text elements, however, test each of the basic eight colors (or whatever colors fit with your design). Always remember this principle: what stands out gets clicked.

3. Font Size

Tahoma is most legible at 10 px, Verdana and Courier at 12 px, and Arial at 14 px (Wichita Psychology).

Whatever typeface you choose, make sure that you test the differences in user engagement and click-throughs according to the size of the font.

4. Typefaces

Finally, we get to the most tedious typography test – typefaces. Take this one with a grain of salt. Don’t test each of the 700+ Google fonts available. Doing so would be very counter-productive. Only test a few of the major ones that harmonize with your design.

When testing these, you’ll also want to go with an A/B/C/D/etc. test. Test multiple typefaces at a time.

A graph representing the legibility of different typefaces at different font sizes

Calls to Action

Lightship Digital’s (awesome) CTA

Your call to action (CTA) is the most influential element on your landing page. Period.

As such, it requires a substantial amount of experimentation. Here are a few of the main call to action “ingredients” you need to test.

5. Position

Too often, web designers put the call to action button in the middle of the landing page above the fold, and just leave it there, because it’s what you’re “supposed” to do.

But did you know that locating your CTA below the fold could increase your conversion rate by 304% (Content Verve)? Don’t take anything for granted: test above the fold, below the fold, in the middle/left/right of the page, and different placements relative to your text elements.

6. Color

No surprise here – color is a biggie in most CRO tests. Many have read this post on HubSpot about how a red CTA button beat a green one with a 21% increase in conversions. But a similar test in the Content Verve post (linked to in test #5 above) detailed how a green “add to cart” button got 35.81% more sales for an e-commerce store than a blue one.

So, again (as in test #2 above), a contrasting color that is distinct and stands out from the other elements on the page seems to work best. Experiment to see what works for your CTA.

For Performable, a red CTA button produced more conversions than a green one.

7. Text

As the most crucial copy on your landing page, your call to action button text needs to be tested heavily. Try out various lengths, pronouns, power words, and action verbs. Back during the 2008 U.S. presidential campaign, Obama raised an estimated extra $60 million just by changing his CTA button text from “Sign Up” to “Learn More” (Optimizely). Yes, that’s a 60-million-dollar test.

Don’t miss out on those potential returns.

Pricing Schemes

This section encompasses more than just what price you set for your product/software. You also have to think about free trials and money back guarantees.

8. Freemium vs. Free Trial vs. Money Back Guarantees

To allow prospects to try products (and yes, product demos are important), vendors usually offer at least one of three models: a very basic freemium product with limited features that can be used forever, a time-sensitive free trial that allows users to experience all the bells and whistles, and a time-sensitive money back guarantee.

Generally, free trials induce more conversions than money back guarantees.

9. Free Trial Length

If a time-sensitive free trial is what works for your website, then how long should that free trial be? 7 days? 14, 21, 30? Test it!

This post on Sixteen Ventures mentions how shortening a 30-day free trial to 14 days proved to be a profitable choice for a SaaS company. Depending on your particular niche, the results may vary. As you can see below, for Crazy Egg, a 14-day free trial is the sweet spot.

Free trial of 30 days or 14 days? For Crazy Egg, the sweet spot was 14.

10. Pricing Each Plan

Finally, don’t forget to experiment with your pricing plans. Not only should you try out different prices for each plan (should your price be $x9 or $x7?), but you also should play around with the features of each to make your higher-ticket plans convert better.

For the Economist, the decoy print-only subscription pricing was a bottom-line booster.

Landing Page Copywriting

The art of persuasion through words on a page – copywriting – is another essential part of a landing page. Great copywriting is never great on the very first draft; it requires careful testing to ensure maximum impact.

11. Short-Form Copy vs. Long-Form Copy

Common wisdom says that shorter copy converts better because people skim on the web. Unfortunately, that isn’t a set-in-stone rule at all. For example, after testing his personal website, Neil Patel found that long-form copy produced 7.6% more leads (better-quality ones as well). At the other end of the spectrum, a Scandinavian gym chain got 11% more conversions with shorter copy (Content Verve).

The takeaway? TEST to discover what works for your business.

Your landing page copy: short-form or long-form? TEST.

12. Video vs. Text Sales Pages

Video copy is both difficult and expensive to create; hence, the general preference for text-based copywriting. But could you be missing out on potential conversions by failing to test video copy? Maybe so.

Depending on the size and capital of your business, you’ll have to decide whether a video sales page is worth it (and don’t forget text and video combinations).

This video landing page helped Six Pack Ab Exercises improve conversions by 46.15%. What could a video do for your business?

13. Actual Text

As with typefaces, testing hundreds of different versions of your text-based copy, each with only a small change from its predecessor, can be a fruitless waste of time and money.

So, while you should continually edit and experiment with your copy, remember to look at the bigger picture. Don’t get hung up on every other word.

More General Tests

The following are various A/B tests that don’t fit in any of the above categories. They fall under sales funnels, website design/structure, and more.

14. Number of Columns

Multiple-column landing pages definitely look a whole lot cooler than those with single columns.

But in CRO, coolness doesn’t count.

In fact, a SaaS company increased their conversion rate by 680.6% when they changed their two-column pricing page to a single-column page (Marketing Experiments).

15. Background Images and Patterns

Your landing page background (a solid color, pattern, or image) has a very consequential subliminal effect on your readers. If you haven’t tested different background varieties yet, you’re leaving money on the table.

Don’t leave money on the table by forgetting to test background images.

16. Navigation Links

Your navigational menu’s presentation affects how and if you can get visitors to your money pages (your pricing page, contact form, etc.).

Test the number of links, the color of the menu, its position, etc.

17. Link Color

Trying to get visitors to click links from your blog post to your money page? Test the link color.

The presentation of your internal links isn’t something that most people associate with CRO right off the bat. But when you think about it, internal link color really can have a huge impact on the number of visitors that get into your sales funnel.

Take Beamax, for example, which increased link CTR by 53.13% by changing their link color to red from the standard blue (Visual Website Optimizer).

What stands out gets clicked.

18. Contact Form Fields

If your objective is to get contact/quote requests from your website, then the format of your contact form is critical to your conversion rate.

Test the number of fields (bare minimum is usually best) and the types of fields (checkbox vs. drop-down) to elicit more form submissions.

Neil Patel changed the number of contact form fields from 4 to 3 for a 26% boost in conversions.

19. Number of Steps in Your Checkout Process

Case study after case study has shown that single-step checkouts almost always convert significantly better than multi-page checkouts. If you’ve never considered a single-step checkout before, it’s time to test one.

The 10 Weirdest A/B Tests Guaranteed to Double Your Business Growth

Sometimes it’s not the most obvious A/B tests that drive the most growth. Instead, it can be the unconventional tests, the ones you would have never thought would make an impact, that prove to be the most valuable.

In a previous webinar, Larry Kim from WordStream goes over some really surprising and important takeaways from years of A/B tests. This is a must-watch!

About the Author: Stephen Walsh helps website owners make more money through conversion rate optimization and advanced PPC management. Contact him at Lightship Digital or say “Hi” on Google+ and Twitter.

When NOT to A/B Test Your Mobile App
http://blog.kissmetrics.com/when-not-to-test/
Mon, 28 Apr 2014

A/B testing might be the single most effective way to turn a good app into an amazing app. However, it’s also a subtle way to lead yourself into THINKING you’re improving your app when, in fact, your test results are full of false positives or you’re spending precious time testing when you could be doing something else.

Don’t get me wrong. A/B testing can be outstandingly effective at increasing user conversions and your bottom line (which is why all the big guys such as Facebook, LinkedIn, and Etsy are A/B testing constantly), but there’s a time to test and there’s a time to just implement changes. You should skip A/B testing…

1. When being first is more important than being optimized

You don’t have to be a genius data scientist to A/B test well, but it’s not a trivial task either. First of all, A/B testing can take time. You need to plan the test, program the different variants, push out a new version of your app through the app/play store, and wait while users engage with your app long enough to give you clear results.

A/B testing platforms with a visual editor can help reduce the time needed for programming and can eliminate app approval red tape, but to some extent planning and waiting are unavoidable. Of course, if you already have a stable user base and there’s no immediate urgency to make certain changes, the time it takes to A/B test is completely worth it.

Nevertheless, there are situations when time is your most important resource. For instance, a common scenario is when getting to the market first gives you a significant competitive advantage. This could be the launch of a new feature or large design changes.

KAYAK ran into this exact situation when Apple announced iOS 7. Apple released all of their new developer resources on one day, but it was up to developers when their apps would adopt the new design.

KAYAK is a company that tests A LOT. A data driven and experimental mentality is core to their corporate culture, but this was a time when they chose to just implement a large suite of design changes rather than test each detail. And it paid off.

According to their Director of Engineering for Mobile at the time, Vinayak Ranade, “If we had spent time incrementally testing every single change we’d made and redesigned, we would never have made it. And a lot of companies did do that and they were three months late to the game.”

2. When you are fairly certain the hypothesis is wrong or you have no hypothesis at all

It’s easy to focus so hard on developing an experimental culture that you start testing everything. Literally everything. Even when you have no clue what exactly you’re testing and even when you already know the change probably would not be helpful.

While it’s great to test anything that could have a positive impact on your bottom line, it’s important to remember that the more you test, the more likely you are to get false positive results. The typical threshold for statistical significance is 95%. That means you usually run a test until there’s no more than a 5% probability that the difference you’re seeing is due to chance.

But 95% statistical significance is the scientific norm, the same rigorous standard applied to FDA clinical trials. And yet it still means that if you run 100 A/B tests that each report a statistically significant improvement, you should expect roughly 5 of them to be false positives that don’t improve your app at all.
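As a back-of-the-envelope sketch of how fast those false positives pile up, here is the standard multiple-comparisons arithmetic, assuming independent tests of changes that truly do nothing:

# Probability of at least one false positive across k tests at alpha = 0.05,
# assuming each tested change actually has no effect.
for k in (1, 10, 20, 100):
    print(k, round(1 - 0.95 ** k, 3))
# 1 -> 0.05, 10 -> 0.401, 20 -> 0.642, 100 -> 0.994

Run 100 no-op tests and you’re almost guaranteed a few spurious “wins.”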

There is no way to completely avoid this, but there are many ways to mitigate the number of false positives you get. The best thing to do is to test with care. Make sure you know what you’re testing and have a solid hypothesis as to how your test can improve the bottom line.

If you’re testing a button color, why do you think green will be better than blue? Are you randomly testing colors or do you think a certain contrast between the button and background colors will make the button more noticeable to customers?

Creating a good hypothesis and planning the test(s) to prove the hypothesis will give your tests direction and yield actionable insights that are less likely to be due just to chance.

Likewise, A/B testing should be skipped in situations where you know that an idea almost certainly will improve your app and the risks associated with blindly implementing the idea are low.

For example, Robot Invader, the makers of Wind-up Knight and Rise of the Blobs, consistently asks beta users for feedback. After playing the beta version of their newest game, Wind-up Knight 2, several players felt there wasn’t enough congratulatory “glitter” after completing achievements.

The recommendation from users was that more pomp and circumstance be added so that players would feel rewarded after accomplishing certain tasks and be more aware of the new features they just unlocked. The downsides of implementing something like this are close to zero, and the likely impact is positive.

There is no reason to spend time and resources to test something that probably is good and has low risk. Jumping to implementation is perfectly advisable.

3. When you don’t have enough users

As with any scientific experiment, you need to have enough data points to gather statistically significant results. This means you need to have a minimum number of users participating in each test. Depending on how you structure the test (how many variants) and what your expected results are (a small improvement off of an already high conversion rate or a large improvement off of a low conversion rate), you might need thousands of users to get statistically significant results. Since not everyone has Google’s scale, the key is prioritization.

If you don’t have many users, you might want to first focus your time on activities that will bring in more users. This could be marketing or even pivoting your app to build up the features that customers are actually using. Once you have enough users to start optimizing, you might have only enough users to run one test at a time.

In this case, it’s really important to first test the ideas most likely to have a big impact but too risky to jump straight to implementation. Examples of risky yet likely impactful ideas are changes to in-app purchases, login screens, page flow, and algorithms related to app logic (i.e., how recommendations are surfaced, how search queries are answered, etc.).

Here’s a simple chart to estimate how many users you need to get statistically significant (95%) results when doing an A/B test with two different variants (an A and a B). The number of users you need depends on your conversion rate (existing conversion rate of variant A) and how much better you expect the new variant to be (predicted increase in conversion rate of your new variant B).

Example: If your current conversion rate is 5% and you predict that it’ll increase by 15% with your new variant, you’re expecting your new conversion rate to be 5.75%. For this test, you’ll probably need around 9,200 users to get statistically significant results. That is, 4,600 users for variant A (your current version) and 4,600 users for your variant B (your new version).
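If you would rather compute this than read it off a chart, here is a minimal sketch using the textbook normal-approximation formula for comparing two proportions. The answer depends heavily on assumptions the chart doesn’t state (statistical power, one- versus two-sided testing), so don’t expect it to reproduce the 4,600-per-variant figure exactly.

from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p_base, relative_lift, alpha=0.05, power=0.8):
    # Normal-approximation sample size for a two-proportion z-test.
    p_new = p_base * (1 + relative_lift)   # 5% lifted by 15% -> 5.75%
    p_bar = (p_base + p_new) / 2           # pooled conversion rate
    z_alpha = norm.ppf(1 - alpha / 2)      # two-sided significance threshold
    z_beta = norm.ppf(power)               # desired statistical power
    effect = p_new - p_base
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / effect ** 2
    return ceil(n)

# The example above: 5% baseline conversion, 15% predicted relative lift.
print(sample_size_per_variant(0.05, 0.15))  # roughly 14,000 per variant at 80% power

Dropping to 50% power or a one-sided test shrinks the requirement toward the chart’s number. Either way, the shape of the tradeoff is the same: small expected lifts on low baselines demand a lot of users.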

When you’re low on users, you also must watch the funnel: the higher up you test, the faster you’ll get results. If 1,000 daily users land on your app’s login screen but only 100 make it to checkout, with all else being equal, a test of the login screen can produce results up to 10 times faster than a test of the checkout screen, simply due to the volume of users.
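Putting rough numbers on that, using the hypothetical funnel above and the earlier example’s sample size:

# 1,000 daily users reach the login screen, but only 100 reach checkout;
# assume the test needs ~9,200 total observations (from the example above).
needed = 9200
for screen, daily_users in (("login", 1000), ("checkout", 100)):
    print(f"{screen}: about {needed / daily_users:.0f} days to finish")
# login: ~9 days; checkout: ~92 days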

4. When the change involves your brand or public pricing

While we want to test as much as possible, there are some things that are hard or unwise to test. A/B testing a new logo after your company has been established for years can cause brand confusion with your customers. You might get more conversions in the short term as the change catches people’s eyes, but potentially it could be damaging in the long term to test radical changes to your brand.

This especially applies to design elements. An unusually large button or off-colored button might get more clicks because it stands out so much, but it could be impacting how your users see your brand. An otherwise elegant app becoming less elegant might not largely impact user engagement at first, but you could lose customers over time.

Similarly, some changes to price are really difficult to test (not to mention frowned upon by Apple). We test with the assumption that test results are reproducible and externally valid. In other words, testing one random group of people will produce the same results as testing another random group of people.

Logos and sometimes prices are not like that because customers talk to each other. If it’s highly publicized in the media that it costs $9.99 to unlock the full features of your app, it’s probably not a good idea to show a different price to some users. They might have read the article that promised $9.99 and be much more likely to upgrade if their version is cheaper or much less likely to upgrade if they see a higher price.

Either way, your results could be entirely biased and inaccurate, not to mention the huge PR mess you just got yourself into: once pricing goes public, all tests are off.

Summary

All in all, A/B testing your native mobile app is challenging but well worth the effort because it will help your good app become great. However, experienced testers test with caution:

Don’t sacrifice time for optimization when time is more important.

Test frequently and continuously but avoid over-testing and aimless testing. Have concrete hypotheses in mind and plan your tests to prove or disprove them.

Make sure you have a sufficient number of users to gain statistical significance on each test. If you don’t have many users, prioritize tests so that you don’t spread your users too thinly on each test.

Do not pit intelligent design against evolution through testing. New ideas being tested should mesh with your overall brand, look, and feel.

About the author: Lynn Wang is the head of marketing at Apptimize, an A/B testing platform for iOS and Android apps designed for mobile product managers and developers alike. Apptimize features a visual editor that enables real-time A/B testing without needing app store approvals. It has a programmatic interface that allows developers to test anything they can code. Sign up for Apptimize for free today or read more about mobile A/B testing on their blog.

Supercharge Your Testing With The New KISSmetrics A/B Test Report
http://blog.kissmetrics.com/kissmetrics-ab-test-report/
Thu, 24 Apr 2014

We have a very cool feature coming out today in KISSmetrics: our new A/B Test Report! In the past, if you wanted to run an A/B test using your KISSmetrics data, you would need to first run a funnel report and then manually enter your data into whatever external tools you use for analysis. Now you can do all of your A/B test reporting from within KISSmetrics!

Turning your KISSmetrics data into insights

Let me give you a quick walkthrough of what you’ll find.

To start out, we have to choose a target event, which we’ll consider our “conversion event,” and the KISSmetrics property that will serve as our “experiment.”

For this test, we’re going to use the ‘Signed up’ event as our conversion event. The really cool thing to note here is that any event tracked in KISSmetrics can be used as a conversion event!

Next we have to select which experiment we’re going to look at. In this case we’re going to use the property associated with an Optimizely test we’re running. This is a great example of the fact that you don’t have to give up on using your favorite A/B testing tools in order to get all your data and reports in one place.

A/B testing is all about making a comparison, so all of your results are given in relation to a baseline. Normally this would be your control or original variant, but our reporting tool lets you choose any of your property’s values to serve as the baseline. It’s also important to point out that while this example has only 2 values for the property, the report will work with any number of values a property might have. Testing more than one variant at a time is no problem.

When you run the report it gives you a nice summary of the results. Notice at the top we provide you with a simple explanation of your results. In this case there is enough data and strong enough significance to make the call.

We also give you all the key summary data you’ll need to understand your results:

How long the test ran

The number of people in the test

Total conversions

The improvement you’re likely to see

How sure you can be of seeing an improvement

Exploring Your Data

For those looking for information beyond a simple summary we have more for you!

First, the report provides a visual history of the estimated improvement as more data is collected. This is important because looking at significance alone can be deceptive. We mark the point where the results achieve what is commonly considered statistical significance with a trophy icon, and we shade the timeline after that point. However, in the early stages of a test, it is not unlikely that the inferior variant will temporarily look like the winning one.

Looking at this history, you can clearly see how much your improvement is jumping around, and you can use that to build better intuition about how trustworthy your results are. The more stable your estimate of improvement, the more likely it is to be accurate.

Finally, for anyone that is looking to do more analysis on their own, our report provides all the data you’ll need. This gives you much of the information found in our typical funnel report plus you get the likely improvement of each variant as well as the certainty that there is an improvement.

KISSmetrics is about People!

The focus of KISSmetrics is people, and so we have also built in the ability to explore the individual people that are going through your test.

Click on any of the points in the improvement timeline and you’ll be presented with links to run a People Report so you can see who in each variant converted, or simply everyone who passed through that variant.

Changing Your Conversion Event

Now let’s come back to the fact that you can leverage all of your KISSmetrics data in this report. We have our result for conversion to signups, and things look great, but maybe we also should see how the results look if we swap out ‘Signed up’ for a more important conversion event, ‘Received data’.

All we have to do is go back to the top of the report, change our conversion event and rerun our report!

Now we can see that if ‘Received data’ is the conversion event we really care about, we don’t have enough data to call our test. In this case, the results are looking pretty close even after more than 40,000 observations. The report is letting us know that maybe it’s best just to stop the test and stick with our original. Of course, we’re free to continue running the test, and eventually we should reach a conclusion; however, it’s unlikely we’ll see the real gains we’re looking for.

Comparing Multiple Variants

To highlight a few more features, let’s go back and look at results from a test we ran a few months ago, long before work even started on this report! Because the report can make use of any of our KISSmetrics data, the event/property combination we investigate can be a test we ran long ago, or even a combination that was never thought of as an A/B test in the first place!

In this report, we’re comparing 3 variations against the original page. Here we can see how the report handles this. We get 3 lines, one for the comparison of each variant against the baseline. The report tells you which of the 3 variants is superior.

This is great, but it looks like the variants are all doing well against the original; let’s see how they do against each other.

This time, you can see we’ve switched our baseline from the original to variant 2, the clearly superior variant. We’ve also deselected the original so that it no longer appears in our visualization. In our data table, all of the improvements and certainties are expressed in terms of our new baseline. Right away, we can see that variant 2 and variant 3 are actually very close, certainly too close to call. This is extremely useful to know: maybe your design team prefers variant 3; now you know you’re free to make that choice with likely little or no loss in conversion.

The new A/B Test report will open up many new ways to explore and gain insights from your existing KISSmetrics data. I hope you find it as exciting and useful as I have!

Your A/B Tests Are Illusory
http://blog.kissmetrics.com/your-ab-tests-are-illusory/

A really phenomenal white paper titled “Most Winning A/B Test Results Are Illusory,” published by Qubit, was making the rounds a few weeks ago. If A/B testing is part of your daily life, you really must take a few minutes to read through it. The gist of the paper is that many people call their A/B tests way too soon, which can cause an inferior variant to appear superior through simple bad luck.

To understand how this can happen, imagine you have two fair coins (50% chance of landing on heads). You want to see whether your left or right hand is superior at getting heads, so you will know which hand to use when making a heads/tails bet in the future. You flip the coin in each hand 16 times, and you get these results:

Since we know the coin is fair, we know that getting 11 heads and 5 tails is just as likely as getting 11 tails and 5 heads. However, if we plug this result into a t-test to calculate our confidence, we find that we’re 96.6% certain that our right hand is superior at flipping the coin! Now, we know this is absurd since, in our example, knowing that the coin is fair, we could arbitrarily say that heads were tails, and vice versa.
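You can reproduce that figure with a two-proportion z-test, a close cousin of the t-test the paper uses; a minimal sketch:

from math import sqrt
from scipy.stats import norm

heads_right, heads_left, flips = 11, 5, 16
p_right, p_left = heads_right / flips, heads_left / flips
p_pooled = (heads_right + heads_left) / (2 * flips)  # = 0.5, the fair-coin rate

# Two-proportion z-test on the difference between hands.
z = (p_right - p_left) / sqrt(p_pooled * (1 - p_pooled) * (2 / flips))
confidence = 1 - 2 * (1 - norm.cdf(z))                # two-sided
print(f"z = {z:.2f}, confidence = {confidence:.1%}")  # z = 2.12, confidence = 96.6%

The “significant” result is pure noise: both hands flipped the same fair coin, and with samples this small, splits this lopsided happen all the time.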

If our coin flipping example were an A/B test, we would have gone ahead with the “right hand” variant. This wouldn’t have been a major loss, but it wouldn’t have been a win, either. The scary part is that this same thing can happen when the variant is actually worse! This means you can move forward with a “winning” variant, and watch your conversion rate drop!

Doing It Right

So what’s the problem? Why is this happening?

A/B tests are designed to imitate scientific experiments, but most marketers running A/B tests do not live in a world that is anything like a university lab. The stumbling point is that people running A/B tests are supposed to wait and not peek at the results until the test is done, and many marketers won’t do that.

Let’s outline the classic setup for running an experiment:

Decide the minimum improvement you care about. (Do you care if a variant results in an improvement of less than 10%?)

Determine how many samples you need in order to know within a tolerable percentage of certainty that the variant is better than the original by at least the amount you decided in step 1.

Start your test, but DO NOT look at the results until you have the number of samples you determined in step 2.

Set a certainty of improvement that you want to use to determine if the variant is better (usually 95%).

After you have the number of observations decided on in step 2, put your results into a t-test (or another favorite significance test) and see if your confidence is greater than the threshold set in step 4.

If the results of step 5 indicate that your variant is better, go with it. Otherwise, keep the original.

I recommend you play with the sample size calculator in step 2. If these steps seem straightforward to you, and the sample sizes you come up with seem easily achievable, then you can stop reading here and go get better results from your A/B tests. This approach works and will give you good results.

If, however, you read the above and thought “I also should eat more veggies and work out more…” then read on!

Marketers are NOT Scientists

I believe the reason marketers tend not to follow through with proper methodology when it comes to A/B testing has less to do with ignorance of the procedure and more to do with real world rewards and constraints. For scientists working in a lab, the most important thing is that the results must be correct. Running a test takes a relatively small chunk of time, while getting an incorrect answer that eventually finds its way into publication can have consequences that range from being embarrassing to costing lives.

Marketers have almost the opposite pressures. Management wants results as soon as possible, but you may have a long list of features and designs waiting to be tested, and you don’t want to waste time testing minor improvements if someone has something that could be a major improvement. Most important: marketers are concerned with growth! Being correct is useful only insofar as it leads to growth.

So, now, we have the question: “Is there a way to run A/B tests that acknowledges the world marketers have to exist in?”

Simulating Strategies

Whenever I’m studying interesting questions involving probabilities that don’t have an obvious analytical solution, I turn to Monte Carlo simulations! A Monte Carlo simulation is simply a way to answer questions by running a random process enough times to estimate the answer. All we have to do is model our problem. Then, we can model different strategies and see how they perform.

For our A/B testing model, we’re going to make some assumptions. In this case, we’re going to have a page that starts with a 5% conversion rate. We then assume that variants can have conversion rates that are normally distributed around 5%. In practical terms, this means that any given variant is equally likely to be better or worse than the original, and that small improvements are much more common than really large ones.

Finally, we address perhaps the most important constraint: each strategy gets only a total of 1 million observations. As you collect more data, you get more certain; but if you need 100,000 results to be certain, then how many tests have you wasted? No one has unlimited visitors to sample from. In our model, the more careful testing strategy might be penalized because it wastes too much time on poor performers and never gets to a really good variant.
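Here is a compressed sketch of what such a simulation can look like. The article doesn’t publish its code, so the peeking interval, the z-test stopping rule, and the spread of challenger quality below are all illustrative assumptions:

import random
from math import sqrt

def run_test(p_a, p_b, max_obs, z_stop, check_every=100):
    # One sequential test: peek every `check_every` visitors per variant.
    conv_a = conv_b = n = 0
    while n < max_obs:
        n += check_every
        conv_a += sum(random.random() < p_a for _ in range(check_every))
        conv_b += sum(random.random() < p_b for _ in range(check_every))
        pooled = (conv_a + conv_b) / (2 * n)
        se = sqrt(2 * pooled * (1 - pooled) / n) or 1e-9  # guard against zero
        z = (conv_b / n - conv_a / n) / se
        if abs(z) > z_stop:              # stop as soon as it looks "significant"
            return (p_b if z > 0 else p_a), 2 * n
    return p_a, 2 * n                    # got bored: keep the incumbent

def simulate(max_obs, z_stop, budget=1_000_000, base=0.05, sigma=0.01):
    # Back-to-back tests against Normal(base, sigma) challengers
    # until the total visitor budget is spent.
    current, spent = base, 0
    while spent < budget:
        challenger = max(0.001, random.gauss(base, sigma))
        current, used = run_test(current, challenger, max_obs, z_stop)
        spent += used
    return current

# "The Impatient Marketer": peeks constantly, stops at ~95% (z = 1.96),
# gets bored after 500 observations per variant.
print(simulate(max_obs=500, z_stop=1.96))

The Scientist corresponds to a large observation budget per test with no early peeking; The Impatient Marketer to the parameters shown. Sweeping those two knobs is essentially what the comparisons below do.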

The Scientist and The Impatient Marketer

Let’s start by modeling the strategy of “The Scientist.” This strategy follows all of the steps for proper testing outlined above. We can see the results of a single simulation run below:

What we see is quite clear. The Scientist has continuous improvement and will stay at a good conversion rate until another improvement is found; rarely, if ever, choosing the inferior variant by mistake. After 1,000,000 people, The Scientist has run around 20 tests and has bumped the conversion rate from 5% to 6.7% at the end.

Now, let’s look at a strategy we’ll call “The Impatient Marketer.” The Impatient Marketer is an extreme case of sloppy A/B testing, but it is an important step toward understanding how we can model a strategy for marketers that is both sane and provides good results. The Impatient Marketer checks constantly (as opposed to waiting), stops the test as soon as it reaches 95% confidence, and gets bored after 500 observations, at which point the test is stopped in favor of the original.

Here we see something very different from The Scientist. The Impatient Marketer has results all over the board. Many tests are inferior to their predecessor and many are worse than the first page!

But there are some pluses here as well. In this case, The Impatient Marketer reached a peak of 7.8% conversion and still ended close to The Scientist at 6.3%! It’s also worth noting that if this simulation is run over and over again, we find that The Impatient Marketer consistently does better than the baseline.

The Realist

Now, let’s make The Impatient Marketer a little less impatient and a little more realistic. Our new strategy is “The Realist.” The Realist wants results fast, but doesn’t want to make a lot of mistakes, and also doesn’t want to follow a 6-step process for each test. The Realist waits until 99% confidence to make the call, but will wait for only 2,000 observations. This strategy is very simple, but much less reckless than that of The Impatient Marketer.

In this sample run, The Realist is doing much better than The Impatient Marketer. The Realist occasionally does make a wrong choice, but only very briefly drops below the original. The Realist ends at 6.3% but has spent a lot of time with a variant that achieved 7.4%. Because The Realist is always trying out new ideas, this strategy is able to sometimes find better variants that The Scientist never gets to!

Measuring Strategies

In the above images, all we have is a single sample path. How do we judge how well each strategy performs? Maybe The Scientist does even better, or maybe The Impatient Marketer’s gains make up for the losses?

The way we’ll approach this is by measuring the area under the curve. If you imagine just sticking to the original, there would be a straight line at 0.05 across the entire plot, giving an area of 0.05 x 1,000,000 = 50,000. If we measure the area under each point, then we can compare. And, to get a fair assessment, we’ll simulate this process thousands of times and take the average. After we do that, here are our results:
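In code terms, that scoring is just a discrete integral of each strategy’s conversion-rate path; the numbers here are illustrative:

# Never testing at all: a flat 5% across the 1,000,000-visitor budget.
baseline_area = 0.05 * 1_000_000  # = 50,000

# One simulated path, as (conversion_rate, visitors) segments (made up here).
path = [(0.050, 200_000), (0.055, 300_000), (0.062, 500_000)]
area = sum(rate * visitors for rate, visitors in path)

print(area, area / baseline_area)  # 57,500 total; 1.15x the never-test baseline

A higher area means more conversions banked over the same million visitors, which makes strategies that spend their traffic differently directly comparable.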

There are a couple of really fascinating results here. Perhaps most remarkable, The Impatient Marketer does surprisingly well! Of course, if you look closely, The Impatient Marketer runs an unrealistic number of A/B tests. However, if you have a low-traffic site that will never see a well-designed test converge, there’s definitely a useful insight here: A/B testing is useful even if you don’t have much data, but you have to run tests continuously to avoid getting stuck too long at a poor conversion rate.

But most interesting to everyday marketers running A/B tests is that The Realist and The Scientist do about the same in the long run! It is important to note that these conclusions hold true only under the assumptions of our model. Still, there is an important takeaway: if you’re thoughtful, you can make tradeoffs in your testing methodology and still get great results.

Takeaways

The biggest assumption in our model is that these tests are running back to back without breaks. Veering away from classical tests works only if acting on inferior information is made up for by always having another test ready to go.

If you want to end your test early because a design for the next round is ready, go for it! If other office pressure is making you want to end a test early, feel free to stop, but make sure you have another test ready to go. Additionally, if you have good cause for stopping early, lean toward being more conservative with your results. You assume a lot of risk if you go with a variant that isn’t a clear winner.

Conversely, if you have no pressure to stop early, stick with the traditional testing setup outlined above! Run the sample size calculator and see if the number of samples needed is in a range you can gather in a reasonable time frame. If so, there’s no reason to break what works; and, in fact, you may find your time best spent exploring other, mathematically sound, approaches to running tests.

In all of our models, being vigilant and continuously running tests is a sure way to minimize any limitations in the testing methodology.

About the Author: Will Kurt is a Data Scientist/Growth Engineer at KISSmetrics. You can reach out to him on Twitter @willkurt and see what he’s hacking on at github.com/willkurt.

How to Find a Winning A/B Testing Hypothesis
http://blog.kissmetrics.com/winning-ab-testing-hypothesis/
Fri, 28 Mar 2014

“Let’s test button color today. It has been so long since we changed it.”

Wait! Testing random ideas that are not based on well-thought-out hypotheses can waste your time, money, and website traffic. To develop a successful test hypothesis, you need to find the problems and concerns your customers struggle with when completing your website’s conversion goal.

So, let’s get into some powerful research methods to discover what’s keeping your visitors from turning into customers, and how you can use those points of friction to formulate viable hypotheses (theories) to test.

1. Usability Testing

Watching people use your design can help you find multiple usability issues and technical glitches. But, in order for you to find the real issues from your usability testing, you must guide your test participants correctly and make sure they are comfortable enough to express themselves freely. It all comes down to asking the right questions.

Avoid Asking Leading Questions

Your language can have a huge impact on the responses or interactions of your test participants, and if you’re not careful about the way you frame your questions, it can skew your test results. It’s therefore a good idea to write down your questions exactly the way you want to address them to participants. For example, you might write this question:

“We’ve spent months building this website. Can you please tell me what you think are its positives and negatives?”

But, to the participant, this is a leading question that reveals the answer you would like. It means, “I’m looking for praise. If you say anything negative, you might hurt my feelings.”
Instead, neutralize the question to obtain a genuine response by saying something like, “Please tell me what you think are this website’s positives and negatives.”

You must assure your participants that the test is not to judge how good they are. There’s nothing they can do wrong. The purpose is to judge how good the usability of your website is.

Frame Your Questions Broadly

Instead of mentioning an exact task, give test participants a scenario. For example:

“Contact a live chat agent.” – Wrong
“If you have to contact us for a query, what would you do?” – Right

This will help you check whether your visitors are able to locate your live chat widget or other contact details easily.

Also, try not to use the exact words from your call-to-action buttons or links. Let’s say you have a “central community and parking maps” link on your website. In this case, you can frame your question like this:

“You have to visit the University Museum tomorrow and wish to know the nearest place to park your car. How would you find out?”

Notice how I did not use the exact phrase of the button “central community and parking maps” in the question here.

Ask General Questions First

Decide your test structure carefully when arranging the order of your questions. For instance:

First tell me – “Have you ever seen a monkey?”
Next question – “Which is your favorite fruit?”

Wait…did you just think of bananas? I know you did. See what I did there?

The first question influences the answer to the second. To make sure this doesn’t happen, ask general questions before getting into questions that are specific to page elements. For example, if you ask about the navigation and then follow up with something like, “What are the three things you remember on the homepage?”, many participants will mention the navigation simply because you primed them to think about it.

Ask Questions about Your Homepage, Navigation, and Pricing

Questions related to homepage, navigation, forms, and checkout process can reveal a number of objections. Conduct a 5-second test, and ask your participants the following questions about the homepage:

“What do you think the website is about?”
“What are the first three things you noticed on the homepage?” (Things like your value proposition, most compelling discount offer, primary call to action, trust signals, or contact number, etc. are the most important things visitors should notice first on your website.)

As you move ahead with your usability test, you can ask questions like:

“Were you able to interact with the drop-down menu/product filter without any problem?”

“Do you have any suggestions to improve navigation?”

“Did you easily understand the words used in the navigation menu/product filter?”

“Did you get confused/stuck at any form field?”

“Did you feel that the form is too long? Or that any form field is unnecessary?”

“Did you feel confused or annoyed anywhere in the process?”

SaaS businesses can ask questions like:

“Do you think our pricing is clear?”

“Is there anything else you’d like to know about our plans before signing up with us?”

“Does our website look trustworthy to you?”

Observe Behavior of Test Participants

If participants stop or get stuck anywhere, or if they take a non-traditional route to complete a task, add it to your notes. Then formulate hypotheses that might solve the problems they faced.

For example, if a participant stops to do a calculation in his head to complete a transaction, perhaps it would be better to provide a calculator on that page. Easy calculation on the website will give visitors a clearer picture of the amount that needs to be paid and most likely will improve your conversions.

Don’t forget to record the time participants take to complete each task. If they are taking more time than you think they should, you have an opportunity for improvement. Frame a hypothesis to reduce the time it takes to complete the task, A/B test it, and see how it goes.

2. Insights from Losing Test Results

You do get negative lifts with A/B testing, even if you are an expert. That’s the truth. But, that being said, what you learn about customers from every test is what counts. Failed tests can sometimes reveal powerful customer insights.

The thing you need to understand is how to get that valuable customer insight. Asking “why” questions to unravel the underlying psychological reasons will help. For example, suppose you test two headlines for an Audi A6 landing page: one emphasizing a limited-edition model and the other emphasizing the price:

If the second, price-focused headline loses, you can theorize about why it lost: Audi A6 customers probably are not too concerned about the price; a limited edition excites them more.

Ask yourself: what does this tell you about your customers? Here, you can say that exclusivity matters to them more than price. Now, that’s customer insight you can use to tweak your marketing message.

So, your next hypothesis can be: changing the emphasis from price to exclusivity in the copy and images will improve the conversion rate of the Audi landing page. Once a customer insight like this is proven in one test, it sometimes can be applied site-wide or even in offline marketing offers to generate multiple lifts.

Losing test results don’t mean you’ve hit a wall. You still can change them into a win. All you have to do is think of a reason “why” your customers buy or don’t buy. The more you understand their motivations and behavior, the higher the winning potential of your hypotheses.

As you look for solutions to reduce friction and roadblocks in your conversion process, you will develop multiple hypotheses to test.

3. Customer Surveys

The previous two research methods will give you clues to many of your customers’ subconscious problems, but asking customers directly will reveal the problems they are well aware of. All you have to do is ask.

But, asking the right questions is an absolute must for finding the insights you can act upon. When conducting surveys, it is important to decide who you are targeting. Dr. Karl Blanks of Conversion Rate Experts explained in a webinar that you can divide your target audience into three types:

Qualified no’s – Prospects who seriously considered you but decided not to buy or sign up.

Recent customers – People who just signed up or placed an order with you.

Existing customers – Loyalists who are your repeat customers.

The purpose here is to convert qualified no’s into recent customers, and recent customers into existing customers.

Once you know the objections faced by your recent customers (even though they overcame them and signed up with you), you can address the objections in the design/copy of your website.

Ask New Customers Open-ended Questions

Set up an autoresponder email that contains a small survey for customers who just bought from you. If you want to learn more than you already know, ask open-ended questions.

Soon after someone signs up, you can send them an email asking these questions:

“Where exactly did you find out about us?” (This will tell you websites/marketing channels where you should focus.)

“Please list the top 3 things that persuaded you to buy from us rather than a competitor.” (Keep this open ended. Let them tell you what they found most compelling about your offering.)

“Which of our competitors, both online and offline, did you consider before choosing us?” (This will help you understand your position in relation to your competitors.)

“What was your biggest fear or concern about using us?” (Open ended again. This will tell you the objections they had in mind before they decided to go ahead with you.)

Remember that asking too many questions can considerably reduce the number of responses you get. For each question, you should have a very clear purpose of how the answer can help you.

Ask Casual Visitors about Their Interests

Casual site visitors probably will not be interested in your product yet. Their behavior will be somewhat different from your customers as they still are in the exploratory phase. Your job is to make them your customers.

Remember that these people most likely will have the same objections your recent customers had before they signed up with you. Insights from the survey answers of your recent customers should give you many hypotheses you can test about how best to nudge the casual visitors into the conversion funnel.

For example, a customer may say he wasn’t sure at first whether the payment method on your site was trustworthy, so he searched Google to check your reputation and then decided to go ahead.

To solve this issue, your hypothesis to test can be: adding a trust seal to the payment step should reduce customer anxiety and improve conversions.

See how Kellogg School of Management tries to obtain objections of potential students through their on-site surveys:

Learning about motivations of these casual surfers also can give you a great starting point to find potentially rewarding hypotheses. Ask them this:

“What brought you to our site today?” This will help you understand user intent. Of course you can add some quick choices for them. For our Visual Website Optimizer site, the choices below are appropriate:

I want to learn more about A/B testing.
I want to sign up for your tool.
I want to find test ideas to improve my website’s conversion rate.
Any other? Please specify ____.

You also can set a trigger for exit intent. As soon as a visitor is about to leave the page, you can ask: “Were you able to complete the task you came to do? Yes | No. If not, please mention why: _____.”
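If you want to wire this up yourself rather than use a survey tool, the usual trick is to watch for the cursor leaving through the top of the viewport. A minimal sketch (the wording and thresholds are just placeholders):

```typescript
// Show the exit question once, when the cursor leaves through the top of
// the viewport (usually headed for the close button or address bar).
let exitSurveyShown = false;

document.addEventListener("mouseout", (e: MouseEvent) => {
  const leavingViewportTop = e.clientY <= 0 && e.relatedTarget === null;
  if (leavingViewportTop && !exitSurveyShown) {
    exitSurveyShown = true;
    // In practice you would render your survey tool's widget here.
    const done = window.confirm("Were you able to complete the task you came to do?");
    if (!done) {
      window.prompt("Please mention why not:");
    }
  }
});
```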

KISSmetrics asks about social media hangouts of its customers in their on-site surveys:

A SaaS company also can ask:

What kind of website do you have?
Travel
SaaS
eCommerce
Publishing
Any other, please specify ___.

Pricing pages often get targeted traffic with prospects who are likely to sign up for a software/service. Thus, we recently decided to run a small survey on our pricing page.

We asked people whether our pricing is clear and whether they have any questions, and this has given us some great insights into the questions our copy leaves unanswered for prospects. We will address those questions in our copy soon and run an A/B test to see how it goes. You should try it, too!

If you run a blog, ask: “What kinds of topics would you like to read more about?” and offer 4-5 topics that you cover as choices.

Smashing Magazine asked their visitors a similar question. You can see their poll results below:

Notice how this takes the guesswork out of your content strategy.

If a customer is taking longer than required to complete the payment process, you can add triggers to simply pop up a question such as:

“Is there anything stopping you from completing this order? Kindly let us know: ____.” One site discovered a cross-browser issue this way that was costing it a lot of revenue.
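One simple way to implement such a trigger is a timer that fires only if the payment form hasn’t been submitted yet. A sketch, with an assumed form selector and a 90-second threshold you would tune to your own funnel data:

```typescript
// Ask what's blocking the order if checkout stalls for 90 seconds.
const STALL_MS = 90_000;

const stallTimer = window.setTimeout(() => {
  window.prompt("Is there anything stopping you from completing this order? Kindly let us know:");
}, STALL_MS);

// Cancel the prompt as soon as the visitor submits the payment form.
document.querySelector("form#payment")?.addEventListener("submit", () => {
  window.clearTimeout(stallTimer);
});
```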

Don’t ask your site visitors more than 1-2 questions or you might annoy them. Web Engage, Qualaroo, and Survey Monkey are some great tools you can use to conduct surveys.

Ask Existing Customers What They Like about Your Business

“What are the three things you like most about our product/service/website?”
“How would you describe our product/service to a friend?”
“Have you praised/criticized us to a friend in the past six months? If yes, what did you say?”

Loft Resumes conducted a product/market fit survey and designed an alternative landing page based on the survey responses they received from their existing customers. The new page was pitted against the original. The test results marked a clear win for the new page, which improved Loft Resumes’ sales by 64.8%. You can read the complete case study here.

4. Heatmaps

Heatmaps tell you which elements are attracting the most attention from customers and which are being ignored completely. While data can tell you “what” happened, heatmaps often can tell you “why” it happened.

One of our customers generated the following heatmap for their homepage:

They realized that, even though the navigation on the top left is almost subdued with its translucent look, it’s still attracting a lot of attention. Thus, it is distracting people from downloading the app, which is the main conversion goal of the page.

They hypothesized that removing the navigation bar should improve conversions. The A/B test finally proved the hypothesis to be correct. Removing the navigation bar improved conversions by 12% for Pair’s website.

Sometimes your heatmap may reveal unnecessary page elements that are drawing people away from the conversion goal or getting too much attention. You can test removing them or replacing them with elements that act as supporting elements to nudge people further toward the conversion goal.

5. In-site Search

In Google Analytics, follow this path: Behavior > Site Search > Search Terms. It will look something like this:

This is the small repository that is your key to understanding visitors’ intent and even the exact words they use to find things on your website. Use the same words as your customers in your copy. It often boosts conversions. Test it out!
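If you export those search terms, surfacing your visitors’ vocabulary is a few lines of code. A sketch that assumes a simple “term,count” CSV export (the file name and format are placeholders):

```typescript
// Print the ten most-searched terms so you can echo your visitors'
// own words in your copy.
import { readFileSync } from "node:fs";

const rows = readFileSync("site-search-terms.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // skip the header row
  .map((line) => {
    const [term, count] = line.split(",");
    return { term, count: Number(count) };
  });

rows.sort((a, b) => b.count - a.count);
console.log(rows.slice(0, 10));
```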

Sometimes you might notice that a lot of people are searching for a particular product that you don’t carry, so you are losing sales. In that case, you can try adding that product to your arsenal. Or, if that’s not possible, you can point searchers to a category of similar products instead of showing them the sad “Product not found” page.

Also, if you notice that a lot of visitors are looking for a common range of products, a hypothesis can be: adding a separate category for this product range on the homepage will boost sales.

Each change you make on your site should start as a hypothesis. Don’t make the mistake of implementing a website change without testing it. Because all of these hypotheses will be customer-centric, your odds of winning increase immensely.

Finally, it’s important that you choose a page that receives good traffic. All this might seem like a lot of work, but nothing comes easy. Invest your time in research methods, and not only will you have tons of test hypotheses, but they will be hypotheses that convert into wins.

Announcing the KISSmetrics and AB Tasty Integration
http://blog.kissmetrics.com/kissmetrics-ab-tasty-integration/
Sat, 22 Mar 2014 15:48:58 +0000

We’re pleased to announce a new integration with a tool called AB Tasty. As the name suggests, AB Tasty is an A/B testing tool. When you run an A/B test using AB Tasty, you’ll automatically be able to send your results to KISSmetrics.

Conducting a test with AB Tasty is simple:

Install the AB Tasty tag on all pages of your site.

Make changes to a page using the WYSIWYG editor. (You can run a multivariate test in which you test multiple elements on a page.)

Set the KPIs you want to measure.

Run the test.

Get your results.

Use Case

Let’s say you’re on the marketing team of a SaaS company and want to A/B test your homepage. Using AB Tasty, the first step is to install their code on all pages of your site:

After your AB Tasty code is installed, you’ll want to run a test. In this example, let’s say you want to test your homepage headline. You would make changes to the headline using the WYSIWYG editor:

Next, you would select “Track Clicks” (your KPI is “Signed Up for Free Trial”):

You’d run the test until you reach statistical significance. You’d also want to integrate the test into your KISSmetrics account. To do this, you would log in to AB Tasty and select “third tool integration” under the options panel:

…and integrate with KISSmetrics:

This integration will create a new KISSmetrics property called AB Tasty Test — 12345 (12345 is the Test ID).
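For reference, setting a property like that by hand would look roughly like this with the standard KISSmetrics JavaScript command queue (`_kmq`); the variation value here is purely illustrative:

```typescript
// The KISSmetrics snippet defines a global command queue.
declare const _kmq: any[];

// Tag the visitor with the variation they were shown.
_kmq.push(["set", { "AB Tasty Test — 12345": "Variation 1" }]);
```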

Where AB Tasty Ends, KISSmetrics Begins

Your original test was set up to see if an alternate headline would lead to more trial signups. AB Tasty will determine which variation performs the best. After it completes the task, it will hand off the baton to KISSmetrics.

With KISSmetrics, you can use the AB Tasty property to create specific reports and segment your visitors. You tracked free trial signups, so a useful report would be one showing which headline variation brought in the most paying customers (not just signups).

To go a level deeper than paying customers, you can view metrics like lifetime value and revenue per user. These insights will let you know which variant brought in the most valuable customers. Again: It’s not just about which test brought the most trial signups, but also which test brought the most valuable customers.

To see our full list of integrations for KISSmetrics, please visit this page.

About the Author: Zach Bulygo is a blogger for KISSmetrics. You can find him on Twitter here. You can also follow him on Google+.

5 Things You Need To Know About Before Jumping into Mobile A/B Testing
http://blog.kissmetrics.com/jumping-into-mobile-ab-testing/
Fri, 21 Mar 2014 15:43:10 +0000

As the Android vs. iOS saga continues, most major apps establish their presence on both platforms. Many opt for iOS first, since there are only a few device models (as opposed to the ever-expanding portfolio of Android devices and manufacturers).

Nevertheless, some have been swayed by the advantages of Android. Our friends at Stack Overflow cited iteration speed and first-party support for alpha and beta testing as the main reasons for choosing Android as the first platform for which they developed their native mobile presence.

Even if the platform differences are not immediately apparent to you, taking them into account is key to a successful cross-platform app. It may be easy to assume that what works for one platform will translate into success on the other, but this type of logic will get you in trouble down the road.

Mobile A/B testing has a big part to play in identifying the differences needed to be successful on both iOS and Android. Here are some of the things we’ve seen that have the greatest impact when testing on the platforms:

1. Demographics

According to this comScore report, iOS users tend to be younger and wealthier: 19% of iPhone owners are between the ages of 18-24 years old (compared with just 16% of Android owners), and 41% of iOS users are in the $100,000+ income bracket (compared with just 24% of Android users).

Android has been shown to be popular with professional and business users. Hacker types also have been drawn to Android because of the possibilities that an open platform offers.

Takeaway: Think critically about the differences in general audiences across the two platforms, and how you can play to those differences through UI, UX, price points, and features.

For example, iPhone users may respond better to a promotion or feature that makes a cultural reference to something that young, affluent users will recognize. Android users, on the other hand, may be motivated more by a feature that lets them customize their experience.

2. Monetization

Despite Android’s impressive growth in market share of devices sold and active users, iOS has been shown to generate more money for developers year after year. iPhone users are more likely to purchase something from their mobile device, and 23% have purchased something on mobile previously (as opposed to 17% on Android).

Flurry took the opportunity at GDC this week to open their datasets on Android games, showing that not only is the Android population skewed toward young males, but some mix of in-app purchases and ad-based revenue is optimal for many mobile games.

Takeaway: A/B test different monetization strategies on iOS and Android in order to capture the most overall value from both platforms. iOS users generally are more likely to download paid apps and make in-app purchases, whereas Android users may be monetized more easily through advertising and lead generation. You also can test different price points, and you may find that one platform’s users have a greater tolerance for higher price points.

3. Usability

Engagement and native UX perhaps are where iOS and Android differ the most. For example, Android’s “intents” system lets users share content from any app using any other installed app, a capability that doesn’t exist on iOS.

iOS has a tendency to attract “power users” who are more likely to engage in all major content categories (social media, news, e-commerce, and games) for longer average session times.

Takeaway: Consider testing user flows and experiences that complement the behavior typical of users on each platform. You may want to test more streamlined and straightforward activation flows for Android, whereas you may be able to rely on iOS users to engage on their own for longer periods of time before you offer them an in-app purchase or ask them to share content with friends.

Additionally, there are distinct UX conventions in iOS and Android – such as navigation, or how actions are displayed – that may work for one platform and not the other.

4. Device Type and OS Versions

A key challenge in Android development is the relatively large range of different devices that use the platform. From Samsung to HTC and Google’s own phones, Android represents a mosaic of different price points, screen sizes and resolutions, and hardware that can make it an unpredictable platform at times.

iOS runs on significantly fewer device types, but, nevertheless, can be complicated by different hardware and OS versions that may not have the capability to run some apps correctly or at all.

Takeaway: When A/B testing new features on Android or iOS, segment by device type and OS to account for how changes affect users on each combination of hardware and software. You may be able to drive more desirable user behavior by pushing only new features on certain devices while keeping the old feature set for other devices. The same may apply to OS versions.
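The segmentation itself is simple once you log device and OS with each session. A toy sketch (the field names are invented for illustration):

```typescript
// Conversion rate per device/OS combination, so a change that helps one
// segment but hurts another doesn't hide inside the blended average.
type Session = { device: string; osVersion: string; converted: boolean };

function conversionBySegment(sessions: Session[]): void {
  const tally = new Map<string, { total: number; converted: number }>();
  for (const s of sessions) {
    const key = `${s.device} / ${s.osVersion}`;
    const t = tally.get(key) ?? { total: 0, converted: 0 };
    t.total++;
    if (s.converted) t.converted++;
    tally.set(key, t);
  }
  for (const [segment, t] of tally) {
    const rate = (100 * t.converted / t.total).toFixed(1);
    console.log(`${segment}: ${rate}% of ${t.total} sessions`);
  }
}
```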

5. Speed of Iteration

At a recent Android meetup at Google’s NYC offices, we heard from Kinsa Health’s CEO Inder Singh about the benefits of developing for Android. One of his key perceived advantages of Android: the speed at which Kinsa can act upon user data, and iterate faster, through methods like A/B testing, without having to go through lengthy app approval processes.

Although new technology in A/B testing mobile apps has cut down the iteration time for both iOS and Android, there still is a strong case to be made that Android is a more flexible platform for which to implement iterative feedback loops and respond quickly to user data.

The time saved through quicker iteration should not be confused with quicker tests, though. Stopping tests early creates the risk of a false positive (also known as a Type I error). In other words, a statistically underpowered test could indicate that an effect (like a 20% boost in conversions) is present when, in fact, the results are skewed by a small sample.
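You can see the effect in a quick simulation: run an A/A test (both variants truly identical) and “call” the test the first time a significance check passes. Checking after every 100 visitors makes a false winner far more likely than the nominal 5%:

```typescript
// Simulate one A/A test with peeking: both variants convert at pTrue,
// and we declare a (false) winner the first time |z| crosses 1.96.
function peekedTestFindsFalseWinner(pTrue: number, maxN: number, checkEvery: number): boolean {
  let convA = 0;
  let convB = 0;
  for (let n = 1; n <= maxN; n++) {
    if (Math.random() < pTrue) convA++;
    if (Math.random() < pTrue) convB++;
    if (n % checkEvery === 0) {
      const pPool = (convA + convB) / (2 * n);
      const se = Math.sqrt((2 * pPool * (1 - pPool)) / n);
      if (se > 0 && Math.abs(convA / n - convB / n) / se > 1.96) {
        return true; // "significant" difference that we know isn't real
      }
    }
  }
  return false;
}

let falsePositives = 0;
const trials = 1000;
for (let i = 0; i < trials; i++) {
  if (peekedTestFindsFalseWinner(0.05, 10_000, 100)) falsePositives++;
}
console.log(`False winners with peeking: ${(100 * falsePositives) / trials}%`);
```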

Takeaway: If you’re not already leveraging new technologies to quickly run A/B tests and make app feature or UI decisions for your mobile app based on data (and not opinions), there’s no reason not to get started.

And, when you’re ready to commit changes to your app’s binary that have been validated through rigorous testing and analysis, it will be faster to push those changes through the Google Play Store than the Apple App Store.

About the Author: Zac Aghion is the CEO and Co-Founder of Splitforce. Data is power, and it should be easy to leverage data to make better decisions. Splitforce provides A/B testing for mobile apps.

Are You a Victim of Your Own A/B Test’s Deception?
http://blog.kissmetrics.com/ab-test-deception/
Thu, 13 Feb 2014 17:30:05 +0000

A/B testing is all the rage, and for good reason. If tweaking your home page a bit can get you 25% more signups, who wouldn’t try it?

The best thing about A/B testing is the awesome selection of tools. Optimizely provides a live editing tool that puts page tweaks and goal tracking in the hands of marketers. Visual Website Optimizer offers a suite of interesting measurement tools, including behavioral targeting, which allows you to show different variations depending on a visitor’s actions.

Even with such great technology available, there are a few things to watch out for. The first is statistical significance, which has been written about enough (here, here and a mini-site here if you’re interested).

Another is the common mistake of assigning a goal that measures the short-term effect of a test rather than the long-term effect on your business. We made this mistake at Segment.io, and that’s the story I’ll be sharing in this article.

The Winning Variation is Wrong

Usually the goal of an A/B test is to get people to take a single action on a single page. Common actions include clicking the signup button on your home page or joining an email list. Those actions are great vanity metrics, but the fact is that more visits to your signup page or a bigger email list aren’t very sound business goals.

The problem with the single-action approach is that it assumes a single action provides value to your business, which it usually doesn’t. Most A/B tests are done at the top of an acquisition funnel, long before visitors have proven their worth.

The goal of an A/B test should be to move the visitors who are most likely to become high-value customers from the top of your funnel to the bottom of your funnel.

How We Messed This Up

I’ll share a super simple experiment we did at Segment.io that illustrates my point. We recently ran an A/B test on our shiny new home page. Our test was simple: we created two variations of the signup button text. The control version read “Get Started,” and the variation we chose was “Create Free Account.”

Here’s what our A/B test variation choice looked like:

Before long, “Create Free Account” beat “Get Started” with a 21% increase in conversions. Time to call our developers and make it permanent, right? For most people that would be the next step. But, being an analytics company, we always have an abundance of nerds around ready to dig deeper into our data.

To make analysis easier, we tagged each tested visitor with the variation they were shown. And, since Segment.io automatically sends Optimizely variations through to KISSmetrics, Customer.io and Intercom, we were able to segment out visitors who saw each variation in all of our tools.

How We Found the Real Winner

First, we looked at the immediate “next step” for visitors after they clicked on the call to action. KISSmetrics funnels were our tool of choice for this analysis. We used a simple funnel of Viewed Home Page > Viewed Signup Page > Signed Up, and split out the funnel by the “Experiment home page CTA” property. Half of the visitors at the top of the funnel had the value “Get Started” and the other half were tagged as “Create Free Account.”
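The math behind that split is nothing exotic; conceptually, it’s just grouping visitors by the variation property and computing each funnel step’s rate. A toy sketch with invented field names (not KISSmetrics’ actual schema):

```typescript
// Per-variation funnel: home page -> signup page -> signed up.
type Visitor = {
  variation: "Get Started" | "Create Free Account";
  viewedSignupPage: boolean;
  signedUp: boolean;
};

function funnelByVariation(visitors: Visitor[]): void {
  for (const v of ["Get Started", "Create Free Account"] as const) {
    const group = visitors.filter((x) => x.variation === v);
    const viewed = group.filter((x) => x.viewedSignupPage).length;
    const signed = group.filter((x) => x.signedUp).length;
    const step1 = (100 * viewed) / Math.max(group.length, 1);
    const step2 = (100 * signed) / Math.max(viewed, 1);
    console.log(`${v}: ${step1.toFixed(1)}% viewed signup, ${step2.toFixed(1)}% of those signed up`);
  }
}
```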

It turned out that the visitors who clicked “Create Free Account” were less likely to complete the signup form. This drop in signups effectively wiped away the 21% gain that button made in our A/B test on the home page. That meant there no longer was a clear winner between “Create Free Account” and “Get Started.”

But there was one last thing to examine: the pricing plan people ultimately chose. It turned out that the visitors who clicked on “Create Free Account” were much less likely to sign up for our paid plans. Those who clicked “Get Started” were much more likely to sign up for paid accounts.

So, in the end, the real winner for us was “Get Started.”

How to Avoid This in Your Business

Watch all of your results! Be especially wary of optimizing for a single click or action. Remember, a single click usually does not provide direct value to your business. Long-term gains are always more important than short-term conversion wins.

Don’t call an A/B test based on an increase in clicks, opt-ins, or signups alone. Tag visitors with the A/B test version they saw, and watch out for unintended consequences of the tests you run. A full-page opt-in form might lead to a bigger email list, but what if it degrades the value of your user base?

Here’s a checklist to help you find the real winner in your A/B tests:

Save test variations to user profiles with a tool like KISSmetrics.

Watch the effect of each test variation all the way through your acquisition funnel.

After a few months have passed, check the lifetime value and churn rate of users for each variation.

If you have questions about how to set up any of this, I’ll be watching the comments on this post.

About the Author: Jake Peterson leads customer success at Segment.io, helping thousands of customers choose and set up analytics and marketing tools. If you’re looking for free advice, check out their Analytics Academy. Segment.io is a single, simple integration that gives you access to 70+ analytics and marketing tools with the flick of a switch. Check it out here.

How to Get Feedback for Your App Fast
http://blog.kissmetrics.com/get-app-feedback-fast/
Tue, 04 Feb 2014 17:03:01 +0000

When talking to companies about developing new features or products, I’ve observed that there is concern about getting valid qualitative data and feedback from users.

Many of you are worried that when you and your company personnel “get out of the building” to look for people and get their thoughts on your app or service, you may not find the right person or target market. Thus, you’ll waste a lot of time looking for anything useful to drive your ideas.

Companies generally use surveys as a cheap and effective way to get feedback from users. It’s difficult (or resource-intensive) to get people to come in for a formal user research study or for you to travel to do ethnographic field studies.

However, data from a survey is only as good as the survey itself and the people who participate. Getting feedback from friends and family is better than nothing, but you’ll likely see a difference in insights from someone in the target market/field/business that you’re trying to reach.

This system will automate feedback to you fast.

If you’re reading this, you probably don’t have time or resources, but you do need data fast. So let’s create a way for you to get the data to come to you. Remote research and feedback collection is a fast and cheap system that constantly feeds you the answers you need. I’ve developed a quick solution that allows you to screen, recruit, and reach people fast.

Step 1: Create a Screener to Find the People You Want

As soon as you have some inkling of what you want to know more about, start writing a screener to define and select the type of people you want to talk to. A screener is a set of questions that acts like a recruiting questionnaire and mini-survey at the same time. I like to keep my screeners short, 3-5 questions max, so that I don’t discourage people from giving me feedback.

After conducting dozens of user research projects across product, engineering, marketing, support, and sales, I’ve found that I get more useful insights in less time this way than by reading through irrelevant comments. Remember, this is about BOTH the speed at which you receive this data AND the quality of the data you receive. There’s no point in getting back a bunch of useless data fast.

Define your criteria

List some characteristics of the target users you want to obtain information from. This can be things that you know off the top of your head like “active app users.” For example, when my team wanted to validate whether or not a feature should be worked on higher in the priority list, I translated that into screening only users who had touched that feature within the previous 30 days. They were the most relevant and highly active users to gather insights and feedback from.

If the pool was too small, I would expand the criteria to include a larger date range of activity or broader attributes. I don’t recommend broadening your scope too far, though, because you’ll start interacting with people who do not have much to say about what you’re trying to discover.

Write the screener

The screener should be kept short. I recommend 3-5 questions. Don’t forget to get their email address!

Once you have an idea of who you want to target, write specific questions that differentiate or screen out unrelated cases. For example, rather than asking KISSmetrics users whether or not they find a specific report useful or not, I ask:

What report do you use to find revenue data?

What other applications or ways do you use to analyze revenue?

Do you consider revenue a primary metric within your analytics?

I recommend creating the screener as a Google Form so you can collect responses in a spreadsheet automatically. I am providing below an example screener I’ve used before. You can edit it yourself if you’d like to adapt the screener for your own research efforts.

My screener skips asking people for their names and other demographic information because I want to make it easy for them to give me feedback. That information is less relevant to me than what insight they can provide me. I can always look up their names in their account data if I need them.

DIY Fancy Method

One way I love getting people to interact with my screener is to give them a micro-survey or modal within the app. It gets their attention, but it’s relatively unobtrusive because they can decide whether to deal with it or dismiss it. Not everyone wants to give you their input, so having an option to dismiss it is important if you don’t want to annoy your users.

If users answer “Yes” to wanting to help us improve a feature (shown on the feature that we’re asking feedback for), they are taken to a CTA to fill out a screener.

Qualaroo works very well here, specifically because you can expand/minimize and build in a flow of calls-to-action based on the answers.

I show them a CTA to go fill out the screener. This is repeatable for every new screener you develop.

The reason I like this method is because I know that, not only are these active users, but the micro-survey or modal is displayed only on the parts of the app that are relevant to what I want to validate. Users who fill out your screener this way are: 1) contextually aware of what you’re trying to ask, 2) more likely to answer your screener, and 3) a really good fit for getting qualitative data.

This system is easy to repeat once you’ve set it up because you can switch out your screener and have a different “ask” on your micro-survey or modal when you run it. This perpetually lets you get people piling in who are either prime for a follow-up interview or on a list for further surveying.

DIY Direct Method

If you don’t have the luxury of using a micro-survey tool like Qualaroo or setting up a quick modal, just email your customers within a certain criteria. My previous example mentioned contacting only active users, which I further translated into people who had used a specific feature within the previous 30 days. You could do this by having an engineer pull data from the database, or, in our case, we just use the KISSmetrics People Search to query the exact people we need.

Here, I was looking for people who created at least 10 metrics to investigate how they handle organizing a large number of metrics. This report gave me a list of emails so I could directly reach out to the users who met the criteria.
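If you don’t have a People Search-style tool handy, the same screen is easy to run over raw event logs. A sketch with invented event and field names:

```typescript
// Return the emails of users with at least `minCount` occurrences of a
// given event, e.g. usersWithAtLeast(events, "Created Metric", 10).
type AppEvent = { userEmail: string; name: string };

function usersWithAtLeast(events: AppEvent[], eventName: string, minCount: number): string[] {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.name === eventName) {
      counts.set(e.userEmail, (counts.get(e.userEmail) ?? 0) + 1);
    }
  }
  return Array.from(counts.entries())
    .filter(([, n]) => n >= minCount)
    .map(([email]) => email);
}
```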

I like to set up a specific email campaign in MailChimp or Pardot to make sure I separate out my general marketing email list from my data gathering list. Sometimes people will want to opt-out of user testing and feedback over time but not your marketing newsletter.

I like this method less because it is not as automatic as the first method. You have to actively set up different lists of people and send out email campaigns. But it works. It works pretty fast if your email system is good at cloning and repeating campaigns, too. It’s likely you already have some sort of email service provider in place so you’ll just be able to piggyback off of what you’re already paying for.

Using both of these methods, I’ve been able to recruit 7 people for phone/Skype/Google Hangout interviews and receive over 100 screener responses within 24 hours in order to draft a more targeted survey. That’s more than enough for me to dig into for validation and research purposes in such a short time.

If you’re super scrappy or bootstrapped, just do this all in Gmail instead of an email service provider. This method will cost you $0 but more of your time.

When you have piles of people coming into your lap, it’s time to get data out of them. Pick people from your screener who are relevant to talk to individually. Or put a subset of them in a targeted survey email blast.

Before you start interacting with your users, you’ll need to figure out the right research method for the job.

Surveys are great for:

Tracking sentiment over time – e.g., NPS score every quarter, or tracking customer happiness/satisfaction before and after a feature launch

Quantifying how many users are running into a specific problem – e.g., “Have you ever run into X while trying to do Y?”

Measuring attitudes or customer understanding of concepts or tasks – e.g., “Do you know which report to create when you want to analyze X?”

That being said, I’d recommend NOT using surveys for:

Usability questions – It’s better to view people through a screensharing session in a user study to identify usability problems.

Understanding user behavior and habits – People may not fully realize what they are doing or may not be able to accurately communicate it. Viewing analytics data or having a screenshare session of what they do when they want to find something tells you a lot more than having them try to convey it in a survey.

Gathering feature requests or ranking priorities (e.g., a stack ranking of what people want or what they want done first) – You’re going to get a pretty mixed list from everyone. Prioritizing that list based on the number of customer responses, the highest-paying customer, what’s easiest to do technically, or what incurs the least technical debt will be an extra, confusing step for you and your team. I prefer looking at analytics logs to get data about which features are or aren’t driving business growth.

Find What Works for You

After having your first set of conversations or first set of survey results come through, you’ll have a good idea of what you could have done better and what to change going forward.

What’s best, though, is that you’ll have a system for constantly pulling in people to give you feedback, nearly for free. You could go the fancy route and use Qualaroo/SurveyMonkey/MailChimp/Pardot, but everything I’ve described can be done for free using Google Drive/Docs and Google Hangouts/Skype.

By providing a highly contextual environment for both yourself and your users, you’ll be able to gather relevant and useful feedback in a matter of days. Using this system, I’ve never had a problem with asking myself “Who should I talk to when I want to get some ideas about X?” I have a gold mine of data and contacts that I simply reach out to. And because of the highly contextual environment I’ve introduced, the success rate of people responding back to me has been incredibly high.

Creating a screener for your idea, screening people based on their behaviors related to it, and asking only those who are interacting with a particular feature creates a win-win situation where you get the best data possible. The users happily provide feedback and data within context. And all of this happens very fast.

About the Author: Chuck Liu is a UX Researcher at KISSmetrics and loves to cook in his spare time. Find him on Twitter @chuckjliu and Quora.

Want Better Product Innovation? Here are 10 Customer Activities You Should be Testing Now
http://blog.kissmetrics.com/better-product-innovation/
Tue, 28 Jan 2014 16:06:10 +0000

When you focus on what matters most to your customers, you focus on what matters most to your bottom line.

Continue to surprise and delight your customers, and they will become your brand champions, your roadmap inspiration, and a valuable source of repeat revenue. What your customers do and why they do it are the most important pieces of information for any product or retention marketing team. With the right combination of data, user insights, and thoughtful leadership, you can zero in on innovation that truly moves your product – and business – forward.

Build Your Roadmap with Analytics and Insights

KISSmetrics provides deep analytics into who your customers are and what they are doing on your website. Here at UserTesting, we find that when this deep data is paired with in-person user research, you can uncover many significant insights.

By watching people use your site, you can hear them explain the logic behind why they chose to click where and when they clicked. It’s incredible how the excitement (or frustration) of a customer attempting to complete a task on your site will motivate you to rethink your team’s roadmap.

By incorporating qualitative feedback with data analysis, you’ll find many ways to improve your customers’ experiences online. To help you get started, I’ve outlined ten different scenarios that could lead to higher engagement, affinity, and ROI.

Ten Customer Activities You Should be User Testing

1. Log In

Take a look at how many times your customers are logging in. Once you have identified your power users, consider how to make their login experience better. Test new login options like social integration, see where customers go to “get in,” and devise ways to optimize messaging on a new login page while you have a captive audience.

ShortStack – Seasonal messaging from ShortStack catches people at login, reminding users to think creatively about their next campaign while subtly informing them of the service’s capabilities.

2. Dashboards

Review your usage numbers, and then test to discover what information your customers wish was available. Uncover whether or not they find your dashboard helpful, how often they refer to it, and if it is as customizable as they need in order to complete their tasks.

Google Analytics – Since there are hundreds of ways to slice and dice data, Google has put considerable effort into allowing users to customize their personal dashboards to fit their at-a-glance needs.

3. Purchase History

If you’re running any kind of SaaS or e-commerce site, dig into your data and identify those who often visit their account details looking for previous purchases. Explore their habits, whether they email or download receipts (or wish to), refer to shipping status, look for purchased product details, or want to repeat the same purchases again.

Harvest – For many freelancers and small design studios, Harvest has made it easy to look up recurring payment history, which is especially helpful for folks during tax season crunch time.

4. Account Upgrades

Pull a list of your “toe in the water” customers and then optimize for an upgrade. This is a critical lever for increasing revenue. Test ideas for unlocking new features and how to position them as benefits. Examine hesitation points and what would compel customers to move forward. Find out what they already love about the product, and then look for ways to make it even better with an upgrade.

LinkedIn – Notice how LinkedIn has integrated a live chat feature on their Upgrade page. No doubt they are hoping to help people take the leap into an even more benefit-driven social networking experience.

5. Search

Are your customers spending a lot of time on search results pages? Find out why, because chances are their usage will drop off quickly if they can’t find what they want. They’ll just look for it on someone else’s site.

Udemy – The Udemy search results page has been designed to highlight the most relevant courses on their website, with each course providing enough detail to inform the next click.

6. Onsite Promotion

Just a general rule of thumb: if it looks like an advertisement, it will be ignored! Try upgrade messaging, requests for reviews, cross-promoting user tips and tricks, and community-building promotions to find the right approach for your audience.

Smashing Magazine – Smashing Magazine seems to have found the right balance between aesthetically pleasing advertising and owned content promotion. Notice their own sidebar promotions are in no way designed to mirror the look of the ads above.

7. Social Sharing

Are your customers helping you generate more leads? Improve word-of-mouth marketing by discovering whether your customers find your product or content story-worthy and easy to share in the way(s) they want to share it.

Dropbox – By incentivizing customers to tell friends about their service, Dropbox has created an incredibly strong brand awareness campaign. The key? They built in a reward that is almost impossible to refuse.

8. Navigation

Information architecture changes can really impact how people use your site, so it’s important to explore any changes thoroughly. Look at how often the “back” button is clicked, how many times visitors return to the homepage to “start over,” and how often they use footer links to accomplish their tasks. Ask customers to participate in a card-sorting exercise, and test your prototypes.

Crocs – The Crocs web team has a clean navigation, so it’s easy for shoppers to locate their products by type or collection. Bonus points for having a solid mobile site navigation!

9. Content

If you see a large drop off in customer usage, take a look at the content that is available. Consider function, format, and fun factor. If your customers don’t find your content helpful or interesting, chances are good they might start losing interest in your product. Make sure your content resonates with your audience.

Marketo – With an advanced product like Marketo, it’s critically important that users have plenty of resources in order to make the most of their investment. Marketo’s brightly colored University is packed with content to help users boost their knowledge.

10. Multiple Devices

Ask your mobile-first customers to perform their most common tasks, highlighting their primary use cases. Test ways to ensure these user flows are properly addressed to encourage usage across devices. Run your new (or improved) apps through a set of user tests in a prototype phase to refine your design and usability.

Spotify – Spotify has taken their device-specific experiences to new levels by incorporating smart logic tailored for the user. Notice how their mobile website detected that I have their app installed and offers to take me directly to it.

Conclusion

Deep data and user testing should be an integrated toolset for anyone involved in ensuring their website supports the bottom line. Understanding the who, what, and why of your customers means you spend less time guessing and more time turning a good experience into a great one.

About the Author: Stef Miller is a marketer at UserTesting, where she spends most of her time connecting people with content. Miller has worked for global corporations and teeny tiny studios, won awards from AIGA, AAF, and AMA, and believes that true happiness comes from collaborating with creative people to make awesome things happen. You can connect with Stef on LinkedIn and Twitter.

4 A/B Testing Mistakes That Can Kill Your Business – And How to Avoid Them
http://blog.kissmetrics.com/4-ab-testing-mistakes/
Tue, 28 May 2013 17:21:39 +0000

Everyone agrees that optimization and testing are important keys to success. And everyone thinks they know the best ways to use them.

However, in my experience, testing is a double-edged sword. If it’s done right, the benefits pour in; if it’s done wrong, you can drive a business into the ground.

The following testing and optimization missteps are four serious and, sadly, common testing pitfalls I see businesses make every day. The good news is that, once identified, they become easy to avoid in the pursuit of optimization that gets you where you need to go.

Mistake #1 – Optimizing for Maximum Conversion at the Expense of Your Promise

As the saying goes, left unchecked, all optimization leads to gambling and porn. While a bit extreme, the adage makes a clear point: when you optimize solely for conversions, it’s easy to lose touch with what you really do.

There is a dangerous allure to the quick win that pulls focus from the true value of your product. It can happen little by little, like a subtle current; but, eventually, you look up and you’re miles away from your core value. If you find yourself thinking solely about maximum conversion at the expense of everything else, you’re setting yourself up for failure.

Successful products share a common attribute: they are a must-have experience. The experience people can’t live without is what inspires them to share with their friends. When the must-have experience resonates with your audience, you’re on your way to product / market fit.

The experience delivers a promise, and that promise must anchor all optimization efforts. Optimization that doesn’t align can badly damage your business by diffusing the message and confusing customers.

An easy example of this is the late-night TV ads that promise miraculous results, like the “Shake Weight.” They may make a lot of one-time sales with their amazing promises, but the customer receiving the vastly oversold product is sure to be disappointed.

Optimizing for conversions at the top of the funnel creates a glut of dissatisfied customers who trash your business on the back end. The promises may drive conversions, but they don’t create lasting value for your business.

Instead of worrying about how you can tweak your landing page to be more and more aggressive, focus on creating a compelling hook or promise that is true to your product and offers real benefits to the consumer. Resist the urge to optimize on value propositions and promises that aren’t congruent with what your product really delivers.

Create a testing plan that keeps the core experience at the forefront. This ensures optimizations are in line with your product vision and converted customers are there because of the promise of the must-have experience. So, rather than feeling disappointed or tricked, customers will feel like they got exactly what they were looking for, which creates real value for your business.

How do you understand what optimizations may or may not be relevant and properly aligned? Ask your customers. You can use surveys – we humbly suggest Qualaroo – customer development calls, keywords on inbound traffic, or heat maps. These signals can point you to the parts of the product that most resonate with your audience. Use that data to create the hooks and promises that are most likely to trigger positive responses.

Mistake #2 – Putting Conversion in the Way of the Must-Have Experience

Organic and sustainable growth comes from customers who love the must-have experience of your product. They use it regularly; they pay for it; they give you feedback to make it better; and they tell their friends. This is growth nirvana: a world of passionate users accelerating growth that keeps on going.

But the key to the kingdom is the must-have experience that users fall in love with, not the ad or the white paper. If you’re optimizing for conversions from your ad traffic but simultaneously making it harder for users to access the must-have experience, you’re shooting yourself in the foot.

For example, if your must-have experience comes from testing the product and playing with it, but you’re optimizing for conversions to a drip marketing campaign and digital whitepaper download, you’re probably doing it wrong.

You may be optimizing the conversion rate on the front end, but you’re adding friction to the user’s quest to get to the must-have experience. This friction blocks people from the must-have experience, thus starving that engine of growth for your business.

Solution #2 – Get Users to Your Must-Have Experience as Fast as Possible

Focus your conversions on getting visitors to the must-have experience with the least friction possible. This means optimizing your user flow to remove unnecessary steps and complexity in order to maximize the number of visitors who make it to the must-have experience. Ask yourself “What are the absolute required pieces of information to get started?” and “How can I eliminate extra clicks?”

Funnel analysis can show you where the big drop-offs are in your current process. Look for ways to eliminate the big bottlenecks so that you get the biggest lift from your efforts. Complement your funnel research with surveys and customer development to get a clear understanding of the user dynamics in your funnel, and use the feedback to improve conversion.

Optimizely understands the importance of a direct and simple flow. They’re currently testing paid search ad units that ask users to input the URL they want to try Optimizely on. When a visitor enters a URL and clicks “try it out,” they hit a landing page with a sign-up form.

Right behind the form, though, they can see the website and the experiment builder waiting for them. No landing page with an email confirmation. No extra steps. From the ad to the testing interface, the visitor is ready to go with two clicks. That’s taking people directly to the must-have experience.

Compare this to Adobe and their A/B testing tool. Their ad shows up in the same search and takes you to this landing page. This page asks for everything under the sun just to get a white paper. It’s completely disconnected from the must-have experience and as full of friction as you can imagine. It’s no wonder that Optimizely is one of the first names that comes to mind in the testing space, while most people have no idea that Adobe offers A/B testing.

Hello Bar also gets visitors right to the must-have experience. When you click “try it out,” you are taken right to the Hello Bar builder. You can customize it and even preview it on your site before completing the account setup process. This allows visitors to get the must-have experience first, before jumping through registration hoops. And once you see Hello Bar on your own site, you’re far more likely to sign up for the service.

Mistake #3 – Wild Goose Chases and Random Testing

Too often, tests start with a random, offhand question from an executive who has given little thought to how the results actually will be used to improve the business. These can be in the form of micro-optimizations (see mistake #4) or tests that aren’t focused on creating quantifiable learning. When you take a “test whatever” approach, you’re missing out on discovering what truly is preventing conversions.

While you may catch lightning in a bottle through sheer serendipity, it is more likely you will end up with a heap of inconclusive data, leading to little learning and little true optimization. The effort and the lack of payoff can frustrate the team and take the momentum out of the program entirely. This leads to stagnation in optimization and growth.

Solution #3 – Structure Your Testing around Core Hypotheses

To keep from ending up on fruitless chases without any actionable results, start by identifying points of confusion for the user in either the hook/promise or the funnel itself. User testing and surveys can help you determine where to focus your optimizing efforts first.

Once you have user feedback, create your own hypotheses about what changes will move the needle or provide learning to improve your business. From those assumptions, you can build a testing plan that helps you work through the optimizations that will prove or disprove those hypotheses. If your testing is not anchored in a plan, you’re liable to eat up your business’s valuable time and resources and, in the worst case, grind progress to a halt.

When you are testing, always start with the same questions:

Where are users confused in the funnel?

What’s our hypothesis for the test?

Is this test likely to impact results?

Is this test the best test we can run?

Does it make sense to test this in light of what we’ve learned so far?

How long will it take to learn from this test?

With these questions, you can refine your testing activities to the specific ones that truly can help your business.

In a great example, DHL focused on imagery and form placement in their landing page A/B tests and drove massive lift. Notice that, instead of starting with the promise, the copy stayed the same. This test was run to see if a long-term champion template that had plateaued could be unseated by a new challenger. From the blog post describing the test:

The A/B split tested a long-standing “winning” template against a Challenger. The Challenger template increased the visibility of the form, moving it into the top right corner, adjacent to the courier image. Additionally, a friendly male courier image replaced the logistics image.

By having a hypothesis around where the page could be improved, they were able to focus on the elements they thought would be most likely to move the needle. Read the full post here.

Mistake #4 – Micro-Focused Testing

It’s easy to think of testing and immediately go to button color tests or copy tests. After all, these are the kinds of tests most often cited in blog posts about optimization. These tests are easy to run, and you can think of dozens of variables to implement. The trouble is you can run micro-optimizations for months and, in the process, leave a pile of money on the table with little gain.

This narrow testing limits you to optimizing around a local maximum, while a broader perspective on testing can help you find the true maximum lurking just out of sight. Like rearranging the deck chairs on the Titanic, micro-testing gives you incremental improvement in your landing page conversion, but, down below, the fundamental economics of your acquisition funnel and revenue model are in flames.

Solution #4 – Test Broadly

Take a broader, macro view of your testing and optimization strategy. Test and optimize everything, from your business model to your method for delivering the must-have experience, to find the upside that really will move the needle.

When you create your testing plan, step back and look at the big picture. Ask yourself what tests you can run that will give you insights into how to improve the performance of your business:

How quickly can you get someone to the must-have experience?

How can you change the user flows and funnels to reduce friction and improve the rate at which visitors become users?

Once you’ve identified those broader tests, start to drill down into smaller-scale optimizations around landing pages and their elements.

SmartShoot focused on optimizing their pricing and products page by testing new products that didn’t even exist yet. By understanding what features their customers really wanted, and by thinking of optimization at a broader level, they were able to gain a 233% increase in conversions. If they had focused on optimizing only the layout or design of their pricing page, they might have found incremental lift but missed this massive win.

Putting It All Together

Testing is crucial to the success of your business, but equally important is ensuring that you’re taking an approach to testing that will move you toward success. Focus on tests aligned with your core experience to ensure that conversions create happy customers who will help spread your message.

Also, optimize your conversion funnels to get visitors to the must-have experience with the least frustration possible. Be rigorous in your plan and test broadly to avoid the trap of being too narrow or micro-focused in your approach. If you avoid these mistakes, you’ll be able to focus on the big opportunities and test your way to results that really move the needle.

Have an optimization that really worked for you? Show it off in the comments.

About the Author: Sean Ellis is currently the CEO of Qualaroo, a marketing software company that empowers marketers to better engage, understand and convert their website visitors. Prior to founding Qualaroo, he was the first marketer at Dropbox, Lookout, Xobni, LogMeIn (IPO), and Uproar (IPO) and also held interim marketing executive roles at Eventbrite, Socialcast, and Webs. Follow him on Twitter.

6 Simple Elements That You Can Optimize for More Home Page Sign Ups

Sure, generating more traffic for your website is always a good thing. But what if you could increase conversions on your site with just a few small tweaks? It could cost a lot less and take less time.

Below are six things we did that increased our sign-up rate by more than 75%. But first, let’s take a look at our baseline.

This was our site before we spent time optimizing it:

The first thing you may notice is that there is no clear headline stating what this product actually does! There is some small text that says “The Online Gantt Chart.” Many of our visitors don’t even know what the term “Gantt chart” means. We knew we had to make this bigger and pick some text that people would understand immediately.

1. Short Simple Headline

Try making the headline shorter and more obvious. It’s always tempting to think people understand your website better than they really do. You know what you have because you spend so much time thinking about it. However, others don’t have that advantage. They’re likely just browsing around the web, stopping by your website quickly to see if you have anything to offer them.

You have only a few seconds to capture a visitor’s interest. Make a quick, bold statement so they will decide to hang around your site longer. We made it very clear that our product is for “Simple Project Scheduling.” This change, with a few of the others below, was part of a site makeover that resulted in an increase of over 50% in free trial signups.

As you can see, we changed more than just the headline here. We will cover the other changes as we go through this article.

Evernote has an awesome headline: “Remember everything.” That says it all about their product in just two words!

2. Images

Yes, an image can say a lot about your website or product without having to use many words.

We put a bright, cheerful image of our software on the home page. It says a few things right away. People can see immediately that our software has a fun, lively design that promotes simplicity and allows them to visually schedule their projects.

All of this was communicated with just a nice screenshot carefully designed and cropped to showcase the great aspects of TeamGantt. (It’s much easier for someone to look at an image than to read a lot of boring text, right?) We also updated the background color to a bright, happy shade of blue. This change also contributed to our initial 50% increase.

WP Engine ran several tests and found that an image of their staff is what works best for them. This is probably because they are a hosting company, and support is what matters. Seeing a bunch of friendly, smiling people is something that visitors can relate to, and it worked great for increasing their conversion rates.

It may have been tempting for WP Engine to put up screenshots of their software or some websites, but they were smart enough to know that wasn’t going to work for them. They also backed up their assumptions with A/B tests.

What is it that makes your potential customers feel good about your company? Think about this when picking out images for your site.

3. Social Proof

Okay, so you hooked some visitors with your great headline and nice image. But now you have to prove to them that what you have actually is awesome! There are a few types of social proof that work really well:

Testimonials: Try not to use boring text from people no one knows. Sometimes, this can have the opposite effect and make people not trust you. Try to use a picture of the person and link to their company. Some business owners feel awkward asking for testimonials, but we have found that people are extremely helpful here.

For instance, you may receive a nice email after providing good support to a customer who says something like: “Love your product and support! It’s made a huge difference in our company.” If so, simply reply and say: “I’m so glad to hear that! We are getting ready to update our website and would love to feature a quote from you. We could even link to your company so that you could get an inbound link and extra traffic to your website.”

Initially, we had some nice quotes from people on our website. However, nobody knew who the people were. There also weren’t any pictures of the people. This can lead to visitors questioning if the quotes are even real. Our quotes were real. I promise! But people may not have any way to know that.

We were fortunate enough to have Ryan Carson and his company use TeamGantt for the launch of their popular startup, TeamTreehouse.com. Ryan is widely known throughout the startup and design communities, and his recommendation gives us a lot of credibility with people who know him. We asked him for a quote, and he kindly provided one for us.

We replaced the other quotes with this quote from a well-known and well-respected individual. We also decided to make it big so that it stands out and people actually read it.

Tweets about your company: An easy way to show some more legitimate testimonials is to use tweets about your company/product. We usually “favorite” especially nice tweets about TeamGantt. We built a Happy Customers page, and we included a section that pulls in random tweets that we “favorited.”

Case Studies: We offer a few short case studies with summaries about how people used TeamGantt for specific projects and the benefits they got from it.

Press Coverage: Were you in The New York Times? Show it on your website! Present a nice quote from the article and the newspaper’s logo. Have you not had any press yet? Well, then, go get some. Here are a few great ways to generate some PR.

We were featured in TechCrunch last year and wanted people to know it.

Logos of Customers: Do you have some big name customers that will allow you to put their logos up on your website? This can establish some instant credibility with others.

Adding these changes helped account for a 13.8% increase in one of our later iterations.

4. Video

There are a few different types of videos that may help improve the communication of your message. Maybe a video that walks visitors through your product and how it can be used. Or maybe an animated video that explains the problem your product solves. Consider hiring a company to create an explainer video.

On our TeamGantt page, we tried adding a product overview video. We made this video ourselves using ScreenFlow for Mac and iMovie. If you want to tackle creating an explainer video yourself, I recommend reading this article: http://blog.kissmetrics.com/creating-a-explainer-video/.

Since we created our video, it has had over 51,000 views and helps give people a quick understanding of what our product does.

This video is one of the things that helped us in our first big revision that resulted in an increase of over 50% in signups.

Later, after adding the video, we decided to try different placement options to see if we could get even more out of it. Here are the 3 options for placement of the video:

Baseline: No video directly on the home page. There was only a button that would take people directly to the video.

Video at the top of the page: Decrease of 5.3%

Button + Video further down the page: Increase of 3.3%

Yes, there actually was a decrease when we put the video up at the top of our home page, which was a bit of a surprise to us. This is one of the reasons it is so important to A/B test.

It turns out that the best placement for the video was below the fold, about 1/3 of the way down the page. So, instead of losing 5.3% of signups, we gained 3.3%. A 3% increase may not seem like a lot, but if you get a few of these from various changes, it can really add up.
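A quick bit of arithmetic shows why. In the sketch below, the 3.3% lift is from the video placement test above and the 8.6% is from the longer home page test described later in this article; the other two lifts are hypothetical:

```python
# 3.3% (video placement) and 8.6% (longer home page) come from this
# article; the other two lifts are hypothetical examples.
lifts = [0.033, 0.086, 0.05, 0.04]

combined = 1.0
for lift in lifts:
    combined *= 1 + lift  # independent lifts multiply rather than add

print(f"Combined lift: {combined - 1:.1%}")  # roughly +22%
```

Four modest wins stack up to an overall lift of more than 20%, which is why collecting several small improvements is worth the effort.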

5. Longer Home Page

This definitely is worth trying. There really isn’t any way to know if a longer home page is better or worse than your existing one without running an A/B test. A longer page works better for us.

We looked into this because we noticed that we weren’t getting many clicks to our “tour page.” So, we combined the “tour page” with our home page. Now, people can learn more about the benefits of our software without having to click to another page.

Increasing the length of our page with more information about features contributed to an increase in our sign-up rate of 8.6% in a separate A/B test.

Crazy Egg works very hard on A/B testing and has found that a long page works great for them as well.

6. Call to Action

Don’t forget this one! You need to tell your visitors what to do. Should they download an eBook, sign up for a mailing list, start a free trial, or buy your product right now? Give them a friendly nudge to help guide them to the next step. This can have a huge impact if you don’t already do this.

Also, make sure that your call to action is above the fold. This way, a potential customer will know what to do without having to scroll down the page.

About the Author: Nathan Gilmore is a co-founder of the web-based project scheduling app www.TeamGantt.com. He takes care of app design and marketing. You can also find Nathan on Twitter @nathangilmore.

And optimizing your website means your online marketing efforts, like pay per click (PPC) and social media marketing, will be more profitable because your clicks will convert much more frequently.

But, unfortunately, you may have found that testing and optimizing your website aren’t as easy as you thought, and you likely aren’t getting the big conversion rate lifts expected, either. Often, one of the key reasons for this is a lack of website testing buy-in and budget from senior executives.

These influential people sometimes are referred to as HiPPOs (highest paid person’s opinion); and, in many instances, they believe they know what is best for their website. Therefore, they don’t feel the need for (or understand the benefits of) running website tests.

Further, these HiPPOs also usually hold the keys to the ample budget you need for two testing essentials. First, you need funds for a good website testing tool (Visual Website Optimizer, at the very least). Second, you need to hire dedicated testing resources because using a web analyst or online marketer is inefficient and causes bottlenecks. We all know what happens when you make an employee wear too many hats! Without a good budget for both, the efficiency and potential results of your website testing efforts will suffer.

So, if you manage to tame your HiPPOs and grow your testing buy-in and budget, this may result in much higher conversion rates and greater revenue from your website tests. To help you do this, here are 7 strategies and tips for you to consider. These are broken down into two key areas: education and peer pressure. Let’s get started!

Educate Them

1: Prove your competitors are testing their websites.

One of the simplest ways to educate your HiPPOs is to show them that your competitors are getting great results from testing their websites. Senior executives certainly won’t like knowing that a competitor is doing something new and cool that they aren’t (much like keeping up with the Joneses). There are two ways to do this:

Use a free debugger tool like WASP to check whether your competitors are running a website testing tool on their sites.

Look on testing tool vendor websites to find case studies about your competitors’ testing efforts. (Maxymiser and Visual Website Optimizer have great sections for this in particular.) These highlight the return on investment (ROI) and great results from testing (more on this later) and act as great education ammo!

2: Create a presentation to demonstrate the impact of testing on revenue.

You wouldn’t invest in something if you didn’t think it was going to get great results, would you? No. And neither would your senior executives. Accordingly, you should create a presentation outlining why understanding and investing in testing is a great idea for ROI. Here are some tips for doing this:

Research the potential by using your web analytics data to find problematic pages with high bounce and exit rates.

Show them mockups of elements that could be improved on problematic pages to increase conversion rates, along with projected conversion lifts.

Translate conversion rates into revenue! Senior executives probably won’t care much about conversion rates in isolation. Turn the numbers into something they really care about: the impact on revenue. A 2% lift in conversion rate sounds boring and low, but translated into dollars, it often amounts to a huge increase in online revenue (music to their ears). See the worked example after this list.

Calculate and show them the projected overall ROI from running website testing for just 6 months, and include how long it would take to break even from the increased budget you are seeking.
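To make the revenue translation concrete, here’s the kind of back-of-the-envelope math that turns a conversion-rate bump into numbers executives care about (all inputs below are hypothetical):

```python
# All inputs are hypothetical; swap in your own analytics numbers.
monthly_visitors = 200_000
average_order    = 80       # dollars per order
rate_before      = 0.020    # 2.0% of visitors currently convert
rate_after       = 0.024    # projected rate after optimization

revenue_before = monthly_visitors * rate_before * average_order
revenue_after  = monthly_visitors * rate_after * average_order
monthly_gain   = revenue_after - revenue_before

print(f"Monthly gain:  ${monthly_gain:,.0f}")      # $64,000
print(f"6-month gain:  ${monthly_gain * 6:,.0f}")  # $384,000
```

A shift from a 2.0% to a 2.4% conversion rate sounds tiny, but at this hypothetical traffic level it is worth hundreds of thousands of dollars over the 6-month window mentioned above.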

If they don’t listen to you, you could try using a third-party testing expert to do this for you. With the added credibility of an external consultant’s opinion, your HiPPOs may be more likely to agree than if you present it as a solo and internal undertaking.

3: Learn better ways to communicate with them and deal with office politics.

Bosses can be a real pain to work with sometimes, particularly if they are opinionated HiPPOs. To help you get them on your testing wavelength, it’s important to learn how to deal with office politics and to influence them better. There are some great books on influencing people and navigating office politics that are well worth reading for this.

4: Borrow a slice of your search marketing budget.

Before you do this, get the head of SEM on your side by explaining what you are trying to do. Be sure they understand that the purpose is to make the company more profitable, which is better for everyone. Then get them to help you convince your HiPPOs of this plan.

Obtaining just 10% of their budget for 6 months gives you enough time to get a few big testing wins under your belt and report back with clear metrics on the gains and ROI realized from the investment, thereby proving the need for an increase in the testing-related budget.

Peer Pressure Them

5: Find an executive sponsor to help.

One of the most influential forms of peer pressure comes from someone at a similar executive level. Your HiPPOs are going to be much more willing to listen to and value input from their level. If you can find an executive sponsor to help champion the benefits and get testing pushed through, it will be much easier to gain buy-in from your HiPPOs. Figure out who in your organization might make an ideal candidate for this sponsor responsibility and discuss your plans with them over lunch. Get them excited by pointing out that, ultimately, it will make them look great, too!

6: Gain buy-in from key department heads to help.

In addition to an executive sponsor, it’s important to gain additional help and pressure from key department heads. Present to them the benefits of testing and optimization and the impact it can have on your business. (Higher revenue often means more bonuses for them, too!)

It’s particularly important to get this buy-in from department heads that play a key role in optimization; for example, web design and IT. This will make it even easier for you to test effectively and efficiently.

7: Find other testing evangelists internally to help.

Finally, to help with your HiPPO peer pressure, you need to find testing evangelists at lower levels of your organization, too. You often will find sympathetic web designers or web developers who understand the real benefits of testing. So keep your ear to the ground to find out about potential evangelists. Then offer to take them out to lunch and explain what you are trying to do and that you need to get everyone talking about the benefits of testing.

If you are successful with this last step of peer pressure and evangelism, pretty soon a testing culture will grow rapidly in your organization, and your HiPPO will be more than willing to relent and offer their buy-in!

If you put these education and peer pressure methods to work, it won’t be long before your HiPPOs turn into mice that are much more willing to increase their budget and buy-in for your testing efforts. This means you will be able to be much more influential and successful with your website testing efforts (and, hopefully, earn a bigger bonus for yourself, too!)

About the Author: Rich Page is a passionate website testing and conversion rate expert and the author of Website Optimization: An Hour a Day and co-author of the 2nd edition of Landing Page Optimization with Tim Ash. He is a consultant available for hire to help improve conversion rates for all types of online businesses.

And it sounds like the salvation you’ve been dreaming of. What’s not to like? Being able to test two different pages on your site to see which one gives you the most customers sounds amazing. After running these things a few times, those extra customers sure will add up!

You get everything set up and…

…realize you don’t know what to test first. And the A/B testing euphoria starts to wear off.

Right before you’re about to give up, you remember an argument you had with your designer (or was it the boss? Or the developer?) about the button color on one of your pages. They decided to go with green but you KNOW that orange is the better option. You’re so confident in your choice that you feel it in your BONES.

You decide to settle the matter once and for all.

The current button on your site looks like this:

And you test it against this stud of a button:

Everything’s set up, the test is running, it’s only a matter of time.

A week or two later, you log back into the account for your A/B testing tool, go to the test, scroll straight to the results…

The green button out-performed the orange by 0.1%. What a letdown. You don’t know what’s worse:

The orange button didn’t perform better than the green like you predicted

The difference was so minor that it wasn’t worth worrying about in the first place.

This happens ALL the time. Especially for people just getting started with A/B testing. You see, randomly picking elements to test produces lackluster results. The needle doesn’t move at all.

Right now, there are several big-win tests you could run that would make a HUGE difference to the growth of your company. Like 5-20% in customer growth kind of huge. These optimizations are just sitting there, waiting for you to start running A/B tests.

Finding big-wins doesn’t happen by accident.

Here’s the thing: if you A/B test all sorts of random stuff, you’ll never find these big-wins.

To find them, we’ll need a completely different process for deciding which tests to run. There’s a time and a place for testing everything we can think of (I’ll get to this in a second) but the big-wins require a completely different approach to testing.

Stage 1: Finding the Big Wins

If you’ve never tested before, you’ll find several 5-20% increases to your bottom line. I’m not talking about increases to some random conversion rate. This is an increase to your revenue and customer base. Finding a few 10% improvements to your revenue will take your business to a completely new level.

The best part? These are usually permanent increases to your customer growth. Make a single change to your business and reap the rewards for years to come.

But randomly testing all sorts of stuff on your site won’t find these big wins for you.

We need some guidance on where to start looking.

If you’re not an optimization pro who throws out A/B tests like candy, this is the process you want to use in order to get moving quickly.

To find the big wins with A/B testing, follow these steps:

Get qualitative insights (customer feedback)

Predict how to improve

Confirm the prediction with an A/B test

Let’s work through each of them:

1. Get Qualitative Insights

Qualitative data does a great job of alerting us to problems. More importantly, it helps us learn the WHY behind the WHAT. Using analytics, you’ll see where your customers bail, which features they use, and who your most profitable customers are. But to understand why your customers are doing what they’re doing, you need to go talk to them. At Kissmetrics, here are our 5 favorite ways to get customer feedback:

Surveys

Feedback Boxes

Reach Out Directly

User Activity From Your Analytics

Usability Tests

To find the big-win optimizations, we want to continue to look for trends in the feedback we’re receiving while diving deeper into issues we think are stirring up trouble.

Let’s say you’ve been looking at your funnel and you notice that only a few people upgrade to a paid plan or purchase your product. You have a TON of people clicking on your offers but as soon as they see the price, they bail.

This is where we want to get surgical with our qualitative data.

Throw up a one-question survey on the purchase page asking people if they have any questions about the product. You could also include a support button to collect feedback. And reach out to customers that HAVE purchased and ask them why they chose to become a customer. Once you’ve gotten feedback from 15-20 people, I bet you’ll be able to find a trend in the responses. Maybe you’ve oversold your offer in your marketing. Or maybe you haven’t addressed a critical objection in your copy.

Here’s the main take-away: qualitative data helps us understand which elements will have the biggest impact when running A/B tests.

Set up your customer feedback systems so you can easily spot emerging trends. And when one pops up, dive deeper so you know exactly what’s going on.

2. Predict How to Improve

This step is pretty straightforward. You’ve already collected a bunch of qualitative data on how your customers are behaving. And you know WHY they’re behaving that way.

So it’s time to brainstorm some solutions to your problem.

Remember, this is a “prediction.” It’s just a hypothesis. It might work, it might not. And we won’t know until we get data to back it up.

3. Confirm Your Prediction With an A/B Test

Notice how the actual test comes at the END of this process, not at the beginning? By using qualitative data to help us understand what changes could be the most important, we’re setting ourselves up to find big wins with these tests.

Now it’s just a matter of testing to see if you’re right. You need to confirm that people will BEHAVE the same way they SAY they will (usually, they don’t). So get your hands on some data and run that A/B test.

Focus on finding your big wins and solving the major problems that you find from customer feedback. You can then make a huge impact on the growth of your business with a small amount of work.

But these big wins will run out. Before long, you’ll find the ideal funnel to acquire customers, the best pricing structure, and the most persuasive messaging. And if you want to keep going, you’ll have to jump into Stage 2.

Stage 2: Get Methodical and Chase the Small Wins

Most improvements from A/B testing are small wins. Each one doesn’t amount to much. But if you can find dozens or hundreds of these things, you can double your growth several times.

While it might be easy to find a small win here and there, it’s not easy to crank these out week after week. You’re going to have to commit a lot of resources to this process.

Ideally, you’ll have a team of several people that can handle the marketing, development, and design of everything. If your conversion optimization team has to constantly fight for help from the engineering or design teams, they’ll never move fast enough to make a significant impact.

Give them the resources they need to test rapidly. Speed is the name of the game.

Also, your company needs to have enough data to work with. Typically, this means you need to be acquiring hundreds of new customers every month (thousands is even better). If you’re a smaller company, you could assign a single person to this task. Just remember that it’ll take a lot longer before you find enough small wins to make a difference.

Even during Stage 2, we’re not testing stuff randomly. Instead, we’re testing EVERYTHING. Brainstorm a list of changes and start running them back-to-back. Your testing pace should be relentless.

Some Joe Schmoe blogger swears that a photo of a person looking at your headline gives great results? TEST IT. Your best friend at the hottest startup in town says a video on the homepage makes money rain from the sky? TEST IT. You just found a list of 50 elements everyone should test from an A/B testing company? TEST IT ALL.

To decide which tests to run first, throw them into a list. It really doesn’t matter which order they go in; start at the top and work your way down as fast as you can. Some will make a difference, most won’t. And there’s really no way to know beforehand. So pick fast and get moving.

Here’s a list of tests to get you started:

Headlines

Remove or add steps to your funnel

Social proof (testimonials, customer logos, etc.)

Calls to action

Copy

Long-form vs. short-form

Layout

Add and remove elements from a page

Pricing (changing your pricing structure usually gives you a big win but smaller tests like $37 vs. $39 can also help you grow)

Purchase/signup bonuses

Up-sells and cross-sells

The Common Pitfalls

Many people also run into a number of pitfalls when they start A/B testing. Here’s how to dodge them:

Get Statistical Significance

When you start getting results for your test, the data is completely random. It might LOOK like version A does better but in the long run, version B is the real winner.

Flipping a coin works the same way. It’s entirely possible to get four heads in a row. But that doesn’t mean you’ll always get heads. The 50/50 split only shows up after hundreds or thousands of flips. Even then, it can get slanted one way or the other.

We minimize (but we can never eliminate) the odds of getting a bad result by collecting lots of data.
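To see the coin analogy in action, a few lines of simulation show just how noisy small samples can be:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def heads_rate(flips: int) -> float:
    """Fraction of heads when flipping a fair coin `flips` times."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

# A perfectly fair coin can look badly biased in small samples:
for n in (10, 100, 10_000):
    print(f"{n:>6} flips: {heads_rate(n):.1%} heads")
```

Small runs routinely land well away from 50%, and the same thing happens to a young A/B test with only a handful of conversions.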

So how do we know that we have enough data?

Statistical significance measures how much trust we can put in a test result, and we express it as a percentage.

Roughly speaking, a 90% confidence level means there is only a 1-in-10 chance that a difference this large would show up if the two versions actually performed the same. At 99% confidence, that chance drops to 1 in 100. Collecting more data steadily increases your confidence level and helps you hit statistical significance.

Most people say they have statistical significance when they hit the 95% confidence level. That’s when they pick the winner and move to the next test. Don’t worry about trying to get to 99% confidence. It usually takes too long to get enough data. You’ll grow your business faster by picking the winner and focusing on the next A/B test as soon as you hit the 95% mark.

But remember, the 95% mark is arbitrary. So if you’ve got a test that’s sitting at 87% or 93% confidence and you have other tests in the pipeline, it’s okay to pick a winner and move on. Balance speed with data and don’t sacrifice one for the other.

Visual Website Optimizer has built an Excel sheet that does all the fancy math for you; you can download it here. Just plug in the results from your split test, and it’ll tell you whether you’ve hit statistical significance. If you haven’t, keep your test running until you do.
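If you’d rather script the calculation than use a spreadsheet, here’s a minimal sketch of the standard two-proportion z-test that most significance calculators run under the hood (the visitor and conversion counts below are hypothetical):

```python
from math import sqrt
from scipy.stats import norm

def confidence(conv_a, n_a, conv_b, n_b):
    """Confidence (1 - two-sided p-value) that A and B truly differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - 2 * norm.sf(abs(z))

# Hypothetical: 120/2400 conversions for A vs. 151/2400 for B
print(f"Confidence: {confidence(120, 2400, 151, 2400):.1%}")
```

With these made-up numbers the result lands just under 95%, so you would keep the test running a little longer before calling a winner.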

The “New” Effect

Introducing something new to your site can impact your conversion rate just because it’s new. This is the “new” effect. What this means is that the new version could out-perform your old version initially. But over time, the performance difference can shrink or even flip. The new version might perform better this week but over the long term, your old version might be the best choice.

And in some cases, “new” will negatively impact conversion rates. This happens when you introduce changes that interrupt the habits of your users.

Let’s say you’ve used the same navigation for a while. Even if you test a version that’s truly better, conversion rates will likely drop in the short term. Once people become familiar with your site, they don’t actively look through the navigation each time. Whenever they need something, they know right where to click. Anything that interrupts that habit will slow them down, so tests that disrupt established habits tend to perform worse in the short term.

So how do we deal with these pitfalls? Even if you have a massive amount of data to work with and can establish statistical significance fairly quickly, give yourself more time if you suspect “newness” might be impacting the results. A couple of weeks will do it. And if time is of the essence, look at the trend lines of your conversion rates. Are they getting closer to each other? If they are, you might be looking at the “new” effect. If they look stable and you have a solid week’s worth of data, you’re good to go.

Track Your Entire Funnel

When testing improvements on how you acquire customers with your marketing funnel (making a change to your homepage falls into this group), be careful about only tracking the conversion rate for the next step. On a regular basis, you’ll find something that increases the next-step conversion but LOWERS the conversion rate for the entire funnel. So if you’re not tracking your A/B tests through the entire funnel, you might slow your customer growth by accident.
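Here’s a small illustration of that trap with made-up numbers, where variant B wins the next step but loses the full funnel:

```python
# Hypothetical funnel counts for each variant (not real data):
funnels = {
    "A": {"visits": 10_000, "signups": 700, "paid": 140},
    "B": {"visits": 10_000, "signups": 900, "paid": 117},
}

for name, f in funnels.items():
    next_step   = f["signups"] / f["visits"]  # what most testing tools report
    full_funnel = f["paid"] / f["visits"]     # what actually grows the business
    print(f"{name}: signup rate {next_step:.1%}, paid rate {full_funnel:.2%}")

# B "wins" the next step (9.0% vs. 7.0%) yet loses the full funnel
# (1.17% vs. 1.40%); exactly the trap described above.
```

If you stopped measuring at sign-ups, you would ship variant B and quietly lose paying customers.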

You’ll need customer analytics to do this. Regular A/B testing tools like Optimizely and Visual Website Optimizer only track the next step.

Kissmetrics has integrations with Optimizely and Visual Website Optimizer to help you avoid this trap.

But there is ONE exception to this. If you’re just getting started and don’t have much data to work with, you might only be able to test changes at the top of your funnel. You won’t have enough data to go any further.

For example, you might have enough traffic to measure free-trial sign ups from your home page but not enough data to track which version gives you more paid subscribers. If you spot a potential roadblock at the top of your funnel, don’t let a lack of data at the bottom get in your way of trying to fix it.


Data is Always Changing

So if you run a test today, you might get completely different results 6 months from now. When you find the “perfect” version, it won’t stay perfect. Everything has a half-life and the only way to stay on top is to periodically refresh by running another batch of tests.

What about seasonality?

Every business experiences fluctuations throughout the year, some more than others. In some cases, seasonality is blatantly obvious. Apple’s best quarter is the holiday season. Toy companies also bring home the majority of their revenue during November and December.

But seasonality can impact our results in more subtle ways.

Take the B2B market for example. It doesn’t seem like a candidate for seasonality, right? Well, I reached out to several of our Kissmetrics customers to get feedback on a new feature we were building. Usually, I get a 75-90% response rate. But last August, only 1 out of 10 replied. I was shocked. Was it the copy in my emails? What did I say to get such bad results?

It had nothing to do with my emails; everyone was on summer vacation. Over the next 2 weeks, just about all of them got back to me once they returned to the office.

The same thing can happen to your tests too, regardless of your industry.

But don’t use seasonality as an excuse. We all love taking credit when things go well but dodge the blame as soon as things aren’t so rosy. Don’t blame seasonality unless you have strong evidence to back it up. Just keep an eye out for it.

Multivariate Tests and Other Hoopla

If you spend much time in the conversion optimization space, you’ll hear about these fancy schmancy things called multivariate tests. Basically, they let you test dozens of variables all at the same time to find the ideal landing page, home page, checkout page, etc.

Sounds great right? Here’s the rub: you need a MASSIVE amount of data before these become a viable option.

They also take a ton of time to set up and manage. Until you become an A/B testing pro and have an entire team that can hunt for optimizations around the clock, multivariate tests just aren’t worth the effort.

There are also other testing algorithms that get WAY more complicated than what most people need. Things like this will just get in the way because you’ll spend too much time trying to get started. Don’t sacrifice action for complexity.

Bottom Line

If you’ve never run an A/B test before, you’re in Stage 1 and there are several optimizations that will grow your company by 5-20%. But to find them, you can’t just test random stuff.

Instead, use this 3-step A/B testing process:

Use customer feedback to get your hands on qualitative data

Predict which optimizations will make the biggest difference

Run an A/B test to see if you’re right

Once you’ve found your big wins, it’s time to start Stage 2 of your optimization plan. At this point, dig in for the long haul, build a growth team, and start hunting for every little conversion increase you can find.

To make a dent in your growth, you’ll need to find hundreds of these little guys over the course of a year. To support tests at such a high frequency, you’ll need to be acquiring hundreds of customers every month, preferably thousands.

When you get into the meat of your tests, don’t forget these common pitfalls:

Get statistical significance

Be careful of the “new” effect

Track your entire funnel

Your data will always be changing

Stay simple and don’t use things like multivariate tests unless you have a good reason

Happy testing!

About the Author: Lars Lofgren is the Kissmetrics Marketing Analyst and has his Google Analytics Individual Qualification (he’s certified). Learn how to grow your business at his marketing blog or follow him on Twitter @larslofgren.