If you’ve ever tested your website, you’ve probably been in the unfortunate situation of running out of ideas on what to test.
But don’t worry – it happens to everybody.
That’s of course, unless you have a website testing plan.
That’s why KlientBoost has teamed up with VWO to bring you a gifographic that provides a simple guide to the what, how, and why of testing your website.
Setting Your Testing Goals
Like a New Year’s resolution around getting fitter, if you don’t have any goals tied to your website testing plan, then you may be doing plenty of work, with little results to show.
With your goals in place, you can focus on the website tests that will help you achieve those goals the fastest.
Testing a button color on your home page when you should be testing your checkout process is a sure sign that you’re headed for testing fatigue, or the disappointment of never wanting to run a test again.
But let’s take it one step further.
While it’s easy to improve click-through rates, or CTRs, and conversion rates, the true measure of a great website testing plan comes from its ability to increase revenue.
No optimization efforts matter if they don’t connect to increased revenue in some shape or form.
Whether you improve the site user experience, your website’s onboarding process, or get more conversions from your upsell thank you page, all those improvements compound into incremental revenue gains.
Lesson to be learned?
Don’t pop the cork on the champagne until you know that an improvement in CTRs or conversion rates also leads to increased revenue.
Start closest to the money when it comes to your A/B tests.
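To see why revenue should anchor your testing priorities, a little arithmetic helps. The sketch below uses made-up traffic and order-value numbers (all figures are illustrative, not benchmarks) to show how even a small conversion lift compounds into monthly revenue:

```python
def projected_revenue(visitors: int, conversion_rate: float, avg_order_value: float) -> float:
    """Monthly revenue implied by a conversion rate (all inputs are illustrative)."""
    return visitors * conversion_rate * avg_order_value

baseline = projected_revenue(10_000, 0.020, 50)    # $10,000/month
after_test = projected_revenue(10_000, 0.025, 50)  # $12,500/month
lift = after_test - baseline                       # a half-point lift is worth $2,500/month
```

A test that moves a metric far from the money (say, blog CTR) rarely shows up in a calculation like this, which is exactly why "closest to the money" is the right ordering.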
Knowing What to Test
When you know your goals, the next step is to figure out what to test.
You have two options here:
Look at quantitative data like Google Analytics that show where your conversion bottlenecks may be.
Or gather qualitative data with visitor behavior analysis, where your visitors can tell you why they’re not converting.
Both types of data should fall under your conversion research umbrella. In addition to this gifographic, we created another one, all around the topic of CRO research.
When you’ve done your research, you may find certain aspects of a page that you’d like to test. For inspiration, VWO has created The Complete Guide To A/B Testing – and in it, you’ll find some ideas to test once you’ve identified which page to test:
Content near the fold
Awards and badges
As you can see, there are tons of opportunities and endless ideas to test when you decide what to test and in what order.
So now that you know your testing goals and what to test, the last step is forming a hypothesis.
With your hypothesis, you can figure out which test you think will deliver the biggest performance lift, with effort in mind as well (it’s easier to start with quick wins that don’t need heaps of development help).
Running an A/B Test
Alright: you have your goals, your list of things to test, and hypotheses to back them up. The next task is to start testing.
With A/B testing, you’ll always have at least one variant running against your control.
In this case, your control is your actual website as it is now and your variant is the thing you’re testing.
With proper analytics and conversion tracking along with the goal in place, you can start seeing how each of these two variants (hence the name A/B) is doing.
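Under the hood, a testing tool needs to split traffic so that each visitor consistently sees the same version on every visit. One common approach, sketched here in Python with hypothetical variant names (tools like VWO handle this for you), is to hash a stable visitor ID into a bucket:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("control", "variant_b")) -> str:
    """Deterministically bucket a visitor so they always see the same version."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # hash maps evenly across variants
    return variants[bucket]

# The same visitor always lands in the same bucket:
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Deterministic hashing (rather than a coin flip per page load) is what keeps a returning visitor from bouncing between A and B mid-test.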
When A/B testing, there are two things you may want to consider before you call winners or losers of a test.
One is statistical significance. Statistical significance gives you a thumbs up or thumbs down on whether your test results can be explained by random chance alone. If a test is statistically significant, random chance is essentially ruled out as the explanation for the difference you’re seeing.
And VWO has created its own calculator so that you can see how your test is doing.
The second is confidence level. It helps you judge whether the result of your test would hold up if you ran it again and again.
A 95% confidence level means that if you repeated the test many times, you’d expect about 95% of those runs to confirm the result rather than produce a false positive. So, as you can tell, the higher your confidence level, the surer you can be that your test truly won or lost.
Multivariate Testing for Combination of Variations
Let’s say you have multiple ideas to test, and your testing list is looking way too long.
Wouldn’t it be cool if you could test multiple aspects of your page at once to get faster results?
That’s exactly what multivariate testing is.
Multivariate testing allows you to test combinations of different page elements at once, so you can see which combination wins, and how the elements interact, when it comes to CTRs, conversion rates, or revenue gains.
Look at the multivariate pizza example below:
The recipe for multivariate testing is simple and delicious.
And the best part is that VWO can automatically run through all the different combinations you set so that your multivariate test can be done without the heavy lifting.
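To get a feel for how quickly those combinations pile up, here’s a tiny Python sketch (the page elements and copy are hypothetical) that enumerates every version a multivariate test would need to serve:

```python
from itertools import product

# Hypothetical page elements, each with two variations
headlines = ["Original headline", "Benefit-driven headline"]
images = ["Product photo", "Lifestyle photo"]
buttons = ["Buy now", "Get started"]

# Every combination of headline x image x button
combinations = list(product(headlines, images, buttons))
print(len(combinations))  # 2 x 2 x 2 = 8 page versions
```

Eight versions means your traffic is split eight ways, which is why multivariate tests need considerably more visitors than a simple A/B test to reach significance.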
If you’re curious about whether you should A/B test or run multivariate tests, then look at this chart that VWO created:
Split URL Testing for Heavier Variations
If your A/B or multivariate tests lead you toward bigger initiatives, such as major backend development or sweeping design changes, then you’re going to love split URL testing.
As VWO states:
“If your variation is on a different address or has major design changes compared to control, we’d recommend that you create a Split URL Test.”
Split URL testing allows you to host different variations of your test on separate URLs, without touching your current website’s code.
As the visual above shows, the two variations are set up so that each lives at its own URL.
Split URL testing is great when you want to test a major redesign, such as an entire website rebuilt from scratch.
By not changing your current website code, you can host the redesign on a different URL and have VWO split the traffic between the control and the variant, giving you clear insight into whether your redesign will perform better.
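Conceptually, the traffic split behind a split URL test looks like the sketch below. The URLs and helper names are illustrative (VWO does this redirect for you); the key idea is that each visitor is stably routed to one address or the other:

```python
import hashlib

CONTROL_URL = "https://example.com/"       # current site (hypothetical address)
VARIANT_URL = "https://beta.example.com/"  # redesign hosted elsewhere (hypothetical)

def split_url_redirect(visitor_id: str, variant_share: float = 0.5) -> str:
    """Route a stable share of visitors to the redesign, the rest to the control."""
    digest = int(hashlib.sha1(visitor_id.encode()).hexdigest(), 16)
    # Map the hash onto [0, 1) so the split is deterministic per visitor
    fraction = (digest % 10_000) / 10_000
    return VARIANT_URL if fraction < variant_share else CONTROL_URL
```

Because the routing is derived from the visitor ID, a visitor who saw the redesign yesterday sees it again today, keeping the two experiences cleanly separated for measurement.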
Over to You
Now that you have a clear understanding of the different types of website tests to run, the only thing left is to, well, run some tests.
Armed with quantitative and qualitative knowledge of your visitors, focus on the areas that will have the biggest and quickest impact on your business.
And I promise, once you finish your first successful website test, you’ll be hooked.