Note: This is a guest article written by Shane Barker, a renowned digital marketing consultant. Any and all opinions expressed in the post are Shane’s.
You want to increase your conversion rate. And you’ve implemented several CRO (conversion rate optimization) strategies to help you do so. But have you considered researching your competitors?
Understanding competition is crucial for the success of your business in every aspect. It will help you determine what you’re doing wrong, and what you can do better. It will also help you identify and capitalize on the weaknesses of your competitors.
In this post, you’ll learn the basics of conducting competitor research to enhance your CRO efforts.
#1: Identify Your Top Competitors
Before beginning your research, you need to know whom to research. Who are your biggest competitors? The simplest definition would be businesses where your target customers can get the same kind of services or products you offer.
Include both direct and indirect competitors.
Direct competitors are businesses that sell the same products or services as you.
Indirect competitors are those who sell products or services that fulfill the same need.
For example, Burger King and McDonald’s would be considered direct competitors because they have similar product offerings, that is, burgers. But Pizza Hut or Domino’s would be an indirect competitor of both Burger King and McDonald’s. Although they’re both fast food joints, Pizza Hut and Domino’s specialize in pizzas while the other two specialize in burgers.
Here are some of the ways you can identify your top competitors to conduct competitor research:
Google Search for Relevant Keywords
Make a list of keywords relevant to your business, and conduct a Google search using those keywords. The businesses that show up on the first page of your search results are your top competitors. List them for further research.
Let’s say you’re a wedding planner based in Sacramento. You can conduct a Google search using keywords like, “wedding planning in Sacramento,” “wedding planner in Sacramento,” “wedding planner Sacramento,” and so on.
Your top competitors in this case are the businesses that show up in the local pack and whose ads are displayed on the top of the page.
You can find more competitors on the actual websites that show up in your search results. For the above example, if there are any sites that list wedding planners in the Sacramento area, you would need to check out those as well.
Use SimilarWeb
SimilarWeb is a highly effective tool for identifying your competitors and determining their performance. All you need to do is type your website URL in the search bar and then click Start.
This step generates an overview of your site’s ranking and traffic, as shown in the screenshot below. As the goal here is to identify competitors, click the “Similar Sites” option in the left sidebar.
You will then get a list of some of the websites similar to yours, which you can sort based on the extent of similarity or ranking. Add them to your list so that you have a clear idea about who your competitors are.
Additionally, click each of these results to check where the websites stand in terms of ranking, traffic, and so on. This performance analysis can be used as part of the third step in this guide.
#2: Try Out Your Competition
Another important step in competitor research is to experience their services or products first-hand.
When dealing with ecommerce stores, try ordering from them. Analyze every aspect of the purchase process to identify what they’re doing right and what mistakes they’re making.
Maybe they’ve implemented a chatbot to help their shoppers find what they’re looking for quickly and easily. To improve your CRO, consider adding a chatbot to your website as well.
You should also analyze the user experience (UX) of your competitors’ websites. Ensuring a good user experience is an essential part of successful CRO.
To analyze the UX of your competitors, ask yourself questions such as:
How easy is it for you to navigate your competitor’s website?
Are there too many distractions on any of their webpages?
Are you having a tough time reading the copy because of a bad font choice?
Is the process of completing a purchase easy?
Additionally, analyze their post-purchase service to see how well they respond to customer complaints. These questions can help you understand more about your competition. Analyze their services to determine what they’re doing well, what you can improve on, and what mistakes you should avoid.
In the case of a brick-and-mortar shop, try visiting the establishment to experience its service. Make a note of the store’s ambiance, how friendly the staff is, how well they present their products, and so on.
You can also ask the opinions of friends and family or your customers who have visited the place.
#3: Analyze Competitor Performance and Strategy
This is one of the most important steps in competitor research. When you think of analyzing their performance and strategy, several aspects may come to mind. Not sure what exactly to prioritize, or where to start?
Analyze the following to conduct your competitor research more efficiently:
Traffic and Ranking
One of the key factors to consider when analyzing the performance of your competitors is their ranking. Find out how they rank for specific keywords, and compare their performance against your own.
For competitor performance analysis, you can use SEMrush, which you can access for free. You also have the option to upgrade to one of their paid plans, which allow for more results and reports per day.
In the screenshot below, the tool gives you a report on the website’s paid and organic search traffic. Using this tool, you can compare the amount of branded traffic and non-branded traffic and get some insight into the PPC campaigns of your competitors.
SEMrush can help you find out what your competitors are doing right so that you can use those opportunities to improve your CRO efforts.
The tool will also give you a list of keywords for which each website is ranked, along with the position and search volume for each keyword.
SpyFu is another useful tool for conducting competitor research. It helps you find your competitors when you type your website URL into the search bar.
The most useful aspect of this tool is that it identifies the top organic and paid keywords used by your competitors. It also helps you to identify the keywords you share with your competitors.
Link Profiles
Link profiles are another important aspect of competitor research. According to Moz, link profiles are among the top search ranking factors.
A good link profile will improve your website ranking, which will improve its visibility. The more visible your website is, the better your chances are of improving traffic. Increased traffic often leads to higher conversions.
This means that you need to conduct competitor research to find out where they stand in terms of backlinks. Find out which websites are linking to them and how many backlinks they currently have. This will help you determine what backlinking goals you should set and which websites you should target through your backlinking efforts.
You can use basic tools such as Backlink Checker from Small SEO Tools to check which pages are linking to your competitors. For more detailed reports, you can use the two tools mentioned earlier, SEMrush and SpyFu.
SpyFu gives you a list of pages linking to your competitors. In addition, it shows the number of organic clicks and domain strength of the websites linking to your competitors.
SEMrush is even more comprehensive. It gives you a report on the number of backlinks your competitor has and the number of domains linking to these backlinks.
Also, you can use SEMrush to view the top anchor texts being used to link to your competitors.
Landing Page Strategy
In addition to your competitors’ performance, you need to determine their ability to impress their audience. This means that you need to analyze their landing page strategy and identify their strengths and weaknesses.
How strong is the headline?
Is the value proposition clear?
Is the landing page design aesthetically pleasing?
Are there any visuals on the page?
These are just some of the questions you need to ask when analyzing the landing pages of your competitors.
Pricing Strategy
When you conduct competitor research, it’s also important to analyze their pricing strategy. Their rates may be more competitive and, therefore, your target customers may be choosing them over you.
What can you do to present your rates in a more appealing manner to enhance your CRO efforts?
Are your competitors offering multiple pricing options?
Are there any guarantees that make their offers more trustworthy?
Do they compare various pricing options?
What are the biggest strengths and weaknesses of their pricing strategies?
Now you know more about how to conduct competitor research to improve your conversion rate optimization strategy. Next, make a list of the top strengths and weaknesses of each competitor based on the data you have collected.
For example, one competitor’s top strengths may be an excellent landing page design and a good backlinking strategy, while the same competitor lags in organic search ranking and customer service.
From this list, you can identify opportunities to improve your CRO efforts. Your competitor research can also provide you with insights into the mistakes you should avoid and ways to improve your service so that it stands out from your competitors.
Got any questions about the tips provided here? Feel free to ask them or to share your ideas in the comments below.
If you were planning to race your car, you would want to make sure it could handle the road, right?
Imagine racing a car that is not ready for the surprises of the road. A road that is going to require you to twist and turn constantly, and react quickly to the elements.
You would find yourself on the side of the road in no time.
A well-outfitted car, on the other hand, is able to handle the onslaught of the road and, when the dust settles, reach the finish line.
Well, think of your website like the car and conversion optimization like the race. Too many companies jump into conversion optimization without preparing their website for the demands that come with testing.
But proper technical preparation can mean a world of difference when you are trying to develop tests quickly, and with as few QA issues as possible. In the long-run, this leads to a better testing rhythm that yields results and insights.
With 2017 just around the corner, now is a good time to look ‘under the hood’ of your website and make sure it is testing-ready for the New Year: built to stand the tests to come, pun intended.
In order to test properly, and validate the great hypotheses you have, your site must be flexible and able to withstand changes on the fly.
With the help of the WiderFunnel web development team, I have put together a shortlist to help you get your website testing-ready. Follow these foundational steps and you’ll soon be racing through your testing roadmap with ease.
To make these digestible for your website’s mechanics, I have broken them down into three categories: back-end, front-end, and testing best practices.
Back-end setup a.k.a. ‘Under the hood’
Many websites were not built with conversion optimization in mind. So, it makes sense for you to revisit the building blocks of your website and make some key changes on the back-end that will make it much easier for you to test.
1) URL Structure
Just as having a fine-tuned transmission for your vehicle is important, so is having a well-written URL structure for your website. Good URL structure equals easier URL targeting. (‘Targeting’ is the feature you use to tell your testing tool where your tests will run on your website.) This makes targeting your tests much simpler and reduces the possibility of including the wrong pages in a test.
Consider two companies: one whose product URLs follow no consistent pattern, and a second that structured all of its product URLs into categories. In the second case, targeting can use a match for the substring “/engines/” and allows you to exclude other categories, such as ‘wheels’. Proper URL structure means smoother and faster testing.
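As a rough sketch, here is what substring targeting can look like in plain JavaScript. The “/engines/” and “/wheels/” category paths are hypothetical, and real testing tools expose this as configuration rather than code:

```javascript
// Decide whether a test should run on a given page, based on its URL.
// "/engines/" and "/wheels/" are hypothetical category paths.
function shouldRunTest(url) {
  const include = url.includes('/engines/'); // target the engines category
  const exclude = url.includes('/wheels/');  // explicitly keep wheels out
  return include && !exclude;
}

console.log(shouldRunTest('https://example.com/engines/v8-turbo')); // true
console.log(shouldRunTest('https://example.com/wheels/alloy-17'));  // false
```

With structured URLs, the include/exclude rules stay this simple; without them, you end up enumerating individual pages.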
2) Website load time or ‘Time to first paint’
‘Time to first paint’ refers to the initial load of your page, or the moment your user sees that something is happening. Today, people have very short attention spans and get frustrated with slow load times. And when you are testing, ‘time to first paint’ can become even more of a concern, with issues like FOOC (flash of original content) and even slower load times.
So, how do you reduce your website’s time to first paint? Glad you asked:
Within the HTML of your page:
Within the head tag, move the code snippet of your testing tool as high as you can―the higher the better.
Minify* your JS and CSS files so that they load into your visitor’s browser faster. Then, bring all JS and CSS into a single file for each type. This will allow your user’s browser to pull content from two files instead of having to refer to too many files for the instructions it needs. The difference is reading from 15 different documents or two condensed ones.
Use sprites for all your images. Loading in a sprite means you’re loading multiple images one time into the DOM*, as opposed to loading each image individually. If you did the latter, the DOM would have to load each image separately, slowing load time.
While these strategies are not exhaustive, if you do all of the above, you’ll be well on your way to reducing your site load time.
3) Make it easy to differentiate between logged-in and logged-out users
Many websites have logged-in and logged-out states. However, few websites make it easy to differentiate between these states in the browser. This can be problematic when you are testing, if you want to customize experiences for both sets of users.
Exposing the login state in the browser, for example as a class or data attribute on the body tag, will make it easier for you to customize experiences and implement variations for both sets of users. Not doing so makes the process more difficult for your testing tool and your developers. This strategy is particularly useful if you have an e-commerce website, which may have different views and sections for logged-in versus logged-out users.
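One common approach, sketched below, is to map the session state to a class name and stamp it onto the body tag when the page renders. The `user` object is a hypothetical stand-in for however your app tracks sessions:

```javascript
// Map the session state to a class name for the <body> tag.
// `user` is a hypothetical session object; null means logged out.
function loginStateClass(user) {
  return user ? 'logged-in' : 'logged-out';
}

// In the page itself you would run something like:
//   document.body.classList.add(loginStateClass(currentUser));
// A variation can then target ".logged-in .nav" vs ".logged-out .nav".

console.log(loginStateClass({ id: 42 })); // "logged-in"
console.log(loginStateClass(null));       // "logged-out"
```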
4) Reduce clunkiness a.k.a. avoid complex elements
Here, I am referring to reducing the number of special elements and functionalities that you add to your website. Examples might include date-picking calendars, images brought in from social media, or an interactive slider.
While elements like these can be cool, they are difficult to work with when developing tests. For example, let’s say you want to test a modal on one of your pages, and have decided to use an external library which contains the code for the modal (among other things). By using an external library, you are adding extra code that makes your website more clunky. The better bet would be to create the modal yourself.
Front-end setup
The front-end of your website is not just the visuals that you see, but the code that executes behind the scenes in your user’s browser. The changes below are web development best practices that will help you increase the speed of developing tests and reduce stress on you and your team.
1) Breakpoints – Keep ’em simple speed racer!
Assuming your website is responsive, it will respond to changes in screen sizes. Each point at which the layout of the page changes visually is known as a breakpoint. The most common breakpoints are:
Mobile – 320px and 420px
Desktop and Tablet – 768px, 992px, 1024px, and 1200px
Making your website accessible to as many devices as possible is important. However, too many breakpoints can make it difficult to support your site going forward.
When you are testing, more breakpoints means you will need to spend more time QA-ing each major change to make sure it is compatible in each of the various breakpoints. The same applies to non-testing changes or additions you make to your website in the future.
Spending a few minutes looking under the hood at your analytics will give you an idea of which devices, and which breakpoints, matter most for your users.
Above, you can see an example taken from the Google Analytics demo account: Only 2% of sessions are Tablet, so planning for a 9.5 inch screen may be a waste of this team’s time.
2) Use code instead of images
Let’s say your website works in the many breakpoints and browsers you wish to target. However, you’re using images for your footer and main calls-to-action.
Problem 1: Your site may respond to each breakpoint, but the images you are using may blur.
Problem 2: If you need to add a link to your footer or change the text of your call-to-action, you have to create an entirely new image.
Use buttons instead of images for your calls-to-action, use SVGs instead of icon images, and use code to create UI elements instead of images. Only use images for content or UI that would be too difficult or impossible to build in code.
3) Keep your HTML and CSS simple:
Keep it simple: Stop putting CSS within your HTML. Use div tags sparingly. Pledge to not put everything in tables. Simplicity will save you in the long run!
Putting CSS in a separate file keeps your HTML clean, and you will know exactly where to look when you need to make CSS changes. Reducing the number of div tags, which are used to create sections in code, also cleans up your HTML.
These are general coding best practices, but they will also ensure you are able to create test variations faster by decreasing the time needed to read the code.
Tables, on the other hand, are just bad news when you are testing. They may make it easy to organize elements, but they increase the chance of something breaking when you are replacing elements using your testing tool. Use a table when you want to display information in a table. Avoid using tables when you want to lay out information while hiding borders.
Bonus tip: Avoid using iFrames* unless absolutely necessary. Putting a page within a page is difficult: don’t do it.
4) Have a standard for naming classes and IDs
Classes and IDs are the attributes you add to HTML tags to organize them. Once you have added Classes and IDs in your HTML, you can use these in your CSS as selectors, in order to make changes to groups of tags using the attributed Class or ID.
You should implement a company-wide standard for your HTML tags and their attributes. Add in standardized attribute names for Classes and IDs, even for list tags. Most importantly, do not use the same class names for elements that are unrelated!
Looking at the above example, let’s say I am having a sale on apples and want to make all apple-related text red to bring attention to apples. I can do that, by targeting the “wf-apples” class!
Not only is this a great decision for your website, it also makes targeting easier during tests. It’s like directions when you’re driving: you want to be able to tell the difference between the second and third right instead of just saying “Turn right”.
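With a consistent class in place, the variation code stays trivial. A sketch of the idea: in the browser this would just be `document.querySelectorAll('.wf-apples')`, but the same logic is written here against plain objects (hypothetical data) so it runs anywhere:

```javascript
// Turn every element carrying a given class red, as a variation script would.
// Browser equivalent:
//   document.querySelectorAll('.wf-apples').forEach(el => el.style.color = 'red');
function highlightSaleItems(elements, className, color) {
  return elements
    .filter(el => el.classes.includes(className))
    .map(el => ({ ...el, style: { ...el.style, color } }));
}

const els = [
  { text: 'Fuji apples',   classes: ['wf-apples'],  style: {} },
  { text: 'Navel oranges', classes: ['wf-oranges'], style: {} },
];
console.log(highlightSaleItems(els, 'wf-apples', 'red').length); // 1
```

If unrelated elements shared the `wf-apples` class, the same one-liner would repaint things you never intended to touch, which is exactly why the naming standard matters.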
Technical testing ‘best practices’ for when you hit the road
We have written several articles on testing best practices, including one on the technical barriers to A/B testing. Below are a couple of extra tips that will improve your current testing flow without requiring you to make changes to your website.
2) Don’t pull content from other pages while testing
When you are creating a variation, avoid bringing in unnecessary elements from external pages. This approach requires more development time and may not be worth the effort. You have already spent time reducing the clunkiness of your code, and bringing in external content will reverse that.
The important question when you are running a test is the ‘why’ behind it, and the ‘what’ you want to get out of it. Sometimes, it is ok to test advanced elements to get an idea of whether your customers respond to them. My colleague Natasha expanded on this tactic in her article “Your growth strategy and the true potential of A/B testing”.
3) Finally, a short list of do’s and don’ts for your technical team
Don’t override CSS inline or tack styles onto individual elements; put them in the variation CSS file (and don’t use !important)
Don’t just write code that acts as a ‘band-aid’ over the current code. Solve the problem, so there aren’t bugs that come up for unforeseen situations.
Do keep refactoring
Do use naming conventions
Don’t use animations: You don’t know how they will render in other browsers
DOM: The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents.
iFrame: The iframe tag specifies an inline frame. An inline frame is used to embed another document within the current HTML document.
Minification of files makes them smaller in size and therefore reduces the amount of time needed for downloading them.
What types of problems does your development team tackle when testing? Are there any strategies that make testing easier from a technical standpoint that are missing from this article? Let us know in the comments!
A few weeks ago, a Fortune 500 company asked that I review their A/B testing strategy.
The results were good, the hypotheses strong, everything seemed to be in order… until I looked at the log of changes in their testing tool.
I noticed several blunders: in some experiments, they had adjusted the traffic allocation for the variations mid-experiment; some variations had been paused for a few days, then resumed; and experiments were stopped as soon as statistical significance was reached.
When it comes to testing, too many companies worry about the “what”, or the design of their variations, and not enough worry about the “how”, the execution of their experiments.
Don’t get me wrong, variation design is important: you need solid hypotheses supported by strong evidence. However, if you believe your work is finished once you have come up with variations for an experiment and pressed the launch button, you’re wrong.
In fact, the way you run your A/B tests is the most difficult and most important piece of the optimization puzzle.
There are three kinds of lies: lies, damned lies, and statistics.
– Mark Twain
In this post, I will share the biggest mistakes you can make within each step of the testing process: the design, launch, and analysis of an experiment, and how to avoid them.
This post is fairly technical. Here’s how you should read it:
If you are just getting started with conversion rate optimization (CRO), or are not directly involved in designing or analyzing tests, feel free to skip the more technical sections and simply skim for insights.
If you are an expert in CRO or are involved in designing and analyzing tests, you will want to pay attention to the technical details.
Mistake #1: Your test has too many variations
The more variations, the more insights you’ll get, right?
Not exactly. Having too many variations slows down your tests but, more importantly, it can impact the integrity of your data in 2 ways.
First, the more variations you test against each other, the more traffic you will need, and the longer you’ll have to run your test to get results that you can trust. This is simple math.
But the issue with running a longer test is that you are more likely to be exposed to cookie deletion. If you run an A/B test for more than 3–4 weeks, the risk of sample pollution increases: in that time, people will have deleted their cookies and may enter a different variation than the one they were originally in.
Within 2 weeks, you can get a 10% dropout of people deleting cookies and that can really affect your sample quality.
The second risk when testing multiple variations is that the significance level goes down as the number of variations increases.
For example, if you use the accepted significance level of 0.05 and decide to test 20 different scenarios, on average one of them will appear significant purely by chance (20 * 0.05 = 1). If you test 100 different scenarios, that number goes up to five (100 * 0.05 = 5).
In other words, the more variations, the higher the chance of a false positive, i.e. the higher your chances of declaring a winner that isn’t actually better.
Google’s 41 shades of blue is a good example of this. In 2009, when Google could not decide which shade of blue would generate the most clicks on their search results page, they decided to test 41 shades. At a 95% confidence level, the chance of getting a false positive was 88%. Had they tested 10 shades, the chance of a false positive would have been about 40%; with 3 shades, about 14%; and with 2 shades, about 10%, versus the 5% you would expect from a single comparison.
You can calculate the chance of getting a false positive using the following formula: 1-(1-a)^m with m being the total number of variations tested and a being the significance level. With a significance level of 0.05, the equation would look like this:
1-(1-0.05)^m or 1-0.95^m.
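The formula is easy to sanity-check in code. A minimal sketch, using the shade counts from the Google example:

```javascript
// Probability of at least one false positive across m comparisons,
// each run at significance level `alpha` (the 1-(1-a)^m formula above).
function falsePositiveProb(m, alpha = 0.05) {
  return 1 - Math.pow(1 - alpha, m);
}

console.log(falsePositiveProb(41)); // ≈ 0.878, the "88%" in the Google example
console.log(falsePositiveProb(10)); // ≈ 0.401
console.log(falsePositiveProb(1));  // 0.05: a single A/B comparison
```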
You can fix the multiple comparison problem using the Bonferroni correction, which calculates the confidence level for an individual test when more than one variation or hypothesis is being tested.
Wikipedia illustrates the Bonferroni correction with the following example: “If an experimenter is testing m hypotheses, [and] the desired significance level for the whole family of tests is a, then the Bonferroni correction would test each individual hypothesis at a significance level of a/m.
For example, if [you are] testing m = 8 hypotheses with a desired a = 0.05, then the Bonferroni correction would test each individual hypothesis at a = 0.05/8=0.00625.”
In other words, you’ll need a 0.625% significance level, which is the same as a 99.375% confidence level (100% – 0.625%) for an individual test.
The Bonferroni correction tends to be a bit too conservative and is based on the assumption that all tests are independent of each other. However, it demonstrates how multiple comparisons can skew your data if you don’t adjust the significance level accordingly.
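The correction itself is a one-liner; the m = 8 case below mirrors the Wikipedia example:

```javascript
// Bonferroni-adjusted per-test significance level for m hypotheses,
// given a desired family-wide significance level `alpha`.
function bonferroni(alpha, m) {
  return alpha / m;
}

console.log(bonferroni(0.05, 8)); // 0.00625, i.e. a 99.375% per-test confidence level
```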
The following tables summarize the multiple comparison problem.
Probability of a false positive with a 0.05 significance level:
Adjusted significance and confidence levels to maintain a 5% false discovery probability:
In this section, I’m talking about the risks of testing a high number of variations in an experiment. But the same problem also applies when you test multiple goals and segments, which we’ll review a bit later.
Each additional variation and goal adds a new set of comparisons to an experiment. In a scenario where there are four variations and four goals, that’s 16 potential outcomes that need to be controlled for separately.
Some A/B testing tools, such as VWO and Optimizely, adjust for the multiple comparison problem. These tools will make sure that the false positive rate of your experiment matches the false positive rate you think you are getting.
In other words, the false positive rate you set in your significance threshold will reflect the true chance of getting a false positive: you won’t need to correct and adjust the confidence level using the Bonferroni or any other methods.
One final problem with testing multiple variations can occur when you are analyzing the results of your test. You may be tempted to declare the variation with the highest lift the winner, even though there is no statistically significant difference between the winner and the runner up. This means that, even though one variation may be performing better in the current test, the runner up could “win” in the next round.
You should consider both variations as winners.
Mistake #2: You change experiment settings in the middle of a test
When you launch an experiment, you need to commit to it fully. Do not change the experiment settings, the test goals, the design of the variation or of the Control mid-experiment. And don’t change traffic allocations to variations.
Changing the traffic split between variations during an experiment will impact the integrity of your results because of a problem known as Simpson’s Paradox. This statistical paradox appears when a trend that shows in different groups of data disappears, or reverses, when those groups are combined.
Ronny Kohavi from Microsoft shares an example wherein a website gets one million daily visitors, on both Friday and Saturday. On Friday, 1% of the traffic is assigned to the treatment (i.e. the variation), and on Saturday that percentage is raised to 50%.
Even though the treatment has a higher conversion rate than the Control on both Friday (2.30% vs. 2.02%) and Saturday (1.2% vs. 1.00%), when the data is combined over the two days, the treatment seems to underperform (1.20% vs. 1.68%).
This is because we are dealing with weighted averages. The data from Saturday, a day with an overall worse conversion rate, impacted the treatment more than that from Friday.
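You can reproduce the arithmetic behind the paradox directly. The visitor counts below are inferred from the percentages in Kohavi’s example, so treat them as illustrative:

```javascript
// Simpson's paradox: the treatment wins on each day, yet loses on pooled data.
// Counts are illustrative reconstructions of Kohavi's percentages.
const days = [
  // Friday: 1% of 1M visitors in the treatment
  { treat: { conv: 230,  n: 10000  }, control: { conv: 19998, n: 990000 } },
  // Saturday: 50/50 split
  { treat: { conv: 6000, n: 500000 }, control: { conv: 5000,  n: 500000 } },
];

const rate = g => g.conv / g.n;
const pooled = key => {
  const conv = days.reduce((s, d) => s + d[key].conv, 0);
  const n = days.reduce((s, d) => s + d[key].n, 0);
  return conv / n;
};

days.forEach((d, i) =>
  console.log(`day ${i}: treatment beats control?`, rate(d.treat) > rate(d.control))); // true, true
console.log('pooled: treatment beats control?', pooled('treat') > pooled('control'));  // false
```

The reversal comes purely from the weighting: Saturday, the weaker day overall, contributes far more of the treatment’s traffic than Friday does.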
We will return to Simpson’s Paradox in just a bit.
Changing the traffic allocation mid-test will also skew your results because it alters the sampling of your returning visitors.
Changes made to the traffic allocation only affect new users. Once visitors are bucketed into a variation, they will continue to see that variation for as long as the experiment is running.
So, let’s say you start a test by allocating 80% of your traffic to the Control and 20% to the variation. Then, after a few days you change it to a 50/50 split. All new users will be allocated accordingly from then on.
However, all the users that entered the experiment prior to the change will be bucketed into the same variation they entered previously. In our current example, this means that the returning visitors will still be assigned to the Control and you will now have a large proportion of returning visitors (who are more likely to convert) in the Control.
Note: This problem of changing traffic allocation mid-test only happens if you make a change at the variation level. You can change the traffic allocation at the experiment level mid-experiment. This is useful if you want to have a ramp up period where you target only 50% of your traffic for the first few days of a test before increasing it to 100%. This won’t impact the integrity of your results.
As I mentioned earlier, the “do not change mid-test rule” extends to your test goals and the designs of your variations. If you’re tracking multiple goals during an experiment, you may be tempted to change what the main goal should be mid-experiment. Don’t do it.
All of us optimizers have a favorite variation that we secretly hope will win during any given test. This is not a problem until you start giving weight to the metrics that favor this variation. Decide on a goal metric that you can measure in the short term (the duration of a test) and that can predict your success in the long term. Track it and stick to it.
It is useful to track other key metrics to gain insights and/or debug an experiment, if something looks wrong. However, these are not the metrics you should look at to make a decision, even though they may favor your favorite variation.
Let’s say you have avoided the 2 mistakes I’ve already discussed, and you’re pretty confident about the results you see in your A/B testing tool. It’s time to analyze the results, right?
Not so fast! Did you stop the test as soon as it reached statistical significance?
I hope not…
Statistical significance should not dictate when you stop a test. It only tells you if there is a difference between your Control and your variations. This is why you should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.
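If you want to see what a test duration calculator is doing under the hood, the standard two-proportion sample size formula is easy to sketch. This is a generic approximation, not the exact formula any particular testing tool uses:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_mde, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect a relative
    lift of `relative_mde` over `baseline` with a two-sided
    two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 2% baseline takes roughly
# 80,000 visitors per variation -- run the test until you get there.
print(sample_size_per_variation(0.02, 0.10))
```

Note how quickly the required sample grows as the detectable effect shrinks; this is why stopping at the first flash of significance is so dangerous.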
Now, assuming you’ve stopped your test at the correct time, we can move on to segmentation. Segmentation and personalization are hot topics in marketing right now, and more and more tools enable segmentation and personalization.
There are 2 main problems with post-test segmentation, however, that will impact the statistical validity of your segments (when done incorrectly).
The sample size of your segments is too small. You stopped the test when you reached the calculated sample size, but at a segment level the sample size is likely too small and the lift between segments has no statistical validity.
The multiple comparison problem. The more segments you compare, the greater the likelihood that you’ll get a false positive among those tests. With a 95% confidence level, you’re likely to get a false positive every 20 post-test segments you look at.
There are different ways to prevent these two issues, but the easiest and most accurate strategy is to create targeted tests (rather than breaking down results per segment post-test).
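To see why a correction matters, compare the family-wise false-positive risk across multiple comparisons with a simple Bonferroni adjustment (one of several possible corrections):

```python
# Probability of at least one false positive among k independent
# comparisons at alpha = 0.05, plus the Bonferroni-corrected threshold.
alpha = 0.05
for k in (1, 5, 20):
    family_wise = 1 - (1 - alpha) ** k
    print(f"{k:>2} comparisons: P(>=1 false positive) = {family_wise:.0%}, "
          f"Bonferroni per-comparison alpha = {alpha / k:.4f}")
```

At 20 segments the chance of at least one spurious “winner” is about 64%, which is why uncorrected post-test segment comparisons are so misleading.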
I don’t advocate against post-test segmentation―quite the opposite. In fact, looking at too much aggregate data can be misleading. (Simpson’s Paradox strikes back.)
The Wikipedia definition for Simpson’s Paradox provides a real-life example from a medical study comparing the success rates of two treatments for kidney stones.
The table below shows the success rates and numbers of treatments for treatments involving both small and large kidney stones.
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B is more effective when considering both sizes at the same time.
In the context of an A/B test, this would look something like this:
Simpson’s Paradox surfaces when sampling is not uniform—that is, when the sample sizes of your segments differ. There are a few things you can do to avoid getting lost in, or misled by, this paradox.
First, you can prevent this problem from happening altogether by using stratified sampling, which is the process of dividing members of the population into homogeneous and mutually exclusive subgroups before sampling. However, most tools don’t offer this option.
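As a toy illustration of the idea (not what any specific tool implements), a stratified split assigns users to variations separately within each segment, so every segment is represented in every arm in the same proportion:

```python
from itertools import cycle

def stratified_split(users_by_segment, arms=("control", "treatment")):
    """Assign users to arms separately within each segment, so each
    segment contributes equally to every arm."""
    assignment = {arm: [] for arm in arms}
    for segment, users in users_by_segment.items():
        for user, arm in zip(users, cycle(arms)):
            assignment[arm].append((segment, user))
    return assignment

# Invented traffic: 100 mobile and 400 desktop visitors.
traffic = {"mobile": [f"m{i}" for i in range(100)],
           "desktop": [f"d{i}" for i in range(400)]}
split = stratified_split(traffic)
mobile_in_control = sum(1 for seg, _ in split["control"] if seg == "mobile")
print(mobile_in_control)  # 50: exactly half of the mobile segment
```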
If you are already in a situation where you have to decide whether to act on aggregate data or on segment data, Georgi Georgiev recommends you look at the story behind the numbers, rather than at the numbers themselves.
“My recommendation in the specific example [illustrated in the table above] is to refrain from making a decision with the data in the table. Instead, we should consider looking at each traffic source/landing page couple from a qualitative standpoint first. Based on the nature of each traffic source (one-time, seasonal, stable) we might reach a different final decision. For example, we may consider retaining both landing pages, but for different sources.
In order to do that in a data-driven manner, we should treat each source/page couple as a separate test variation and perform some additional testing until we reach the desired statistically significant result for each pair (currently we do not have significant results pair-wise).”
In a nutshell, it can be complicated to get post-test segmentation right, but when you do, it will unveil insights that your aggregate data can’t. Remember, you will have to validate the data for each segment in a separate follow up test.
The execution of an experiment is the most important part of a successful optimization strategy. If your tests are not executed properly, your results will be invalid and you will be relying on misleading data.
It is always tempting to showcase good results. Results are often the most important factor when your boss is evaluating the success of your conversion optimization department or agency.
But results aren’t always trustworthy. Too often, the numbers you see in case studies lack valid statistical inferences: either they rely too heavily on an A/B testing tool’s unreliable stats engine and/or they haven’t addressed the common pitfalls outlined in this post.
Use case studies as a source of inspiration, but make sure that you are executing your tests properly by doing the following:
If your A/B testing tool doesn’t adjust for the multiple comparison problem, make sure to correct your significance level for tests with more than 1 variation
Don’t change your experiment settings mid-experiment
Don’t use statistical significance as an indicator of when to stop a test, and make sure to calculate the sample size you need to reach before calling a test complete
Finally, keep segmenting your data post-test. But make sure you are not falling into the multiple comparison trap and are comparing segments that are significant and have a big enough sample size
There was a time when simply launching an A/B test was a big deal.
I remember my first test. It was a lead gen form. I completely redesigned it. I learned nothing. And it felt like I was on top of the world.
Today, things are different, especially if you’re a major e-commerce company doing high-volume conversion optimization in a team setting. The demands have shifted; the expectations are far greater. New tools are being created to solve new problems.
So what does it take to own enterprise e-commerce CRO in 2016 compared to before?
Make money during A/B tests
While “always be testing” is a great mantra, I have to ask: are you “always be banking”?
Most of us have been running tests that inform us first, and make money later. For example, you might run a test where you’ve got a clear winner, but it’s one of 5 variations, so you’re only benefiting from it 20% of the time during the length of the experiment.
Furthermore, you may have 4 variations that are underperforming versus your Control, so you could even be losing money while you test. Imagine spending an entire year testing in that manner. You’d rarely be fully benefiting from your positive test results!
Of course, as part of a controlled experiment and in order to generate valid insights, it’s important to distribute traffic evenly and fairly between all variations (across multiple days of the week, etc).
But there also comes a time to be opportunistic.
Enter the multi-arm bandit (MAB) approach. MAB is an automated testing mechanism that diverts more traffic to better performing variations. Thresholds can be set to control how much better a variation has to perform before it is favored by the mechanism.
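The simplest bandit policy, epsilon-greedy, captures the idea: explore occasionally, but send most traffic to the current leader. This is a bare-bones sketch with invented banner stats, not the algorithm any particular testing tool ships:

```python
import random

def epsilon_greedy(observed, epsilon=0.1):
    """Pick a variation: explore uniformly at random with probability
    `epsilon`, otherwise exploit the best observed conversion rate.
    `observed` maps variation -> (conversions, visitors)."""
    if random.random() < epsilon:
        return random.choice(list(observed))
    return max(observed, key=lambda v: observed[v][0] / max(observed[v][1], 1))

# Hypothetical banner stats: "B" is currently converting best.
banner_stats = {"A": (50, 1_000), "B": (80, 1_000), "C": (55, 1_000)}
picks = [epsilon_greedy(banner_stats) for _ in range(10_000)]
print(picks.count("B") / len(picks))  # ~0.93: most traffic flows to the leader
```

The `epsilon` parameter plays the role of the threshold mentioned above: it controls how much traffic is held back for exploration versus sent to the front-runner.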
Hold your horses: MAB sounds amazing, but it is not the solution to all of your problems. It’s best reserved for times when the potential revenue gains outweigh the potential insights to be gained or the test has little long-term value.
Say, for example, you’re running a pre-Labor Day promotion and you’ve got a site-wide banner. This banner’s only going to be around for 5-10 days before you switch to the next holiday. So really, you just want to make the most of the opportunity and not think about it again until next year.
A bandit algorithm applied to an A/B test of your banner will help you find the best performer during the period of the experiment, and help generate the most revenue during the testing period.
While you may not be able to infer too many insights from the experiment, you should be able to generate more revenue than had you either not tested at all or gone with a traditional, even split test.
BEFORE: Test, analyze results, decide, implement, make money later.
TODAY: Test and make money while you’re at it.
When to do it: Best used in cases where what you learn is not that useful for the future.
When not to do it: Not necessarily the most useful for long-term testing programs.
Track long-term revenue gains
If you’ve been testing over the course of many months and years, accurately tracking and reporting your cumulative gains can become a serious challenge.
You’re most likely testing across different zones of your website – homepage, category page, product detail page, site-wide, checkout, etc. Multiply those zones by the number of viewport ranges you’re specifically testing on.
What do you do, sum up each individual increase and project out over the course of a year? Do you create an equation to calculate the combined effect of all of your tests? Do you avoid trying to report at all?
There isn’t one good solution, but rather a few options that all have their strengths and weaknesses:
The first, and easiest, is using a formula to determine combined results. You’ll want a strong mathematician to help you with this one. Personally, I always have a lingering doubt that none of what is being reported is accurate, even using conservative estimations. And as time goes on, things only get less accurate.
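That formula usually amounts to compounding each test’s relative lift, which assumes the effects are independent and stack multiplicatively (they rarely do, hence the lingering doubt). With made-up lifts:

```python
# Naive compounding of relative lifts from implemented winners.
# These lift figures are invented for illustration.
lifts = [0.08, 0.05, -0.02, 0.12]

combined = 1.0
for lift in lifts:
    combined *= 1 + lift

print(f"naively combined lift: {combined - 1:.1%}")  # 24.5%
```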
The second is to periodically re-test your original Control from the moment at which you started testing. Say, every 6 months, test your best performing variation against the Control you had 6 months prior. If you’ve been testing across the funnel, test the entire funnel in one experiment.
Yes, it will be difficult. Yes, your developers will hate you. And yes, you will be able to prove the value of your work in a very confident manner.
It’s best to run these sorts of tests with a duplicate of each variation (2 “old” Controls vs 2 best performers) just to add an extra layer of certainty when you look at your results. It goes without saying that you should run these experiments for as long as reasonably possible.
Another option is to always be testing your “original” Control vs your most recent best performer in a side experiment. Take 10% of your total traffic and segment it to a constantly running experiment that pits the original control version of your site against your latest best performer.
It’s an experiment running in the background, not affected by what you are currently testing. It should serve as a constant benchmark to calculate the total effect of all your tests, combined.
Technically, this will be a challenge. You’ll be asking a lot of your developers and your analytics people, and at one point, you may ask yourself if it’s all worth it. But in the end, you will have some awesome reports to show, demonstrating the ridiculous revenue you’ve generated through CRO.
BEFORE: Individual test gains, cumulated.
TODAY: Taking into consideration interaction effects, re-running Control vs combined new variations OR using a model to predict combined effect of tests.
When to do it: When you want to better estimate the combined effect of multiple testing wins.
When not to do it: When your tests are highly seasonal and can’t be combined OR when it becomes impossible from a technical perspective (hence the importance of doing so in a reasonable time frame—don’t wait 2 years to do it).
Track and distribute cumulative insights
If you do this right, you will learn a ton about your customers and how to increase your revenue in the future. Ideally, you should have a goody-bag of insights to look through whenever you’re in need of inspiration.
So, how do you track insights over time and revalidate them in subsequent experiments? Also, does Jenny in branding know about your latest insights into the importance of your product imagery? How do you get her on board and keep her up to date on a consistent basis?
Both of these challenges deserve attention.
The simplest “system” for tracking insights is a spreadsheet, with columns that codify insights by type, device, and any other criteria useful for browsing and grouping. This proves unscalable when you’re testing at high velocity. That’s where a custom platform that does the job of tracking and sharing insights comes into play.
For example, the team at The Next Web created an internal tool for tracking tests and insights, and for easily sharing ideas via Slack. There are other publicly available options, most of which integrate with Optimizely or VWO.
BEFORE: Excel sheets, Powerpoint presentations, word of mouth, or nothing at all.
TODAY: A shared and tagged database of insights that link back to the experiments that generated them and is updated on the fly. Tools such as Experiment Engine, Effective Experiments, Iridion and Liftmap are all solving some part of this puzzle.
When to do it: When you’re learning a lot of valuable things, but having trouble tracking or sharing what you learn. (BTW, if you’re not having this problem, you might be doing something wrong.)
When not to do it: When the future is of little importance.
Code implementation-ready variations
High velocity testing doesn’t just mean quickly getting tests out the door; it means being able to implement winners immediately and move on. To make this possible, your test code has to be ready to implement, meaning:
Code should be modularized. Your scripts should be separated into sections for functionality and design changes.
BEFORE: Messy jQuery.
TODAY: Modularized experiment code, with separated CSS that aligns with class names.
When to do it: When you wish to make the implementation process as painless as possible.
When not to do it: When you just don’t care.
Create FOOC-free variations
If your test variations “flicker” or “flash” as they load, you’re experiencing Flash of Original Content or FOOC. It will affect your results if it goes untreated. Some of the best ways to prevent it are as follows:
Place your code snippets as high as possible on the page.
Improve site load time in general (regardless of your testing tool).
Briefly hide the body or div element being tested.
Some people think of A/B testing as a way to improve the look of their website, while others use it to test the fundamentals of their business. Take advantage of the tools at your disposal to get to the heart of what makes your business tick.
For example, we tested reducing the product range of one of our clients and discovered that they could save millions on manufacturing and marketing without losing revenue. What are the big lingering questions you could answer through A/B testing?
BEFORE: Most of us tested button colors at one point or another.
TODAY: Business decisions are being validated through A/B tests.
When to do it: When business decisions can be tested online, in a controlled manner.
When not to do it: When most factors cannot be controlled for online, during the length of an A/B test.
Use data science to test predictions, not ideas
It is highly likely that you are underutilizing the customer analytics that are available to you. Most of us don’t have the team in place or the time to dig through the data constantly. But this could be costing you dearly in missed opportunities.
If you have access to a data scientist, even on a project-basis, you can uncover insights that will vastly improve the quality of your A/B test hypotheses.
TODAY: Predictive analytics can uncover data-driven test hypotheses.
When to do it: When you’ve got lots of well-organized analytics data.
When not to do it: When you prefer the spaghetti method.
Optimize for volume of tests
There was a time when “always be testing” was enough. These days, it’s about “always be testing in 100 different places at once.” This creates new challenges:
How do you test in multiple parts of the same funnel simultaneously without worrying about cross-pollination?
How do you organize your human resources in a way to get all the work done?
This is the art of being a conversion optimization project manager: knowing how to juggle speed vs value of insights and considering resource availability. At WiderFunnel, we do a few things that help make sure we go as fast as possible without sacrificing insights:
We stagger “difficult” experiments with “easy” ones so that production can be completed on “difficult” ones while “easy” ones are running.
We integrate with testing tool APIs to quickly generate coding templates, meaning our development team doesn’t need to do any manual work before starting to code variations.
We use detailed briefs to keep everyone on the same page and reduce gaps in communication.
We schedule experiments based on “insight flow” so that earlier experiments help inform subsequent ones.
We use algorithms to control for cross-pollination so that multiple tests within the same funnel can be run while being able to segment any cross-pollinated visitors.
BEFORE: Running one experiment at a time.
TODAY: Running experiments across devices, segments, and funnels.
When to do it: When you’ve got the traffic, conversions and the team to make it happen.
When not to do it: When there aren’t enough conversions to go around for all of your tests.
Don’t get stuck in the optimization ways of the past. The industry is moving quickly, and the only way to stay ahead of your competitors (who are also testing) is to always be improving your conversion optimization program.
Bring your testing strategies into the modern era by mastering the 8 tactics outlined above. You’re an optimizer, after all―it’s only fitting that you optimize your optimization.
Do you agree with this list? Are there other aspects of modern-era CRO not listed here? Share your thoughts in the comments!
It has never been easier to create an online business. Accordingly, competition has never been more fierce.
To not just survive, but thrive … every element of your online business must be optimized for maximum conversions: sign ups, landing pages, product descriptions, buttons, design, and (of course) your copy.
What you need is real data from real people to create real insights, real action and epic wins.
And, there’s no debate. The best way to get valuable data is through A/B testing – creating different versions of your online material to see which one performs best.
The results of A/B testing are powerful. Social media powerhouse Buffer, for instance, literally doubled their email signups in 30 days by adding “nine times the email capture opportunities” and then testing their new layout against their original page.
So, to help you and your business succeed online, here are six must-test elements you absolutely should pay attention to, and more importantly, tips on how to actually do that… the right way.
These Elements Can Yield Big Wins If Tested The Right Way
1. Headlines

A headline is the first thing that captures the attention of your reader.
It’s at the top of your page and for blog posts, it’s usually what’s shared on social media. Make the mistake of using a poor headline and your content will sit on your website gathering dust … no matter how good the actual content is.
While following best practices is generally a good strategy, your customers won’t always respond the way that one marketing article said they would.
For example, we’ve been taught that using promotional tactics such as discounts will drive sales. Yet, that wasn’t true for EA Games when selling their new SimCity game. Their test page without a promotional coupon drove 43.4% more purchases than the one offering 20% off for pre-ordering. It turns out their visitors didn’t want an incentive. They just wanted to buy the game.
Here’s another best-practice fail. In copywriting it’s often advised that you don’t talk about what your company does. Instead, talk about the benefits your company will provide for the customer. But here’s a case when this supposed best practice was beaten hands down. When Movexa changed its headline from “Natural Joint Relief”, a classic benefits-focused line, to “Natural Joint Relief Supplement”, a more “what”-focused line, conversions increased by 89.97%!
Now take a look at these two webpage screenshots from an A/B test for email management tool, AwayFind, and guess which one is more likely to convert visitors.
If you guessed Version A because of the larger, stand-out headline and bigger CTA button, you’d be wrong. Due to the shorter — and easier to read — headline along with the bolded key features in the sub-heading, Version B increased sign-ups by 38%.
You can experiment with your headlines in similar ways, like testing whether your audience prefers long headlines or shorter ones, or whether they respond better to a friendly tone – “7 Easy Ways to Get Better Sleep” – or a frightening one – “The Shocking Truth about What’s Hiding in Your Bed”.
2. Calls-to-Action (CTAs)
The call to action in any web content tells the reader what you want them to do next.
Interested in our software? Sign up for a free trial.
Like the article you just read? Get more like this delivered to your inbox.
A good call-to-action gives readers a clear route to accomplishing their goals, while simultaneously leading them to where you want them to go.
To ensure you are getting the most out of your CTAs, you should be testing four elements:
These factors can make a big difference in your audience’s response.
Take copy, for instance. Friendbuy was able to significantly increase views of their product demo with only a slight tweak in their wording. Their A/B test proved that a button with the words “See demo” got 82% more clicks than a button which said “Test it out.” Small change. Big results.
Why? Because visitors to the Friendbuy homepage, typically marketers, were most likely seeking out more information about the products and clearly understood the term “demo.”
To further confirm this point, VWO was able to boost Zwitserleven’s landing page conversion rate 14.1%, with a 5% total lead increase, by simply changing their CTA from a rather vague “Go Further” to a much more specific “More Information”.
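When comparing two CTA variants like these, the underlying check is a two-proportion z-test. A minimal version, run on hypothetical click counts (not Friendbuy’s or Zwitserleven’s actual data):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented counts with an ~82% relative lift in clicks:
p = two_proportion_p_value(conv_a=100, n_a=5_000, conv_b=182, n_b=5_000)
print(p)  # far below 0.05: the lift is statistically significant
```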
And what about placement? One excellent tool to test is HelloBar, which puts a high-converting bar at the top of your website, promoting your latest content or even to get email signups directly through the bar.
3. Email Marketing

As GetResponse points out, “People who buy products marketed through email spend 138% more than those who do not receive email offers.” In other words, the payoffs are huge.
The main ways to test the success of your email marketing campaigns are to measure the:
Open Rate: How many people click on your subject line to see your full email
Click-Through Rate: How many click on the links/buttons you’ve provided in your emails
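Both rates are simple ratios; a helper like the one below (with invented numbers) keeps the definitions straight. Note that some teams measure click-through against opens rather than sends:

```python
def email_metrics(sent, opened, clicked):
    """Basic email campaign rates. CTR here is clicks / sends;
    click-to-open (clicks / opens) is a common alternative."""
    return {
        "open_rate": opened / sent,
        "click_through_rate": clicked / sent,
        "click_to_open_rate": clicked / opened,
    }

metrics = email_metrics(sent=10_000, opened=2_200, clicked=330)
print(metrics)  # open rate 22%, CTR 3.3%, click-to-open 15%
```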
As with your website, there is room for testing the elements that contribute to a reader’s decision to click – in this case, these would be your email subject line, content, form fields, CTAs etc.
Here are some ways to A/B test your emails:
Do your customers prefer being addressed by their first names, or do they prefer a more formal last name approach?
Do they want to get emails from the company itself, or would they be more likely to click on emails from a human, like “Jake from State Farm”?
Should you use simple language to make your content easy to understand, or will your industry-savvy customers respond better to industry jargon that shows you’re “in the know”?
Are your customers more likely to buy products directly through your emails, or are they looking for more valuable content and tips first?
Would you get more responses if you repeated the same CTA throughout your email or used multiple CTAs in one email?
4. Landing Pages
Your landing page is where decision-making happens.
This is where you present the most appealing information to your viewers in hopes that they’ll sign up for your mailing list, opt in to your latest offer, or even buy your product. As with headlines and web page design, we should never assume that the most aesthetically pleasing page will win.
It’s important to test the various elements on your landing pages to find what works, not just what looks better. Adding a trust badge, for example, could make a huge impact. That’s what eCommerce site Bag Servant found when they switched out their Twitter followers widget with a trust badge. This tiny change resulted in a whopping 72.05% improvement in conversions for the brand. Here’s more on how to increase customers’ trust on your eCommerce website.
5. Mobile Responsiveness

Contrary to popular belief, not every business NEEDS a responsive website.
But you probably do.
If you don’t want to invest in building a responsive site, it’s important to at least run a few tests on your audience and assess the ROI of upgrading to or creating a responsive website. If your customers simply aren’t visiting your site via mobile that much, or your content is too complex to render well on a mobile device, it might not be worth it just yet. But since mobile viewership is growing at a tremendous rate (34% of global eCommerce sales happen on mobile devices, as opposed to 30% in Q4 2014), you really don’t want to miss out without at least testing a few pages out.
It’s really as simple as that. The best way to test out whether a responsive site makes sense is to simply… test it out… and see how well it performs. That’s what TwentySixDigital did for one of their clients in the travel sector.
With mobile visitors increasing, they picked ONE page that had the highest revenue earning potential, made it responsive, and tracked its performance against the non-responsive version.
Results? The responsive mobile version was 50% better at getting users to buy tickets.
50%!!! That’s worth a one-page effort.
Results of mobile responsive test (Source: TwentySixDigital)
6. Quality of Leads
Of course, nearly all the optimization tips mentioned in this article talk about getting more leads, but you shouldn’t forget the importance of quality in acquiring leads and customers.
The three factors you should be testing are:
Frequency – how often these leads are buying or interacting with your business
Recency – the last time they interacted with your business
Value – the average dollar amount of purchase
Testing these factors against the leads you get with each of your A/B campaigns will help you truly determine the most beneficial strategy for your business.
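One simple way to score leads on these three factors is a classic recency/frequency/value bucket score. The thresholds below are invented for illustration, not industry standards:

```python
from datetime import date

def lead_quality_score(last_purchase, purchases_per_year, avg_order_value,
                       today=date(2016, 6, 1)):
    """Bucket each factor 1-3 and sum; higher means a better lead.
    All thresholds here are illustrative."""
    days_since = (today - last_purchase).days
    recency = 3 if days_since <= 30 else 2 if days_since <= 90 else 1
    frequency = 3 if purchases_per_year >= 12 else 2 if purchases_per_year >= 4 else 1
    value = 3 if avg_order_value >= 100 else 2 if avg_order_value >= 50 else 1
    return recency + frequency + value

# Compare lead quality from two campaigns, not just lead counts:
print(lead_quality_score(date(2016, 5, 20), purchases_per_year=6, avg_order_value=120))  # 8
print(lead_quality_score(date(2016, 1, 10), purchases_per_year=1, avg_order_value=30))   # 3
```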
Even if you’re getting 100% more signups, would it really matter if they never become paying customers?
Why Leave Money On The Table… When You Can Test?
Testing the various elements of your business’ online presence is crucial for optimization and growth. When you start out, test one element at a time, even refer to best practices (keeping your audience in mind) and run your A/B tests properly. Look to attain statistical significance before calling a winner. But don’t be afraid to break out of the box and try something different if the research backs your decision.
Remember, if you never explore your options, you’ll never know how much money you’re leaving on the table.
This is not a normal Smashing Magazine post. I’m not going to teach you something new or inspire you with examples of great work. Instead, I want to encourage you to complete a Web design challenge. I believe this will help to address a weakness that exists in many of our design processes.
If you complete this challenge, it will make it easier for clients to sign off on your designs, and it will improve the quality of your work.
One of the biggest advantages of online media over print is the ability to change, update, and enhance online media at virtually anytime, with virtually no negative side effects. In fact, if a website or web application does not continually offer its users an ever-evolving and growing experience, that site or application would soon become insecure, unusable, and out of date.