But email marketing is just one half of a successful campaign. Those emails need to lead somewhere…
A landing page is the ideal destination for your email marketing campaign because it focuses your visitor’s attention on a single conversion goal. This lack of distractions could mean the difference between a bounce and a sale.
Using email marketing and landing pages together is thus the perfect combo to drive more sales — it’s the spiked eggnog of marketing and conversion tactics.
May we pour you a glass?
Run high-converting campaigns using email and landing pages
Psst: This post was published previously on the Unbounce Blog. With Black Friday and Cyber Monday around the corner, we’ve updated it with helpful tips and critiques that will inspire your upcoming holiday campaigns.
It’s that time again: Holiday shopping season.
And every business is trying to take advantage of the billions of consumer dollars that will be spent over the next four weeks.
Black Friday and Cyber Monday were only the beginning.
We all know that there were millions of consumers heading online and into stores to grab the first amazing deals of the season. The question is, which brands left money on the table?
Over the past weekend, I took a look at a bunch of Black Friday and Cyber Monday marketing campaigns that were promoted through Twitter, Facebook and Google AdWords.
Some marketers knocked it out of the park.
Others, not so much.
I took a quick tally of how many websites were promoting their sales through the use of landing pages, and I was disappointed to say the least.
This was a random sample of campaigns found by searching for Cyber Monday & Black Friday keywords
Just 8 out of 34 campaigns used a landing page that focused on the Black Friday sale.
16 of the campaigns sent traffic to a corporate website, using some sort of headline or banner to promote the sale.
And a whopping 10 out of 34 companies just sent traffic to their normal homepage without a single mention of Black Friday or Cyber Monday.
I mean, almost 30% of the companies I looked at figured all they had to do was send out a tweet or an ad to promote themselves on Black Friday weekend!
So what are the tricks you can use on your landing pages to knock holiday shopping season out of the park? Let’s take a look at 7 sites that actually used a landing page to promote themselves this past weekend, and the different strategies they employed to pull it off.
1. AutoZone
Strategy: The flash sale [promoted via Twitter]
I love the premise of this page. Auto parts retailer AutoZone set up a bunch of “flash sales” that were released throughout the day. The counter on the page told visitors when the next flash sale would be available.
In theory, this should increase the engagement of the page and keep visitors’ attention longer by getting them excited about the next sale.
But there are a couple of problems with how they went about it:
Is this page already sold out?
The words “SOLD OUT” are very large on this page. It’s the first thing you see, and I’d be afraid of this driving traffic away from the site. If you went into a store on Black Friday and saw a huge sign that said “SOLD OUT” would you stick around?
I would make the headline more explanatory. Something like this:
“Our latest sale has SOLD OUT, our next sale starts in: 00:02:57”
Don’t make me wait!
Another drawback of this page is its likely abandonment rate. Sure, you’re going to get a few people interested enough to stick around, but a good portion of your visitors are going to bounce off this page and forget about it.
The solution is to add a quick opt-in. Why not say something like:
“Don’t miss our next sale! Enter your email address below and we’ll notify you when our next flash sale begins!”
That way you’re not only building a list for the future, you’re also keeping visitors engaged throughout Cyber Monday.
2. Snack Tools
Strategy: The overlay [promoted via Twitter and Facebook]
All right, so this example isn’t a landing page, but it represents an effective way to boost conversions during the holidays.
Web app company Snack Tools put an overlay on its site with the details of a holiday promotion for visitors who arrived via social media.
Their technique presents a few problems:
My attention span is short, give me the quick points
The trouble with competing on Cyber Monday or Black Friday is that everyone is trying to find the best deal. That means they don’t necessarily want to spend a lot of time on your page to decide if your deal is right for them.
This overlay needs less copy and preferably fewer membership benefits. Less is more when it comes to using overlays.
Another option is to remove the close button and turn this into a real landing page! However, if the copy is strong and the offer is clear, this overlay will be able to drive conversions as well as any standalone page.
Pro tip: Targeted overlays create more conversion opportunities… which means more conversions for your Black Friday and Cyber Monday campaigns. Build and publish high-converting overlays in just a few minutes with Unbounce’s drag-and-drop builder.
Just remember, using overlays is a great way to increase sales and conversions. With the deal you’re offering front and center, you’re sure to capture visitors’ attention.
This call to action is rubbish
“Post your order” is only slightly better than “Submit” – and we all know you should never submit.
No need to get fancy, but a simple “Activate My Account” would be a much better call to action.
3. ONE Medical Group
Strategy: Promotional code [promoted via Google AdWords and Twitter]
This is an example of a promotion code landing page. It seems visually appealing at first glance, but there are some serious issues with this page:
Am I shopping for furniture?
The photo in the background looks like a furniture store, not anything medical. Images on a landing page are very important. They reassure visitors that they’ve arrived in the right place.
What exactly does this company do?
This entire page focuses on the Cyber Monday deal, but makes no mention of the product itself. If I were a visitor who didn’t know anything about this service, I would not have enough information to move forward.
Make sure not to lose focus on your product and the benefits it will bring to your visitors. Ultimately, that’s what will sell your product or service.
Where’s the call to action?
Oh right, it’s those two orange buttons. The problem with these buttons is that they’re the exact same colour as the logo (Yikes!).
As a result, they get lost in the shuffle. By making your calls to action look like buttons and giving them enough contrast with the other elements of your landing page, you’ll get a higher click-through rate on your landing pages.
4. Sage
Strategy: Minor modifications to existing landing pages [promoted via Google AdWords]
Why reinvent the wheel? If you already have a successful landing page that’s crushing conversions for your company, you may not need to make large sweeping changes for a holiday promotion.
If you’re in a pinch, you can set up a landing page just like this. Sage sells accounting software, and it looks like they’re using a basic template for their landing pages. This allows them to swap out the background image and the headlines for various promotions quickly and easily.
But what about urgency?!
This page is simple and to the point, but it could use more urgency. The beauty of Cyber Monday/Black Friday is that you have that urgency built right in. Remind your visitors that this is a limited time offer and it’s going to expire very soon.
Sage could throw a countdown on this landing page, which might give visitors that extra little push to convert.
5. The New York Times
Strategy: Focus on one step at a time [promoted via their website]
I like this page.
It cuts to the core of the offer and doesn’t have any fluff.
My only critiques are that the headline could be more readable and the end date doesn’t have very much emphasis; you want to make sure that every visitor is aware that the deal is limited, which creates a sense of urgency.
Here’s what I like so much about this page:
Frequently asked questions are available, but don’t take up space
The FAQs are on the bottom left of the page. If you don’t need them, they don’t take up much room anyway. But if you’re interested in seeing them, they’re just one click away.
The page stays simple until it needs more information
When you first land on this page the only two options are “For myself” and “For a gift.”
When you make a choice, the page expands and gives you more options.
The reason this is so great is that it keeps the user focused on the task at hand. Giving a visitor too many options all at once can be overwhelming and increase the page’s bounce rate. Well done, New York Times marketers!
6. Vimeo
Strategy: Get cheeky [promoted via Twitter]
This is an excellent Cyber Monday landing page. Vimeo has taken Cyber Monday and put a unique spin on it with “Cyborg” Monday.
The deal is laid out very clearly and the product and its benefits are outlined in the green section of the page.
But can they improve this page?
My main critique of this page is that the calls to action don’t look like buttons. Also, a fun play on a cyborg countdown could enhance the page and add a sense of scarcity.
7. Young and Reckless
Strategy: The storefront landing page [promoted via Twitter]
If you’re a marketer for an e-commerce site then listen up!
Young and Reckless is the ONLY online retailer I saw the entire weekend that effectively used a landing page concept on their store.
This store/landing page is specially designed to sell their products on Cyber Monday. There is no menu navigation, no distractions and no fluff. Just selling.
The only problem is that they didn’t quite go all the way:
Where is the offer???
The shirts on this page are listed between 25% and 50% off, so where is the headline telling me about it?
A headline like this would be more effective:
“Until Midnight Only: Save up to 50% on everything you see below”
Just add urgency
This is a long page because there are lots of items listed. Why not include a timer that follows the visitor down the page reminding them how much time they have left on Cyber Monday?
It’s just another element that could drive home the scarcity of the sale.
Now it’s your turn.
Take these strategies and apply them to your own campaigns for better results. The holiday buying season is well worth the extra effort.
What creative campaigns will you come up with before the holiday season is over?
A few weeks ago, a Fortune 500 company asked that I review their A/B testing strategy.
The results were good, the hypotheses strong, everything seemed to be in order… until I looked at the log of changes in their testing tool.
I noticed several blunders: in some experiments, they had adjusted the traffic allocation for the variations mid-experiment; some variations had been paused for a few days, then resumed; and experiments were stopped as soon as statistical significance was reached.
When it comes to testing, too many companies worry about the “what”, or the design of their variations, and not enough worry about the “how”, the execution of their experiments.
Don’t get me wrong, variation design is important: you need solid hypotheses supported by strong evidence. However, if you believe your work is finished once you have come up with variations for an experiment and pressed the launch button, you’re wrong.
In fact, the way you run your A/B tests is the most difficult and most important piece of the optimization puzzle.
There are three kinds of lies: lies, damned lies, and statistics.
– Mark Twain
In this post, I will share the biggest mistakes you can make within each step of the testing process: the design, launch, and analysis of an experiment, and how to avoid them.
This post is fairly technical. Here’s how you should read it:
If you are just getting started with conversion rate optimization (CRO), or are not directly involved in designing or analyzing tests, feel free to skip the more technical sections and simply skim for insights.
If you are an expert in CRO or are involved in designing and analyzing tests, you will want to pay attention to the technical details.
Mistake #1: Your test has too many variations
The more variations, the more insights you’ll get, right?
Not exactly. Having too many variations slows down your tests but, more importantly, it can impact the integrity of your data in 2 ways.
First, the more variations you test against each other, the more traffic you will need, and the longer you’ll have to run your test to get results that you can trust. This is simple math.
But the issue with running a longer test is that you are more likely to be exposed to cookie deletion. If you run an A/B test for more than 3–4 weeks, the risk of sample pollution increases: in that time, people will have deleted their cookies and may enter a different variation than the one they were originally in.
Within 2 weeks, you can get a 10% dropout of people deleting cookies and that can really affect your sample quality.
The second risk when testing multiple variations is that the chance of a false positive rises as the number of variations increases.
For example, if you use the accepted significance level of 0.05 and decide to test 20 different scenarios, one of those will be significant purely by chance (20 * 0.05). If you test 100 different scenarios, the number goes up to five (100 * 0.05).
In other words, the more variations, the higher the chance of a false positive i.e. the higher your chances of finding a winner that is not significant.
Google’s 41 shades of blue is a good example of this. In 2009, when Google could not decide which shade of blue would generate the most clicks on their search results page, they decided to test 41 shades. At a 95% confidence level, the chance of getting a false positive was 88%. Had they tested 10 shades, the chance of a false positive would have been 40%; with 3 shades, 14%; and with a single comparison, it drops back to the baseline 5%.
You can calculate the chance of getting a false positive using the following formula: 1-(1-a)^m with m being the total number of variations tested and a being the significance level. With a significance level of 0.05, the equation would look like this:
1-(1-0.05)^m or 1-0.95^m.
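As a quick sanity check, the formula can be evaluated in a few lines of Python (a minimal sketch; the function name is mine):

```python
# Probability of at least one false positive when making m comparisons,
# each tested at significance level a: 1 - (1 - a)^m
def false_positive_prob(m, a=0.05):
    return 1 - (1 - a) ** m

for m in (1, 3, 10, 41):
    print(f"{m:>2} comparisons: {false_positive_prob(m):.0%}")
```

Running it reproduces the 88% figure for Google’s 41 shades, and shows how quickly the risk compounds as you add variations.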
You can fix the multiple comparison problem using the Bonferroni correction, which calculates the confidence level for an individual test when more than one variation or hypothesis is being tested.
Wikipedia illustrates the Bonferroni correction with the following example: “If an experimenter is testing m hypotheses, [and] the desired significance level for the whole family of tests is a, then the Bonferroni correction would test each individual hypothesis at a significance level of a/m.
For example, if [you are] testing m = 8 hypotheses with a desired a = 0.05, then the Bonferroni correction would test each individual hypothesis at a = 0.05/8=0.00625.”
In other words, you’ll need a 0.625% significance level, which is the same as a 99.375% confidence level (100% – 0.625%) for an individual test.
The Bonferroni correction tends to be a bit too conservative and is based on the assumption that all tests are independent of each other. However, it demonstrates how multiple comparisons can skew your data if you don’t adjust the significance level accordingly.
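The arithmetic behind the Bonferroni correction is trivial to encode (again a sketch; the helper name is mine):

```python
# Bonferroni correction: to keep the family-wise significance level at a,
# test each of m hypotheses at a / m.
def bonferroni_alpha(a, m):
    return a / m

per_test_alpha = bonferroni_alpha(0.05, 8)
print(f"per-test significance level: {per_test_alpha}")         # 0.00625
print(f"per-test confidence level:   {1 - per_test_alpha:.5f}")  # 0.99375
```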
The following tables summarize the multiple comparison problem (values computed from the formulas above).
Probability of a false positive with a 0.05 significance level: 1 variation: 5%; 2: 10%; 3: 14%; 5: 23%; 10: 40%; 20: 64%; 41: 88%.
Bonferroni-adjusted significance (and confidence) levels to maintain a 5% false discovery probability: 1 variation: 0.05 (95%); 2: 0.025 (97.5%); 3: 0.0167 (98.33%); 5: 0.01 (99%); 10: 0.005 (99.5%).
In this section, I’m talking about the risks of testing a high number of variations in an experiment. But the same problem also applies when you test multiple goals and segments, which we’ll review a bit later.
Each additional variation and goal adds new pairwise comparisons to an experiment. In a scenario where there are four variations and four goals, that’s 16 potential outcomes that need to be controlled for separately.
Some A/B testing tools, such as VWO and Optimizely, adjust for the multiple comparison problem. These tools will make sure that the false positive rate of your experiment matches the false positive rate you think you are getting.
In other words, the false positive rate you set in your significance threshold will reflect the true chance of getting a false positive: you won’t need to correct and adjust the confidence level using the Bonferroni or any other methods.
One final problem with testing multiple variations can occur when you are analyzing the results of your test. You may be tempted to declare the variation with the highest lift the winner, even though there is no statistically significant difference between the winner and the runner up. This means that, even though one variation may be performing better in the current test, the runner up could “win” in the next round.
You should consider both variations as winners.
Mistake #2: You change experiment settings in the middle of a test
When you launch an experiment, you need to commit to it fully. Do not change the experiment settings, the test goals, the design of the variation or of the Control mid-experiment. And don’t change traffic allocations to variations.
Changing the traffic split between variations during an experiment will impact the integrity of your results because of a problem known as Simpson’s Paradox. This statistical paradox appears when a trend present in several groups of data disappears, or reverses, when those groups are combined.
Ronny Kohavi from Microsoft shares an example wherein a website gets one million daily visitors, on both Friday and Saturday. On Friday, 1% of the traffic is assigned to the treatment (i.e. the variation), and on Saturday that percentage is raised to 50%.
Even though the treatment has a higher conversion rate than the Control on both Friday (2.30% vs. 2.02%) and Saturday (1.20% vs. 1.00%), when the data is combined over the two days, the treatment seems to underperform (1.22% vs. 1.68%).
This is because we are dealing with weighted averages. The data from Saturday, a day with an overall worse conversion rate, impacted the treatment more than that from Friday.
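Kohavi’s weighted-average arithmetic is easy to reproduce; here’s a minimal sketch (the visitor counts follow from the 1%/50% splits of one million daily visitors described above):

```python
# Simpson's Paradox: the treatment wins on each day, yet loses overall,
# because the traffic split changed between the two days.
def combined_rate(days):
    conversions = sum(visitors * rate for visitors, rate in days)
    total_visitors = sum(visitors for visitors, _ in days)
    return conversions / total_visitors

# (visitors, conversion rate) for Friday and Saturday
treatment = [(10_000, 0.0230), (500_000, 0.0120)]   # 1% of traffic Fri, 50% Sat
control   = [(990_000, 0.0202), (500_000, 0.0100)]  # 99% of traffic Fri, 50% Sat

print(f"treatment combined: {combined_rate(treatment):.2%}")  # 1.22%
print(f"control combined:   {combined_rate(control):.2%}")    # 1.68%
```

The day with the worse overall conversion rate carries far more of the treatment’s traffic, which is exactly why the combined average flips.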
We will return to Simpson’s Paradox in just a bit.
Changing the traffic allocation mid-test will also skew your results because it alters the sampling of your returning visitors.
Changes made to the traffic allocation only affect new users. Once visitors are bucketed into a variation, they will continue to see that variation for as long as the experiment is running.
So, let’s say you start a test by allocating 80% of your traffic to the Control and 20% to the variation. Then, after a few days you change it to a 50/50 split. All new users will be allocated accordingly from then on.
However, all the users that entered the experiment prior to the change will be bucketed into the same variation they entered previously. In our current example, this means that the returning visitors will still be assigned to the Control and you will now have a large proportion of returning visitors (who are more likely to convert) in the Control.
Note: This problem of changing traffic allocation mid-test only happens if you make a change at the variation level. You can change the traffic allocation at the experiment level mid-experiment. This is useful if you want to have a ramp up period where you target only 50% of your traffic for the first few days of a test before increasing it to 100%. This won’t impact the integrity of your results.
As I mentioned earlier, the “do not change mid-test rule” extends to your test goals and the designs of your variations. If you’re tracking multiple goals during an experiment, you may be tempted to change what the main goal should be mid-experiment. Don’t do it.
Every optimizer has a favorite variation they secretly hope will win during any given test. This is not a problem until you start giving weight to the metrics that favor this variation. Decide on a goal metric that you can measure in the short term (the duration of a test) and that can predict your success in the long term. Track it and stick to it.
It is useful to track other key metrics to gain insights and/or debug an experiment, if something looks wrong. However, these are not the metrics you should look at to make a decision, even though they may favor your favorite variation.
Let’s say you have avoided the 2 mistakes I’ve already discussed, and you’re pretty confident about the results you see in your A/B testing tool. It’s time to analyze the results, right?
Not so fast! Did you stop the test as soon as it reached statistical significance?
I hope not…
Mistake #3: You stop your test as soon as it reaches statistical significance
Statistical significance should not dictate when you stop a test. It only tells you whether there is a difference between your Control and your variations. This is why you should not wait for a test to become significant (it may never happen) or stop a test as soon as it is significant. Instead, wait until the calculated sample size is reached before stopping a test. Use a test duration calculator to better understand when to stop a test.
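Those duration calculators are typically built on the standard two-proportion sample-size formula. As a rough sketch (not any particular tool’s implementation; the function name is mine), using only Python’s standard library:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect a change from conversion
    rate p1 to p2 with a two-sided test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# e.g. detecting a lift from 3.0% to 3.6% requires roughly 14,000 visitors
# per variation -- far more than many marketers expect
print(sample_size_per_variation(0.030, 0.036))
```

Commit to that number before launch; reaching it, not reaching significance, is what ends the test.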
Now, assuming you’ve stopped your test at the correct time, we can move on to segmentation. Segmentation and personalization are hot topics in marketing right now, and more and more tools enable segmentation and personalization.
Mistake #4: You segment your results incorrectly
There are 2 main problems with post-test segmentation that will impact the statistical validity of your segments (when done incorrectly).
1. The sample size of your segments is too small. You stopped the test when you reached the calculated sample size, but at a segment level the sample size is likely too small, and the lift between segments has no statistical validity.
2. The multiple comparison problem. The more segments you compare, the greater the likelihood that you’ll get a false positive among those tests. With a 95% confidence level, you’re likely to get a false positive for every 20 post-test segments you look at.
There are different ways to prevent these two issues, but the easiest and most accurate strategy is to create targeted tests (rather than breaking down results per segment post-test).
I don’t advocate against post-test segmentation; quite the opposite. In fact, looking only at aggregate data can be misleading. (Simpson’s Paradox strikes back.)
The Wikipedia definition for Simpson’s Paradox provides a real-life example from a medical study comparing the success rates of two treatments for kidney stones.
The table below shows the success rates and numbers of treatments for treatments involving both small and large kidney stones.
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B is more effective when considering both sizes at the same time.
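Using the canonical success counts from the Wikipedia article, the paradox can be verified directly (a sketch using the reported figures):

```python
# (successes, patients) from the Wikipedia kidney-stone example
treatment_a = {"small": (81, 87),   "large": (192, 263)}  # open surgery
treatment_b = {"small": (234, 270), "large": (55, 80)}    # percutaneous

def rate(successes, patients):
    return successes / patients

# Treatment A wins within each stone size...
for size in ("small", "large"):
    assert rate(*treatment_a[size]) > rate(*treatment_b[size])

# ...yet Treatment B wins when the groups are combined.
total_a = rate(81 + 192, 87 + 263)   # 273/350, about 78%
total_b = rate(234 + 55, 270 + 80)   # 289/350, about 83%
assert total_b > total_a
print(f"A overall: {total_a:.0%}, B overall: {total_b:.0%}")
```

The reversal happens because doctors assigned treatment A to the harder, large-stone cases far more often, so its aggregate rate is dragged down by the tougher group.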
Simpson’s Paradox surfaces when sampling is not uniform, that is, when the sample sizes of your segments differ. There are a few things you can do to avoid being misled by this paradox.
First, you can prevent this problem from happening altogether by using stratified sampling, which is the process of dividing members of the population into homogeneous and mutually exclusive subgroups before sampling. However, most tools don’t offer this option.
If you are already in a situation where you have to decide whether to act on aggregate data or on segment data, Georgi Georgiev recommends you look at the story behind the numbers, rather than at the numbers themselves.
“My recommendation in the specific example [illustrated in the table above] is to refrain from making a decision with the data in the table. Instead, we should consider looking at each traffic source/landing page couple from a qualitative standpoint first. Based on the nature of each traffic source (one-time, seasonal, stable) we might reach a different final decision. For example, we may consider retaining both landing pages, but for different sources.
In order to do that in a data-driven manner, we should treat each source/page couple as a separate test variation and perform some additional testing until we reach the desired statistically significant result for each pair (currently we do not have significant results pair-wise).”
In a nutshell, it can be complicated to get post-test segmentation right, but when you do, it will unveil insights that your aggregate data can’t. Remember, you will have to validate the data for each segment in a separate follow up test.
The execution of an experiment is the most important part of a successful optimization strategy. If your tests are not executed properly, your results will be invalid and you will be relying on misleading data.
It is always tempting to showcase good results. Results are often the most important factor when your boss is evaluating the success of your conversion optimization department or agency.
But results aren’t always trustworthy. Too often, the numbers you see in case studies lack valid statistical inferences: either they rely too heavily on an A/B testing tool’s unreliable stats engine, or they haven’t addressed the common pitfalls outlined in this post.
Use case studies as a source of inspiration, but make sure that you are executing your tests properly by doing the following:
If your A/B testing tool doesn’t adjust for the multiple comparison problem, make sure to correct your significance level for tests with more than 1 variation
Don’t change your experiment settings mid-experiment
Don’t use statistical significance as an indicator of when to stop a test, and make sure to calculate the sample size you need to reach before calling a test complete
Finally, keep segmenting your data post-test. But make sure you are not falling into the multiple comparison trap and are comparing segments that are significant and have a big enough sample size
“Black Friday” has many explanations and historical origins. But every year, it leads people to buy things just because retailers offer huge discounts. Do you really need more? If you wouldn’t have bought something at its full price, you probably don’t need it at all.
In a world where most of us have things at home that go untouched for months or years, we should focus on what is important. It’s not about having the newest products, using the latest tools or the latest cool startup service. It’s about helping other people and sharing real experiences and stories with your friends. Thank them, and yourself, this year without buying a gift.