Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,
If your results are disappointing, the problem may not be what you are testing – it may be how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).
This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.
For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.
Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.
The basics: Defining A/B, MVT, and factorial
Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are several different ways to structure your experiments, and A/B testing is just one of them.
Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.
In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a multitude of changes (i.e. a ‘cluster’ of changes), or a single isolated change.
In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.
Multivariate test (MVT)
With multivariate testing, you are testing each individual change in isolation against the others, by mixing and matching every possible combination available.
Imagine you want to test a homepage re-design with four changes in a single variation:
Change A: New hero banner
Change B: New call-to-action (CTA) copy
Change C: New CTA color
Change D: New value proposition statement
Hypothetically, let’s assume that each change has the following impact on your conversion rate:
Change A = +10%
Change B = +5%
Change C = -25%
Change D = +5%
If you were to run a classic A/B test―your current control page (A) versus a combination of all four changes at once (B)―you would get a hypothetical decrease of -5% overall (10% + 5% - 25% + 5% = -5%). You would assume that your re-design did not work and most likely discard the ideas.
With a multivariate test, however, each possible combination of the four changes would be its own variation: 15 variations in total.
Multivariate testing is great because it shows you the positive or negative impact of every single change, and every single combination of every change, resulting in the most ideal combination (in this theoretical example: A + B + D).
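To make the combinatorics concrete, here is a short Python sketch using the hypothetical numbers above. Treating the lifts as purely additive is a simplification, but it shows how a full MVT enumerates every combination and surfaces the best one:

```python
from itertools import combinations

# Hypothetical isolated effects of each change (from the example above).
effects = {"A": 0.10, "B": 0.05, "C": -0.25, "D": 0.05}

# In a full MVT, every non-empty combination of changes is a variation:
# 2^4 - 1 = 15 variations for four changes.
variations = [
    combo
    for size in range(1, len(effects) + 1)
    for combo in combinations(effects, size)
]

# Assuming the effects combine additively (a simplification), find the
# combination with the highest total lift.
best = max(variations, key=lambda combo: sum(effects[c] for c in combo))
print(len(variations), best, round(sum(effects[c] for c in best), 2))
# 15 ('A', 'B', 'D') 0.2
```

Fifteen variations is exactly why, as the next paragraphs explain, a full MVT is rarely practical.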
However, this strategy is nearly impossible in the real world. Even if you have a ton of traffic, a test with 15 variations would take more time to reach any kind of statistical significance than most marketers have.
The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.
Factorial design: The middle ground
Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.
The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.
Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.
With basic factorial experiment design, you could set up the following variations in our hypothetical example:
VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%
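Under the same additive simplification, you can difference adjacent variation results to recover each change's individual effect, since each variation adds exactly one change on top of the previous one. A quick sketch with the hypothetical numbers above:

```python
# Cumulative variation results from the hypothetical example above,
# under the simplifying assumption that effects add linearly.
cumulative = [0.10, 0.15, -0.10, -0.05]   # VarA, VarB, VarC, VarD

# Each variation adds exactly one change, so differencing adjacent
# results isolates that change's individual effect.
individual = [cumulative[0]] + [
    b - a for a, b in zip(cumulative, cumulative[1:])
]
print([round(x, 2) for x in individual])  # effects of changes A, B, C, D
# [0.1, 0.05, -0.25, 0.05]
```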
NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.
Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.
When you estimate the value of change A, you are using your control as a baseline.
Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.
The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%
But, when estimating the value of change B, variation A must become your new baseline.
The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%
As you can see, the “value” of change B is slightly different from the 5% difference shown above.
When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
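Here is a minimal sketch of that working-backwards calculation, using the hypothetical conversion rates above (10% for the control, 11% for variation A, 11.5% for variation B):

```python
# Hypothetical conversion rates from the example above.
rates = {"control": 0.10, "VarA": 0.11, "VarB": 0.115}

def lift(new, baseline):
    """Relative conversion-rate lift of `new` over `baseline`."""
    return new / baseline - 1

# Change A is measured against the control...
lift_a = lift(rates["VarA"], rates["control"])
# ...but change B must be measured against variation A, its true baseline.
lift_b = lift(rates["VarB"], rates["VarA"])

print(f"change A: {lift_a:+.1%}, change B: {lift_b:+.1%}")
# change A: +10.0%, change B: +4.5%
```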
We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.
– Michael St Laurent, Optimization Strategist, WiderFunnel
Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.
Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.
It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?
The Strategist’s Exploration
The answer lies in the Explore (or research gathering) phase of your testing process.
At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.
Experience is the other side to this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which changes should be isolated due to risk or potential insights to be gained.
At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.
Factorial design in action: A case study
Once upon a time, we were testing with Annie Selke, a retailer of luxury home-ware goods. This story follows two experiments we ran on Annie Selke’s product category page.
(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)
In the first experiment (experiment 4.7), we tested three variations against the control. As the experiment number suggests, this was not the first test we ran with Annie Selke overall, but it is the ‘first’ test in this story.
Variation A featured an isolated change to the “Sort By” filters below the image, making it a drop down menu.
This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.
Variation B was also built on the control, and featured another isolated change to reduce the left navigation.
Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to removing distractions. Plus, the persuasion principle known as the Paradox of Choice theorizes that more choice = more anxiety for visitors.
Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.
This variation was informed by the same evidence as variation B.
Variation A (built on the control) saw a decrease in transactions of -23.2%.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a decrease in transactions of -1.9%.
But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed left navigation) was 19.1%.
The next step was to validate our estimated lift of 19.1% in a follow up experiment.
The follow-up test also featured three variations versus the original control. Because you should never waste the opportunity to gather more insights!
Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.
Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.
Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.
Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.
Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.
Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!
Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.
Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for a decrease in transactions of -11.2%.
This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.
The best testing framework for you
What are your testing goals?
If you are in a situation where potential revenue gains outweigh the potential insights to be gained or your test has little long-term value, you may want to go with a standard A/B cluster test.
If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.
If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.
A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want. Which will make every subsequent hypothesis smarter, and every test more impactful.
One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.
Hotjar’s content experiment with overlays is turning website visitors into new customers. Here’s how.
If you Google “Content is king,” here’s what you’ll find: More than 37 million Google results that justify how important content is online.
It’s a tired phrase, but it’s true. At Unbounce, for instance, our blog has been invaluable in growing our digital footprint and our business.
Every once in a while, you hear a story about someone who uses content to earn new customers and new revenue. And, they make it seem pretty easy (like “Why didn’t I think of that?”).
Well, Nick Heim, the Director of Inbound Marketing at Hotjar, has done just that. He offered website visitors an ebook at just the right time and in just the right way by using an overlay.
Overlays are modal lightboxes that launch within a webpage and focus attention on a single offer.
Overlays, a type of Unbounce Convertables, allow you to show relevant offers to specific users at the perfect time, making them less likely to leave your website without converting.
By implementing a Convertable into his campaign, Nick isn’t just bringing in new leads, he’s actually turning website visitors into paying Hotjar users. So how’s he doing it?
Let’s start from the beginning
The TL;DR? Hotjar implemented a new Convertable on their pricing page, which resulted in new signups. The overlay offered visitors an ebook, The Hotjar Action Plan, in exchange for their first name and email address.
The overlay converted 408 visitors in the first three weeks, 75% of which were not existing Hotjar customers.
Once a visitor converted on the overlay they received an email from Hotjar right away. Non-customers received an email with the ebook as a PDF, along with an offer to try out Hotjar for an extended period of time.
For non-users, we sent them a quick instant thank you email followup that contained the asset and offered a 30 day trial of the Hotjar Business Plan. This is double the trial length a new user would usually receive by signing up through our site.
Hotjar makes good use of the email they sent to preexisting customers, too. That variation contains the ebook as well as a simple question about what type of content they’d like to see — allowing Hotjar to continue delivering value to their customers. #winwin
The overlay strategy
The overlay Nick built was set to appear only to first-time visitors who are exiting the Hotjar pricing page.
According to Nick,
This was more of a visitor experience decision than anything. We didn’t want to come off as badgering visitors in the research phase [of the buying process].
Setting trigger rules in the Unbounce builder.
So, did it work?
“Absolutely, we’re getting 60-70 new users per month as a result of the Convertable,” said Nick.
About 3% of pricing page visitors convert on the overlay.
Of those who converted on the overlay, 75% were not current Hotjar customers, and about 19% of the non-users who received the follow-up email with the PDF have become new Hotjar customers.
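As a back-of-the-envelope check, the funnel figures reported in the case study can be chained together (the figures are straight from the article; the rounding is mine):

```python
# Funnel figures reported in the case study (first three weeks).
overlay_conversions = 408     # visitors who converted on the overlay
non_customer_share = 0.75     # share who were not existing Hotjar customers
signup_rate = 0.19            # non-users who became customers after the follow-up email

non_customers = overlay_conversions * non_customer_share
new_customers = non_customers * signup_rate

print(round(non_customers), "new leads,", round(new_customers), "new customers")
# 306 new leads, 58 new customers
```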
Experimenting the Hotjar Way
Nick explained that his team at Hotjar hadn’t implemented overlays into their lead gen strategy before using the Unbounce Convertable; “this was a total experiment. We wanted to be able to nurture the new leads coming from different channels and bring them back.”
Nick pointed out that, “these things [overlays] can be used really wrong. You need to be careful and consider the human on the other end. Think about the entire process.”
For their experiment, Nick said, “[we didn’t have] hard goals, but we wanted to prove whether there was a case for using overlays.” Nick pointed out that it can be difficult to measure the negative effects of user experience — especially without a baseline to measure your results against.
“We wanted to see if the risk was worth the reward. We did get the quantitative results — which for us, measure better than industry standards.”
Hotjar’s Golden Rules for Using Overlays
Through this trial experience, Nick and his team at Hotjar established some general guidelines for using overlays. Nick shared his golden rules for delighting visitors with overlays (as opposed to pestering them).
Start by asking yourself these questions:
First, is it appropriate to use an overlay in this part of the user journey?
If the answer is yes, ask yourself “What’s the least annoying way to accomplish that?” If the answer is no, don’t use it.
Second, “Does it solve the problem [website visitors] are looking to solve?” Nick emphasized that the offer on the overlay needs to align to the problem that people are trying to solve.
Finally, how do you know if you’re offering the right thing? Nick says, “Ask people! This is an awesome way to improve your content.”
Should you use Convertables?
Overlays give us marketers an opportunity to present the right people with the right offer at the right time. Of course, they can also be used to do the opposite, and, as Nick says, “you don’t want to leave someone with a bad taste in their mouth.”
Like any good data-driven marketer, you’re going to want to take overlays for a test drive. Like Hotjar, try experimenting to decide whether they’re a good fit. At the end of the day, it’s your customers and your brand that will decide if overlays work in your marketing strategy.