Regardless of what product or service you offer, trust is the deciding factor for every ecommerce player. Trust plays a key role in increasing the conversion rate on your checkout page, bringing you more revenue and more customers from your existing traffic base. It matters at every step of the user journey: if your target audience doesn’t trust your brand, they might never visit your website, and even if they do land on it, they might not purchase from you.
What happens when visitors don’t trust you?
Low conversion rate
High cart abandonment rate
High bounce rate
“In eCommerce, everything hinges on trust. If they don’t trust you, they won’t buy from you.”
So how do you earn the trust of your visitors and motivate them to buy your product?
Building trust is a long-term process, and it doesn’t happen overnight. However, there are some actionable tips worth trying. Some time ago, we created this exhaustive list of tips for eCommerce brands. Among these, adding a trust seal on the checkout page to convince potential customers that the process is safe and secure can be a great option. A survey conducted by Matthew Niederberger on Actual Insights found that “61% of participants said they have at one time NOT completed a purchase because there were no trust logos present.”
What Is a Trust Seal/Trust Badge?
A trust seal, sometimes called a secure site seal, is one of those small badges you’ve likely noticed displayed on websites, particularly on store or payment pages, to signal that the site is safe and secure.
Our client, Uptowork, saw great results by earning visitors’ trust through this same approach. Let’s see how they did it.
Background: The Company
Uptowork is a career site and online resume-building platform. The platform is easy to use, fast, and professional. Uptowork targets all types of job aspirants, especially those who struggle with building their resume in traditional text editors. You can always refer to their blog for quick resume tips. Most of the traffic coming to the Uptowork website is organic or through AdWords.
Investigating and Identifying the Issue
Although the organic channel was paying off well and bringing in substantial traffic, they wanted to improve the percentage of visitors making a purchase and converting into customers, and to address the surprisingly high cart abandonment rate.
When they analyzed their visitor journey, they noticed that a lot of visitors were checking out the product and adding it to their carts, but not making the final purchase. This resulted in a high cart abandonment rate and a low conversion rate.
The Uptowork team tried making a couple of changes on the website and closely analyzed the GA data to see if it worked.
They made some changes, but GA and other tools could not give them all the answers.
They also did not A/B test them, so there was no direct comparison that could be made.
All this made them doubt the data they had.
Finding the Gap
The Uptowork team understood that there was a huge gap between what the brand wanted to convey and what the visitors perceived. They understood that the one thing lacking was visitor trust in the website.
The key idea was to completely redesign the cart page and add a McAfee trust badge on their cart page to convey a sense of security to its visitors.
“We added a McAfee badge to our cart with the assumption that it will reduce the percentage of people leaving the cart. And it did.”
Based on their research, they came up with the hypothesis of adding a McAfee badge to gain visitors’ trust. They hoped that the McAfee badge would signal a secure payment gateway to visitors and uplift the brand image, and thus reduce the cart abandonment rate and increase the conversion rate.
“While we were hoping for the badge to work, we had our doubts about how such a small change will make any impact”
Implementing and Testing
An almost month-long test was run on their entire user base with the help of VWO’s A/B testing capability.
The results of this test perfectly aligned with their hypothesis. Adding the McAfee seal reduced the cart abandonment rate and increased the conversion rate by 1.27%.
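For teams who want to sanity-check an uplift like this before celebrating, a two-proportion z-test is a common approach. The visitor and conversion counts below are hypothetical, since the case study doesn’t publish raw traffic numbers; only the idea of comparing a control against a badge variation comes from the text.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split: 20,000 visitors per arm, control converting at 10.00%
# and the badge variation at 11.27% (a 1.27-point lift).
z, p = z_test_two_proportions(2000, 20000, 2254, 20000)
print(f"z = {z:.2f}, p = {p:.5f}")
```

With traffic at this scale, a 1.27-point lift is comfortably significant; with a tenth of the traffic, the same observed lift might not be, which is why running the test for long enough matters.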
“We were almost sure that such a small badge wouldn’t have any impact on our bottom line. If it wasn’t for the test we would just remove it and wonder what happened to our sales. VWO made it really easy to prepare the test and track the results.”
The team believed that visitors recognize this badge from other places, and it builds a sense of security.
“We aren’t a huge brand (yet!) and trust is still something we have to take care about. Using visual cues like that can bring that little extra reassurance we need.”
“We use VWO to test any visual or content changes that might impact our bottom line. It turns lengthy discussions about what should we do into easy to setup tests that bring results to the table, not opinions. I think this has been the biggest value we got out of using VWO (along with the hundreds of dollars we managed to save on mistakes we would’ve made without it!).”
When a small change inspired by a blog post had such an impact on the conversion rate, you can just imagine the impact of a planned conversion rate optimization program for eCommerce.
“Trust comes from delivering everyday on what you promised as a manager, an employee and a company.”
Note: This is a guest article written by Tyler Hakes, the strategy director at Optimist, a full-service content marketing agency. He’s spent nearly 10 years helping agencies, startups, and corporate clients achieve sustainable growth through strategic content marketing and SEO. Any and all opinions expressed in the post are Tyler’s.
Almost 10 years ago, I got my first job in marketing.
I was right out of college, and I was eager to prove myself and light the world on fire.
Like most people in their early 20s, I was convinced that I knew everything. I thought I had all of the solutions to every problem. I was a marketing mastermind, of course, because I had managed to get a few hundred people to follow me on Twitter.
It didn’t take me long to learn that I didn’t quite have all of the answers. In fact, I had a lot to learn. And it became more important for me to understand what I don’t know and to learn rather than to feel like I already had the answers.
Since then, I’ve worked for agencies, corporations, and startups. As a freelancer and agency owner, I’ve done marketing for every kind of company imaginable—from custom hats to apartment rentals. I’ve put together dozens of content marketing strategies and written/published thousands of articles, ebooks, and landing pages.
In all that time, I’ve come to realize something really, really important.
I don’t know anything.
Sure, I have accumulated a lot of knowledge and skills in the digital marketing space. I understand, at a high level, how things work. And I know, directionally, what the best practices are for achieving results.
But when it comes to executing any particular tactic, writing a particular type of content, or advertising to a particular market, each scenario is a little different. What I think will work best is usually wrong.
With this realization in mind, I’ve developed a kind of manifesto. It’s a way to remind myself that it’s okay to not have all the answers. It’s okay to be wrong, as long as you commit to finding the right answer eventually. Embrace a testing mentality.
Assume You’re Wrong
The biggest challenge with having a testing mentality is accepting that you are almost always wrong.
Let me say this again: You’re wrong.
It can be difficult to swallow. But don’t take it personally. Don’t link your personal worth to your ability to guess which messaging will get the most clicks or which blog post will drive the most social engagement. That’s just silly.
This isn’t Mad Men. You’re not Don Draper. So, don’t spend a million bucks trying to come up with the best idea. We live in a digital age of data. We’re able to track, measure, and test anything and everything that we do in business. There should be no more guesswork.
And what we generally consider to be “conventional wisdom” about best practices when it comes to optimization is also generally wrong. (That’s why it’s called “conventional wisdom,” after all.)
It’s become a driving force for my work and my business. I assume that I know nothing and that everything—anything—is open for testing. Test, fail and learn. In that order.
And instead of taking it personally, I just accept that it’s impossible for someone to know the right answer 100% of the time.
As such, it makes way more sense to defer to the data whenever possible.
Unfortunately, you can’t possibly test every single variable to determine the single best approach, messaging, targeting, or design.
But you can get a head start.
Begin any testing cycle by looking at companies that test and optimize regularly. Then, steal their findings. Rather than starting from square one, begin your own testing with their current best case—the design, ad, or content that they’ve found to be most successful.
You can do this in a number of ways.
Look at crowd-sourced A/B or multivariate test communities like Behave.org.
Visit competitors’ websites and emulate what they’ve done.
Use social media to uncover specific messaging/positioning/CTAs used by competitors.
For our work on content marketing, we begin any client engagement with an extensive research and competitive analysis process. It’s the foundation of our content marketing strategy: is what we already know working for competitors and other companies in the space?
We’re able to gain years (or decades) of knowledge in a matter of weeks. We avoid expensive, time-consuming, and frustrating trial and error by stealing what works and iterating on it from there.
Prove Yourself Right (Or Wrong)
Once you have learned to not internalize the results and found a base to start with, it’s time to test.
Depending on what it is you’re testing, you’ll want to generate dozens—or hundreds—of variations. Try different colors, placements, layouts, or strategies.
Of course, a tool like VWO will help you execute these tests quickly and measure the results.
Create an experiment sheet that allows you to track each experiment and its outcome. Remember to constantly challenge your own assumptions: assume you’re wrong and that you can come up with a variation that works better.
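One lightweight way to keep such an experiment sheet is a structured log. The fields below are just one possible layout, not a prescribed format, and the example experiment is hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    name: str
    hypothesis: str
    start: date
    metric: str
    result: str = "running"  # e.g. "win", "loss", "inconclusive"
    notes: str = ""

log: list[Experiment] = []

# Record the experiment and its hypothesis up front, before seeing results.
log.append(Experiment(
    name="cart-trust-badge",
    hypothesis="A recognizable security badge will cut cart abandonment",
    start=date(2024, 3, 1),
    metric="checkout conversion rate",
))

# When the test concludes, close it out and capture what was learned.
log[0].result = "win"
log[0].notes = "Badge variation lifted conversions; roll out and re-test."

for exp in log:
    print(f"{exp.name}: {exp.result} ({exp.metric})")
```

Writing the hypothesis down before the test runs is the point: it keeps you from retrofitting an explanation onto whatever the data happens to show.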
This kind of data-driven testing mentality applies not only to tactical tweaks or changes. You can assume a similar mentality for your entire strategy.
When we work with a new client on content marketing, we make a whole bunch of new assumptions.
Each piece of content that we create serves a strategic purpose within our larger framework. Because of this, we have a specific goal for that piece—to generate search traffic, to earn links, to generate social shares, and so on. And this is the benchmark that we use to measure our effectiveness.
So, we may begin with an idea about which kinds of content will best accomplish those goals.
But, in most cases, we have never created content in this particular market. We have never tried to build relationships within this particular community. We’re just guessing (per our past experience with other clients and other industries).
This means that what we really want to do is try what we think will work, get the results, and then incorporate that data to help us improve in the future. A lot of times, we’re wrong. If we didn’t adopt a testing mentality, then we would just carry on being wrong.
Obviously, this is not ideal. It’s better to be wrong and to learn from that mistake than to be blind to your mistakes. This is why we apply a testing model to everything from our overall strategy to specific, tactical implementation—content flow, calls to action, outreach emails, and so on.
We want to achieve the best results we can, even if it means that we admit we were wrong.
Do It All Over Again
Think you’ve found the right answer? You’re probably wrong—again.
Any test is only as good as the variations that you’re considering. So, while you may have identified a clear winner of those that you’re considering, that doesn’t mean that you’ve objectively identified the best possible solution.
Whatever is working best now could only work half as well as the true best case. And it’s just a matter of time until you hit that particular variation.
It’s the pursuit of continuous improvement. It’s relentless.
This is the foundational idea behind “growth hacking,” which is really just a data-driven, experimental approach to growth. It takes trial and error—over and over again—ad infinitum.
This is why many software teams have embraced agile development: it allows for iterative progress and improvement rather than investing all of your time and resources into a single window of opportunity.
Testing isn’t just about making small tweaks. It’s about embracing a culture of continuous learning and improvement. It’s about the pursuit of truth, even when it makes you feel stupid.
And it all starts by admitting that you don’t have all the answers.
This Summer, WiderFunnel participated in several virtual events. And each one, from full-day summit to hour-long webinar, ended with a TON of great questions from all of you.
So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.
As you’ll notice, many experts and thought-leaders weighed in on your questions, including Chris Goward, Nick So, Hudson Arnold, André Morys, John Ekman, David Darmanin, and Jamie Elgie.
Q: What do you see as the most common mistake people make that has a negative effect on website conversion?
Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.
I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.
And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.
It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.
Q: What are the most important questions to ask in the Explore phase?
Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason). In it, we are looking for insights about what questions to ask and validate. We are trying to identify…
What are the barriers to conversion?
What are the motivational triggers for your audience?
Why are people buying from you?
And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.
Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.
Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process, here.
Q: Is there such a thing as too much testing and / or optimizing?
Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website…they can’t improve forever. The question many marketers have is, how do I know how long to do this? Is there going to be diminishing returns? By putting in the same effort will I get smaller and smaller results?
But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.
So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.
Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.
Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.
Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?
Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.
I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.
At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.
Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.
Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.
Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.
When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.
We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step by step process that makes sure we are only creating overhead where it’s necessary and will create value.
Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.
But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. It’s practically impossible for a marketing department to create infinite marketing messages.
Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.
What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.
Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?
A PANEL RESPONSE
André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.
Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.
John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common. Because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques will result.
Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.
One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.
The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.
David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most complex things, I think personalization is most valuable when you have a high ticket price or very high value, where that touch of personalization has a big impact.
With Hotjar, we’re much more about volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. And keeping as many common threads as possible is important, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.
Q: How do you avoid harming organic SEO when doing conversion optimization?
Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they support optimization fully. They do not penalize companies for running A/B tests, as long as the tests are set up properly and the company is using a proper tool.
On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.
If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.
Google looks at pogo-sticking, where people land on the SERP, find a result, and then return back to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.
Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?
Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under the radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.
This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.
The three factors are Potential, Importance, and Ease, and one of the important aspects within Ease is political ease. So you want to look for areas that have political ease, which means there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, create momentum, and then you can start sharing your results throughout the organization to build that buy-in.
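As a rough illustration of PIE scoring, you can rank test candidates by averaging the three factors. The 1–10 scale, the example pages, and the scores below are my own assumptions for the sketch; the framework itself only names the factors:

```python
# Score each candidate page 1-10 on Potential, Importance, and Ease,
# then rank by the average. Pages and scores here are hypothetical.
candidates = {
    "checkout page": {"potential": 8, "importance": 9, "ease": 4},
    "pricing page":  {"potential": 7, "importance": 7, "ease": 8},
    "homepage":      {"potential": 6, "importance": 9, "ease": 2},  # politically sensitive
}

def pie_score(scores):
    return (scores["potential"] + scores["importance"] + scores["ease"]) / 3

ranked = sorted(candidates, key=lambda page: pie_score(candidates[page]), reverse=True)
for page in ranked:
    print(f"{page}: {pie_score(candidates[page]):.1f}")
```

Note how the politically sensitive homepage drops to the bottom despite its high Importance score, which is exactly the "get wins elsewhere first" effect described above.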
Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?
Nick So: Besides the obvious senior leadership and key decision-makers as you mention, we find getting buy-in from related departments like branding, marketing, design, copywriters and content managers, etc., can be very helpful.
Having these teams on board can not only help with the overall approval process, but also helps ensure winning tests and strategies are aligned with your overall business and marketing strategy.
You should also consider involving more tangentially related teams like customer support. Not only does this make them a part of the process and testing culture, but your customer-facing teams can also be a great source of business insights and test ideas!
Q: Do you have any suggestions for success with lower traffic websites?
Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.
When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.
We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where individual insights could be validated through multiple separate tests.)
Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.
If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or guide the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However, with this strategy, stakeholders have to be aware that increases in this secondary goal may not translate to increases in the primary goal at the same rate.
Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.
But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand: what are the opportunities, the barriers your visitors might be facing, and drilling into more of their perspective. Then you’ll have a more powerful test setup.
You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.
In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?
Q: How far can I go with funnel optimization and testing when it comes to small local business?
A PANEL RESPONSE
David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.
If you can’t test to significance, you should turn to qualitative research.
That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.
André Morys: Testing with too small a sample size is really dangerous because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming 70% uplift, when the numbers are 64 versus 27 conversions. And this is really dangerous because that result is bull sh*t.
Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.
For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.
It really sells itself though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as helping that overall strategy of making our display media spend, and all of our media spend more targeted.
Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.
Q: What has surprised you or stood out to you while doing CRO?
Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. Things that we are all convinced we should change on our website, or that we should change in our messaging in general, we’ll test them and actually find out.
We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.
One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.
If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.
It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.
Q: Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?
Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to
Track your leads and opportunities to the variation,
Then, track them through to the sale,
And then look at whether average order value changes between the variations, which implies the quality of the leads.
Because it’s easy to measure lead volume between variations. But if lead quality changes, then that makes a big impact.
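The three steps above can be sketched as a small calculation: roll lead volume, close rate, and deal size into a single revenue-per-visitor figure for each variation. All counts and dollar figures below are hypothetical, for illustration only.

```python
# Sketch: comparing variations on lead volume vs. lead quality.
# All visitor, lead, and revenue figures are made up for illustration.

def variation_metrics(visitors, leads, closed_sales, total_revenue):
    """Roll lead volume and deal size into comparable per-variation numbers."""
    lead_rate = leads / visitors
    close_rate = closed_sales / leads
    avg_order_value = total_revenue / closed_sales
    revenue_per_visitor = total_revenue / visitors
    return lead_rate, close_rate, avg_order_value, revenue_per_visitor

# Variation A: more leads, but smaller average deals
a = variation_metrics(visitors=10_000, leads=500, closed_sales=50, total_revenue=100_000)
# Variation B: fewer leads, but larger average deals
b = variation_metrics(visitors=10_000, leads=400, closed_sales=48, total_revenue=144_000)

print(f"A: lead rate {a[0]:.1%}, AOV ${a[2]:,.0f}, revenue/visitor ${a[3]:.2f}")
print(f"B: lead rate {b[0]:.1%}, AOV ${b[2]:,.0f}, revenue/visitor ${b[3]:.2f}")
```

In this made-up example, variation B loses on raw lead volume but wins on revenue per visitor, which is exactly the kind of quality difference that gets missed if you only count leads.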
We actually have a case study about this with Magento. We asked, “Which of these calls-to-action is actually generating the most valuable leads?” and ran an experiment to find out. We tracked the leads all the way through to sale. This helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?
Case Study: Discover how Magento increased lead volume and lead quality in the full case study.
Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.
However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.
If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.
Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.
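The tradeoff calculation Nick describes can be sketched in a few lines. The goal values and baseline volumes below are assumptions invented for the example, not figures from the source.

```python
# Sketch of the demo-vs-ebook tradeoff math, with hypothetical goal values.
DEMO_VALUE = 200.0   # assumed dollar value of one demo request
EBOOK_VALUE = 15.0   # assumed dollar value of one ebook download

def net_benefit(demo_delta, ebook_delta, base_demos, base_ebooks):
    """Monthly net value change from percentage shifts in each goal."""
    demo_gain = base_demos * demo_delta * DEMO_VALUE
    ebook_gain = base_ebooks * ebook_delta * EBOOK_VALUE
    return demo_gain + ebook_gain

# +50% demos, -75% ebook downloads, from hypothetical baselines of
# 100 demos and 1,000 downloads per month
change = net_benefit(0.50, -0.75, base_demos=100, base_ebooks=1000)
print(f"Net change: ${change:,.0f}/month")
```

With these particular assumed values, the lost ebook value slightly outweighs the demo gain, so the tradeoff would be net negative; swap in your own goal values and the sign can easily flip.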
Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
Nick So: When we help a client determine conversion metrics…
…we always suggest following the money.
Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.
For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?
Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.
We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.
What we’ve found is people are people, regardless of whether they’re shopping for a server, or shopping for socks, or donating to third-world countries, they go through the same mental process in each case.
The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.
Most marketers spend most of their time thinking about their near-field competitors rather than in different industries, because it’s overwhelming to look at all of the other opportunities. But we are often able to look at an experience in a completely different way, because we are able to look at it through the lens of a different industry. That is very powerful.
Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.
In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).
When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact plus the internal company benefits, and negotiate prioritization and scheduling between themselves.
I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.
If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).
Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?
Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, UK, Canada, Japan, Korea, Spain, Italy and for the Olympics store, which is itself a global e-commerce experience in one site!
There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.
Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.
And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. So, we’ve developed some frameworks to do that. The PIE Framework allows you to prioritize ideas and test areas based on potential, importance, and ease: the potential for improvement, the importance to the business, and the ease of implementation. And sometimes these are a little subjective, but the more data you have to back these up, the better your focus and effort will be in delivering results.
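The PIE scoring Chris describes can be sketched as a simple ranking. The candidate areas and 1–10 scores below are made up for illustration; in practice each score would be backed by the data he mentions.

```python
# Minimal PIE prioritization sketch: each candidate test area is scored
# 1-10 on Potential, Importance, and Ease, then ranked by the average.
# The areas and scores here are hypothetical.

def pie_score(potential, importance, ease):
    """Average of the three PIE dimensions."""
    return (potential + importance + ease) / 3

candidates = {
    "checkout page": pie_score(potential=8, importance=9, ease=4),
    "homepage hero": pie_score(potential=6, importance=7, ease=9),
    "pricing table": pie_score(potential=7, importance=5, ease=6),
}

# Highest PIE score first: that's the test to run next
for area, score in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: {score:.1f}")
```

Note how ease can pull an otherwise middling area to the top: in this toy example the homepage edges out the higher-potential checkout because it is much easier to change.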
Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?
Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.
There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. And if the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance, will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.
But you should be looking at one metric to tell you whether or not a test won.
Q: When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.
So, run your tests a little bit longer and aim for statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. You should aim for that high level of statistical significance, and iron out that seasonality. And sometimes you’ll want to look at monthly seasonality as well, and retest questionable things within high and low urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.
Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?
Chris Goward: Developing powerful hypotheses depends on having workable theories. Seeking out the “Why” behind the results is one of the most interesting parts of the work.
The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.
As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!
Q: How many visits do you need to get to statistically relevant data from any individual test?
Chris Goward: The number of visits is just one of the variables that determines statistical significance. The conversion rate of the Control and conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e. sample size), enough conversions, and the conversion rate delta is great enough.
Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!).
When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule-of-thumb, you should plan for smaller isolation tests to run longer, as the impact on conversion rate may be less. The test may require more conversions to potentially achieve confidence.
Larger, more drastic cluster changes would typically run for a shorter period of time, as they have more potential to have a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t stop you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.
Often, people that are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and one variation is still not statistically significant over another, then your test may not ever yield a clear winner and you should revise your test plan or reduce the number of variations being tested.
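The relationship between sample size, conversion rate, and detectable lift can be sketched with the standard two-proportion sample-size formula. The baseline rate and target lift below are assumptions chosen only to show the shape of the calculation.

```python
# Rough per-variation sample size for a two-proportion A/B test.
# Baseline rate and minimum detectable lift below are hypothetical.
from math import sqrt, ceil
from statistics import NormalDist

def visitors_per_variation(base_rate, min_rel_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect min_rel_lift
    over base_rate at the given significance level and power."""
    p1 = base_rate
    p2 = base_rate * (1 + min_rel_lift)
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# e.g. 3% baseline conversion rate, hoping to detect a 20% relative lift
print(visitors_per_variation(0.03, 0.20))
```

At a 3% baseline, detecting a 20% relative lift already requires on the order of 14,000 visitors per variation, which is why the 30,000–50,000-visitor guideline above is a reasonable point to start questioning a stalled test.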
Q: We are new to optimization (had a few quick wins with A/B testing and working toward a geo targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.
Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.
If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.
Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’, stop that test, and move on?
Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:
You have other tests ready to run with the same traffic sources
The test results are showing high volatility or have stabilized
The test insights will be important for the organization
There’s an opportunity cost to every test. You could always be testing something else and need to constantly be asking whether this is the best test to be running now vs. the cost and potential benefit of the next test in your conversion strategy.
Q: There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?
A PANEL RESPONSE
John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look in the minds of whoever is in front of the screen, I think these methods are quite dangerous.
Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff. A tradeoff between understanding the why and just getting growth. If you want to understand the why infinitely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few have enough traffic to multivariate test everything.
But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.
There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works…if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things to ask yourself: Well, I wonder why that won, and start from there.
The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.
David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap, that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.
Creating a unique customer experience is going to become more and more important.
If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.
Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere, and does everything differently. That isn’t fully ‘optimized’ and is more human.
There’s a danger in optimization itself becoming too optimized. If we eliminate the human aspect, we’re kind of screwed.
CORGI HomePlan provides boiler and home cover insurance in Great Britain. It offers various insurance policies and an annual boiler service. Its main value proposition is that it promises “peace of mind” to customers. It guarantees that if anything goes wrong, it’ll be fixed quickly and won’t cost anything extra over the monthly payments.
CORGI’s core selling points were not being communicated clearly throughout the website. Insurance is a hyper-competitive industry and most customers compare other providers before taking a decision. After analyzing its data, CORGI saw that there was an opportunity to improve conversions and reduce drop-offs at major points throughout the user journey. To help solve that problem, CORGI hired Worship Digital, a conversion optimization agency.
Lee Preston, a conversion optimization consultant at Worship Digital, analyzed CORGI’s existing Google Analytics data, conducted user testing and heuristic analysis, and used VWO to run surveys and scrollmaps. After conducting qualitative and quantitative analysis, Lee found that:
Users were skeptical of CORGI’s competition, believing they were not transparent enough. Part of CORGI’s value proposition is that it doesn’t have any hidden fees, so conveying this to users could help convince them to buy.
On analyzing the scrollmap results, it was found that only around a third of mobile users scrolled down enough to see the value proposition at the bottom of the product pages.
They ran surveys for users and asked, “Did you look elsewhere before visiting this site? (If so, where?)” More than 70% of respondents had looked elsewhere.
They ran another survey and asked users what they care about most; 18% of users said “fast service” while another 12% said “reliability”.
This is how CORGI’s home page originally looked:
After compiling all these observations, Lee and his team distilled it down to one hypothesis:
CORGI’s core features were not being communicated properly. Displaying these more clearly on the home page, throughout the comparison journey, and the checkout could encourage more users to sign up rather than opting for a competitor.
Lee adds, “Throughout our user research with CORGI, we found that visitors weren’t fully exposed to the key selling points of the service. This information was available on different pages on the site, but was not present on the pages comprising the main conversion journey.”
Worship Digital first decided to put this hypothesis to test on the home page.
“We hypothesized that adding a USP bar below the header would mean 100% of visitors would be exposed to these anxiety-reducing features, therefore, improving motivation and increasing the user conversion rate,” Lee said.
This is how the variation looked.
The variation outperformed the control across all devices and the majority of user types, increasing conversions by 30.9%.
“We were very happy that this A/B test validated our research-driven hypothesis. We loved how we didn’t have to buy some other tool for running heatmaps and scrollmaps for our visitor behavior experiment,” Lee added.
Conversion optimization is a continuous process at CORGI. Lee has been constantly running new experiments and gathering deep understanding about the insurance provider’s visitors. For the next phase of testing, he plans to:
Improve the usability of the product comparison feature.
Identify and fix leaks during the checkout process.
Whether your current ROI is something to brag about or something to worry about, the secret to making it shine lies in a 2011 award-winning movie starring Brad Pitt.
Do you remember the plot?
The manager of the downtrodden Oakland A’s meets a baseball-loving Yale economics graduate who maintains certain theories about how to assemble a winning team.
His unorthodox methods run contrary to scouting recommendations and are generated by computer analysis models.
Despite the ridicule from scoffers and naysayers, the geek proves his point. His data-driven successes may even have been the secret sauce, fueling Boston’s World Series title in 2004 (true story, and the movie is Moneyball).
What’s my point?
Being data-driven seemed a geeks-only game, or a far reach, to many just a few years ago. Today, it’s time to get on the data-driven bandwagon…or get crushed by it.
Let’s briefly look at the situation and the cure.
Being Data-Driven: The Situation
Brand awareness, test-drive, churn, customer satisfaction, and take rate—these are essential nonfinancial metrics, says Mark Jeffery, adjunct professor at the Kellogg School of Management.
Throw in a few more—payback, internal rate of return, transaction conversion rate, and bounce rate—and you’re well on your way to mastering Jeffery’s 15 metric essentials.
Why should you care?
Because Mark echoes the assessment of his peers from other top schools of management:
Organizations that embrace marketing metrics and create a data-driven marketing culture have a competitive advantage that results in significantly better financial performance than that of their competitors. – Mark Jeffery.
You don’t believe in taking marketing and business growth advice from a guy who earned a Ph.D. in theoretical physics? Search “data-driven stats” for a look at the research. Data-centric methods are leading the pack.
Being Data-Driven: The Problem
If learning to leverage data can help the Red Sox win the World Series, why are most companies still struggling to get on board, more than a decade later?
There’s one little glitch in the movement. We’ve quickly moved from “available data” to “abundant data” to “BIG data.”
CMOs are swamped with information and are struggling to make sense of it all. It’s a matter of getting lost in the immensity of the forest and forgetting about the trees.
We want the fruits of a data-driven culture. We just aren’t sure where or how to pick them.
Data-Driven Marketing: The Cure
I’ve discovered that the answer to big data overload is hidden right in the problem, right there at the source.
Data is produced by scientific means. That’s why academics like Mark are the best interpreters of that data. They’re schooled in the scientific method.
That means I must either hire a data scientist or learn to approach the analytical part of business with the demeanor of a math major.
Turns out that it’s not that difficult to get started. This brings us to the most important aspect: the scientific approach to growth.
Scientific Method of Growth
You’re probably already familiar with the components of the scientific method. Here’s one way of describing it:
Identify and observe a problem, then state it as a question.
Research the topic and then develop a hypothesis that would answer the question.
Create and run an experiment to test the hypothesis.
Go over the findings to establish conclusions.
Continue asking and continue testing.
By focusing on one part of the puzzle at a time, neither the task nor the data will seem overwhelming. And because you design the experiment, you can control it.
Here’s an example of how to apply the scientific method to data-driven growth/optimization, as online enterprises would know it.
Question: Say you have a product on your e-commerce site that’s not selling as well as you want. The category manager advises lowering the price. Is that a good idea?
Hypothesis: Research tells you that similar products are selling at an average price that is about the same as yours. You hypothesize that lowering your price will increase sales.
Test: You devise an A/B test that will offer the item at a lower price to half of your e-commerce visitors and at the same price to the other half. You run the test for one week.
Conclusions: Results show that lowering the price did not significantly increase sales.
Action: You create another hypothesis to explain the disappointing sales and test this hypothesis for accuracy.
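The “Conclusions” step in the example above comes down to a significance check on the two groups. Here is one way to sketch it with a two-proportion z-test; the visitor and sales counts are hypothetical, invented to match the scenario.

```python
# Sketch of the "Conclusions" step: did the lower price significantly
# move sales? The counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control price: 120 sales / 5,000 visitors
# Lower price:   138 sales / 5,000 visitors
p = two_proportion_p_value(120, 5000, 138, 5000)
print(f"p-value: {p:.3f}")  # well above 0.05: not a significant lift
```

Even though the lower price produced 15% more sales in this made-up data, the p-value stays far above 0.05, which is exactly the “lowering the price did not significantly increase sales” conclusion in the example.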
You may think that the above example is an oversimplification, but we’ve seen our clients at The Good make impressive gains by arriving at data-driven decisions based on experiments even less complicated.
And the scientific methodology applies to companies both large and small, too. We’ve used the same approach with everyone from Xerox to Adobe.
Big data certainly is big, but it doesn’t have to be scary. Step-by-step analysis on fundamental questions followed by a data-driven optimization plan is enough to get you large gains.
The scientific approach to growth can be best implemented with a platform that is connected and comprehensive. Such a platform, which shows business performance on its goals, from one stage of the funnel to another, can help save a lot of time, effort, and money.
Businesses need to be data-driven in order to optimize for growth, and to achieve business success. The scientific method can help utilize data in the best possible ways to attain larger gains. Take A/B testing, for example. Smart A/B testing is more than just about testing random ideas. It is about following a scientific, data-driven approach. Follow the Moneyball method of data-driven testing and optimization, and you’ll be on your way to the World Series of increased revenues in no time.
Do you agree that a data-driven approach is a must for making your ROI shine? Share your thoughts and feedback in the comments section below.
If you’ve ever tested your website, you’ve probably been in the unfortunate situation of running out of ideas on what to test.
But don’t worry – it happens to everybody.
That’s of course, unless you have a website testing plan.
That’s why KlientBoost has teamed up with VWO to bring to you a gifographic that provides a simple guide on knowing the what, how, and why when it comes to testing your website.
Setting Your Testing Goals
Like a New Year’s resolution around getting fitter, if you don’t have any goals tied to your website testing plan, then you may be doing plenty of work, with little results to show.
With your goals in place, you can focus on the website tests that will help you achieve those goals the fastest.
Testing a button color on your home page when you should be testing your checkout process is a sure sign that you are heading toward testing fatigue, or the disappointment of never wanting to run a test again.
But let’s take it one step further.
While it’s easy to improve click-through rates, or CTRs, and conversion rates, the true measure of a great website testing plan comes from its ability to increase revenue.
No optimization efforts matter if they don’t connect to increased revenue in some shape or form.
Whether you improve the site user experience, your website’s onboarding process, or get more conversions from your upsell thank you page, all those improvements compound into incremental revenue gains.
Lesson to be learned?
Don’t pop the cork on the champagne until you know that an improvement in CTRs or conversion rates will also lead to increased revenue.
Start closest to the money when it comes to your A/B tests.
Knowing What to Test
When you know your goals, the next step is to figure out what to test.
You have two options here:
Look at quantitative data like Google Analytics that show where your conversion bottlenecks may be.
Or gather qualitative data with visitor behavior analysis where your visitors can tell you the reasons for why they’re not converting.
Both types of data should fall under your conversion research umbrella. In addition to this gifographic, we created another one, all around the topic of CRO research.
When you’ve done your research, you may find certain aspects of a page that you’d like to test. For inspiration, VWO has created The Complete Guide To A/B Testing – and in it, you’ll find some ideas to test once you’ve identified which page to test:
Content near the fold
Awards and badges
As you can see, there are tons of opportunities and endless ideas to test when you decide what to test and in what order.
So now that you know your testing goals and what to test, the last step is forming a hypothesis.
With your hypothesis, you can figure out which change you think will deliver the biggest performance lift, while keeping effort in mind as well (it’s easier to get quick wins that don’t need heaps of development help).
Running an A/B Test
Alright, so you have your goals, a list of things to test, and hypotheses to back them up. The next task is to start testing.
With A/B testing, you’ll always have at least one variant running against your control.
In this case, your control is your actual website as it is now and your variant is the thing you’re testing.
With proper analytics and conversion tracking along with the goal in place, you can start seeing how each of these two variants (hence the name A/B) is doing.
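Under the hood, a tool like VWO has to keep each visitor in the same bucket across page loads so that the control and variant numbers stay clean. A common technique is deterministic hashing of a visitor ID; this is a minimal sketch of the idea, not VWO’s actual implementation, and the function and variant names are illustrative:

```python
import hashlib

def assign_variant(visitor_id, experiment, variants=("control", "variant_b")):
    """Deterministically bucket a visitor into a variant.

    Hashing the visitor ID together with the experiment name means the
    same visitor always sees the same variant, while different
    experiments bucket independently of each other.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor lands in the same bucket on every page load.
assert assign_variant("user-42", "checkout-test") == assign_variant("user-42", "checkout-test")
```

Because the assignment is a pure function of the visitor and experiment, no server-side state is needed to keep the experience consistent.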
When A/B testing, there are two things you may want to consider before you call winners or losers of a test.
One is statistical significance. Statistical significance gives you a thumbs up or thumbs down on whether your test results can be attributed to random chance. If a test is statistically significant, the probability that its results occurred by random chance is largely ruled out.
And VWO has created its own calculator so that you can see how your test is doing.
The second one is confidence level. It helps you decide whether you can replicate the results of your test again and again.
A confidence level of 95% tells you that your test will achieve the same results 95% of the time if you run it repeatedly. So, as you can tell, the higher your confidence level, the surer you can be that your test truly won or lost.
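Both checks boil down to standard statistics. A two-proportion z-test is one common way such calculators arrive at a p-value; this is a standard-library sketch with made-up numbers, not the exact formula any particular calculator uses:

```python
from math import sqrt, erf

def ab_significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# control: 200/5000 (4.0%), variant: 260/5000 (5.2%) -- hypothetical numbers
z, p = ab_significance(200, 5000, 260, 5000)
assert p < 0.05  # significant at the 95% confidence level
```

With these illustrative numbers the p-value comes out well under 0.05, so the lift would be declared significant at 95% confidence.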
Multivariate Testing for Combination of Variations
Let’s say you have multiple ideas to test, and your testing list is looking way too long.
Wouldn’t it be cool if you could test multiple aspects of your page at once to get faster results?
That’s exactly what multivariate testing is.
Multivariate testing allows you to test which combinations of different page elements affect each other when it comes to CTRs, conversion rates, or revenue gains.
Look at the multivariate pizza example below:
The recipe for multivariate testing is simple and delicious.
And the best part is that VWO can automatically run through all the different combinations you set so that your multivariate test can be done without the heavy lifting.
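Conceptually, a full-factorial multivariate test enumerates the Cartesian product of every element’s variations, which is why the number of versions multiplies quickly. A minimal sketch with hypothetical page elements:

```python
from itertools import product

# hypothetical variations of three page elements
headlines = ["Free shipping on all orders", "20% off today only"]
hero_images = ["hero_a.jpg", "hero_b.jpg"]
cta_labels = ["Buy now", "Add to cart"]

# every combination becomes one version of the page to test
combinations = list(product(headlines, hero_images, cta_labels))
assert len(combinations) == 2 * 2 * 2  # 8 page versions
```

That multiplication is also the catch: each added element multiplies the number of versions, so multivariate tests need considerably more traffic than a simple A/B test to reach significance.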
If you’re curious about whether you should A/B test or run multivariate tests, then look at this chart that VWO created:
Split URL Testing for Heavier Variations
If your A/B or multivariate tests lead you to the end of the rainbow and show that bigger initiatives are needed, such as backend development work or major design changes, then you’re going to love split URL testing.
As VWO states:
“If your variation is on a different address or has major design changes compared to control, we’d recommend that you create a Split URL Test.”
Split URL testing allows you to host the different variations of your website test on separate URLs.
As the visual above shows, the two variations are set up so that each one lives at its own URL.
Split URL testing is great when you want to test a major redesign, such as an entire website rebuilt from scratch.
By not changing your current website code, you can host the redesign on a different URL and have VWO split the traffic between the control and the variant, giving you clear insight into whether your redesign performs better.
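Conceptually, the split is just a weighted coin flip before redirecting a new visitor, with the chosen destination then pinned (for example via a cookie) so the visitor keeps seeing the same version. A minimal sketch; the URLs and function name are hypothetical, not VWO’s API:

```python
import random

def choose_destination(control_url, variant_url, split=0.5):
    """Route a share of new visitors to the redesign hosted at a separate URL.

    A real implementation would store the choice in a cookie so that
    returning visitors are not re-randomized on every visit.
    """
    return variant_url if random.random() < split else control_url

# split=1.0 routes everyone to the redesign; split=0.0 keeps everyone on control
assert choose_destination("https://example.com", "https://new.example.com", split=1.0) == "https://new.example.com"
```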
Over to You
Now that you have a clear understanding of the different types of website tests you can run, the only thing left is to, well, run some tests.
Armed with quantitative and qualitative knowledge of your visitors, focus on the areas that will have the biggest and quickest impact on your business.
And I promise, once you finish your first successful website test, you’ll be hooked.
A/B testing and conversion rate optimization (CRO) are not synonymous, but often confused.
A/B testing is exactly what it says—a test to verify different sets of variations on your website. Conversion rate optimization, however, is much more than just testing.
Conversion optimization is a scientific process that starts with analyzing your business’ leaks, making educated hypotheses to fix them, and then testing those hypotheses.
Conversion optimization is a repeatable process; A/B testing is one technique within it. A formalized conversion optimization process goes something like this:
Tracking metrics and identifying what parts of the conversion funnel need fixing
Analyzing why visitors are doing what they are doing
Creating and planning your hypotheses for optimization
Testing the hypotheses against the existing version of the website
Learning from the tests and applying the learning to the subsequent tests
To further clear the air around the two terms, we got in touch with top conversion rate experts and picked their brains. The experts tell us about their experiences with A/B testing and conversion optimization, and why you should switch to the latter.
Back in 2007, I could already see that a huge gap was developing between companies that were perfecting a process for conversion optimization and those that were following the easy advice of so many consultants.
Instead of selling top-of-mind advice, I focused WiderFunnel on refining the process of continuous optimization for leading brands. For each of our client engagements, we run a holistic CRO program that builds insights over time to continuously improve our understanding of their unique customer segments. The results speak for themselves.
Ad hoc A/B testing is a tragic use of your limited traffic when you realize how much growth and insight a structured optimization program could be delivering. In an example that we published recently, a structured CRO program is exactly what this company needed to double its revenue two years in a row, compared with the ad hoc testing it was previously doing.
The most effective conversion optimization program seeps into the bones of your organization. Decisions that were once exclusively creative in nature gain a data component. Much of the guessing drains from your online marketing. We call this “rigorous creativity,” and it marries your best marketing work with insights about your visitors. It cannot be accomplished by running a few tests, but comes from asking daily, “Do we have some data to help guide us? If not, can we collect it?” The rigorously creative business is good at finding and creating this data and using it to maximize visitor satisfaction and business profit.
Without a strong CRO strategy that encompasses the experience visitors have discovering, using, exploring, and hopefully eventually converting on your site, you’ll always be plugging holes in a leaky bucket rather than building a better container.
The best opportunities to improve conversion usually aren’t from changing individual pages one at a time with a multitude of tests, but rather by crafting a holistic, thoughtful experience that runs throughout the site, then iterating on elements consistently with an eye to learning, and applying knowledge from each test to the site as a whole.
An AB test should come at the end of your homework. If you’re just AB testing, you’re probably gambling. Your tests are based on things you’ve read on the Internet, gut feeling, and opinions. Some of your tests will be winners, most of them losers. Because you’re shooting blanks.
The homework is data analysis and user research. This will reveal the problem areas and why your visitors are leaving or not doing what you want them to do. The better you know the dreams, the hopes, the fears, the barriers, and uncertainties of your users, the better you’ll be able to work out a test that will have a real impact.
In case you’re in doubt, impact seldom comes from design changes. Don’t change the color of your button, change the text on that button. Not randomly, but based on what users want and your knowledge of influencing people.
Don’t focus too much on the design. Focus on your offer, your value proposition, and how you sell your stuff.
Don’t sell the way you like to sell. Sell the way your customers want to buy.
André Scholten, SEO and Site Speed specialist, Google Analytics
Create a strategy that makes your clients happier, and don’t focus only on the money. Single, unrelated tests on the conversion funnel that follow one another, based on abandonment rates and judged by their influence on revenue: that’s not a strategy but an operational process where test after test is conducted without vision. You should create a test culture within your company that tests everything that will make your website a nicer place for your customers. Give them feedback possibilities with feedback or chat tools, and learn from these. Take their wishes into account and create tests to verify whether those wishes are met. Create a test strategy that focuses on all goals: not only the money, but also information-type goals, contact goals, etc. It will give you so much to do and to improve. That’s a holistic approach to testing.
“Winging it” may work for musicians and cooks; but in marketing, any decision made outside of a holistic CRO program is a bad one. Only through testing will you find the right message, the right audience, and the right offer. And only after you nail these critical elements will you see the profits you need. It doesn’t matter how small or new your business is, take time to test your ideas. You’ll be glad you did.
To say an online business is great due to A/B testing is like saying a football team is great because of its stadium. It is the entire team framework that leads to winning. An optimization framework integrates A/B testing as one component alongside the team, the brand, advertising, and a solid testing strategy. This is how industry-leading websites win year after year.
Rich Page, Conversion Rate Optimization and Web Analytics Expert
Many online businesses make the mistake of thinking that A/B testing is the same as CRO and don’t pay enough attention to the other key aspects of CRO. This usually gives them disappointing results on their conversion rates and online revenue. Web analytics, website usability, visitor feedback, and persuasion techniques are the other key CRO elements that you need to use frequently to gain the greatest results.
Gaining in-depth visitor feedback is a particularly essential part of CRO. It helps you discover your visitors’ main needs and common challenges, and it forms high-impact ideas for your A/B tests (rather than just guessing or listening to your HiPPOs). Gaining visitor insights from usability tests and watching recordings of visitors using your website is particularly revealing.
Peter Sandeen, Value Proposition and Marketing Message Development Expert
Just about every statistic on A/B test results says that most tests don’t create positive results (or any results at all). That’s partly because of the inherent uncertainties of testing. But a big part is the usual lack of a real plan.
Actually, you need two plans.
The first plan, the big-picture one, is there to keep you focused on testing the right parts of your marketing. It tells you whether you should spend most of your energy on testing landing pages, prices, or perhaps webinar content.
The second plan is there to make sure you’re creating impactful differences in your tests. So instead of testing two headlines that mean essentially the same thing (e.g. “Get good at golf fast” and “Improve your golf skills quickly”), you test things that are likely to create a different conversion rate (e.g. “3-hour practice recommended by golf pros”). And when you see increased or decreased conversion rates, you create the next test based on those results.
With good plans, you can get positive results from 50–75% of your tests.
Roger Dooley, Author of Brainfluence
Simple A/B testing often leads to a focus on individual elements of a landing page or campaign – a graphic, a headline, or a call to action. This can produce positive results, but often distracts one from looking at the bigger picture. My emphasis is on using behavior science to improve marketing, and that approach works best when applied to multiple elements of the customer journey.
Conversion rate (CR) is a measure of your ability to persuade visitors to take action the way you want them to. It’s a reflection of your effectiveness and customer satisfaction. For you to achieve your goals, visitors must first achieve theirs. Conversion rate, as a metric, is a single output. CR is a result of the many inputs that make up a customer experience. That experience has the chance to annoy, satisfy, or delight them. We need to optimize the inputs. Ad hoc A/B tests cannot do this. Companies that provide a superior experience are rewarded with higher conversion rates. Focus on improving customer experience, and you’ll find the results in your P&L, Balance Sheet, and Cash Flow statements.
Thinking beyond the individual A/B test as optimization is a natural part of gaining experience. We all probably started off by running a handful of ad hoc tests and that’s okay—that’s how we learn. However, as we grow, three things may happen which bring us closer towards becoming more strategic:
1. We become conscious of ways in which we can prioritize our testing ideas.
2. We become conscious of the structure of experiments and how tests can be designed.
3. We think of a series of upcoming tests which may or may not work together to maximize returns.
Here is one example of a test strategy/structure: the Best Shot Test. It aims to maximize the effect size and minimize the testing duration, at the cost of a blurred cause-effect relationship.
Running basic A/B tests based on best practices is okay for a start. But to get to the next level, it’s important to see how all the pieces of the puzzle fit together. This gives us a better understanding of what exactly we’re testing for, and helps us reach results that fit the specific goals of the organization.
Kristi Hines, Certified Digital Marketer
Depending on your business and the size of your marketing team, you may want to go beyond just testing your website or a landing page. You may want to expand your A/B testing to your entire online presence.
For example, try changing your main phrase (keyword phrase, catch phrase, elevator pitch, headline, etc.) not just on your website, but also in your homepage’s meta description, your social media bios and intros, your email signatures, and so on.
Why? Because here’s what’s going to happen. If you have consistent messaging across a bunch of channels that someone follows you on, and all of a sudden, they come to your landing page with an inconsistent message (the variant, if you will), then they may not convert simply because of the inconsistency of your message. Not because it wasn’t a good message, but because it wasn’t the message they were used to receiving from you.
As my own personal case example, when I change my main phrase “Kristi Hines is a freelance writer, business blogger, and certified digital marketer.” I don’t do it just on my website. I do it everywhere. And I don’t do it for just a week. I do it for at least two to three months unless it’s a complete dud (i.e., no leads in the first week at all).
But what I usually find is when I find a good phrase, I’ll start getting leads from all over the place. And usually they will say they went from one channel to the next. Hence, don’t just test. Test consistency across your entire presence, if possible. The results may be astonishing.
I do think that conversion rate optimization as a marketing discipline goes beyond just a series of A/B and/or multivariate tests. External factors, such as your brand and what other people say about the business (reviews and referrals), can also heavily impact how a site performs in attracting more actions from its intended users/visitors.
For instance, positive social proof (the number of people sharing or liking a particular product or brand on different social networks) can influence your customers’ buying process. Improving this aspect of the brand involves a whole different campaign, which calls for a more holistic approach integrated into your CRO program. Another factor to consider is the quality of traffic your campaigns are getting (through SEO, PPC, paid social campaigns, content marketing, etc.). The more targeted the traffic you acquire, the better your conversions will be.
A full-fledged conversion optimization program goes a long way and is a lot more beneficial than ad hoc testing.
So what are you waiting for? Let stepping up to conversion optimization be your #1 goal in the new year.
Don’t wait until it’s too late. Check and maintain your conversion rates often, just like you would your car. Image via Shutterstock.
A major faux pas I often see with conversion rates is that businesses only seem to address them when alarms are triggered.
Conversion rates require ongoing maintenance and should be regular focal points in your optimization and marketing efforts. Like a vehicle engine, they should be checked and maintained regularly.
When conversion rates aren’t what you had expected, it’s not uncommon for marketers and business owners to start making knee-jerk tweaks to on-page elements, hoping to lift conversions through A/B testing. While there may be some benefit to tweaking the size of buttons and adjusting landing page headlines and CTAs, there’s a great deal more to conversion optimization.
You must take a scientific approach that includes qualitative and quantitative data, rather than an à la carte strategy of piecing together what you think might be most effective.
Before making any changes to your landing pages, ask yourself these 10 critical questions:
1. Is there an audience/market fit for the product?
Analyzing the market for your product is something you do in the early stages of product development before launching. It’s part of gathering initial research on your audience and what they want or need. When you experience conversion problems, you may want to revisit this.
Use keyword tools and platforms like Google Trends to discover the volume of interest in your particular product. If the trend shows steady or growing interest, then ask how well the product in its current form aligns with the needs of the people searching for it.
Revisit your audience research and review the needs and problems of your customer. Make sure your product addresses those needs and provides a solution. Then look to how you position the product to ensure customers can see the value.
2. How accurate is your audience-targeting strategy?
There’s nothing quite as frustrating as watching hundreds of people visit your product or landing pages, only to be left with empty carts and no opt-ins.
It’s not easy to figure out what’s holding them back, but one of the first questions you should ask is whether you’re targeting the right people.
You may very well have a great product for the market, but if you’re presenting it to the wrong audience then you’ll never generate significant interest. This holds true for major, established brands as much as new startups.
3. Has trust been established?
Asking people to hand over personal and financial information on the web requires a huge leap of faith. You need to establish trust before asking them to add a product to their carts and complete the checkout process or even to give you their email address.
One study from Taylor Nelson Sofres showed that consumers might terminate as many as 70% of online purchases due to a lack of trust. People may really want what you’re selling, but if they don’t trust you, then they’ll never convert.
There are several ways to establish and grow trust, which include:
Testimonials, notable recognitions and brand affiliations help to build trust among prospective customers. Image via ContentMarketer.io.
4. Do customers understand the benefits and value?
For customers, everything comes down to value, which is the foundation of your unique selling proposition (USP). You can’t just convince someone to buy something through conversion tricks like big buttons and snappy graphics. If they don’t understand the product’s value or how it might benefit them, then they have no reason to buy.
You have to communicate the value of your products accurately and succinctly, breaking down what you’re selling to the most basic level so your customer sees the benefits, rather than just the features.
Here’s a great example that I took from Unbounce:
This landing page puts the value proposition right up front, mixing in high-impact benefit statements that help the value resonate with the audience.
5. What is the purchase experience really like?
It’s important to understand the journey your customer has to follow in order to reach the point where they’re willing to convert. While your landing pages or ecommerce site might look clean, the next step toward a conversion could make the whole thing come crashing down.
Providing top-notch user experiences across all devices is imperative, which includes minimizing the number of clicks necessary to complete the transaction.
Complicated site navigation and checkout processes are among the top causes of cart abandonment. Test your conversion paths internally, and consider trying out a service like UserTesting.com to get unbiased consumer feedback on your UX.
6. Where are the leaks in the funnel?
Figuring out where people exit your site can be a good indicator of why people leave — at the very least, it can help you narrow down where to start your investigation. Working backwards from the exit point can uncover friction points you didn’t even know existed.
Open your analytics and monitor the visitor flow. Pay close attention to where traffic enters, the number of steps users have to take while navigating from page to page, and trace the point where they typically exit.
Chart your own journey through your website while examining the on-page elements and user experience. Be sure to compare visitor behavior with your funnel visualization to determine when a leak is actually a leak.
7. What are the biggest friction points?
Friction in your sales funnel can be defined as anything that gets in the way of a conversion, either by slowing it down or stopping it completely. Some friction points might include:
Slow load times
Too many form fields
Too many clicks to complete an action
Hidden or missing information (like withholding shipping or contact information)
You can reduce friction on your own site by taking small steps and testing them to see how they alter your conversion rates. Ask as few questions as possible, avoid overwhelming the customer with too many options, aim for clean and pleasing designs and hire a pro copywriter to make a stronger connection through words.
One of the simplest examples of improvement through the removal of friction comes from Expedia.
One seemingly insignificant change can have a dramatic impact on conversion. Image source.
By removing the “company name” field — just a single field on the submission form — Expedia made it easier for people to complete the form. That reduction in friction led to a $12 million increase in profit.
Given the size of Expedia and the volume of traffic they see, you could expect a lift like this to surface through A/B testing. Changes don’t always bring about such dramatic results, but you’ll never know the potential unless you start testing to remove those friction points in your funnel.
8. How do my customers feel about the process?
When you have concerns about your conversion rates, often the best place to turn for insights is the consumers themselves.
Use feedback tools like a consumer survey to reach out to current customers, as well as those who abandoned their carts midway through the shopping experience. Ask them to provide information on why they made a purchase, why they chose not to, difficulties they experienced while on your site, feedback on design, etc.
This approach not only provides quality insight into what could be the likely cause of poor conversions, but also shows customers (and potential customers) that you’re making an effort to improve your site based on their feedback.
9. What does the data say?
Whenever possible, you want to make changes based on the data you’ve accumulated. Don’t focus solely on the conversion metrics of your website; analyze the data from your social ads and insights, visitor flow, bounce rates, time spent on page and more. Let the data drive your actions; otherwise you’re just firing wildly into the dark and hoping to hit your target.
Whether we’re talking about the ROI for content marketing or boosting ecommerce sales, data always matters. When you make changes, measure the new data and monitor those changes against the original. It’s the only way to know if you’re headed in the right direction.
10. How are my competitors selling this?
While I always warn people not to follow their competitors, you should still be aware of what they’re doing to leverage competitive insights garnered from their market research.
If your conversions are plummeting for specific products or services, look to the competition. How are they positioning their products? What are they doing differently to hook and engage the target audience? Draw comparisons and see how they align with the insights you’ve gleaned from your data to determine which elements you should test and improve upon.
Over to you for the questions
Now it’s time to look at your funnel and start asking the tough questions:
Do you need to re-verify product/market fit?
How accurate is your audience targeting?
Does your audience trust you?
Do your customers understand the benefits and value?
What’s the purchase experience like for the customer?
Where are the leaks in the funnel?
Are there major friction points killing conversions?
What feedback can customers offer about the process?
What does your data say about the conversion process?
What are your competitors doing right?
Remember to pay close attention to the numbers and make your changes based on data — not assumptions.
This post talks about why and how you should derive insights from your A/B test results and eventually apply them to your conversion rate optimization (CRO) plan.
Analyzing Your A/B Test Results
No matter how the overall result of your A/B test turned out—positive, negative, or inconclusive—it is imperative to delve deeper and gather insights. Not only can this help you aptly measure the success (or failure) of your A/B test, but it can also provide you with validations specific to your users.
As Bryan Clayton, CEO of GreenPal, puts it, “It amazes me how many organizations conflate the value of A/B testing. They often fail to understand that the value of testing is to get not just a lift but more of learning.
Sure 5% and 10% lifts in conversion are great; however, what you are trying to find out is the learning about what makes your customers say ‘yes’ to your offer. Only with A/B testing can you close the gap between customer logic and company logic and, gradually, over time, match the internal thought sequence that is going on in your customers’ heads when they are considering your offer on your landing page or within your app.”
Here is what you need to keep in mind while analyzing your A/B test results:
Tracking the Right Metric(s)
When you are analyzing A/B test results, check that you are looking at the correct metric. If multiple metrics are involved (secondary metrics along with the primary one), you need to analyze each of them individually.
Brandon Seymour, founder of Beymour Consulting, rightly points out: “It’s important to never rely on just one metric or data source. When we focus on only one metric at a time, we miss out on the bigger picture. Most A/B tests are designed to improve conversions. But what about other business impacts such as SEO?
It’s important to make an inventory of all metrics that matter to your business, before and after every test that you run. In the case of SEO, it may require you to wait for several months before the impacts surface. The same goes for data sources. Reporting and analytics platforms aren’t accurate 100 percent of the time, so it helps to use different tools to measure performance and engagement. It’s easier to isolate reporting inaccuracies and anomalies when you can compare results across different platforms.”
Conducting Post-Test Segmentation
Post-test segmentation allows you to deploy a variation based on a specific user segment. For instance, if you notice that a particular test affected new and returning users differently (and notably), you may want to apply your variation only to the segment it worked for.
However, slicing a test’s results across many different segments afterward all but guarantees that some segments will show positive results purely by random chance. To avoid being misled, define your goal (and the segments you care about) clearly before the test begins.
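The danger of inspecting many segments is the classic multiple-comparisons problem. One simple guard is a Bonferroni-style correction, dividing your significance threshold by the number of segments you inspect; the per-segment p-values below are made up for illustration:

```python
# hypothetical per-segment p-values from a finished test
segment_p_values = {
    "new_visitors": 0.012,
    "returning_visitors": 0.04,
    "mobile": 0.03,
    "desktop": 0.20,
}

alpha = 0.05
# Bonferroni correction: the more segments you inspect, the stricter the bar
corrected_alpha = alpha / len(segment_p_values)  # 0.0125 for four segments

significant = [seg for seg, p in segment_p_values.items() if p < corrected_alpha]
assert significant == ["new_visitors"]  # only one segment survives the correction
```

Note that three of the four segments would have looked “significant” against the uncorrected 0.05 threshold, which is exactly the trap described above.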
Delving Deeper into Visitor Behavior Analysis
You should also monitor visitor behavior analysis tools such as Heatmaps, Scrollmaps, Visitor Recordings and so on to gather further insights into A/B test results. For example, consider a search bar on an eCommerce website. An A/B test on the navigation bar works only if users actually use it. Visitor recordings can reveal if users are finding the navigation bar friendly and engaging. If the bar itself is complex to understand, all variations of it can fail to influence users.
Apart from giving insights on specific pages, visitor recordings can also help you understand user behavior across your entire website (or conversion funnel). You can learn how critical the page you are testing is in your conversion funnel.
Maintaining a Knowledge Repository
After analyzing your A/B tests, it is imperative to document the observations from them. This helps you not only transfer knowledge within the organization but also use these observations as a reference later.
For instance, suppose you are developing a hypothesis for your product page and want to test the product image size. With a structured repository, you can easily find similar past tests, which can help you identify patterns for that element.
To maintain a good knowledge base of your past tests, you need to structure it appropriately. You can organize past tests and the associated learnings in a matrix, differentiated by their “funnel stage” (ToFu, MoFu, or BoFu) and “the elements that were tested.” You can add other customized factors as well to enhance the repository.
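In practice, even a flat log of past tests becomes searchable once each entry carries the funnel stage and tested element. As a rough sketch (the test names, fields, and results below are all hypothetical):

```python
# Illustrative schema for a test-log entry; names and values are hypothetical.
tests = [
    {"name": "Larger product images", "funnel_stage": "MoFu",
     "element": "product image", "result": "win", "lift": 0.08},
    {"name": "Trust badge on checkout", "funnel_stage": "BoFu",
     "element": "trust seal", "result": "win", "lift": 0.05},
    {"name": "Homepage hero copy", "funnel_stage": "ToFu",
     "element": "headline", "result": "inconclusive", "lift": 0.0},
]

def find_similar(tests, funnel_stage=None, element=None):
    """Filter past tests by funnel stage and/or the element tested."""
    return [t for t in tests
            if (funnel_stage is None or t["funnel_stage"] == funnel_stage)
            and (element is None or element in t["element"])]

# Before testing product image size, look up related past tests.
for t in find_similar(tests, element="image"):
    print(f'{t["name"]} ({t["funnel_stage"]}): {t["result"]}, lift {t["lift"]:+.0%}')
```

The same two fields map directly onto the matrix described above; a spreadsheet with one column per field works just as well as code.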
Look at how Sarah Hodges, co-founder of Intelligent.ly, keeps track of her A/B test results: “At a previous company, I tracked tests in a spreadsheet on a shared drive that anyone across the organization could access. The document included fields for:
Start and end dates
Each campaign row also linked to a PDF with a full summary of the test hypotheses, campaign creative, and results. This included a high-level overview, as well as detailed charts, graphs, and findings.
At the time of deployment, I sent out a launch email to key stakeholders with a summary of the campaign hypothesis and test details, and attached the PDF. I followed up with a results summary email at the conclusion of each campaign.
Per my experience, concise email summaries were well-received; few users ever took a deep dive into the more comprehensive document. Earlier, I created PowerPoint decks for each campaign I deployed, but ultimately found that this was time-consuming and impeded the agility of our testing program.”
Applying the Learning to Your Next A/B Test
After you have analyzed the tests and documented them according to a predefined theme, make sure that you visit the knowledge repository before conducting any new test.
The results from past tests shed light on user behavior on your website. With a better understanding of that behavior, your CRO team can build stronger hypotheses. This can also help the team create on-page surveys that are contextual to a particular set of site visitors.
Moreover, results from past tests can help your team come up with new hypotheses quickly. The team can identify areas where a win from a past A/B test can be replicated. It can also revisit failed tests, understand why they failed, and steer clear of repeating the same mistakes.
How do you analyze your A/B test results? Do you base your new test hypothesis on past learning? Write to us in the comments below.