
Your frequently asked conversion optimization questions, answered!

Reading Time: 28 minutes

Got a question about conversion optimization?

Chances are, you’re not alone!

This Summer, WiderFunnel participated in several virtual events. And each one, from full-day summit to hour-long webinar, ended with a TON of great questions from all of you.

So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.

As you’ll notice, many experts and thought-leaders weighed in on your questions, including Chris Goward, Nick So, Hudson Arnold, André Morys, John Ekman, David Darmanin, and Jamie Elgie.

Now, without further introduction…

Your conversion optimization questions

Optimization Strategy

  1. What do you see as the most common mistake people make that has a negative effect on website conversion?
  2. What are the most important questions to ask in the Explore phase?
  3. Is there such a thing as too much testing and / or optimizing?

Personalization

  1. Do you get better results with personalization or A/B testing or any other methods you have in mind?
  2. Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
  3. With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
  4. How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

SEO versus CRO

  1. How do you avoid harming organic SEO when doing conversion optimization?

Getting Buy-in for Experimentation

  1. When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher ups or smaller wins?
  2. Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

CRO for Low Traffic Sites

  1. Do you have any suggestions for success with lower traffic websites?
  2. What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
  3. How far can I go with funnel optimization and testing when it comes to small local business?

Tips from an In-House Optimization Champion

  1. How do you get buy-in from major stakeholders, like your CEO, to go with a conversion optimization strategy?
  2. What has surprised you or stood out to you while doing CRO?

Optimization Across Industries

  1. Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?
  2. When you have a longer sales process, getting them to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
  3. You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
  4. Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
  5. For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
  6. Do you find that testing strategies differ cross-culturally?

Experiment Design & Setup

  1. How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
  2. I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure the test you wanted to win seems like it’s the winner?
  3. When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
  4. Is there a way to conclusively tell why a test lost or was inconclusive?
  5. How many visits do you need to get to statistically relevant data from any individual test?
  6. We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
  7. When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Testing and Technology

  1. There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

Your questions, answered

Q: What do you see as the most common mistake people make that has a negative effect on website conversion?

Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.

I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.

And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.

It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.

Back to Top

Q: What are the most important questions to ask in the Explore phase?

Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason). In it, we are looking for insights about what questions to ask and validate. We are trying to identify…

  • What are the barriers to conversion?
  • What are the motivational triggers for your audience?
  • Why are people buying from you?

And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.

Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.

Further reading:

  • Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process, here.

Back to Top

Q: Is there such a thing as too much testing and / or optimizing?

Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website…they can’t improve forever. The question many marketers have is, how do I know how long to do this? Are there going to be diminishing returns? By putting in the same effort, will I get smaller and smaller results?

But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.

So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.

Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.

Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.

Back to Top

Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?

Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.

I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.

At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.

Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.

Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.

When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.

We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step by step process that makes sure we are only creating overhead where it’s necessary and will create value.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?

Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.

But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. And it’s practically impossible for a marketing department to create infinite marketing messages.

Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.

What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.

Back to Top

Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

A PANEL RESPONSE

André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.

Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.

John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common. Because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques plays out.

Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.

One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.

The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.

David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most things that are complex, I think it’s most valuable when you have a high ticket price or very high value, where that touch of personalization has a big impact.

With Hotjar, we’re much more volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. And keeping as many common threads as possible is important, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you avoid harming organic SEO when doing conversion optimization?

Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they support optimization fully. They do not penalize companies for running A/B tests, if they are set up properly and the company is using a proper tool.

On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.

If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.

Google looks at pogo-sticking, where people land on the SERP, click a result, and then return to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.

Video Resource: Watch SEO Wizard Rand Fishkin’s talk from CTA Conf 2017, “Why We Can’t Do SEO without CRO”

Back to Top

Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?

Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under-the-radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.

This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.

A sample PIE prioritization analysis.

The three factors are Potential, Importance, and Ease. One important aspect within Ease is political ease: look for areas that have political ease, which means there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, and create momentum, and then you can start sharing that throughout the organization to build that buy-in.
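
To make the scoring concrete, here is a minimal Python sketch of a PIE-style prioritization, assuming a simple 1–10 score for each factor and an unweighted average (the pages and scores are invented for illustration; this is not WiderFunnel’s internal tooling):

```python
# Minimal PIE prioritization sketch (hypothetical pages and scores).
# Each candidate test area is scored 1-10 on Potential, Importance, and Ease;
# the PIE score is the simple average, and the highest-scoring area is tested first.

candidates = [
    # (page, potential, importance, ease) -- "ease" includes political ease
    ("Checkout page",  8, 9, 4),
    ("Pricing page",   7, 7, 7),
    ("Homepage",       6, 9, 2),   # important, but politically sensitive
    ("Landing page A", 8, 5, 9),
]

def pie_score(potential: int, importance: int, ease: int) -> float:
    """Average the three PIE factors into a single priority score."""
    return (potential + importance + ease) / 3

ranked = sorted(candidates, key=lambda c: pie_score(*c[1:]), reverse=True)
for page, p, i, e in ranked:
    print(f"{page:<15} P={p} I={i} E={e}  PIE={pie_score(p, i, e):.1f}")
```

In this hypothetical ranking, the homepage slides down the list despite its importance because its political ease is low – exactly the “get quieter wins first” dynamic described above.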

Further Reading: Marketers from ASICS’ global e-commerce team weigh in on evangelizing optimization at a global organization in this post, “A day in the life of an optimization champion”

Back to Top

Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

Nick So: Besides the obvious senior leadership and key decision-makers as you mention, we find getting buy-in from related departments like branding, marketing, design, copywriters and content managers, etc., can be very helpful.

Having these teams on board not only helps with the overall approval process, but also helps ensure winning tests and strategies are aligned with your overall business and marketing strategy.

You should also consider involving more tangentially-related teams like customer support. Not only does this make them a part of the process and testing culture, but customer-facing teams can also be a great source of business insights and test ideas!

Back to Top

Q: Do you have any suggestions for success with lower traffic websites?

Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.

We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where insights can be validated more easily through multiple tests.)

Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.

If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or guide the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However, with this strategy, stakeholders have to be aware that increases in this secondary goal may not be tied directly to increases in the primary goal at the same rate.
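
To make that caveat concrete, here is a small Python sketch with invented funnel numbers, showing how a lift in an upstream goal like add-to-cart can fail to move the primary goal if the downstream rate slips:

```python
# Hypothetical illustration: optimizing an upstream metric (add-to-cart)
# only helps the primary metric (transactions) if the downstream
# cart-to-purchase rate holds steady.

visitors = 10_000

# Baseline funnel (made-up numbers)
baseline_add_to_cart_rate = 0.08
baseline_cart_to_purchase = 0.30
baseline_transactions = visitors * baseline_add_to_cart_rate * baseline_cart_to_purchase

# Variation: +25% add-to-cart, but the extra carts convert less often downstream
variation_add_to_cart_rate = 0.10
variation_cart_to_purchase = 0.24
variation_transactions = visitors * variation_add_to_cart_rate * variation_cart_to_purchase

print(f"Baseline transactions:  {baseline_transactions:.0f}")
print(f"Variation transactions: {variation_transactions:.0f}")
# A 25% lift in the upstream goal yields no lift in the primary goal here,
# because the downstream rate dropped from 30% to 24%.
```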

Back to Top

Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?

Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.

But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand the opportunities and the barriers your visitors might be facing, and drilling into their perspective. Then you’ll have a more powerful test setup.

You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.

In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?

Back to Top

 

Q: How far can I go with funnel optimization and testing when it comes to small local business?

A PANEL RESPONSE

David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.

John Ekman:

If you can’t test to significance, you should turn to qualitative research.

That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.

André Morys: Testing with too small a sample size is really dangerous because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming 70% uplift, when the numbers are 64 versus 27 conversions. And this is really dangerous because that result is bull sh*t.
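
To illustrate André’s point, here is a rough Python sketch using the conversion counts he quotes; the visitor counts are assumptions added for illustration, and the interval arithmetic is deliberately crude:

```python
import math

# Rough illustration of how unstable an uplift estimate is at small sample sizes.
# The conversion counts come from the example above; the visitor counts are made up.
conv_a, visitors_a = 27, 5_000   # control (hypothetical traffic)
conv_b, visitors_b = 64, 5_000   # variation (hypothetical traffic)

def rate_with_ci(conversions: int, visitors: int, z: float = 1.96):
    """Conversion rate with an approximate 95% Wald confidence interval."""
    p = conversions / visitors
    half_width = z * math.sqrt(p * (1 - p) / visitors)
    return p, max(p - half_width, 0.0), p + half_width

p_a, lo_a, hi_a = rate_with_ci(conv_a, visitors_a)
p_b, lo_b, hi_b = rate_with_ci(conv_b, visitors_b)

print(f"Control:   {p_a:.2%}  (95% CI {lo_a:.2%} - {hi_a:.2%})")
print(f"Variation: {p_b:.2%}  (95% CI {lo_b:.2%} - {hi_b:.2%})")
print(f"Point-estimate uplift: {p_b / p_a - 1:.0%}")
# Combining the interval extremes gives a crude sense of how far off the
# point estimate could be at these volumes.
print(f"Plausible uplift range: {lo_b / hi_a - 1:.0%} to {hi_b / lo_a - 1:.0%}")
```

At these volumes, the point estimate alone tells you very little – the plausible uplift spans a several-fold range.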

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?

Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.

For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.

It really sells itself, though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as that overall strategy of making our display media spend – and all of our media spend – more targeted.

Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.

Back to Top

Q: What has surprised you or stood out to you while doing CRO?

Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. When there are things that we are all convinced we should change on our website, or in our messaging in general, we test them and actually find out.

We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.

One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.

If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.

It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.

Case Study: Read the details of how Jamie’s company, weBoost, saw a 100% lift in year-over-year conversion rate with an optimization program.

Back to Top

Q: Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?

Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to

  1. Track your leads and opportunities to the variation,
  2. Then, track them through to the sale,
  3. And then look at whether average order value changes between the variations, which indicates the quality of the leads.

Because it’s easy to measure lead volume between variations. But if lead quality changes, then that makes a big impact.
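
As a sketch of what that tracking can look like once each lead is tagged with the variation it came from (the records and values below are hypothetical), you can roll up lead volume, close rate, and average order value per variation:

```python
from collections import defaultdict

# Hypothetical leads, each tagged with the variation the visitor saw and
# followed through the CRM to a closed (or lost) sale.
leads = [
    # (variation, closed, order_value)
    ("A", True, 1200), ("A", False, 0), ("A", True,  900), ("A", False, 0),
    ("B", True, 2500), ("B", False, 0), ("B", False, 0),   ("B", True, 2100),
]

stats = defaultdict(lambda: {"leads": 0, "sales": 0, "revenue": 0})
for variation, closed, order_value in leads:
    stats[variation]["leads"] += 1
    if closed:
        stats[variation]["sales"] += 1
        stats[variation]["revenue"] += order_value

for variation in sorted(stats):
    s = stats[variation]
    close_rate = s["sales"] / s["leads"]
    aov = s["revenue"] / s["sales"] if s["sales"] else 0
    print(f"Variation {variation}: {s['leads']} leads, "
          f"{close_rate:.0%} close rate, AOV ${aov:,.0f}")
```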

We actually have a case study about this with Magento. We asked the question, “Which of these calls-to-action is actually generating the most valuable leads?”, and ran an experiment to find out. We tracked the leads all the way through to sale. This helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?

Case Study: Discover how Magento increased lead volume and lead quality in the full case study.

Back to Top

Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?

Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.

However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.

If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.

Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.
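
A back-of-the-envelope version of that calculation, assuming you can assign a monetary value to each goal (the values and volumes below are invented), might look like this in Python:

```python
# Hypothetical net-benefit check for a test that lifts one goal
# while cannibalizing another.
demo_value = 500    # assumed value of a demo request, in dollars
ebook_value = 20    # assumed value of an ebook download, in dollars

# Monthly baseline volumes (made up)
baseline_demos, baseline_ebooks = 100, 800

# Observed test effects: +50% demos, -75% ebooks (from the example above)
variation_demos = baseline_demos * 1.50
variation_ebooks = baseline_ebooks * 0.25

baseline_value = baseline_demos * demo_value + baseline_ebooks * ebook_value
variation_value = variation_demos * demo_value + variation_ebooks * ebook_value

print(f"Baseline goal value:  ${baseline_value:,.0f}")
print(f"Variation goal value: ${variation_value:,.0f}")
print(f"Net impact:           ${variation_value - baseline_value:,+.0f}")
# With these assumed goal values the tradeoff is positive;
# with a cheaper demo or a pricier ebook it could easily flip negative.
```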

Back to Top

Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?

Nick So: When we help a client determine conversion metrics…

…we always suggest following the money.

Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.

For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?

Back to Top

Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?

Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.

We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.

What we’ve found is that people are people: whether they’re shopping for a server, shopping for socks, or donating to third-world countries, they go through the same mental process in each case.

The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.

Most marketers spend most of their time thinking about their near-field competitors rather than about different industries, because it’s overwhelming to look at all of the other opportunities. But we are often able to look at an experience in a completely different way, because we are able to look at it through the lens of a different industry. That is very powerful.

Back to Top

Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?

Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.

In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).

When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact and the internal company benefits in their decision, and negotiate prioritization and scheduling between teams.

I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.

If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).

Back to Top

 

Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?

Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, UK, Canada, Japan, Korea, Spain, Italy and for the Olympics store, which is itself a global e-commerce experience in one site!

There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.

Back to Top

Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?

Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.

And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. The PIE Framework allows you to prioritize ideas and test areas based on three factors: the potential for improvement, the importance to the business, and the ease of implementation. Sometimes these are a little subjective, but the more data you have to back them up, the better your focus and effort will be in delivering results.

Back to Top

Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure the test you wanted to win seems like it’s the winner?

Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.

There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. And if the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.

But you should be looking at one metric to tell you whether or not a test won.

Further Reading: Learn more about proper design of experiments in this blog post.

Back to Top

 

Q: When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.

Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.

So, run your tests a little bit longer and aim for statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. You should aim for that high level of statistical significance, and iron out that seasonality. And sometimes you’ll want to look at monthly seasonality as well, and retest questionable things within high and low urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.

Further Reading: You can’t make business decisions based on misleading A/B test results. Learn how to avoid the top 3 mistakes that make your A/B test results invalid in this post.

Back to Top

Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?

Chris Goward: Developing powerful hypotheses depends on having workable theories. Seeking to determine the “Why” behind the results is one of the most interesting parts of the work.

The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.

As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!

Back to Top

Q: How many visits do you need to get to statistically relevant data from any individual test?

Chris Goward: The number of visits is just one of the variables that determines statistical significance. The conversion rate of the Control and conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e. sample size), enough conversions, and the conversion rate delta is great enough.

Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!)

When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule-of-thumb, you should plan for smaller isolation tests to run longer, as the impact on conversion rate may be less. The test may require more conversions to potentially achieve confidence.
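
For rough planning, the standard two-proportion sample-size formula can stand in for a duration calculator. This Python sketch uses hard-coded z-values for roughly 95% confidence and 80% power; the baseline rate, detectable lift, and traffic figures are assumptions you would replace with your own:

```python
import math

def visitors_per_variation(baseline_rate: float, relative_lift: float,
                           z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Approximate sample size per variation for a two-proportion test
    at ~95% confidence and ~80% power (hence the hard-coded z-values)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs
baseline_rate = 0.03        # 3% conversion rate on the control
relative_lift = 0.15        # smallest lift worth detecting: +15%
weekly_traffic = 20_000     # visitors entering the test per week
variations = 2              # control + one challenger

n = visitors_per_variation(baseline_rate, relative_lift)
weeks = (n * variations) / weekly_traffic
print(f"~{n:,} visitors per variation, roughly {weeks:.1f} weeks of runtime")
```

With the assumed inputs this works out to roughly 24,000 visitors per variation, or about two and a half weeks of runtime at the assumed traffic.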

Larger, more drastic cluster changes would typically run for a shorter period of time, as they have more potential for a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t hinder you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.

Often, people that are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and one variation is still not statistically significant over another, then your test may not ever yield a clear winner and you should revise your test plan or reduce the number of variations being tested.

Further Reading: Do you have to wait for each test to reach statistical significance? Learn more in this blog post: “The more tests, the better!” and other A/B testing myths, debunked

Back to Top

Q: We are new to optimization (we’ve had a few quick wins with A/B testing and are working toward a geo-targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?

Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.

Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.

If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.

Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:

  • You have other tests ready to run with the same traffic sources
  • The test results are showing high volatility or have stabilized
  • The test insights will be important for the organization

There’s an opportunity cost to every test. You could always be testing something else, so you need to constantly ask whether this is the best test to be running now versus the cost and potential benefit of the next test in your conversion strategy.

Back to Top

 

Q: There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

A PANEL RESPONSE

John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look in the minds of whoever is in front of the screen, I think these methods are quite dangerous.

Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff. A tradeoff between understanding the why and just getting growth. If you want to understand the why infinitely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few have enough traffic to multivariate test everything.

But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.

There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works…if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things to ask yourself: Well, I wonder why that won, and start from there.

The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.

David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap, that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.

Creating a unique customer experience is going to become more and more important.

If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.

Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere and does everything differently – the one that isn’t fully ‘optimized’ and is more human.

There is a danger in optimization itself becoming too optimized. If we eliminate the human aspect, we’re kind of screwed.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

What conversion optimization questions do you have?

Add your questions in the comments section below!


Your mobile website optimization guide (or, how to stop frustrating your mobile users)

Reading Time: 15 minutes

One lazy Sunday evening, I decided to order Thai delivery for dinner. It was a Green-Curry-and-Crispy-Wonton kind of night.

A quick Google search from my iPhone turned up an ad for a food delivery app. In that moment, I wanted to order food fast, without having to dial a phone number or speak to a human. So, I clicked.

From the ad, I was taken to the company’s mobile website. There was a call-to-action to “Get the App” below the fold, but I didn’t want to download a whole app for this one meal. I would just order from the mobile site.

Dun, dun, duuuun.

Over the next minute, I had one of the most frustrating ordering experiences of my life. Label-less hamburger menus, the inability to edit my order, and an overall lack of guidance through the ordering process led me to believe I would never be able to adjust my order from ‘Chicken Green Curry’ to ‘Prawn Green Curry’.

After 60 seconds of struggling, I gave up, utterly defeated.

I know this wasn’t a life-altering tragedy, but it sure was an awful mobile experience. And I bet you have had a similar experience in the last 24 hours.

Let’s think about this for a minute:

  1. This company paid good money for my click
  2. I was ready to order online: I was their customer to lose
  3. I struggled for about 30 seconds longer than most mobile users would have
  4. I gave up and got a mediocre burrito from the Mexican place across the street.

Not only was I frustrated, but I didn’t get my tasty Thai. The experience left a truly bitter taste in my mouth.

10 test ideas for optimizing your mobile website!

Get this checklist of 10 experiment ideas you should test on your mobile website.

Why is mobile website optimization important?

In 2017, every marketer ‘knows’ the importance of the mobile shopping experience. Americans spend more time on mobile devices than on any other device. But we are still failing to meet our users where they are on mobile.

Americans spend 54% of online time on mobile devices. Source: KPCB.

For most of us, it is becoming more and more important to provide a seamless mobile experience. But here’s where it gets a little tricky…

“Conversion optimization”, and the term “optimization” in general, often imply improving conversion rates. But a seamless mobile experience does not necessarily mean a high-converting mobile experience. It means one that meets your user’s needs and propels them along the buyer journey.

I am sure there are improvements you can test on your mobile experience that will lift your mobile conversion rates, but you shouldn’t hyper-focus on a single metric. Instead, keep in mind that mobile may just be a step within your user’s journey to purchase.

So, let’s get started! First, I’ll delve into your user’s mobile mindset, and look at how to optimize your mobile experience. For real.

You ready?

What’s different about mobile?

First things first: let’s acknowledge that your user is the same human being whether they are shopping on a mobile device, a desktop computer, a laptop, or in-store. Agreed?

So, what’s different about mobile? Well, back in 2013, Chris Goward said, “Mobile is a state of being, a context, a verb, not a device. When your users are on mobile, they are in a different context, a different environment, with different needs.”

Your user is the same person when she is shopping on her iPhone, but she is in a different context. She may be in a store comparing product reviews on her phone, or she may be on the go looking for a good cup of coffee, or she may be trying to order Thai delivery from her couch.

Your user is the same person on mobile, but in a different context, with different needs.

This is why many mobile optimization experts recommend having a mobile website versus using responsive design.

Responsive design is not an optimization strategy. We should stop treating mobile visitors as ‘mini-desktop visitors’. People don’t use mobile devices instead of desktop devices, they use them in addition to desktop in a whole different way.

– Talia Wolf, Founder & Chief Optimizer at GetUplift

Step one, then, is to understand who your target customer is, and what motivates them to act in any context. This should inform all of your marketing and the creation of your value proposition.

(If you don’t have a clear picture of your target customer, you should re-focus and tackle that question first.)

Step two is to understand how your user’s mobile context affects their existing motivation, and how to facilitate their needs on mobile to the best of your ability.

Understanding the mobile context

To understand the mobile context, let’s start with some stats and work backwards.

  • Americans spend more than half (54%) of their online time on mobile devices (Source: KPCB, 2016)
  • Mobile accounts for 60% of time spent shopping online, but only 16% of all retail dollars spent (Source: ComScore, 2015)

Insight: Americans are spending more than half of their online time on their mobile devices, but there is a huge gap between time spent ‘shopping’ online, and actually buying.

  • 29% of smartphone users will immediately switch to another site or app if the original site doesn’t satisfy their needs (Source: Google, 2015)
  • Of those, 70% switch because of lagging load times and 67% switch because it takes too many steps to purchase or get desired information (Source: Google, 2015)

Insight: Mobile users are hypersensitive to slow load times, and too many obstacles.

So, why the heck are our expectations for immediate gratification so high on mobile? I have a few theories.

We’re reward-hungry

Mobile devices provide constant access to the internet, which means a constant expectation for reward.

“The fact that we don’t know what we’ll find when we check our email, or visit our favorite social site, creates excitement and anticipation. This leads to a small burst of pleasure chemicals in our brains, which drives us to use our phones more and more.” – TIME, “You asked: Am I addicted to my phone?”

If non-stop access has us primed to expect non-stop reward, is it possible that having a negative mobile experience is even more detrimental to our motivation than a negative experience in another context?

When you tap into your Facebook app and see three new notifications, you get a burst of pleasure. And you do this over, and over, and over again.

So, when you tap into your Chrome browser and land on a mobile website that is difficult to navigate, it makes sense that you would be extra annoyed. (No burst of fun reward chemicals!)

A mobile device is a personal device

Another facet to mobile that we rarely discuss is the fact that mobile devices are personal devices. Because our smartphones and wearables are with us almost constantly, they often feel very intimate.

In fact, our smartphones are almost like another limb. According to research from dscout, the average cellphone user touches his or her phone 2,167 times per day. Our thumbprints are built into them, for goodness’ sake.

Just think about your instinctive reaction when someone grabs your phone and starts scrolling through your pictures…

It is possible, then, that our expectations are higher on mobile because the device itself feels like an extension of us. Any experience you have on mobile should speak to your personal situation. And if the experience is cumbersome or difficult, it may feel particularly dissonant because it’s happening on your mobile device.

User expectations on mobile are extremely high. And while you can argue that mobile apps are doing a great job of meeting those expectations, the mobile web is failing.

If yours is one of the millions of organizations without a mobile app, your mobile website has got to work harder. Because a negative experience with your brand on mobile may have a stronger effect than you can anticipate.

Even if you have a mobile app, you should recognize that not everyone is going to use it. You can’t completely disregard your mobile website. (As illustrated by my extremely negative experience trying to order food.)

You need to think about how to meet your users where they are in the buyer journey on your mobile website:

  1. What are your users actually doing on mobile?
  2. Are they just seeking information before purchasing from a computer?
  3. Are they seeking information on your mobile site while in your actual store?

The great thing about optimization is that you can test to pick off low-hanging fruit, while you are investigating more impactful questions like those above. For instance, while you are gathering data about how your users are using your mobile site, you can test usability improvements.

Usability on mobile websites

If you are looking to get a few quick wins to prove the importance of a mobile optimization program, usability is a good place to begin.

The mobile web presents unique usability challenges for marketers. And given your users’ ridiculously high expectations, your mobile experience must address these challenges.

mobile website optimization - usability
This image represents just a few mobile usability best practices.

Below are four of the core mobile limitations, along with recommendations from the WiderFunnel Strategy team around how to address (and test) them.

Note: For this section, I relied heavily on research from the Nielsen Norman Group. For more details, click here.

1. The small screen struggle

No surprise here. Compared to desktop and laptop screens, even the biggest smartphone screen is smaller, which means it displays less content.

“The content displayed above the fold on a 30-inch monitor requires 5 screenfuls on a small 4-inch screen. Thus mobile users must (1) incur a higher interaction cost in order to access the same amount of information; (2) rely on their short-term memory to refer to information that is not visible on the screen.” – Nielsen Norman Group, “Mobile User Experience: Limitations and Strengths”

Strategist recommendations:

Consider persistent navigation and calls-to-action. Because of the smaller screen size, your users often need to do a lot of scrolling. If your navigation and main call-to-action aren’t persistent, you are asking your users to scroll down for information, and scroll back up for relevant links.

Note: Anything persistent takes up screen space as well. Make sure to test this idea before implementing it to make sure you aren’t stealing too much focus from other important elements on your page.
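If you do decide to test a persistent call-to-action, a few lines of script in your testing tool are usually enough to prototype the idea. Here is a minimal TypeScript sketch, assuming a hypothetical “.add-to-cart” button; the selector and style values are placeholders, not a reference to any real site.

```typescript
// Minimal sketch: pin the primary call-to-action to the bottom of the
// viewport so it stays visible while the user scrolls.
// The ".add-to-cart" selector and style values are hypothetical.
function makeCtaPersistent(selector: string = ".add-to-cart"): void {
  const cta = document.querySelector<HTMLElement>(selector);
  if (!cta) return; // nothing to do if the element isn't on this page

  cta.style.position = "fixed";
  cta.style.bottom = "0";
  cta.style.left = "0";
  cta.style.width = "100%";
  cta.style.zIndex = "1000";

  // Reserve space at the bottom of the page so the pinned button
  // doesn't cover the last piece of content.
  document.body.style.paddingBottom = `${cta.offsetHeight}px`;
}

makeCtaPersistent();
```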

2. The touchy touchscreen

Two main issues with the touchscreen (an almost universal trait of today’s mobile devices) are typing and target size.

Typing on a soft keyboard, like the one on your user’s iPhone, requires them to constantly divide their attention between what they are typing, and the keypad area. Not to mention the small keypad and crowded keys…

Target size refers to the size of a clickable target, which needs to be a lot larger on a touchscreen than it does when your user has a mouse.

So, you need to make space for larger targets (bigger call-to-action buttons) on a smaller screen.

Strategist recommendations:

Test increasing the size of your clickable elements. Google provides recommendations for target sizing:

You should ensure that the most important tap targets on your site—the ones users will be using the most often—are large enough to be easy to press, at least 48 CSS pixels tall/wide (assuming you have configured your viewport properly).

Less frequently-used links can be smaller, but should still have spacing between them and other links, so that a 10mm finger pad would not accidentally press both links at once.
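If you want a quick way to flag elements that fall short of that guideline before designing a test, a small audit script can help. The following is a rough TypeScript sketch, not an official Google tool; the selector list and console output are assumptions you would adapt to your own markup.

```typescript
// Sketch: flag tap targets smaller than the ~48 CSS pixel guideline.
// Run it in the browser console on your mobile layout.
const MIN_TAP_SIZE = 48;

function auditTapTargets(): void {
  const targets = document.querySelectorAll<HTMLElement>(
    "a, button, input, select, [role='button']"
  );

  targets.forEach((el) => {
    const { width, height } = el.getBoundingClientRect();
    // Ignore hidden elements (zero-sized boxes).
    if (width === 0 || height === 0) return;
    if (width < MIN_TAP_SIZE || height < MIN_TAP_SIZE) {
      console.warn(
        `Tap target below ${MIN_TAP_SIZE}px:`,
        el.tagName,
        `${Math.round(width)}x${Math.round(height)}`
      );
    }
  });
}

auditTapTargets();
```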

You may also want to test improving the clarity around what is clickable and what isn’t. This can be achieved through styling, and is important for reducing ‘exploratory clicking’.

When a user has to click an element to 1) determine whether or not it is clickable, and 2) determine where it will lead, this eats away at their finite motivation.

Another simple tweak: Test your call-to-action placement. Does it match with the motion range of a user’s thumb?

3. Mobile shopping experience, interrupted

As the term mobile implies, mobile devices are portable. And because we can use ‘em in many settings, we are more likely to be interrupted.

“As a result, attention on mobile is often fragmented and sessions on mobile devices are short. In fact, the average session duration is 72 seconds […] versus the average desktop session of 150 seconds.” – Nielsen Norman Group

Strategist recommendations:

You should design your mobile experience for interruptions, prioritize essential information, and simplify tasks and interactions. This goes back to meeting your users where they are within the buyer journey.

According to research by SessionM (published in 2015), 90% of smartphone users surveyed used their phones while shopping in a physical store to 1) compare product prices, 2) look up product information, and 3) check product reviews online.

You should test adjusting your page length and messaging hierarchy to facilitate your user’s main goals. This may be browsing and information-seeking versus purchasing.

4. One window at a time

As I’m writing this post, I have 11 tabs open in Google Chrome, split between two screens. If I click on a link that takes me to a new website or page, it’s no big deal.

But on mobile, your user is most likely viewing one window at a time. They can’t split their screen to look at two windows simultaneously, so you shouldn’t ask them to. Mobile tasks should be easy to complete in one app or on one website.

The more your user has to jump from page to page, the more they have to rely on their memory. This increases cognitive load, and decreases the likelihood that they will complete an action.

Strategist recommendations:

Your navigation should be easy to find and it should contain links to your most relevant and important content. This way, if your user has to travel to a new page to access specific content, they can find their way back to other important pages quickly and easily.

In e-commerce, we often see people “pogo-sticking”—jumping from one page to another continuously—because they feel that they need to navigate to another page to confirm that the information they have provided is correct.

A great solution is to ensure that your users can view key information that they may want to confirm (prices / products / address) on any page. This way, they won’t have to jump around your website and remember these key pieces of information.

Implementing mobile website optimization

As I’m sure you’ve noticed by now, the phrase “you should test” is peppered throughout this post. That’s because understanding the mobile context and reviewing usability challenges and recommendations are only the first steps.

If you can, you should test any recommendation made in this post. Which brings us to mobile website optimization. At WiderFunnel, we approach mobile optimization just like we would desktop optimization: with process.

You should evaluate and prioritize mobile web optimization in the context of all of your marketing. If you can achieve greater Return on Investment by optimizing your desktop experience (or another element of your marketing), you should start there.

But assuming your mobile website ranks high within your priorities, you should start examining it from your user’s perspective. The WiderFunnel team uses the LIFT Model framework to identify problem areas.

The LIFT Model allows us to identify barriers to conversion, using the six factors of Value Proposition, Clarity, Relevance, Anxiety, Distraction, and Urgency. For more on the LIFT Model, check out this blog post.

A LIFT illustration

I asked the WiderFunnel Strategy team to do a LIFT analysis of the food delivery website that gave me so much grief that Sunday night. Here are some of the potential barriers they identified on the checkout page alone:

Mobile website LIFT analysis
This wireframe is based on the food delivery app’s checkout page. Each of the numbered LIFT points corresponds with the list below.
  1. Relevance: There is valuable page real estate dedicated to changing the language, when a smartphone will likely detect your language on its own.
  2. Anxiety: There are only 3 options available in the navigation: Log In, Sign Up, and Help. None of these are helpful when a user is trying to navigate between key pages.
  3. Clarity: Placing the call-to-action at the top of the page creates disjointed eyeflow. The user must scan the page from top to bottom to ensure their order is correct.
  4. Clarity: The “Order Now” call-to-action and “Allergy & dietary information links” are very close together. Users may accidentally tap one, when they want to tap the other.
  5. Anxiety: There is no confirmation of the delivery address.
  6. Anxiety: There is no way to edit an order within the checkout. A user has to delete items, return to the menu and add new items.
  7. Clarity: The font size is very small, making the content difficult to read.
  8. Clarity: The “Cash” and “Card” icons have no context. Is a user supposed to select one, or are these just the payment options available?
  9. Distraction: The dropdown menus in the footer include many links that might distract a user from completing their order.

Needless to say, my frustrations were confirmed. The WiderFunnel team ran into the same obstacles I had run into, and identified dozens of barriers that I hadn’t.

But what does this mean for you?

When you are first analyzing your mobile experience, you should try to step into your user’s shoes and actually use your experience. Give your team a task and a goal, and walk through the experience using a framework like LIFT. This will allow you to identify usability issues within your user’s mobile context.

Every LIFT point is a potential test idea that you can feed into your optimization program.

Case study examples

This wouldn’t be a WiderFunnel blog post without some case study examples.

This is where we put ‘best mobile practices’ to the test. Because the smallest usability tweak may make perfect sense to you, yet be off-putting to your users.

In the following three examples, we put our recommendations to the test.

Mobile navigation optimization

In mobile design in particular, we tend to assume our users understand ‘universal’ symbols.

Aritzia Hamburger Menu
The ‘Hamburger Menu’ is a fixture on mobile websites. But does that mean it’s a universally understood symbol?

But, that isn’t always the case. And it is certainly worth testing to understand how you can make the navigation experience (often a huge pain point on mobile) easier.

You can’t just expect your users to know things. You have to make it as clear as possible. The more you ask your user to guess, the more frustrated they will become.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

This example comes from an e-commerce client that sells artwork. In this experiment, we tested two variations against the original.

In the first, we increased font and icon size within the navigation and menu drop-down. This was a usability update meant to address the small, difficult to navigate menu. Remember the conversation about target size? We wanted to tackle the low-hanging fruit first.

With variation B, we dug a little deeper into the behavior of this client’s specific users.

Qualitative Hotjar recordings had shown that users were trying to navigate the mobile website using the homepage as a homebase. But this site actually has a powerful search functionality, and it is much easier to navigate using search. Of course, the search option was buried in the hamburger menu…

So, in the second variation (built on variation A), we removed Search from the menu and added it right into the main Nav.

Mobile website optimization - navigation
Wireframes of the control navigation versus our variations.

Results

Both variations beat the control. Variation A led to a 2.7% increase in transactions, and a 2.4% increase in revenue. Variation B decreased clicks to the menu icon by 24%, increased transactions by 8.1%, and lifted revenue by 9.5%.

Never underestimate the power of helping your users find their way on mobile. But be wary! Search worked for this client’s users, but it is not always the answer, particularly if what you are selling is complex, and your users need more guidance through the funnel.

Mobile product page optimization

Let’s look at another e-commerce example. This client is a large sporting goods store, and this experiment focused on their product detail pages.

On the original page, our Strategists noted a worst mobile practice: The buttons were small and arranged closely together, making them difficult to click.

There were also several optimization blunders:

  1. Two calls-to-action were given equal prominence: “Find in store” and “+ Add to cart”
  2. “Add to wishlist” was also competing with “Add to cart”
  3. Social icons were placed near the call-to-action, which could be distracting

We had evidence from an experiment on desktop that removing these distractions, and focusing on a single call-to-action, would increase transactions. (In that experiment, we saw transactions increase by 6.56%).

So, we tested addressing these issues in two variations.

In the first, we de-prioritized competing calls-to-action, and increased the ‘Size’ and ‘Qty’ fields. In the second, we wanted to address usability issues, making the color options, size options, and quantity field bigger and easier to click.

mobile website optimization - product page variations
The control page versus our variations.

Results

Both of our variations lost to the Control. I know what you’re thinking…what?!

Let’s dig deeper.

Looking at the numbers, users responded in the way we expected, with significant increases to the actions we wanted, and a significant reduction in the ones we did not.

Visits to “Reviews”, “Size”, “Quantity”, “Add to Cart” and the Cart page all increased. Visits to “Find in Store” decreased.

And yet, although the variations were more successful at moving users through to the next step, there was not a matching increase in motivation to actually complete a transaction.

It is hard to say for sure why this result happened without follow-up testing. However, it is possible that this client’s users have different intentions on mobile: Browsing and seeking product information vs. actually buying. Removing the “Find in Store” CTA may have caused anxiety.

This example brings us back to the mobile context. If an experiment wins within a desktop experience, this certainly doesn’t guarantee it will win on mobile.

I was shopping for shoes the other day, and was actually browsing the store’s mobile site while I was standing in the store. I was looking for product reviews. In that scenario, I was information-seeking on my phone, with every intention to buy…just not from my phone.

Are you paying attention to how your unique users use your mobile experience? It may be worthwhile to take the emphasis off of ‘increasing conversions on mobile’ in favor of researching user behavior on mobile, and providing your users with the mobile experience that best suits their needs.

Note: When you get a test result that contradicts usability best practices, it is important that you look carefully at your experiment design and secondary metrics. In this case, we have a potential theory, but would not recommend any large-scale changes without re-validating the result.

Mobile checkout optimization

This experiment was focused on one WiderFunnel client’s mobile checkout page. It was an insight-driving experiment, meaning the focus was on gathering insights about user behavior rather than on increasing conversion rates or revenue.

Evidence from this client’s business context suggested that users on mobile may prefer alternative payment methods, like Apple Pay and Google Wallet, to the standard credit card and PayPal options.

To make things even more interesting, this client wanted to determine the desire for alternative payment methods before implementing them.

The hypothesis: By adding alternative payment methods to the checkout page in an unobtrusive way, we can determine by the percent of clicks which new payment methods are most sought after by users.

We tested two variations against the Control.

In variation A, we pulled the credit card fields and call-to-action higher on the page, and added four alternative payment methods just below the CTA: PayPal, Apple Pay, Amazon Payments, and Google Wallet.

If a user clicked on one of the four alternative payment methods, they would see a message:

“Google Wallet coming soon!
We apologize for any inconvenience. Please choose an available deposit method.
Credit Card | PayPal”

In variation B, we flipped the order. We featured the alternative payment methods above the credit card fields. The focus was on increasing engagement with the payment options to gain better insights about user preference.

mobile website optimization - checkout page
The control against variations testing alternative payment methods.

Note: For this experiment, iOS devices did not display the Google Wallet option, and Android devices did not display Apple Pay.
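To make the mechanics of this kind of ‘painted door’ variation concrete, here is a hedged TypeScript sketch. The container id, button markup, and the trackGoal helper are hypothetical stand-ins; the actual experiment was built in the client’s testing tool.

```typescript
// Sketch of a "painted door" payment option: the button exists only to
// measure interest, and clicking it shows a polite fallback message.

// Hypothetical stand-in for your testing tool's goal-tracking API.
const trackGoal = (goalName: string): void => {
  console.log("goal recorded:", goalName);
};

function addPaintedDoorPayment(method: string, container: HTMLElement): void {
  const button = document.createElement("button");
  button.type = "button";
  button.textContent = `Pay with ${method}`;
  button.addEventListener("click", () => {
    // Record which payment method the user wanted.
    trackGoal(`clicked_${method.toLowerCase().replace(/\s+/g, "_")}`);
    // The option isn't actually available yet, so apologize and redirect.
    alert(
      `${method} coming soon!\n` +
        "We apologize for any inconvenience. Please choose an available deposit method."
    );
  });
  container.appendChild(button);
}

// "#payment-options" is an assumed container id for illustration only.
const paymentArea = document.querySelector<HTMLElement>("#payment-options");
if (paymentArea) {
  ["Apple Pay", "Google Wallet", "Amazon Payments"].forEach((method) =>
    addPaintedDoorPayment(method, paymentArea)
  );
}
```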

Results

On iOS devices, Apple Pay received 18% of clicks, and Amazon Pay received 12%. On Android devices, Google Wallet received 17% of clicks, and Amazon Pay also received 17%.

The client can use these insights to build the best experience for mobile users, offering Apple Pay and Google Wallet as alternative payment methods rather than PayPal or Amazon Pay.

Unexpectedly, both variations also increased transactions! Variation A led to an 11.3% increase in transactions, and variation B led to an 8.5% increase.

Because your user’s motivation is already limited on mobile, you should try to create an experience with the fewest possible steps.

You can ask someone to grab their wallet, decipher their credit card number, expiration date, and CVV code, and type it all into a small form field. Or, you can test leveraging the digital payment options that may already be integrated with their mobile devices.

The future of mobile website optimization

Imagine you are in your favorite outdoor goods store, and you are ready to buy a new tent.

You are standing in front of piles of tents: 2-person, 3-person, 4-person tents; 3-season and extreme-weather tents; affordable and pricey tents; light-weight and heavier tents…

You pull out your smartphone, and navigate to the store’s mobile website. You are looking for more in-depth product descriptions and user reviews to help you make your decision.

A few seconds later, a store employee asks if they can help you out. They seem to know exactly what you are searching for, and they help you choose the right tent for your needs within minutes.

Imagine that while you were browsing products on your phone, that store employee received a notification that you are 1) in the store, 2) looking at product descriptions for tent A and tent B, and 3) standing by the tents.

Mobile optimization in the modern era is not about increasing conversions on your mobile website. It is about providing a seamless user experience. In the scenario above, the in-store experience and the mobile experience are inter-connected. One informs the other. And a transaction happens because of each touch point.

Mobile experiences cannot live in a vacuum. Today’s buyer switches seamlessly between devices [and] your optimization efforts must reflect that.

Yonny Zafrani, Mobile Product Manager, Dynamic Yield

We wear the internet on our wrists. We communicate via chat bots and messaging apps. We spend our leisure time on our phones: streaming, gaming, reading, sharing.

And while I’m not encouraging you to shift your optimization efforts entirely to mobile, you must consider the role mobile plays in your customers’ lives. The online experience is mobile. And your mobile experience should be an intentional step within the buyer journey.

What does your ideal mobile shopping experience look like? Where do you think mobile websites can improve? Do you agree or disagree with the ideas in this post? Share your thoughts in the comments section below!

The post Your mobile website optimization guide (or, how to stop frustrating your mobile users) appeared first on WiderFunnel Conversion Optimization.

How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar:

  • During your research sessions, your participants don’t understand what they have been asked to do?
  • The phrasing of your questions has given away the answer or has caused bias in your results?
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided?
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results?

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project.
Just like proper experiment design is a necessity, investing time to critique, test, and iteratively improve your research design, before the research execution phase, can ensure that your user research runs smoothly, and dramatically improves the outputs from your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings
website user research in conversion optimization
User Research Process at WiderFunnel

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start with asking the high-level question: “What are we aiming to learn through this research?”. You can use this question to guide the development of research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, is the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret; thus limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from our initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined both the research task and flow, the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order to not bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed lists to improve the wording, ensure that questions complemented each other, and were focused on achieving the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. To overcome this we got creative and used some free tools to test the research design.

We chose to use Keynote presentation (timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the participant pilot testing. Using these tools we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately pointed out flaws in the research design, which included confusion regarding the test instructions and issues with the timing for each task.

In this case, our initial instructions did not give participants enough context about what they were looking at, resulting in confusion about what they were actually supposed to do. Additionally, we made an initial assumption that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content rich, and 5 seconds did not provide participants enough time to view all the content on the page.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a set function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.

Nick So

The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!

The post How pilot testing can dramatically improve your user research appeared first on WiderFunnel Conversion Optimization.


“The more tests, the better!” and other A/B testing myths, debunked

Reading Time: 8 minutes

Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom-line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.


But never fear! With the help of WiderFunnel Optimization Strategist, Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.

Mike St Laurent

The key to sustainable A/B testing output, is to find a balance between short-term (maximum testing speed), and long-term (testing for data-collection and insights).

Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.

Dennis Pavlina

An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment, when building out the strategy for another experiment. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once without taking into account these past insights can result in your testing program being more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.
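To make that confidence number less abstract, here is a hedged TypeScript sketch of a two-proportion z-test, one common way of estimating the probability that a difference between control and variation is not due to chance. Your testing tool may use different math, and the visitor and conversion numbers in the example are invented.

```typescript
// Sketch: two-sided two-proportion z-test for an A/B test.
// Illustrative only; not the exact calculation of any particular tool.

// Standard normal CDF via the Abramowitz & Stegun erf approximation.
function normCdf(x: number): number {
  const z = Math.abs(x) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * z);
  const poly =
    t * (0.254829592 +
    t * (-0.284496736 +
    t * (1.421413741 +
    t * (-1.453152027 +
    t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-z * z);
  return x >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

function confidenceLevel(
  controlConversions: number, controlVisitors: number,
  variationConversions: number, variationVisitors: number
): number {
  const p1 = controlConversions / controlVisitors;
  const p2 = variationConversions / variationVisitors;
  const pooled =
    (controlConversions + variationConversions) /
    (controlVisitors + variationVisitors);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / controlVisitors + 1 / variationVisitors)
  );
  const z = (p2 - p1) / se;
  // Two-sided confidence that the observed difference is not due to chance.
  return 2 * normCdf(Math.abs(z)) - 1;
}

// Example with invented numbers: 10,000 visitors per arm, 400 vs. 452 conversions.
console.log(confidenceLevel(400, 10000, 452, 10000).toFixed(3)); // ≈ 0.931, short of 0.95
```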

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it within a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

Claire Vignon Keser

You should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning there was now only an 88% likelihood that the difference in conversion rates was not due to chance – in other words, the result was no longer statistically significant.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.
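As a rough companion to the test duration calculator Claire mentions, here is a hedged TypeScript sketch of the standard normal-approximation sample-size estimate at 95% confidence and 80% power. The baseline rate and lift in the example are invented, and your tool’s calculator may make different assumptions.

```typescript
// Sketch: visitors needed per variation to detect a given relative lift,
// using the common normal-approximation formula (95% confidence, 80% power).
// Treat the output as a rough planning number, not a guarantee.
function sampleSizePerVariation(
  baselineRate: number,  // e.g. 0.04 for a 4% conversion rate
  relativeLift: number,  // e.g. 0.10 for a 10% relative improvement
  zAlpha = 1.96,         // two-sided 95% confidence
  zBeta = 0.84           // 80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator ** 2 / (p2 - p1) ** 2);
}

// Example: 4% baseline conversion rate, hoping to detect a 10% relative lift.
console.log(sampleSizePerVariation(0.04, 0.10)); // ≈ 39,000+ visitors per variation
```

Once you have that number, round your planned run time up to whole business cycles, as described above, rather than stopping the moment the sample is reached mid-week.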

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test, completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator.
ab testing results stability
Check your graphs! Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys?
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation based on the observed trends, abandon the variation based on observed trends, and/or create a follow-up test!

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.

Dennis Pavlina

This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by 1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!)

ab testing funnel analysis
You can see how each variation fared against the Control in this funnel analysis.
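For readers who want to reproduce this kind of funnel comparison on their own data, here is a hedged TypeScript sketch; the step counts below are invented for illustration and are not the client’s numbers.

```typescript
// Sketch: compare step-to-step rates for a control and a variation.
// The visitor counts below are invented for illustration only.
type Funnel = { name: string; steps: number[] };

function stepRates(funnel: Funnel): number[] {
  // Rate of moving from each step to the next one.
  return funnel.steps.slice(1).map((count, i) => count / funnel.steps[i]);
}

const control: Funnel = { name: "Control", steps: [10000, 4200, 2100, 900, 310] };
const variationA: Funnel = { name: "Variation A", steps: [10000, 3800, 2050, 980, 335] };

for (const funnel of [control, variationA]) {
  const rates = stepRates(funnel)
    .map((rate) => `${(rate * 100).toFixed(1)}%`)
    .join(" → ");
  console.log(`${funnel.name}: ${rates}`);
}
// A drop from step 1 to step 2 with gains further down the funnel would mirror
// the "filtering out low-intent visitors" pattern described above.
```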

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the visitors dropping off were the ones without strong purchasing intent.

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

ab testing infinity optimization process
A/B testing is just a part of the marketing optimization machine.

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

ab testing planning
The WiderFunnel strategy team gathers to share and discuss A/B testing insights.

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering informs hypotheses that are prioritized and turned into A/B tests (structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and these insights are fed back into the data gathering. It’s a cycle of continuous improvement.

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!

The post “The more tests, the better!” and other A/B testing myths, debunked appeared first on WiderFunnel Conversion Optimization.


Your growth strategy and the true potential of A/B testing

Reading Time: 7 minutes

Imagine being a leader who can see the future…

Who can know if a growth strategy will succeed or fail before investing in it.

Who makes confident decisions based on what she knows her users want.

Who puts proven ideas to work to cut spending and lift revenue.

Okay. Now stop imagining, because you can be that leader…right now. You just need the right tool. (And no, I’m not talking about a crystal ball.) I’m talking about testing.


So many marketers approach “conversion optimization” and “A/B testing” with the wrong goals: they think too small. Their testing strategy is hyper-focused on increasing conversions. Your Analytics team can A/B test button colors and copy tweaks and design changes until they are blue in the face. But if that’s all your company is doing, you are missing out on the true potential of conversion optimization.

Testing should not be a small piece of your overall growth strategy. It should not be relegated to your Analytics department, or shouldered by a single optimizer. Because you can use testing to interrogate and validate major business decisions.

“Unfortunately, most marketers get [conversion optimization] wrong by considering it to be a means for optimizing a single KPI (e.g. registrations, sales, or downloads of an app). However, conversion optimization testing is much, much more than that. Done correctly, with a real strategic process, CRO provides in-depth knowledge about our customers.

All this knowledge can then be translated into a better customer journey, optimized customer success and sales teams; we can even improve shipping and, of course, the actual product or service we provide. Every single aspect of our business can be optimized, leading to higher conversion rates, more sales, and higher retention rates. This is how you turn CRO from an ‘X%’ increase in sign-ups into complete growth of your business and company.

Once marketers and business owners follow a process, stop testing elements such as call-to-action buttons or titles for the sake of it, and move on to testing more in-depth processes and strategies, only then will they see the uplifts and growth they strive for that scale and keep.” Talia Wolf, CMO, Banana Splash

Testing and big picture decision making should be intertwined. And if you want to grow and scale your business, you must be open to testing the fundamentals of said business.

Imagine spearheading a future-proof growth strategy. That’s what A/B testing can do for you.

In this post, I’m going to look at three examples of using testing to make business decisions. Hopefully, these examples will inspire you to put conversion optimization to work as a truly influential determinant of your growth strategy.

Testing a big business decision before you make it

Often, marketers look to testing as a way to improve digital experiences that already exist. When your team tests elements on your page, they are testing what you have already invested in (and they may find those elements aren’t working…)

  • “If I improve the page UX, I can increase conversions”
  • “If I remove distracting links from near my call-to-action button, I can increase conversions”
  • “If I add a smiling person to my hero image, I can capture more leads”, etc.

But if you want to stay consistently ahead of the marketing curve, you should test big changes before you invest in them. You’ll save money, time, resources. And, as with any properly-structured test, you will learn something about your users.

A B2C Example

One WiderFunnel client is a company that provides an online consumer information service—visitors type in a question and get an Expert answer.

The marketing leaders at this company wanted to add some new payment options to the checkout page of their mobile experience. After all, it makes sense to offer alternative payment methods like Apple Pay and Amazon Payments to mobile users, right?

Fortunately, this company is of a test-first, implement-second mindset.

With the help of WiderFunnel’s Strategy team, this client ran a test to identify demand for new payment methods before actually putting any money or resources into implementing said alternative payment methods.

This test was not meant to lift conversion rates. Rather, it was designed to determine which alternative payment methods users preferred.

Note: This client did not actually support the new payment methods when we ran this test. When a user clicked on the Apple Pay method, for instance, they saw the following message:

“Apple Pay coming soon!
We apologize for any inconvenience.
Please choose an available deposit method:
Credit Card | PayPal”

Should this client invest in alternative payment methods? Only the test will tell!

Not only did this test provide the client with the insight they were looking for about which alternative payment methods their users prefer, but (BONUS!) it also produced significant increases in conversions, even though that was not our intention.

Because they tested first, this client can now confidently invest in the alternative payment options their users most prefer. Making a big business change doesn’t have to be a gamble.
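If you are curious how a “fake door” like this is typically wired up, here is a minimal sketch. The element IDs and the trackEvent helper are hypothetical placeholders (not this client’s actual implementation); the point is simply that clicks on not-yet-supported options get recorded as a goal before any real payment integration is built.

```typescript
// Hypothetical fake-door sketch: record demand for payment methods that
// don't exist yet, then show a polite "coming soon" message.
type FakeDoorOption = { id: string; label: string };

// Placeholder analytics helper -- swap in your real tracking call.
function trackEvent(category: string, action: string, label: string): void {
  console.log(`track: ${category} / ${action} / ${label}`);
}

const unsupportedOptions: FakeDoorOption[] = [
  { id: "pay-apple-pay", label: "Apple Pay" },
  { id: "pay-amazon", label: "Amazon Payments" },
];

for (const option of unsupportedOptions) {
  const button = document.getElementById(option.id);
  if (!button) continue;

  button.addEventListener("click", (event) => {
    event.preventDefault();
    // The click itself is the signal of demand -- log it as a goal.
    trackEvent("checkout", "fake_door_click", option.label);
    // Then set expectations honestly.
    alert(`${option.label} coming soon!\nPlease choose an available deposit method.`);
  });
}
```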

As Sarah Breen of ASICS said,

We’re proving our assumptions with data. Testing allows me to say, ‘This is why we took this direction. We’re not just doing what our competitors do, it’s not just doing something that we saw on a site that sells used cars. This is something that’s been proven to work on our site and we’re going to move forward with it.’

Testing what you actually offer, part I

Your company has put a lot of thought (research, resources, money) into determining what you should actually offer. It can be overwhelming to even ask the question, “Is our product line actually the best offering A) for our users and B) for our business?”

But asking the big scary questions is a must. Your users are evolving, how they shop is evolving, your competition is evolving. Your product offering must evolve as well.

Some companies bring in experienced product consultants to advise them, but why not take the question to the people (aka your users)…and test your offering?

An E-commerce Example

Big scary question: Have you ever considered reducing the number of products you offer?

One WiderFunnel client offers a huge variety of products. During a conversation between our Strategists and the marketing leaders at this company, the idea to test a reduced product line surfaced.

The thinking was that even if conversions stayed flat with a fewer-products variation, this test would be considered a winner if the reduction in products meant money saved on overhead costs, such as operations costs, shipping and logistics costs, manufacturing costs and so on.

The Jam Study is one of the most famous demonstrations of the Paradox of Choice.

Plus! There is a psychological motivator that backs up less-is-more thinking: The Paradox of Choice suggests that fewer options might mean less anxiety for visitors. If a visitor has less anxiety about which product is more suitable for them, they may have increased confidence in actually purchasing.

After working with this client’s team to cut down their product line to just the essential top 3 products, our Strategists created what they refer to as the ‘Minimalist’ variation. This variation will be tested against the original product page, which features many products.

This client’s current product category page features many products. The ‘Minimalist’ variation highlights just their top 3 products.

If the ‘Minimalist’ variation is a clear winner, this client will be armed with the information they need to consider halting the manufacture of several older products—a potentially dramatic cost-saving initiative.

Even if the variation is a loser, the insights gained could be game-changing. If the ‘Minimalist’ variation results in a revenue loss of 10%, but manufacturing and carrying all of those other products costs more than that 10% of revenue, this client would still come out ahead with a net gain. Which means they would want to seriously consider a reduced product line as an option.
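To make that break-even logic concrete, here is a quick back-of-the-envelope sketch. The numbers are invented for illustration only, not this client’s actuals.

```typescript
// Illustrative break-even math for a reduced product line.
// All figures are hypothetical.
const baselineMonthlyRevenue = 1_000_000;      // control page
const observedRevenueLift = -0.10;             // 'Minimalist' variation loses 10%
const monthlySavingsFromCutProducts = 150_000; // overhead no longer incurred

const revenueChange = baselineMonthlyRevenue * observedRevenueLift; // -100,000
const netMonthlyImpact = revenueChange + monthlySavingsFromCutProducts;

console.log(`Net monthly impact: ${netMonthlyImpact}`); // +50,000 -> a business win
```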

Regardless of the outcome, an experiment like this one will give the marketing decision-maker evidence to make a more informed decision about a fundamental aspect of their business.

Cutting products is a huge business decision, but if you know how your users will respond ahead of time, you can make that decision without breaking a sweat.

Testing what you actually offer, part II

Experienced marketers often assume that they know best. They assume they know what their user wants and needs, because they have ‘been around’. They may assume that, because everybody else is offering something, it is the best offering (the “our-competitors-are-emphasizing-this-so-it-must-be-the-most-important-offering” mentality).

Well, here’s another big scary question: Does your offering reflect what your users value most? Rather than guessing, push your team to dig into the data, find the gaps in your user experience, and test your offering.

“Most conversion optimization work happens behind the scenes: the research process to understand the user. From the research you form various hypotheses for what they want and how they want it.

This informs [what] you come up with, and with A/B/n testing you’re able to validate market response…before you go full in and spend all that money on a strategy that performs sub-optimally.” Peep Laja, Founder, ConversionXL

A B2B Example

When we started working with the SaaS company Magento, they were offering a ‘Free Demo’ of the Enterprise Edition of their software. Offering a ‘Free Demo’ is a best practice for software companies—everybody does it, and it was probably a no-brainer for Magento’s product team.

Looking at clickmap data, however, WiderFunnel’s Strategists noticed that Magento users were really engaged with the informational tabs lower down on the product page.

They had the option to try the ‘Free Demo’, but the data indicated that they were looking for more information. Unfortunately, once users had finished browsing tabs, there was nowhere else to go.

So, our Strategists decided to test a secondary ‘Talk to a specialist’ call-to-action.

Is the ‘Free Demo’ offering always what software shoppers are looking for?

This call-to-action hadn’t existed prior to this test, so the (technically infinite) lift Magento saw in qualified sales calls was not surprising. What was surprising was the phone call we received 6 months later: it turns out the ‘Talk to a specialist’ leads were far more valuable than the ‘Get a free demo’ leads.

After several subsequent test rounds, “Talk to a specialist” became the main call-to-action on this page. Magento’s most valuable prospects value the opportunity to get more information from a specialist more than they value a free product demo. SaaS ‘best practices’ be damned.

Optimization is a way of doing business. It’s a strategy for embedding a test-and-learn culture within every fibre of your business.

– Chris Goward, Founder & CEO, WiderFunnel

You don’t need to be a mind-reader to know what your users want, and you don’t need to be a seer to know whether a big business change will succeed or flop. You simply need to test.

Leave your ego at the door and listen to what your users are telling you. Be the marketing leader with the answers, the leader who can see the future and can plan her growth strategy accordingly.

How do you use testing as a tool for making big business decisions? Let us know in the comments!

The post Your growth strategy and the true potential of A/B testing appeared first on WiderFunnel Conversion Optimization.


A day in the life of an optimization champion

Reading Time: 9 minutes

How do you make conversion optimization a priority within a global organization?

Especially, when there are so many other things you could spend your marketing dollars on?

And how do you keep multiple marketing teams aligned when it comes to your optimization efforts?

These are some of the challenges facing Jose Uzcategui, Global Analytics and Ecommerce Conversion Lead at ASICS, and Sarah Breen, Global Ecommerce Product Lead at ASICS.

ASICS, a global sporting goods retailer, is a giant company with multiple websites and marketing teams in multiple regions.

For an organization like this, deciding to pursue conversion optimization (CRO) as a marketing strategy is one thing, but actually implementing a successful, cohesive conversion optimization program is an entirely different thing.

Related: Get WiderFunnel’s free Optimization Champion’s Handbook for tips on how to be the Optimization Champion your company needs.

We started working with ASICS several months ago to help them with this rather daunting task.

A few weeks ago, I sat down with Jose and Sarah to discuss what it’s like to be an Optimization Champion within a company like ASICS.

Let’s start at the very beginning with a few introductions.

For almost 8 years, Jose has been involved in different areas of online marketing, but Analytics has always been a core part of his career. About five years ago, he began to move from paid marketing and SEO and started focusing on analysis and conversion optimization.

He was brought in to lead the conversion optimization program at ASICS, but it became obvious that proper conversion optimization wouldn’t be possible without putting the company’s Analytics in order first.

“For my first year at ASICS, I was focused on getting our Analytics where they need to be. Right now, we have a good Analytics foundation and that’s why we’re getting momentum on conversion optimization. We’re building our teams internally and externally and my role, right now, is both execution and strategy on these two fronts,” explains Jose.

Sarah has been with ASICS for a little over a year as the Ecommerce Global Product Lead. She hadn’t really been involved with testing until she started working more closely with WiderFunnel and Optimizely (a testing tool).

She started working with Nick So, WiderFunnel Optimization Strategist, and Aswin Kumar, WiderFunnel Optimization Coordinator, to try to figure out what experiments would make the biggest impact in the shortest amount of time on ASICS’ sites.

“I sometimes work with our designers to decide what a test should look like from the front end and how many variations we want to test, based on Nick and Aswin’s recommendations. I provide WiderFunnel the necessary assets, as well as a timeline and final approvals.

“Once a test is launched, I work with WiderFunnel and with Jose to figure out what the results mean, and whether or not the change is something we want to roll out globally and when we’ll be able to do that (considering how many other things we have in our queue that are required development work),” explains Sarah.

But optimization is just a part of Sarah’s role at ASICS: she works with a number of vendors to try to get third party solutions on their sites globally, and she works with ASICS’ regional teams to determine new product features and functionality.

Despite the fact that they wear many hats, Jose and Sarah are both heavily involved in ASICS’ conversion optimization efforts, and I wanted to know what drew them to CRO.

Q: What do each of you find exciting about conversion optimization?

“Conversion optimization gives immediate results and that’s a great feeling,” says Jose. “Particularly with e-commerce, if you have an idea, you test it, and you know you’re about to see what that idea is worth in monetary value.”

Sarah loves the certainty.

We’re proving our assumptions with data. Testing allows me to say, ‘This is why we took this direction. We’re not just doing what our competitors do, it’s not just doing something that we saw on a site that sells used cars. This is something that’s been proven to work on our site and we’re going to move forward with it.’

Of course, it’s not all highs when you’re an Optimization Champion at an enterprise company, which led me to my next question…

Q: What are the biggest challenges you face as an Optimization Champion within a company like ASICS?

For Sarah, the biggest challenge is one of prioritization. “We have so many things we want to do: how do we prioritize? I want to do more and more testing. It’s just about picking our battles and deciding what the best investment will be,” she explains.

“When it comes to global teams, aligning the regions on initiatives you may want to test can be challenging,” adds Jose. “If a region doesn’t plan for testing at the beginning of their campaign planning process, for instance, it becomes very difficult to test something more dramatic like a new value proposition or personalization experiences.”

Despite the challenges, Sarah and Jose believe in conversion optimization. Of course, it’s a lot easier to sell the idea of CRO if there’s already a data-driven, testing culture within a company.

Q: Was there a testing culture at ASICS before your partnership with WiderFunnel?

“We had a process in place. We had introduced the LIFT Model®, actually. The LIFT Model is an easy framework to work with, it’s easy to communicate. But there wasn’t enough momentum, or resources, or time put into testing for us to say, ‘We have a testing culture and everybody is on board.’ Before WiderFunnel, there were a few seeds planted, but not a lot of soil or water for them to grow,” says Jose.

WiderFunnel’s LIFT Model details the 6 conversion factors.

Q: So, there wasn’t necessarily a solid testing culture at ASICS – how, then, did you go about convincing your team to invest in CRO versus another marketing strategy?

“Education. For everything in enterprise, education is the most important thing you can do. As soon as people understand that they can translate a campaign into a certain amount of money or ROI, then it becomes easy to say ‘Ok, let’s try something else that can tie to the money,’” says Jose, firmly.

“A different strategy is just downplaying the impact of testing. ‘It’s just a test, it’s just temporary for a couple of weeks,’ I might say. Either people understand the value of testing, or I diminish the impact that a test has on the site.”

“Until it’s a huge winner!” I interject.

“Yes! Obviously, if it’s a huge winner, I can say, ‘Oh, look at that! Let’s try another,’” chuckles Jose.

Jose and Sarah focused on education and, with a bit of luck and good timing, they convinced ASICS to invest in conversion optimization.

Q: Has it been a good investment?

“Everybody goes into this kind of investment hoping that there will be a test that will knock it out of the park. You know, a really clear, black and white winner that shows: we invested this amount in this test and in a year it will mean 5x that amount.

“We had a few tests that pointed in that direction, but we didn’t have that black and white winner. For some people, they have that black and white mentality and they might ask if it was worth it.

“I think it was a wise investment. It’s a matter of time before we run that test that proves that everything is worthwhile or the team as a whole realizes that things that we’re learning, even if they’re not at this moment translating into dollars, are worthwhile because we’re learning how our users think, what they do, etc.”

After establishing ASICS’ satisfaction, I wanted to move on to the logistics of managing a conversion optimization program both internally and in conjunction with a partner. First things first: successful relationships are all about communication.

Q: How do you communicate, share ideas, and implement experiments both between your internal teams and WiderFunnel? How do you keep everyone aligned and on the same page?

Sarah explains, “We’ve tried a few different management tools. Right now, JIRA seems to be working well for us. I can add people to an already existing ticket and I don’t have to add a lot of explanation. I can just say, Aswin and Nick came up with this idea, it’s approved, here’s a mock up. Everything is documented in one place and it’s searchable.

“I don’t necessarily think JIRA is the best tool for what we’re doing, but it allows us to have a whole history in a system that our development team is already using. And they know how to use it and check off a ticket and that’s helpful.

Related: Get organized with Liftmap. This free management tool makes it easy for teams to analyze web experiences, then present findings to stakeholders.

“I also send emails with recaps, because digging through those long JIRA discussions is kind of rough.”

Q: How do you share what you’re working on with other teams within ASICS?

“There are two parts to sharing our work: what’s going on and what’s coming,” explains Jose.

“You can see what’s coming in JIRA: tests that are coming and ideas that are being developed.

“Once we have results from a test and a write up, we’ll put a one-pager in a blog style report. When we have a new update, we send an email with the link to the one-pager and I also attach it as a PDF so that anyone who may not have access can still see the results.”

Sarah adds, “They’re very clear, paragraph form explanations with images of everything we’re doing. It’s less technical, more ‘this is what we tried, these are our assumptions, these are our results, this is what we’re going to do.’

This gives the Execs that aren’t on the day-to-day a snapshot showing we’ve made progress, what next steps are, and that we’re doing something good.

Q: How do you engage your co-workers and get them excited about conversion optimization?

Jose says, “I’ve gotten some comments and questions [on our one-page reports]. Obviously, I would like to get more. Once we have more resources, we’ll be able to put different strategies in place to get more engagement from the team. Lately, I’ve been trying to give credit to the region at least that came up with whatever idea we tested.

“I would like to get even more specific as we get more momentum, being able to say things like ‘Pete came up with this idea…and actually it didn’t work out, though we did learn insight X or insight Y.’ or ‘Pete came up with his third winning idea in a row—he gets a prize!’

There’s a level of fun that we can activate. We have some engagement, but I’m hoping for more.

Q: Ok, you’ve concluded a test, analyzed and shared your results — what’s your process for actually implementing changes based on testing?

Jose is quick to respond to this question, giving credit to Sarah: “Sarah’s involvement in our conversion optimization program has been great. Ultimately, Sarah is the one who gets things onto the site. And that’s half of the equation when it comes to testing. It’s so necessary having someone like Sarah invested in this. Without her, the tests might die in development.”

Sarah laughs and thanks Jose. “A lot of my job is managing expectations with our regions,” she explains. “Some regions want to test everything, and they want to do it now, and we have to tell them ‘That’s great, but we can’t give you all of our attention.’ Whereas some regions barely talk to us and have a lot of missed opportunities, so we have to manage the testing and implementation on their site.

“For less engaged regions, we try to communicate “Hey, we have evidence that this change really helped — look at all the sales you got and all of the clicks you got, we’d like you to have this on your page.

“Testing also takes a lot of the back and forth and Q & A out of implementation because we already have something that works. And, unless there’s some weird situation, we can roll a change out globally and say, ‘This is where the idea came from, it came from so-and-so, it’s pushed all the way through and now it’s a global change.’

“We can invite the regions to think of all of the awesome things we can do as a global team whenever we work together and go through this process. And other people can say ‘Hey, we did this! I have some more ideas.’ And the circle continues. It’s really great.”

You’ve both spent a lot of time working with WiderFunnel to build up ASICS’ conversion optimization program, so I’ve got to ask…

Q: What are the biggest challenges and benefits of working with a partner like WiderFunnel?

“The biggest challenge in working with any partner is response time: me responding in time, them responding in time. I’m also the middle man for a lot of things, so maintaining alignment can be tough,” says Sarah.

“But as far as benefits go, it’s hard to choose one. One of the biggest has been WiderFunnel’s ability to take the debate out of a testing decision. You’re able to evaluate testing ideas with a points structure, saying, ‘We think this would be the most valuable for you, for your industry, for what we’ve seen with your competitors, this is the site you should run it on, we think it would be best on mobile or desktop, etc.’

“And we can rely on WiderFunnel’s expertise and say, ‘Let’s do it.’ We just have to figure out if there’s anything that might really ruffle feathers, like making changes to our homepage. We have to be careful with that because it’s prime real estate.

“But if it’s a change to a cart page, I can say, ‘Yes, let’s go ahead and do that, get that in the queue!’ It’s all about getting those recommendations. And once we have a few smaller wins, we can move up to the homepage because we’ve built that trust.

“Another benefit is the thorough results analysis. The summary of assumptions, findings, charts, data, graphs, next steps and opportunities. That’s huge. We can look at the data quickly and identify what’s obvious, by ourselves, but it takes time for us to collate and collect and really break down the results into very clear terms. That’s been hugely helpful,” she adds.

For Jose, the benefit is simple: “Getting tests concluded and getting ideas tested has been the most helpful. Yes or no, next. Yes or no, next. Yes or no, next. That’s created the visibility that I’ve been hoping for — getting visibility across the organization and getting everybody fired up about testing. That’s been the best aspect for me.”

Are you your organization’s Optimization Champion? How do you spread the gospel of testing within your organization? Let us know in the comments!

The post A day in the life of an optimization champion appeared first on WiderFunnel Conversion Optimization.


How to be a heavy hitter in enterprise e-commerce CRO

Reading Time: 8 minutes

There was a time when simply launching an A/B test was a big deal.

I remember my first test. It was a lead gen form. I completely redesigned it. I learned nothing. And it felt like I was on top of the world.

Today, things are different, especially if you’re a major e-commerce company doing high-volume conversion optimization in a team setting. The demands have shifted; the expectations are far greater. New tools are being created to solve new problems.

So what does it take to own enterprise e-commerce CRO in 2016 compared to before?

Make money during A/B tests

While “always be testing” is a great mantra, I have to ask: are you “always be banking”?

Most of us have been running tests that inform us first, and make money later. For example, you might run a test where you’ve got a clear winner, but it’s one of 5 variations splitting traffic evenly, so you’re only benefiting from it 20% of the time during the length of the experiment.

Furthermore, you may have 4 variations that are underperforming versus your Control, so you could even be losing money while you test. Imagine spending an entire year testing in that manner. You’d rarely be fully benefiting from your positive test results!

Of course, as part of a controlled experiment and in order to generate valid insights, it’s important to distribute traffic evenly and fairly between all variations (across multiple days of the week, etc).

But there also comes a time to be opportunistic.

Enter the multi-armed bandit (MAB) approach. MAB is an automated testing mechanism that diverts more traffic to better-performing variations. Thresholds can be set to control how much better a variation has to perform before it is favored by the mechanism.

Hold your horses: MAB sounds amazing, but it is not the solution to all of your problems. It’s best reserved for times when the potential revenue gains outweigh the potential insights to be gained or the test has little long-term value.

Say, for example, you’re running a pre-Labor Day promotion and you’ve got a site-wide banner. This banner’s only going to be around for 5-10 days before you switch to the next holiday. So really, you just want to make the most of the opportunity and not think about it again until next year.

A bandit algorithm applied to an A/B test of your banner will help you find the best performer during the period of the experiment, and help generate the most revenue during the testing period.

While you may not be able to infer too many insights from the experiment, you should be able to generate more revenue than had you either not tested at all or gone with a traditional, even split test.
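For the curious, here is a minimal epsilon-greedy sketch of the bandit idea: most traffic flows to the variation with the best observed conversion rate so far, while a small exploration slice keeps checking the others. Real testing tools (and more sophisticated algorithms like Thompson sampling) handle this for you; this is only meant to show the mechanism, with made-up variation names.

```typescript
// Minimal epsilon-greedy bandit sketch for a promo-banner test.
interface Arm {
  name: string;
  visitors: number;
  conversions: number;
}

const arms: Arm[] = [
  { name: "control-banner", visitors: 0, conversions: 0 },
  { name: "variation-a", visitors: 0, conversions: 0 },
  { name: "variation-b", visitors: 0, conversions: 0 },
];

const epsilon = 0.1; // 10% of traffic keeps exploring

function conversionRate(arm: Arm): number {
  return arm.visitors === 0 ? 0 : arm.conversions / arm.visitors;
}

function chooseArm(): Arm {
  if (Math.random() < epsilon) {
    // Explore: pick any arm at random.
    return arms[Math.floor(Math.random() * arms.length)];
  }
  // Exploit: pick the best performer so far.
  return arms.reduce((best, arm) =>
    conversionRate(arm) > conversionRate(best) ? arm : best
  );
}

// Record what happened for the visitor who saw `arm`.
function recordOutcome(arm: Arm, converted: boolean): void {
  arm.visitors += 1;
  if (converted) arm.conversions += 1;
}
```

Note that the more aggressively the algorithm exploits, the less evenly your traffic is split, which is exactly why this approach trades away some of the clean insights of a traditional test.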

  • BEFORE: Test, analyze results, decide, implement, make money later.
  • TODAY: Test and make money while you’re at it.
  • When to do it: Best used in cases where what you learn is not that useful for the future.
  • When not to do it: Not necessarily the most useful for long-term testing programs.

Track long-term revenue gains

If you’ve been testing over the course of many months and years, accurately tracking and reporting your cumulative gains can become a serious challenge.

You’re most likely testing across different zones of your website – homepage, category page, product detail page, site-wide, checkout, etc. Multiply those zones by the number of viewport ranges you’re specifically testing on.

What do you do, sum up each individual increase and project out over the course of a year? Do you create an equation to calculate the combined effect of all of your tests? Do you avoid trying to report at all?

There isn’t one good solution, but rather a few options that all have their strengths and weaknesses:

The first, and easiest, is using a formula to determine combined results. You’ll want a strong mathematician to help you with this one. Personally, I always have a lingering doubt about how accurate the reported number really is, even using conservative estimations. And as time goes on, things only get less accurate.
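As an illustration of the formula approach, one common (and admittedly rough) model compounds individual winning lifts multiplicatively, often after applying a conservative discount. The lifts and the 50% discount below are assumptions for illustration, not a recommended standard.

```typescript
// Rough compounded-lift estimate from a series of winning tests.
// Lifts are expressed as decimals, e.g. 0.12 = +12%.
const winningLifts = [0.12, 0.08, 0.05, 0.15]; // hypothetical results
const conservativeDiscount = 0.5; // assume only half of each lift persists

const combinedMultiplier = winningLifts.reduce(
  (total, lift) => total * (1 + lift * conservativeDiscount),
  1
);

console.log(`Estimated combined lift: ${((combinedMultiplier - 1) * 100).toFixed(1)}%`);
```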

The second is to periodically re-test your original Control from the moment at which you started testing. Say, every 6 months, test your best performing variation against the Control you had 6 months prior. If you’ve been testing across the funnel, test the entire funnel in one experiment.

Yes, it will be difficult. Yes, your developers will hate you. And yes, you will be able to prove the value of your work in a very confident manner.

It’s best to run these sorts of tests with a duplicate of each variation (2 “old” Controls vs 2 best performers) just to add an extra layer of certainty when you look at your results. It goes without saying that you should run these experiments for as long as reasonably possible.

Another option is to always be testing your “original” Control vs your most recent best performer in a side experiment. Take 10% of your total traffic and segment it to a constantly running experiment that pits the original control version of your site against your latest best performer.

It’s an experiment running in the background, not affected by what you are currently testing. It should serve as a constant benchmark to calculate the total effect of all your tests, combined.

Technically, this will be a challenge. You’ll be asking a lot of your developers and your analytics people, and at one point, you may ask yourself if it’s all worth it. But in the end, you will have some awesome reports to show, demonstrating the ridiculous revenue you’ve generated through CRO.

  • BEFORE: Individual test gains, cumulated.
  • TODAY: Taking into consideration interaction effects, re-running Control vs combined new variations OR using a model to predict combined effect of tests.
  • When to do it: When you want to better estimate the combined effect of multiple testing wins.
  • When not to do it: When your tests are highly seasonal and can’t be combined OR when it becomes impossible from a technical perspective (hence the importance of doing so in a reasonable time frame—don’t wait 2 years to do it).

Track and distribute cumulative insights

If you do this right, you will learn a ton about your customers and how to increase your revenue in the future. Ideally, you should have a goody-bag of insights to look through whenever you’re in need of inspiration.

So, how do you track insights over time and revalidate them in subsequent experiments? Also, does Jenny in branding know about your latest insights into the importance of your product imagery? How do you get her on board and keep her up to date on a consistent basis?

Both of these challenges deserve attention.

The simplest “system” for tracking insights is a spreadsheet, with columns that codify insights by type, device, and any other useful criteria for browsing and grouping. This proves unscalable when you’re testing at high velocity. That’s where a custom platform that does the job of tracking and sharing insights comes into play.

For example, the team at The Next Web created an internal tool for tracking tests and insights, then easily sharing ideas via Slack. There are other publicly available options, most of which integrate with Optimizely or VWO.
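If you are starting with the spreadsheet approach, the columns matter more than the tool. Here is a rough sketch of what a tagged, queryable insight record could look like; the fields and example values are suggestions, not a prescribed schema.

```typescript
// A lightweight insight log: the point is tags you can filter on and a link
// back to the experiment that produced each insight.
interface Insight {
  experimentId: string;     // links back to the test
  dateConcluded: string;
  zone: "homepage" | "category" | "product" | "checkout" | "site-wide";
  device: "desktop" | "mobile" | "tablet";
  tags: string[];           // e.g. ["value proposition", "imagery"]
  summary: string;
  validated: boolean;       // has a follow-up test confirmed it?
}

const insights: Insight[] = [
  {
    experimentId: "EXP-042",
    dateConcluded: "2016-08-15",
    zone: "product",
    device: "mobile",
    tags: ["imagery", "social proof"],
    summary: "Larger lifestyle imagery outperformed studio shots on mobile product pages.",
    validated: false,
  },
];

// Example query: everything we've learned about imagery, on any device.
const imageryInsights = insights.filter((i) => i.tags.includes("imagery"));
console.log(imageryInsights.length);
```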

  • BEFORE: Excel sheets, Powerpoint presentations, word of mouth, or nothing at all.
  • TODAY: A shared and tagged database of insights that link back to the experiments that generated them and is updated on the fly. Tools such as Experiment Engine, Effective Experiments, Iridion and Liftmap are all solving some part of this puzzle.
  • When to do it: When you’re learning a lot of valuable things, but having trouble tracking or sharing what you learn. (BTW, if you’re not having this problem, you might be doing something wrong.)
  • When not to do it: When the future is of little importance.

Code implementation-ready variations

High-velocity testing doesn’t just mean quickly getting tests out the door; it means being able to implement winners immediately and move on. To make this possible, your test code has to be ready to implement, meaning:

  • Code should be modularized. Your scripts should be split into sections for functionality changes and design changes.
  • If you’re doing it right, style changes should be done by applying classes rather than by setting styles with JavaScript. All CSS should live in one file, and class names should align with your website’s conventions, ready to be added when your test is complete (see the sketch after this list).
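As a rough sketch of what “implementation-ready” variation code can look like in practice (the class names, selectors, and URL are hypothetical): behaviour lives in small named functions, and styling is applied by toggling classes that already exist in a separate stylesheet.

```typescript
// Sketch of a modularized variation: functionality and design changes are
// separated, and styling is done via CSS classes, not inline JavaScript styles.

// --- Design changes: just toggle classes defined in the variation stylesheet --
function applyDesignChanges(): void {
  document.querySelector(".hero")?.classList.add("hero--condensed");
  document.querySelector(".cta-primary")?.classList.add("cta-primary--prominent");
}

// --- Functionality changes: isolated, easy to lift into the real codebase ----
function addSecondaryCta(): void {
  const container = document.querySelector(".hero");
  if (!container) return;

  const link = document.createElement("a");
  link.href = "/contact-a-specialist"; // hypothetical URL
  link.className = "cta-secondary";    // styled in the shared stylesheet
  link.textContent = "Talk to a specialist";
  container.appendChild(link);
}

applyDesignChanges();
addSecondaryCta();
```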


  • BEFORE: Messy jQuery.
  • TODAY: Modularized experiment code, separated css that aligns with classnames.
  • When to do it: When you wish to make the implementation process as painless as possible.
  • When not to do it: When you just don’t care.

Create FOOC-free variations

If your test variations “flicker” or “flash” as they load, you’re experiencing a Flash of Original Content (FOOC). It will affect your results if it goes untreated. Some of the best ways to prevent it are as follows:

  • Place your code snippets as high as possible on the page.
  • Improve site load time in general (regardless of your testing tool).
  • Briefly hide the body or the specific element being tested (a minimal sketch of this remedy follows the list).
  • Here are 8 more remedies to fight FOOC.
Don’t code your variations like this.
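By contrast, here is a minimal sketch of the third remedy above: hide only the element being tested, apply the variation changes, then reveal it, with a timeout as a safety net so the page never stays hidden if something goes wrong. The selector and timing are placeholders, not recommended values.

```typescript
// Anti-FOOC sketch: hide the tested element until the variation code has run.
const TESTED_SELECTOR = ".product-grid"; // hypothetical element under test
const SAFETY_TIMEOUT_MS = 1500;          // never hide content longer than this

// 1. Hide as early as possible (this script should sit high in the <head>).
const antiFoocStyle = document.createElement("style");
antiFoocStyle.id = "anti-fooc";
antiFoocStyle.textContent = `${TESTED_SELECTOR} { visibility: hidden; }`;
document.head.appendChild(antiFoocStyle);

function reveal(): void {
  document.getElementById("anti-fooc")?.remove();
}

// 2. Safety net: reveal no matter what after the timeout.
setTimeout(reveal, SAFETY_TIMEOUT_MS);

// 3. Once the variation changes are applied, reveal immediately.
function applyVariation(): void {
  document.querySelector(TESTED_SELECTOR)?.classList.add("variation-b");
  reveal();
}

document.addEventListener("DOMContentLoaded", applyVariation);
```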
  • BEFORE: FOOC-galore.
  • TODAY: FOOC-free variations abound.
  • When to do it: Always.
  • When not to do it: Never.

Don’t test buttons, test business decisions

Some people think of A/B testing as a way to improve the look of their website, while others use it to test the fundamentals of their business. Take advantage of the tools at your disposal to get to the heart of what makes your business tick.

For example, we tested reducing one client’s product range and discovered that they could save millions on manufacturing and marketing without losing revenue. What are the big lingering questions you could answer through A/B testing?

  • BEFORE: Most of us tested button colors at one point or another.
  • TODAY: Business decisions are being validated through A/B tests.
  • When to do it: When business decisions can be tested online, in a controlled manner.
  • When not to do it: When most factors cannot be controlled for online, during the length of an A/B test.

Use data science to test predictions, not ideas

It is highly likely that you are underutilizing the customer analytics that are available to you. Most of us don’t have the team in place or the time to dig through the data constantly. But this could be costing you dearly in missed opportunities.

If you have access to a data scientist, even on a project-basis, you can uncover insights that will vastly improve the quality of your A/B test hypotheses.

Source: Become a data scientist in 8 steps: the infographic – DataCamp

  • BEFORE: Throwing spaghetti at the wall.
  • TODAY: Predictive analytics can uncover data-driven test hypotheses.
  • When to do it: When you’ve got lots of well-organized analytics data.
  • When not to do it: When you prefer the spaghetti method.

Optimize for volume of tests

There was a time when “always be testing” was enough. These days, it’s about “always be testing in 100 different places at once.” This creates new challenges:

How do you test in multiple parts of the same funnel synchronously without concern for cross-pollination?

How do you organize your human resources in a way to get all the work done?

This is the art of being a conversion optimization project manager: knowing how to juggle speed vs value of insights and considering resource availability. At WiderFunnel, we do a few things that help make sure we go as fast as possible without sacrificing insights:

  • We stagger “difficult” experiments with “easy” ones so that production can be completed on “difficult” ones while “easy” ones are running.
  • We integrate with testing tool APIs to quickly generate coding templates, meaning our development team doesn’t need to do any manual work before starting to code variations.
  • We use detailed briefs to keep everyone on the same page and reduce gaps in communication.
  • We schedule experiments based on “insight flow” so that earlier experiments help inform subsequent ones.
  • We use algorithms to control for cross-pollination so that multiple tests within the same funnel can be run while still being able to segment any cross-pollinated visitors (see the sketch after this list).
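One simple way to make cross-pollinated visitors segmentable (a sketch of the general idea, not WiderFunnel’s actual algorithm) is to record every experiment a visitor has been exposed to and attach that list to your analytics as a custom dimension, so overlapping funnel tests can be filtered during analysis. The storage key and trackDimension helper below are placeholders.

```typescript
// Sketch: remember which experiments this visitor has seen so that
// overlapping funnel tests can be segmented during analysis.
const STORAGE_KEY = "experiment_exposures";

function getExposures(): string[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
}

function recordExposure(experimentId: string): void {
  const exposures = getExposures();
  if (!exposures.includes(experimentId)) {
    exposures.push(experimentId);
    localStorage.setItem(STORAGE_KEY, JSON.stringify(exposures));
  }
}

// Placeholder analytics helper -- swap in your real custom-dimension call.
function trackDimension(name: string, value: string): void {
  console.log(`dimension: ${name} = ${value}`);
}

// When a visitor enters a test, record it and send the full exposure list.
recordExposure("EXP-cart-page-07");
trackDimension("experiment_exposures", getExposures().join("|"));
```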


  • BEFORE: Running one experiment at a time.
  • TODAY: Running experiments across devices, segments, and funnels.
  • When to do it: When you’ve got the traffic, conversions and the team to make it happen.
  • When not to do it: When there aren’t enough conversions to go around for all of your tests.

Don’t get stuck in the optimization ways of the past. The industry is moving quickly, and the only way to stay ahead of your competitors (who are also testing) is to always be improving your conversion optimization program.

Bring your testing strategies into the modern era by mastering the 8 tactics outlined above. You’re an optimizer, after all―it’s only fitting that you optimize your optimization.

Do you agree with this list? Are there other aspects of modern-era CRO not listed here? Share your thoughts in the comments!

The post How to be a heavy hitter in enterprise e-commerce CRO appeared first on WiderFunnel Conversion Optimization.


How to A/B test for long-term success (don’t underestimate insights!)

Reading Time: 6 minutes

Imagine you’re a factory manager.

You’re under pressure from your new boss to produce big results this quarter. (Results were underwhelming last quarter). You have a good team with high-end equipment, and can meet her demands if you ramp up your production speed over the coming months.


You’re eager to impress her and you know if you reduce the time you spend on machine maintenance you can make up for the lacklustre results from last quarter.

Flash forward: The end of Q3 rolls around, and you’ve met your output goals! You were able to meet your production levels by continuing to run the equipment during scheduled down-time periods. You’ve achieved numbers that impress your boss…

…but in order to maintain this level of output you will have to continue to sacrifice maintenance.

In Q4, disaster strikes! One of your 3 machines breaks down, leaving you with zero output and no way to move the needle forward for your department. Your boss gets on your back for your lack of foresight, and eventually your job is given to the young hot-shot on your team and you are left searching for a new gig.

A sad turn of events, right? Many people would label this a familiar tale of poor management (and correctly so!). Yet, when it comes to conversion optimization, there are many companies making the same mistake.

Optimizers are so often under pressure to satisfy the speed side of the equation that they are sacrificing its equally important counterpart…

Insights.

Consider the following graphic.

The spectrum ranges from straightforward growth-driving A/B tests to multivariate insight-driving tests.

If you’ve got Amazon-level traffic and proper Design of Experiments (DOE), you may not have to choose between growth and insights. But in smaller organizations this can be a zero-sum equation. If you want fast wins, you sacrifice insights, and if you want insights, you may have to sacrifice a win or two.

Sustainable, optimal progress for any organization will fall somewhere in the middle. Companies often put so much emphasis on reaching certain testing velocities that they shoot themselves in the foot for long-term success.

Maximum velocity does not equal maximum impact

Sacrificing insights in the short-term may lead to higher testing output this quarter, but it will leave you at a roadblock later. (Sound familiar?) One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.

As with factory production, the key to sustainable output is to find a balance between short-term (maximum testing speed) and long-term (data collection/insights).

Growth vs. Insights

Christopher Columbus had an exploration mindset.

He set sail looking to find a better trade route to India. He had no expectation of what that was going to look like, but he was open to anything he discovered, and his sense of adventure rewarded him with what is likely the largest geographical discovery in history.

Have a Christopher Columbus mindset: test in pursuit of unforeseeable insights.

Exploration often leads to the biggest discoveries. Yet this is not what most companies are doing when it comes to conversion optimization. Why not?

Organizations tend to view testing solely as a growth-driving process: a way of settling long-standing debates between firmly held opinions. No doubt growth is an important part of testing, but you can’t overlook exploration.

This is the testing that will propel your business forward and lead to the kind of conversion rate lift you keep reading about in case studies. Those companies aren’t achieving that level of lift on their first try; it’s typically the result of a series of insight-driving experiments that help the tester land on the big insight.

At WiderFunnel we classify A/B tests into two buckets: growth-driving and insight-driving…and we consider them equally important!

Growth-driving experiments (Case study here)

During our partnership with Annie Selke, a retailer of home-ware goods, we ran a test featuring a round of insight-driving variations. We were testing different sections on the product category page for sensitivity: Were users sensitive to changes to the left-hand filter? How might users respond to new ‘Sort By’ functionality?

Round I of testing for Annie Selke: Note the left-hand filter and ‘Sort By’ functionality.

Neither of our variations led to a conversion rate lift. In fact, both lost to the Control page. But the results of this first round of testing revealed key, actionable insights ― namely that the changes we had made to the left-hand filter might actually be worth significant lift, had they not been negatively impacted by other changes.

We took these insights and, combined with supplementary heatmap data, we designed a follow-up experiment. We knew exactly what to test and we knew what the projected lift would be. And we were right. In the end, we turned insights into results, getting a 23.6% lift in conversion rate for Annie Selke.

In Round II of testing, we reverted to the original ‘Sort By’ functionality.

For more on the testing we did with Annie Selke, you should read this post >> “A-ha! Isolations turn a losing experiment into a winner”

This follow-up test is what we call a growth-driving experiment. We were armed with compelling evidence and we had a strong hypothesis which proved to be true.

But as any optimizer knows, it can be tough to gather compelling evidence to inform every hypothesis. And this is where a tester must be brave and turn their attention to exploration. Be like Christopher.

Insight-driving experiments

The initial round of testing we did for Annie Selke, where we were looking for sensitivities, is a perfect example of an insight-driving experiment. In insight-driving experiments, the primary purpose of your test is to answer a question, and lifting conversion rates is a secondary goal.

This doesn’t mean that the two cannot go hand-in-hand. They can. But when you’re conducting insight-driving experiments, you should be asking “Did we learn what we wanted to?” before asking “What was the lift?”. This is your factory down-time, the time during which you restock the cupboard with ideas, and put those ideas into your testing piggy-bank.

We’ve seen entire organizations get totally caught up on the question “How is this test going to move the needle?”

But here’s the kicker: Often the right answer is “It’s not.”

At least not right away. This type of testing has a different purpose. With insight-driving experiments, you’re setting out on a quest for your unicorn insight.

What’s your unicorn insight?

These are the ideas that aren’t applicable to any other business. You can’t borrow them from industry-leading websites, and they’re not ideas a competitor can steal.

Your unicorn insight is unique to your business. It could be finding that magic word that helps users convert all over your site, or discovering that key value proposition that keeps customers coming back. Every business has a unicorn insight, but you are not going to find it by testing in your regular wheelhouse. It’s important to think differently, and approach problem solving in new ways.

We sometimes run a test for our clients where we take the homepage and remove every section of the page, one at a time, in separate variations. Are we expecting this test to deliver a big lift? Nope, but we are expecting this test to teach us something.

We know that this is the fastest possible way to answer the question “What do users care about most on this page?” After this type of experiment, we suddenly have a lot of answers to our questions.

That’s right: no lift, but we have insights and clear next steps. We can then rank the importance of every element on the page and start to leverage the things that seem to be important to users on the homepage on other areas of a site. Does this sound like a losing test to you?

Rather than guessing at what we think users are going to respond to best, we run an insight-driving test and let the users give us the insights that can then be applied all over a site.
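Here is a minimal sketch of what that kind of “existence test” can look like in code; the section selectors are hypothetical. Each variation simply hides one section, and the difference in visitor behaviour tells you how much that section matters.

```typescript
// Sketch of an "existence test": each variation removes one homepage section.
const sections = [
  ".hero-carousel",
  ".featured-products",
  ".testimonials",
  ".newsletter-signup",
]; // hypothetical homepage sections

// Your testing tool assigns the visitor a variation index; 0 = control.
function applyExistenceVariation(variationIndex: number): void {
  if (variationIndex === 0) return; // control: leave the page untouched

  const selector = sections[variationIndex - 1];
  const section = document.querySelector<HTMLElement>(selector);
  if (section) {
    section.style.display = "none"; // remove just this one section
  }
}

applyExistenceVariation(2); // e.g. hide ".featured-products" for this visitor
```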

The key is to manage your expectations during a test like this. This variation won’t be your homepage for eternity. Rather, it should be considered a temporary experiment to generate learning for your business. By definition it is an experiment.

Optimization is an infinite process, and what your page looks like today is not what it will look like in a few months.

Proper Design of Experiments (DOE)

It’s important to note that these experimental categories do have grey lines. With proper DOE and high enough traffic levels, both growth-driving and insight-driving strategies can be executed simultaneously. This is what we call “Factorial Design”.

Factorial design allows you to test with both growth and insights in mind.

Factorial design allows you to test more than one element change within the same experiment, without forcing you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with multivariate testing), you can design a test to focus on specific isolations that you hypothesize will have the biggest impact or drive insights.

How to get started with Factorial Design

Start by making a cluster of changes in one variation (producing a variation that is significantly different from the control), and then isolate those changes within subsequent variations (to identify the elements that are having the greatest impact). This hybrid test, combining “variable cluster” and “isolation” variations, gives you the best of both worlds: radical change options and the ability to gain insights from the results.
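Here is a minimal sketch of how such a hybrid test plan might be laid out; the elements and variation names are invented for illustration.

```typescript
// Sketch of a "variable cluster + isolations" test plan.
interface Variation {
  name: string;
  changes: string[]; // the isolated element changes included in this variation
}

const changedElements = ["new hero image", "shorter form", "benefit-led headline"];

const testPlan: Variation[] = [
  { name: "Control", changes: [] },
  // The cluster: all changes together, for a radically different experience.
  { name: "Var A (cluster)", changes: [...changedElements] },
  // Isolations: one change each, to see which element drives the result.
  ...changedElements.map((change, i) => ({
    name: `Var ${String.fromCharCode(66 + i)} (isolation)`,
    changes: [change],
  })),
];

// 5 variations total instead of the 8 a full multivariate grid would need.
console.log(testPlan.map((v) => `${v.name}: ${v.changes.join(", ") || "original"}`));
```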

For more on proper Design of Experiments, you should read this post >> “Design your A/B tests to get consistently better results”

We see Optimization Managers make the same mistakes over and over again, discounting the future for results today. If you overlook testing “down-time” (those insight-driving experiments), you’ll prevent your testing program from reaching its full potential.

You wouldn’t run a factory without down-time, you don’t collect a paycheck without saving for the future, so why would you run a testing program without investing in insight exploration?

Rather, find the balance between speed and insights with proper factorial design that promises growth now as well as in the future.

How do you ensure your optimization program is testing for both growth and insights? Let us know in the comments!

The post How to A/B test for long-term success (don’t underestimate insights!) appeared first on WiderFunnel Conversion Optimization.


The high cost of conversion-before-education thinking

Are you pushing your visitors toward an action before they’re ready?

Do your visitors have all the information necessary to buy with confidence?

Or, are you setting them up to become satisfied customers?

A couple of weeks ago, I wrote a post called The problem with your high conversion rate. In it, I focused on the dangers of optimizing for the click rather than for your customer, and the internal steps you can take to avoid doing just that.

But there’s another side to ensuring that your optimization efforts are capturing and retaining the right visitors. And it’s all about education.

In this post, I’ll talk about the cost of converting an uneducated visitor and how you can find the balance between education and conversion.

Imagine this scenario…

Joe is a Marketing Manager. He keeps hearing about conversion rate optimization (CRO) – it’s the hottest buzzword in digital marketing since SEO. He’s heard rumblings in his industry and he suspects that his competitors are already optimizing their sites, whatever that means.

He’s thinking to himself, “I need to get on this train.” He starts googling “conversion rate optimization” and he stumbles on an ad for a conversion optimization agency. He clicks on a link and, all of a sudden, he’s dropped onto a landing page promising ambiguous “lift” for his business and demanding that he “Contact us.”

He’s foggy on the details, but he’s looking for information so he clicks. Next thing Joe knows, he’s being pitched hard on CRO services, but he’s still not even sure what conversion rate optimization is.

He’s quickly overwhelmed and frustrated, so he tables the whole project. Gah!

Nobody wants to have this experience: it’s a lose-lose. Joe misses out on retaining a service that could have had real positive impact on his business and the agency misses out on a new client.

So, what went wrong?

Namely, the agency was rushing to convert Joe, making no effort to educate him before asking him to take an action.

The blind purchase vs. the educated decision

Optimizers often want to make it as easy as possible for visitors to convert. But, a change that increases your conversion rate while sacrificing an informed customer might be bad news for your business.

The general rule: by the time your visitor arrives at ‘Purchase’ or ‘Contact Us’, they should already know what they’re getting themselves into.

For B2B businesses especially, education is vital. If you’re selling complex software or an automation tool or a testing tool or an in-depth service, chances are your visitors require a certain amount of education before they can use whatever you’re selling properly.

Warning: If you push a prospect to convert too early on, you could cost yourself a long-term customer.

Let’s think about this in real terms. Take a look at the homepage of Mobify, a mobile customer engagement platform. You’ll notice right away that, of the 4 main calls-to-action on this page, 3 of them read “Learn More”.

Mobify pushes visitors to “Learn More”.

That first “Learn More” leads to an entire web of educational content pages; these pages encourage the visitor to explore Mobify’s offerings in incredible depth. Clearly, the goal is to educate visitors as much as possible. The “Contact Us” call-to-action in the upper right hand corner lingers on each page, allowing the visitor to reach out when they’re confident that, yes, Mobify is the solution for them.

Now, I’m not saying that your site should emphasize education in this way. For Mobify, the informational portion of the funnel is key, so they spend a lot of time informing (more on stages of the funnel in a minute).

What I am saying is that educational ‘barriers’ to conversion (like page after page of educational content) can actually bolster lead quality and customer retention. How much you educate depends on your unique funnel.

How to present the right information at the right time

Be aware of each stage of the funnel.

“My unique funnel doesn’t require that much education,” you might be thinking.

Fair enough, but your visitors need some information before they can make an informed purchase. The question is: what information do you highlight and where?

When you broaden your gaze and look past the conversion itself, you can start to ask yourself the questions that will help you hit that sweet spot of Relevance and Clarity for your visitors, enabling you to present the right information to them at the right time.

What does your ideal customer need to know? Do they know it? Why do your customers cancel? What are their misconceptions? What are the questions your new customers have?

Once you’ve identified what your visitors need to know in order to become happy, satisfied customers, make that information available. Strategically, of course.


It’s a balancing act. The customer needs to know a certain amount of information before they buy, but you need to be strategic about when and how you present that information.

Aswin Kumar, Optimization Coordinator, WiderFunnel

One of the challenges of educating before converting is timing. Understand the stages within the funnel and present information accordingly:

  1. Persuasional (the top of the funnel): Your prospects need to know that they are on the right website, that you have the products or services they’re looking for, and that they should spend time exploring to find out more.
  2. Informational (the middle of the funnel): Answer your prospects’ questions, soothe their objections, and move them to take action.
  3. Transactional (the bottom of the funnel): Where conversions happen – the shopping cart, lead gen forms, whitepaper download forms, webinar signup forms, payment processors, etc.

We recently ran a test for a WiderFunnel client that has a four-page funnel – this client provides an online consumer information service. Visitors enter on a landing page featuring an urgent value proposition and a compelling call-to-action meant to pull them into the funnel.

Our strategists did extensive user research during the Explore phase for this client and found that certain elements of the value proposition were being under-utilized. They also found that the call-to-action, while compelling, was setting incorrect user expectations and causing friction further down the funnel.

We created several variations to test against the original landing page. In Variation A, we replaced the original value proposition with new copy that was more relevant to what the user was searching for. This value proposition was less urgent than the original, but more accurate.

With this variation, we saw a decrease in the number of visitors that actually entered the funnel. But! We saw an incredible uptick in conversions (13%) at the end of the funnel. By setting user expectations correctly from the outset, we were able to weed out visitors who were never going to become customers. We also saw an 11% decrease in refunds and chargebacks!

The results confirmed that presenting the right information at the right time can both filter out unqualified visitors and increase final conversions, while simultaneously leading to happier, more satisfied customers.

As you continue to optimize your site and get to know what your visitors are more sensitive to and how they behave at each stage in their journey, it will become easier to present the right information at the right time.

In the end, an ill-educated, misguided or unqualified prospect is a waste of your time, regardless of positive conversion rate percentages. Don’t simply guess at what information to present to your users and when – test it, instead.

How does education fit into your optimization strategy? Have you tested providing your visitors with more or less information? What was the outcome? Let us know in the comments!

The post The high cost of conversion-before-education thinking appeared first on WiderFunnel Conversion Optimization.
