
Your frequently asked conversion optimization questions, answered!

Reading Time: 28 minutes

Got a question about conversion optimization?

Chances are, you’re not alone!

This summer, WiderFunnel participated in several virtual events. And each one, from full-day summit to hour-long webinar, ended with a TON of great questions from all of you.

So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.

As you’ll notice, many experts and thought leaders weighed in on your questions.

Now, without further introduction…

Your conversion optimization questions

Optimization Strategy

  1. What do you see as the most common mistake people make that has a negative effect on website conversion?
  2. What are the most important questions to ask in the Explore phase?
  3. Is there such a thing as too much testing and / or optimizing?

Personalization

  1. Do you get better results with personalization or A/B testing or any other methods you have in mind?
  2. Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
  3. With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
  4. How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

SEO versus CRO

  1. How do you avoid harming organic SEO when doing conversion optimization?

Getting Buy-in for Experimentation

  1. When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?
  2. Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

CRO for Low Traffic Sites

  1. Do you have any suggestions for success with lower traffic websites?
  2. What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
  3. How far can I go with funnel optimization and testing when it comes to a small local business?

Tips from an In-House Optimization Champion

  1. How do you get buy-in from major stakeholders, like your CEO, to go with a conversion optimization strategy?
  2. What has surprised you or stood out to you while doing CRO?

Optimization Across Industries

  1. Do you have any tips for optimizing a website for conversion when the purchase cycle is longer, like 1.5 months?
  2. When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
  3. You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
  4. Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
  5. For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
  6. Do you find that testing strategies differ cross-culturally?

Experiment Design & Setup

  1. How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
  2. I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?
  3. When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
  4. Is there a way to conclusively tell why a test lost or was inconclusive?
  5. How many visits do you need to get to statistically relevant data from any individual test?
  6. We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation for this being a new program. Our struggle seems to be your orange dot… putting the two sides together. Any advice?
  7. When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’, stop that test, and move on?

Testing and technology

  1. There are tools meant to increase testing velocity with pre-built widgets and even pre-built test variations. What are your thoughts on this approach?

Your questions, answered

Q: What do you see as the most common mistake people make that has a negative effect on website conversion?

Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.

I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.

And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.

It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.


Back to Top

Q: What are the most important questions to ask in the Explore phase?

Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason.) In it, we are looking for insights about what questions to ask and validate. We are trying to identify…

  • What are the barriers to conversion?
  • What are the motivational triggers for your audience?
  • Why are people buying from you?

And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.

Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.

Further reading:

  • Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process, here.

Back to Top

Q: Is there such a thing as too much testing and / or optimizing?

Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website… they can’t improve forever. The question many marketers have is: how do I know how long to do this? Are there going to be diminishing returns? If I keep putting in the same effort, will I get smaller and smaller results?

But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.

So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.

Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.

Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.

Back to Top

Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?

Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.

I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.

At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.

Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.

Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.

When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.

We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step-by-step process that makes sure we are only creating overhead where it’s necessary and will create value.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?

Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.

But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. It’s practically impossible for a marketing department to create infinite marketing messages.

Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.

What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.

Back to Top

Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

A PANEL RESPONSE

André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.

Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.

John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common. Because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques plays out.

Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.

One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.

The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.

David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most complex things, I think it’s most valuable when you have a high ticket price or very high value, where that touch of personalization has a big impact.

With Hotjar, we’re much more volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. Having as many common threads as possible is important to us, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you avoid harming organic SEO when doing conversion optimization?

Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they fully support optimization. They do not penalize companies for running A/B tests, as long as the tests are set up properly and the company is using a proper tool.

On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.

If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.

Google looks at pogo-sticking, where people land on the SERP, click a result, and then return to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.

Video Resource: Watch SEO Wizard Rand Fishkin’s talk from CTA Conf 2017, “Why We Can’t Do SEO without CRO”

Back to Top

Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?

Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under-the-radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.

This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.

A sample PIE prioritization analysis.

The three factors are Potential, Importance, and Ease, and one of the important aspects within Ease is political ease. So you want to look for areas that have political ease, meaning there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, create momentum, and then you can start sharing results throughout the organization to build that buy-in.
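To make the prioritization concrete, here is a minimal sketch of PIE scoring in Python. The framework itself doesn’t prescribe an implementation; the pages, scores, and the simple average of the three factors are illustrative assumptions.

```python
# A minimal PIE scoring sketch. Each candidate test area is scored 1-10 on
# Potential (room for improvement), Importance (traffic and value), and
# Ease (technical and political ease); the PIE score is the average.
# All page names and scores below are hypothetical.

def pie_score(potential: float, importance: float, ease: float) -> float:
    """Average the three PIE factors into a single priority score."""
    return (potential + importance + ease) / 3

candidates = {
    "checkout":     (8, 9, 4),  # big opportunity, high value, politically hard
    "product page": (7, 8, 7),
    "homepage":     (6, 9, 3),  # highly visible page: low political ease
    "pricing page": (7, 6, 9),
}

ranked = sorted(candidates.items(), key=lambda kv: pie_score(*kv[1]), reverse=True)
for page, (p, i, e) in ranked:
    print(f"{page:13s} P={p} I={i} E={e} -> PIE {pie_score(p, i, e):.1f}")
```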

Further Reading: Marketers from ASICS’ global e-commerce team weigh in on evangelizing optimization at a global organization in this post, “A day in the life of an optimization champion”

Back to Top

Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

Nick So: Besides the obvious senior leadership and key decision-makers you mention, we find that getting buy-in from related departments and roles, like branding, marketing, design, copywriters, and content managers, can be very helpful.

Having these teams on board can not only help with the overall approval process, but also helps ensure winning tests and strategies are aligned with your overall business and marketing strategy.

You should also consider involving more tangentially related teams, like customer support. Not only does this make them a part of the process and testing culture, but customer-facing teams can also be a great source of business insights and test ideas!

Back to Top

Q: Do you have any suggestions for success with lower traffic websites?

Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.

We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where we could validate those insights through multiple isolated tests.)

Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.

If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or lead to the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However, with this strategy, stakeholders have to be aware that increases in this secondary goal may not be tied directly to increases in the primary goal at the same rate.

Back to Top

Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?

Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.

But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand the opportunities and the barriers your visitors might be facing, and drilling into their perspective. Then you’ll have a more powerful test setup.

You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.

In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?

Back to Top

 

Q: How far can I go with funnel optimization and testing when it comes to a small local business?

A PANEL RESPONSE

David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.

John Ekman:

If you can’t test to significance, you should turn to qualitative research.

That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.

André Morys: Testing with too small a sample size is really dangerous because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming 70% uplift, when the numbers are 64 versus 27 conversions. And this is really dangerous because that result is bull sh*t.
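To illustrate André’s warning, here is a rough sketch of the uncertainty hiding behind small conversion counts. The conversion counts echo his example; the per-arm visitor counts are hypothetical, chosen so the observed lift comes out near the claimed 70%.

```python
# Why small conversion counts are dangerous: the observed uplift can look
# impressive while the plausible range around it stays enormous.
from math import sqrt

visitors_a, conv_a = 2000, 27   # control (hypothetical traffic split)
visitors_b, conv_b = 2800, 64   # variation

p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
diff = p_b - p_a

# 95% normal-approximation interval on the absolute difference in rates
se = sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"observed lift: {diff / p_a:.0%}")                    # ~69%
# Rough relative-lift range (treating the control rate as fixed):
print(f"plausible lift: {lo / p_a:.0%} to {hi / p_a:.0%}")   # ~14% to ~125%
```

With so few conversions, the honest claim is not “70% uplift” but “somewhere between a modest and a spectacular uplift,” which is a very different statement to take to stakeholders.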

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?

Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.

For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.

It really sells itself, though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as that overall strategy of making our display media spend, and all of our media spend, more targeted.

Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.

Back to Top

Q: What has surprised you or stood out to you while doing CRO?

Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. Things that we are all convinced we should change on our website, or in our messaging in general: we’ll test them and actually find out.

We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.

One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.

If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.

It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.

Case Study: Read the details of how Jamie’s company, weBoost, saw a 100% lift in year-over-year conversion rate with an optimization program.

Back to Top

Q: Do you have any tips for optimizing a website for conversion when the purchase cycle is longer, like 1.5 months?

Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to

  1. Track your leads and opportunities to the variation,
  2. Then, track them through to the sale,
  3. And then look at whether average order value changes between the variations, which indicates the quality of the leads.

It’s easy to measure lead volume between variations. But if lead quality changes between them, that makes a big impact.
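As a sketch of what that tracking can look like in practice, the snippet below tags each lead with the variation that produced it and rolls up lead volume, close rate, and average order value per variation. All records below are hypothetical.

```python
# Compare not just how many leads each variation produced, but how good
# those leads were: close rate and average order value (AOV) per variation.
from collections import defaultdict

leads = [
    {"variation": "A", "closed": True,  "order_value": 1200.0},
    {"variation": "A", "closed": False, "order_value": 0.0},
    {"variation": "B", "closed": True,  "order_value": 450.0},
    {"variation": "B", "closed": True,  "order_value": 500.0},
    # ... in a real program, every lead is tagged at capture time
]

stats = defaultdict(lambda: {"leads": 0, "sales": 0, "revenue": 0.0})
for lead in leads:
    s = stats[lead["variation"]]
    s["leads"] += 1
    if lead["closed"]:
        s["sales"] += 1
        s["revenue"] += lead["order_value"]

for var, s in sorted(stats.items()):
    aov = s["revenue"] / s["sales"] if s["sales"] else 0.0
    print(f"{var}: {s['leads']} leads, "
          f"close rate {s['sales'] / s['leads']:.0%}, AOV ${aov:,.0f}")
```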

We actually have a case study about this with Magento. We asked the question, “Which of these calls-to-action is actually generating the most valuable leads?”, and ran an experiment to find out. We tracked the leads all the way through to sale. This helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?

Case Study: Discover how Magento increased lead volume and lead quality in the full case study.

Back to Top

Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?

Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.

However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.

If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.
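That calculation can be as simple as weighting each goal’s change by its value. A minimal sketch, using the percentages above with hypothetical goal values and baseline volumes:

```python
# Net-value check for the tradeoff described above: demo requests up 50%,
# ebook downloads down 75%. Goal values and monthly baselines are assumed;
# substitute your own numbers.

demo_value, ebook_value = 500.0, 20.0   # value per conversion (assumed)
demo_base, ebook_base = 40, 400         # monthly conversions (assumed)

demo_gain = demo_base * 0.50 * demo_value         # +20 demos   -> +$10,000
ebook_loss = -(ebook_base * 0.75) * ebook_value   # -300 ebooks -> -$6,000

net = demo_gain + ebook_loss
print(f"net monthly impact: ${net:+,.0f}")  # positive: the tradeoff pays off
```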

Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.

Back to Top

Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?

Nick So: When we help a client determine conversion metrics…

…we always suggest following the money.

Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.

For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?

Back to Top

Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?

Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.

We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.

What we’ve found is that people are people. Whether they’re shopping for a server, shopping for socks, or donating to third-world countries, they go through the same mental process.

The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.

Most marketers spend most of their time thinking about their near-field competitors rather than looking at different industries, because it’s overwhelming to consider all of the other opportunities. But we are often able to look at an experience in a completely different way, because we can view it through the lens of a different industry. That is very powerful.

Back to Top

Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?

Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.

In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).

When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact plus the internal company benefits, and negotiate prioritization and scheduling between themselves.

I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.

If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).

Back to Top

 

Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?

Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, the UK, Canada, Japan, Korea, Spain, and Italy, and for the Olympics store, which is itself a global e-commerce experience in one site!

There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.

Back to Top

Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?

Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.

And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. The PIE Framework allows you to prioritize ideas and test areas based on potential, importance, and ease: the potential for improvement, the importance to the business, and the ease of implementation. Sometimes these scores are a little subjective, but the more data you have to back them up, the better your focus and effort will be in delivering results.


Back to Top

Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?

Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.

There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. And if the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance, will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.

But you should be looking at one metric to tell you whether or not a test won.

Further Reading: Learn more about proper design of experiments in this blog post.

Back to Top

 

Q: When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.

Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends, to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.

So, run your tests a little bit longer and aim for a high level of statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. Sometimes you’ll want to look at monthly seasonality as well, and retest questionable results within high- and low-urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.

Further Reading: You can’t make business decisions based on misleading A/B test results. Learn how to avoid the top 3 mistakes that make your A/B test results invalid in this post.

Back to Top

Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?

Chris Goward: Developing powerful hypotheses depends on having workable theories. Seeking to determine the “why” behind the results is one of the most interesting parts of the work.

The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.

As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!

Back to Top

Q: How many visits do you need to get to statistically relevant data from any individual test?

Chris Goward: The number of visits is just one of the variables that determine statistical significance. The conversion rate of the Control and the conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e., sample size), enough conversions, and a large enough conversion rate delta.

Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!)
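For intuition, here is a rough sketch of the standard two-proportion sample-size estimate that such calculators are built on, at 95% confidence and 80% power. The baseline rate and minimum detectable effects below are hypothetical.

```python
# Rough visitors-per-variation estimate for an A/B test, using the standard
# two-proportion sample-size formula (z = 1.96 for 95% confidence,
# z = 0.84 for 80% power).
from math import ceil

def sample_size(p_base: float, rel_mde: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Visitors needed per variation to detect a relative lift of rel_mde."""
    p_var = p_base * (1 + rel_mde)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2)

print(sample_size(p_base=0.03, rel_mde=0.15))  # bold change:     ~24,000
print(sample_size(p_base=0.03, rel_mde=0.05))  # small isolation: ~208,000
```

Note how the required traffic explodes as the detectable lift shrinks, which is exactly why the smaller isolation tests described next need to run longer.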

When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule of thumb, you should plan for smaller isolation tests to run longer, as the impact on conversion rate may be smaller and the test may require more conversions to achieve confidence.

Larger, more drastic cluster changes would typically run for a shorter period of time, as they tend to have a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t keep you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.

Often, people who are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and no variation is statistically significant over another, then your test may never yield a clear winner, and you should revise your test plan or reduce the number of variations being tested.

Further Reading: Do you have to wait for each test to reach statistical significance? Learn more in this blog post: “The more tests, the better!” and other A/B testing myths, debunked

Back to Top

Q: We are new to optimization (we’ve had a few quick wins with A/B testing and are working toward a geo-targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation for this being a new program. Our struggle seems to be your orange dot… putting the two sides together. Any advice?

Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.

Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.

If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.

Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’, stop that test, and move on?

Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:

  • You have other tests ready to run with the same traffic sources
  • The test results are showing high volatility or have stabilized
  • The test insights will be important for the organization

There’s an opportunity cost to every test. You could always be testing something else, so you need to constantly ask whether this is the best test to run now, versus the cost and potential benefit of the next test in your conversion strategy.

Back to Top

 

Q: There are tools meant to increase testing velocity with pre-built widgets and even pre-built test variations. What are your thoughts on this approach?

A PANEL RESPONSE

John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look in the minds of whoever is in front of the screen, I think these methods are quite dangerous.

Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff between understanding the why and just getting growth. If you want to understand the why completely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few companies have enough traffic to run multivariate tests on everything.

But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.

There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works… if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things, ask yourself, “Well, I wonder why that won?”, and start from there.

The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.

David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap, that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.

Creating a unique customer experience is going to become more and more important.

If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.

Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere, and does everything differently. That isn’t fully ‘optimized’ and is more human.

There is a danger in optimization that is too optimized. If we eliminate the human aspect, we’re kind of screwed.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this spring. You can still access all of the session recordings for free, here.

Back to Top

What conversion optimization questions do you have?

Add your questions in the comments section below!


Structured Approach To Testing Increased This Insurance Provider’s Conversions By 30%

CORGI HomePlan provides boiler and home cover insurance in Great Britain. It offers various insurance policies and an annual boiler service. Its main value proposition is that it promises “peace of mind” to customers. It guarantees that if anything goes wrong, it’ll be fixed quickly and won’t cost anything extra over the monthly payments.

Problem

CORGI’s core selling points were not being communicated clearly throughout the website. Insurance is a hyper-competitive industry, and most customers compare providers before making a decision. After analyzing its data, CORGI saw that there was an opportunity to improve conversions and reduce drop-offs at major points throughout the user journey. To help solve that problem, CORGI hired Worship Digital, a conversion optimization agency.

Observations

Lee Preston, a conversion optimization consultant at Worship Digital, analyzed CORGI’s existing Google Analytics data, conducted user testing and heuristic analysis, and used VWO to run surveys and scrollmaps. After conducting qualitative and quantitative analysis, Lee found that:

  • Users were skeptical of CORGI’s competitors, believing they were not transparent enough. Part of CORGI’s value proposition is that it doesn’t have any hidden fees, so conveying this to users could help convince them to buy.
  • The scrollmap results showed that only around a third of mobile users scrolled far enough to see the value proposition at the bottom of the product pages.
  • They surveyed users, asking, “Did you look elsewhere before visiting this site? (If so, where?)” More than 70% of respondents had looked elsewhere.
  • They ran another survey and asked users what they care about most; 18% of users said “fast service” while another 12% said “reliability”.

This is how CORGI’s home page originally looked.

Hypothesis

After compiling all these observations, Lee and his team distilled them into one hypothesis:

CORGI’s core features were not being communicated properly. Displaying these more clearly on the home page, throughout the comparison journey, and the checkout could encourage more users to sign up rather than opting for a competitor.

Lee adds, “Throughout our user research with CORGI, we found that visitors weren’t fully exposed to the key selling points of the service. This information was available on different pages on the site, but was not present on the pages comprising the main conversion journey.”

Test

Worship Digital first decided to put this hypothesis to the test on the home page.

“We hypothesized that adding a USP bar below the header would mean 100% of visitors would be exposed to these anxiety-reducing features, therefore, improving motivation and increasing the user conversion rate,” Lee said.

This is how the variation looked, with the USP bar added below the header.

Results

The variation performed better than the control across all devices and the majority of user types, increasing conversions by 30.9%.

“We were very happy that this A/B test validated our research-driven hypothesis. We loved how we didn’t have to buy some other tool for running heatmaps and scrollmaps for our visitor behavior experiment,” Lee added.

Next Steps

Conversion optimization is a continuous process at CORGI. Lee has been constantly running new experiments and building a deeper understanding of the insurance provider’s visitors. For the next phase of testing, he plans to:

  • Improve the usability of the product comparison feature.
  • Identify and fix leaks during the checkout process.
  • Make complex product pages easier to digest.



Data-Driven Optimization: How The Moneyball Method Can Deliver Increased Revenues

Whether your current ROI is something to brag about or something to worry about, the secret to making it shine lies in a 2011 award-winning movie starring Brad Pitt.

Do you remember the plot?

The manager of the downtrodden Oakland A’s meets a baseball-loving Yale economics graduate who maintains certain theories about how to assemble a winning team.

His unorthodox methods run contrary to scouting recommendations and are generated by computer analysis models.

Despite the ridicule from scoffers and naysayers, the geek proves his point. His data-driven successes may even have been the secret sauce, fueling Boston’s World Series title in 2004 (true story, and the movie is Moneyball).


What’s my point?

Just a few years ago, being data-driven seemed like a geeks-only game, or a far reach for many. Today, it’s time to get on the data-driven bandwagon… or get crushed by it.

Let’s briefly look at the situation and the cure.

Being Data-Driven: The Situation

Brand awareness, test-drive, churn, customer satisfaction, and take rate—these are essential nonfinancial metrics, says Mark Jeffery, adjunct professor at the Kellogg School of Management.

Throw in a few more—payback, internal rate of return, transaction conversion rate, and bounce rate—and you’re well on your way to mastering Jeffery’s 15 metric essentials.

Why should you care?

Because Mark echoes the assessment of his peers from other top schools of management:

Organizations that embrace marketing metrics and create a data-driven marketing culture have a competitive advantage that results in significantly better financial performance than that of their competitors. – Mark Jeffery.

You don’t believe in taking marketing and business growth advice from a guy who earned a Ph.D. in theoretical physics? Search “data-driven stats” for a look at the research. Data-centric methods are leading the pack.

Being Data-Driven: The Problem

If learning to leverage data can help the Red Sox win the World Series, why are most companies still struggling to get on board, more than a decade later?

There’s one little glitch in the movement. We’ve quickly moved from “available data” to “abundant data” to “BIG data.”

CMOs are swamped with information and are struggling to make sense of it all. It’s a matter of getting lost in the immensity of the forest and forgetting about the trees.

We want the fruits of a data-driven culture. We just aren’t sure where or how to pick them.

Data-Driven Marketing: The Cure

I’ve discovered that the answer to big data overload is hidden right in the problem, right there at the source.

Data is produced by scientific means. That’s why academics like Mark are the best interpreters of that data. They’re schooled in the scientific method.

That means I must either hire a data scientist or learn to approach the analytical part of business with the demeanor of a math major.

Turns out that it’s not that difficult to get started. This brings us to the most important aspect: the scientific approach to growth.

Scientific Method of Growth

You’re probably already familiar with the components of the scientific method. Here’s one way of describing it:

  1. Identify and observe a problem, then state it as a question.
  2. Research the topic and then develop a hypothesis that would answer the question.
  3. Create and run an experiment to test the hypothesis.
  4. Go over the findings to establish conclusions.
  5. Continue asking and continue testing.

Scientific Method of Growth and Optimization

By focusing on one part of the puzzle at a time, neither the task nor the data will seem overwhelming. And because you design the experiment, you can control it.

Here’s an example of how to apply the scientific method to data-driven growth/optimization, as online enterprises would know it.

  1. Question: Say you have a product on your e-commerce site that’s not selling as well as you want. The category manager advises lowering the price. Is that a good idea?
  2. Hypothesis: Research tells you that similar products are selling at an average price that is about the same as yours. You hypothesize that lowering your price will increase sales.
  3. Test: You devise an A/B test that will offer the item at a lower price to half of your e-commerce visitors and at the same price to the other half. You run the test for one week.
  4. Conclusions: Results show that lowering the price did not significantly increase sales.
  5. Action: You create another hypothesis to explain the disappointing sales and test this hypothesis for accuracy.
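
To make step 4 concrete, here is a minimal sketch (with made-up numbers) of the comparison you would run when the test week ends:

```javascript
// Made-up results from the one-week price test described above.
const control = { visitors: 5000, purchases: 100 }; // original price
const variant = { visitors: 5000, purchases: 112 }; // lowered price

const rateA = control.purchases / control.visitors; // 2.0% conversion
const rateB = variant.purchases / variant.visitors; // 2.24% conversion

// A 12% relative lift sounds promising, but with samples this size it
// can easily be noise, which is why you check statistical significance
// before acting on the result.
console.log(((rateB / rateA - 1) * 100).toFixed(1) + "% relative lift");
```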


You may think that the above example is an oversimplification, but we’ve seen our clients at The Good make impressive gains from data-driven decisions based on even less complicated experiments.

And the scientific methodology applies to companies both large and small, too. We’ve used the same approach with everyone from Xerox to Adobe.

Big data certainly is big, but it doesn’t have to be scary. Step-by-step analysis of fundamental questions, followed by a data-driven optimization plan, is enough to get you large gains.

The scientific approach to growth is best implemented with a platform that is connected and comprehensive. A platform that shows business performance against its goals, from one stage of the funnel to another, can save a lot of time, effort, and money.

Conclusion

Businesses need to be data-driven in order to optimize for growth and achieve business success, and the scientific method helps you put data to its best use. Take A/B testing, for example: smart A/B testing is about more than testing random ideas; it is about following a scientific, data-driven approach. Follow the Moneyball method of data-driven testing and optimization, and you’ll be on your way to the World Series of increased revenues in no time.

Do you agree that a data-driven approach is a must for making your ROI shine? Share your thoughts and feedback in the comments section below.


The post Data-Driven Optimization: How The Moneyball Method Can Deliver Increased Revenues appeared first on VWO Blog.


[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test

Note: This marketing infographic is part of KlientBoost’s 25-part series. You can subscribe here to access the entire series of gifographics.


If you’ve ever tested your website, you’ve probably been in the unfortunate situation of running out of ideas on what to test.

But don’t worry – it happens to everybody.

That is, of course, unless you have a website testing plan.

That’s why KlientBoost has teamed up with VWO to bring to you a gifographic that provides a simple guide on knowing the what, how, and why when it comes to testing your website.

[Gifographic: a simple guide to website testing]

Setting Your Testing Goals

Like a New Year’s resolution to get fitter, if you don’t have any goals tied to your website testing plan, you may be doing plenty of work with little to show for it.

With your goals in place, you can focus on the website tests that will help you achieve those goals the fastest.

Testing a button color on your home page when you should be testing your checkout process is a sure sign that you’re headed for testing fatigue, or the disappointment of never wanting to run a test again.

But let’s take it one step further.

While it’s easy to improve click-through rates, or CTRs, and conversion rates, the true measure of a great website testing plan comes from its ability to increase revenue.

No optimization efforts matter if they don’t connect to increased revenue in some shape or form.

Whether you improve the site user experience, your website’s onboarding process, or get more conversions from your upsell thank you page, all those improvements compound into incremental revenue gains.

Lesson to be learned?

Don’t pop the cork on the champagne until you know that an improvement in CTRs or conversion rates also leads to increased revenue.

Start closest to the money when it comes to your A/B tests.

Knowing What to Test

When you know your goals, the next step is to figure out what to test.

You have two options here:

  1. Look at quantitative data, such as Google Analytics reports, that shows where your conversion bottlenecks may be.
  2. Or gather qualitative data through visitor behavior analysis, where your visitors can tell you the reasons why they’re not converting.

Both types of data should fall under your conversion research umbrella. In addition to this gifographic, we created another one, all around the topic of CRO research.

When you’ve done your research, you may find certain aspects of a page that you’d like to test. For inspiration, VWO has created The Complete Guide To A/B Testing – and in it, you’ll find some ideas to test once you’ve identified which page to test:

  • Headlines
  • Subheads
  • Paragraph Text
  • Testimonials
  • Call-to-Action text
  • Call-to-Action button
  • Links
  • Images
  • Content near the fold
  • Social proof
  • Media mentions
  • Awards and badges

As you can see, there are tons of opportunities and endless ideas; the real decision is what to test and in what order.

[Image: testing ideas mapped across a page. A quick visual for what’s possible.]

So now that you know your testing goals and what to test, the last step is forming a hypothesis.

With your hypothesis, you can weigh what you think will deliver the biggest performance lift against the effort involved (quicker wins that don’t need heaps of development help are easier to get).

Running an A/B Test

Alright, so you have your goals, your list of things to test, and hypotheses to back them up. The next task is to start testing.

With A/B testing, you’ll always have at least one variant running against your control.

In this case, your control is your website as it is now, and your variant is the version with the change you’re testing.

With proper analytics and conversion tracking in place, along with your goal, you can start seeing how each of the two versions (hence the name A/B) is doing.

[Image: a mock-up of your conversion rate variations]

When A/B testing, there are two things you should consider before you call the winner or loser of a test.

One is statistical significance. Statistical significance tells you whether your test results can be explained by random chance. If a test is statistically significant, chance is an unlikely explanation for the difference you measured.

And VWO has created its own calculator so that you can see how your test is doing.

The second one is confidence level. It helps you decide whether you can replicate the results of your test again and again.

A confidence level of 95% tells you that if you ran the test repeatedly, you would see consistent results 95% of the time. So, as you can tell, the higher your confidence level, the surer you can be that your test truly won or lost.
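
If you’re curious about the arithmetic such a calculator performs, here is a minimal sketch of a two-proportion z-test, the standard way to compare two conversion rates (the traffic numbers are made up):

```javascript
// Two-proportion z-test: is the difference between two conversion
// rates larger than random chance would explain?
function zScore(convA, totalA, convB, totalB) {
  const pA = convA / totalA;                          // control rate
  const pB = convB / totalB;                          // variant rate
  const pooled = (convA + convB) / (totalA + totalB); // pooled rate
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// |z| > 1.96 roughly corresponds to significance at a 95% confidence level.
const z = zScore(200, 10000, 260, 10000);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant" : "not significant");
```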

You can see the A/B test that increased revenue for Server Density by 114%.

Multivariate Testing for Combination of Variations

Let’s say you have multiple ideas to test, and your testing list is looking way too long.

Wouldn’t it be cool if you could test multiple aspects of your page at once to get faster results?

That’s exactly what multivariate testing is.

Multivariate testing lets you test how combinations of different page elements interact, and how those combinations affect CTRs, conversion rates, or revenue.

Look at the multivariate pizza example below:

[Image: a multivariate pizza example, using different headlines, CTAs, and colors]

The recipe for multivariate testing is simple and delicious.

[Image: the multivariate testing formula. Different elements increase the combination size.]
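
In code, the recipe is just multiplication: the total number of combinations is the product of the number of versions of each element. A quick sketch, with made-up element counts:

```javascript
// 3 headlines x 2 CTAs x 2 colors = 12 combinations to split traffic across.
const elements = { headlines: 3, ctas: 2, colors: 2 };
const combinations = Object.values(elements).reduce((total, n) => total * n, 1);
console.log(combinations); // 12
```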

And the best part is that VWO can automatically run through all the different combinations you set so that your multivariate test can be done without the heavy lifting.

If you’re curious about whether you should A/B test or run multivariate tests, then look at this chart that VWO created:

[Chart: A/B testing versus multivariate testing. Which one makes the most sense for you?]

Split URL Testing for Heavier Variations

If your A/B or multivariate tests lead you to the end of the rainbow, where bigger initiatives such as backend development or major design changes are needed, then you’re going to love split URL testing.

As VWO states:

“If your variation is on a different address or has major design changes compared to control, we’d recommend that you create a Split URL Test.”

[Diagram: split URL testing, explained by VWO]

Split URL testing allows you to host different variations of your page on separate URLs and split traffic between them.

As the visual above shows, each variation is set up with its own URL.

Split URL testing is great when you want to test major redesigns, such as an entire website rebuilt from scratch.

By leaving your current website code untouched, you can host the redesign on a different URL and have VWO split the traffic between the control and the variant—giving you clear insight into whether your redesign performs better.

Over to You

Now that you have a clear understanding of the different types of website tests you can run, the only thing left is to, well, run some tests.

Armed with quantitative and qualitative knowledge of your visitors, focus on the areas that will have the biggest and quickest impact on your business.

And I promise, when you finish your first successful website test, you’ll be hooked.

I know I was.


The post [Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test appeared first on VWO Blog.

Continue reading: 

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test

15 Conversion Rate Experts Share Why to Step Up from A/B Testing to Conversion Optimization

A/B testing and conversion rate optimization (CRO) are not synonymous, but they are often confused.

A/B testing is exactly what it says: a test to verify different sets of variations on your website. Conversion rate optimization, however, is much more than just testing.

Conversion optimization is a scientific process that starts with analyzing your business’s leaks, making educated hypotheses to fix them, and then testing those hypotheses.

Conversion optimization is a process that needs to be repeated, whereas A/B testing is a technique. A formalized conversion optimization process can proceed something like this:

  1. Tracking metrics and identifying what parts of the conversion funnel need fixing
  2. Analyzing why visitors are doing what they are doing
  3. Creating and planning your hypotheses for optimization
  4. Testing the hypotheses against the existing version of the website
  5. Learning from the tests and applying the learning to the subsequent tests


To further clear the air around the two terms, we got in touch with top conversion rate experts and picked their brains. Below, they tell us about their experiences with A/B testing and conversion optimization, and why you should step up to the latter.

Quotes from Conversion Rate Experts

Chris Goward, Founder and CEO, WiderFunnel

Back in 2007, I could already see that a huge gap was developing between companies that were perfecting a process for conversion optimization and those that were following the easy advice of so many consultants.

Instead of selling top-of-mind advice, I focused WiderFunnel on refining the process of continuous optimization for leading brands. For each of our client engagements, we run a holistic CRO program that builds insights over time to continuously improve our understanding of their unique customer segments. The results speak for themselves.

Ad hoc A/B testing is a tragic use of your limited traffic once you realize how much growth and insight a structured optimization program could be delivering. In an example that we published recently, a structured CRO program is exactly what one company needed to double its revenue two years in a row, compared with the ad hoc testing it was previously doing.

Brian Massey, Founder, Conversion Sciences

The most effective conversion optimization program seeps into the bones of your organization. Decisions that were once exclusively creative in nature gain a data component. Much of the guessing drains from your online marketing. We call this “rigorous creativity,” and it marries your best marketing work with insights about your visitors. It cannot be accomplished by running a few tests, but comes from asking daily, “Do we have some data to help guide us? If not, can we collect it?” The rigorously creative business is good at finding and creating this data and using it to maximize visitor satisfaction and business profit.

Rand Fishkin, Founder and CEO, Moz

Without a strong CRO strategy that encompasses the experience visitors have discovering, using, exploring, and hopefully eventually converting on your site, you’ll always be plugging holes in a leaky bucket rather than building a better container.

The best opportunities to improve conversion usually aren’t from changing individual pages one at a time with a multitude of tests, but rather by crafting a holistic, thoughtful experience that runs throughout the site, then iterating on elements consistently with an eye to learning, and applying knowledge from each test to the site as a whole.

Karl Gilis, Co-founder, AGConsult

An A/B test should come at the end of your homework. If you’re just A/B testing, you’re probably gambling. Your tests are based on things you’ve read on the internet, gut feeling, and opinions. Some of your tests will be winners, most of them losers, because you’re shooting blanks.

The homework is data analysis and user research. This will reveal the problem areas and why your visitors are leaving or not doing what you want them to do. The better you know the dreams, the hopes, the fears, the barriers, and uncertainties of your users, the better you’ll be able to work out a test that will have a real impact.

In case you’re in doubt, impact seldom comes from design changes. Don’t change the color of your button, change the text on that button. Not randomly, but based on what users want and your knowledge of influencing people.

Don’t focus too much on the design. Focus on your offer, your value proposition, and how you sell your stuff.

Don’t sell the way you like to sell. Sell the way your customers want to buy.

André Scholten, SEO and Site Speed specialist, Google Analytics

Create a strategy that makes your clients happier, and don’t focus only on the money. Single, unrelated tests on the conversion funnel, run one after another based on abandonment rates and judged on their influence on revenue: that’s not a strategy but an operational process where test after test is conducted without vision. You should create a test culture within your company that tests everything that will make your website a nicer place for your customers. Give them feedback possibilities with feedback or chat tools, and learn from these. Take their wishes into account and create tests to verify whether their wishes are met. Create a test strategy that focuses on all goals: not only the money, but also information-type goals, contact goals, etc. It will give you so much to do and to improve. That’s a holistic approach to testing.

Kathryn Aragon, Content Strategist & Consultant, Ahrefs

“Winging it” may work for musicians and cooks, but in marketing, any decision made outside of a holistic CRO program is a bad one. Only through testing will you find the right message, the right audience, and the right offer. And only after you nail these critical elements will you see the profits you need. It doesn’t matter how small or new your business is; take time to test your ideas. You’ll be glad you did.

Joel Harvey, COO & Conversion Optimization Expert, Conversion Sciences

To say an online business is great due to A/B testing is like saying a football team is great because of its stadium. It is the entire team framework that leads to winning. An optimization framework integrates A/B testing as one component, alongside the team, the brand, advertising, and a solid testing strategy. This is how industry-leading websites win year after year.

Rich Page, Conversion Rate Optimization and Web Analytics Expert

Many online businesses make the mistake of thinking that A/B testing is the same as CRO and don’t pay enough attention to the other key aspects of CRO. This usually leaves them with disappointing conversion rates and online revenue. Web analytics, website usability, visitor feedback, and persuasion techniques are the other key CRO elements that you need to use frequently to gain the greatest results.

Gaining in-depth visitor feedback is a particularly essential part of CRO. It helps you discover your visitors’ main needs and common challenges, and it generates high-impact ideas for your A/B tests (rather than just guessing or listening to your HiPPOs). Gaining visitor insights from usability tests, and watching recordings of visitors using your website, is particularly revealing.

Peter Sandeen, Value Proposition and Marketing Message Development Expert

Just about every statistic on A/B test results says that most tests don’t create positive results (or any results at all). That’s partly because of the inherent uncertainties of testing. But a big part is the usual lack of a real plan.

Actually, you need two plans.

The first plan, the big picture one, is there to keep you focused on testing the right parts of your marketing. It tells if you should spend most of your energy on testing landing pages, prices, or perhaps webinar content.

The second plan is there to make sure you’re creating impactful differences in your tests. So instead of testing two headlines that mean essentially the same thing (e.g. “Get good at golf fast” and “Improve your golf skills quickly”), you test things that are likely to create a different conversion rate (e.g. “3-hour practice recommended by golf pros”). And when you see increased or decreased conversion rates, you create the next test based on those results.
With good plans, you can get positive results from 50–75% of your tests.

Roger Dooley, Author of Brainfluence

Simple A/B testing often leads to a focus on individual elements of a landing page or campaign – a graphic, a headline, or a call to action. This can produce positive results, but often distracts one from looking at the bigger picture. My emphasis is on using behavior science to improve marketing, and that approach works best when applied to multiple elements of the customer journey.

Jeffrey Eisenberg, CEO, Buyer Legends

Conversion rate (CR) is a measure of your ability to persuade visitors to take action the way you want them to. It’s a reflection of your effectiveness and customer satisfaction. For you to achieve your goals, visitors must first achieve theirs. Conversion rate, as a metric, is a single output. CR is a result of the many inputs that make up a customer experience. That experience has the chance to annoy, satisfy, or delight them. We need to optimize the inputs. Ad hoc A/B tests cannot do this. Companies that provide a superior experience are rewarded with higher conversion rates. Focus on improving customer experience, and you’ll find the results in your P&L, Balance Sheet, and Cash Flow statements.

Jakub Linowski, Founder & Lead Designer, Linowski Interaction Design

Thinking beyond the individual A/B test as optimization is a natural part of gaining experience. We all probably started off by running a handful of ad hoc tests and that’s okay—that’s how we learn. However, as we grow, three things may happen which bring us closer towards becoming more strategic:
1. We become conscious of ways in which we can prioritize our testing ideas.
2. We become conscious of the structure of experiments and how tests can be designed.
3. We think of a series of upcoming tests which may or may not work together to maximize returns.

Here is one example of a test strategy/structure: the Best Shot Test. It aims to maximize the effect size and minimize the testing duration, at the cost of a blurred cause-and-effect relationship.

Naomi Niles, Owner, ShiftFWD

Running basic A/B tests based on best practices is okay for a start. But to really get to the next level, it’s important to see how all the pieces of the puzzle fit together. This gives us a better understanding of what exactly we’re testing for, and lets us reach for results that fit the specific goals of the organization.

Kristi Hines, Certified Digital Marketer

Depending on your business and the size of your marketing team, you may want to go beyond just testing your website or a landing page. You may want to expand your A/B testing to your entire online presence.

For example, try changing your main thing (keyword phrase, catch phrase, elevator pitch, headline, etc.) not just on your website, but also in your homepage’s meta description, your social media bios and intros, your email signatures, etc.

Why? Because here’s what’s going to happen. If you have consistent messaging across a bunch of channels that someone follows you on, and all of a sudden, they come to your landing page with an inconsistent message (the variant, if you will), then they may not convert simply because of the inconsistency of your message. Not because it wasn’t a good message, but because it wasn’t the message they were used to receiving from you.

As my own case example: when I change my main phrase, “Kristi Hines is a freelance writer, business blogger, and certified digital marketer,” I don’t do it just on my website. I do it everywhere. And I don’t do it for just a week; I do it for at least two to three months, unless it’s a complete dud (i.e., no leads in the first week at all).

But what I usually find is that when I land on a good phrase, I start getting leads from all over the place. And usually people say they went from one channel to the next. Hence, don’t just test; test consistency across your entire presence, if possible. The results may be astonishing.

Jason Acidre, Co-founder/CEO, Xight Interactive

I do think that conversion rate optimization as a marketing discipline goes beyond a series of A/B and/or multivariate tests. External factors, such as your brand and what other people say about the business (reviews and referrals), can also heavily impact how a site performs in terms of attracting more actions from its intended users and visitors.

For instance, positive social proof (the number of people sharing or liking a particular product or brand on different social networks) can also influence your customers’ buying process. Improving this aspect of the brand involves a whole different campaign, which requires a more holistic approach integrated into your CRO program. Another factor to consider is the quality of traffic your campaign is getting (through SEO, PPC, paid social campaigns, content marketing, etc.). The more targeted the traffic you acquire, the better your conversions will be.

Your Turn

A full-fledged conversion optimization program goes a long way and is far more beneficial than ad hoc testing.

So what are you waiting for? Let stepping up to conversion optimization be your #1 goal in the new year.



The post 15 Conversion Rate Experts Share Why to Step Up from A/B Testing to Conversion Optimization appeared first on VWO Blog.


10 Questions to Ask Yourself When Your Conversion Rates Are Below Average

[Image: Don’t wait until it’s too late. Check and maintain your conversion rates often, just like you would your car. Image via Shutterstock.]

A major faux pas I often see with conversion rates is that businesses only seem to address them when alarms are triggered.

Conversion rates require ongoing maintenance and should be regular focal points in your optimization and marketing efforts. Like a vehicle engine, they should be checked and maintained regularly.

When conversion rates aren’t what you had expected, it’s not uncommon for marketers and business owners to start making knee-jerk tweaks to on-page elements, hoping to lift conversions through A/B testing. While there may be some benefit to tweaking the size of buttons and adjusting landing page headlines and CTAs, there’s a great deal more to conversion optimization.

You must take a scientific approach that includes qualitative and quantitative data, rather than an à la carte strategy of piecing together what you think might be most effective.

Before making any changes to your landing pages, ask yourself these 10 critical questions:

1. Is there an audience/market fit for the product?

Analyzing the market for your product is something you do in the early stages of product development before launching. It’s part of gathering initial research on your audience and what they want or need. When you experience conversion problems, you may want to revisit this.

Use keyword tools and platforms like Google Trends to discover the volume of interest in your particular product. If the traffic shows a steady or growing interest, then how well does the product in its current form align with the needs of the people searching for it?

Revisit your audience research and review the needs and problems of your customer. Make sure your product addresses those needs and provides a solution. Then look to how you position the product to ensure customers can see the value.

2. How accurate is your audience-targeting strategy?

There’s nothing quite as frustrating as watching hundreds of people visit your product or landing pages, only to be left with empty carts and no opt-ins.


It’s not easy to figure out what’s holding them back, but one of the first questions you should ask is whether you’re targeting the right people.

You may very well have a great product for the market, but if you’re presenting it to the wrong audience then you’ll never generate significant interest. This holds true for major, established brands as much as new startups.


3. Has trust been established?

Asking people to hand over personal and financial information on the web requires a huge leap of faith. You need to establish trust before asking them to add a product to their carts and complete the checkout process or even to give you their email address.

One study from Taylor Nelson Sofres showed that consumers might terminate as many as 70% of online purchases due to a lack of trust. People may really want what you’re selling, but if they don’t trust you, then they’ll never convert.

There are several ways to establish and grow trust, which include:

[Image: testimonials, notable recognitions, and brand affiliations help to build trust among prospective customers. Image via ContentMarketer.io.]

4. Do customers understand the benefits and value?

For customers, everything comes down to value, which is the foundation of your unique selling proposition (USP). You can’t just convince someone to buy something through conversion tricks like big buttons and snappy graphics. If they don’t understand the product’s value or how it might benefit them, then they have no reason to buy.

You have to communicate the value of your products accurately and succinctly, breaking down what you’re selling to the most basic level so your customer sees the benefits, rather than just the features.

Here’s a great example that I took from Unbounce:

[Screenshot: an Unbounce landing page example]

This landing page puts the value proposition right up front, mixing in high-impact benefit statements that help cement the value with the audience.

5. What is the purchase experience really like?

It’s important to understand the journey your customer has to follow in order to reach the point where they’re willing to convert. While your landing pages or ecommerce site might look clean, the next step toward a conversion could make the whole thing come crashing down.

Providing top-notch user experiences across all devices is imperative, which includes minimizing the number of clicks necessary to complete the transaction.

Complicated site navigation and checkout processes are among the top causes of cart abandonment. Test your conversion paths internally, and consider trying out a service like UserTesting.com to get unbiased consumer feedback on your UX.

6. Where are the leaks in the funnel?

Figuring out where people exit your site can be a good indicator of why they leave; at the very least, it can help you narrow down where to start your investigation. Working backwards from the exit point can uncover friction points you didn’t even know existed.

Open your analytics and monitor the visitor flow. Pay close attention to where traffic enters, the number of steps users have to take while navigating from page to page, and trace the point where they typically exit.

Chart your own journey through your website while examining the on-page elements and user experience. Be sure to compare visitor behavior with your funnel visualization to determine when a leak is actually a leak.

7. What are the biggest friction points?

Friction in your sales funnel can be defined as anything that gets in the way of a conversion, either by slowing it down or stopping it completely. Some friction points might include:

  • Slow load times
  • Too many form fields
  • Too many clicks to complete an action
  • Hidden or missing information (like withholding shipping or contact information)
  • Poorly written copy and readability issues
  • Stop words
  • Garish design

(For additional insights into possible friction points in your own funnel, this article from Jeremy Smith, posted on Kissmetrics, is a wealth of knowledge.)

You can reduce friction on your own site by taking small steps and testing them to see how they alter your conversion rates. Ask as few questions as possible, avoid overwhelming the customer with too many options, aim for clean and pleasing designs and hire a pro copywriter to make a stronger connection through words.

One of the simplest examples of improvement through the removal of friction comes from Expedia.

[Image: Expedia’s split test. One seemingly insignificant change can have a dramatic impact on conversion.]

By removing the “company name” field — just a single field on the submission form — Expedia made it easier for people to complete the form. That reduction in friction led to a $12 million increase in profit.

Given the size of Expedia and the volume of traffic they see, you could expect A/B testing to surface a lift like this. Changes won’t always bring about such dramatic results, but you’ll never know the potential unless you start testing to remove those friction points in your funnel.

8. How do my customers feel about the process?

When you have concerns about your conversion rates, often the best place to turn for insight is your consumers.

Use feedback tools like a consumer survey to reach out to current customers, as well as those who abandoned their carts midway through the shopping experience. Ask them to provide information on why they made a purchase, why they chose not to, difficulties they experienced while on your site, feedback on design, etc.

This approach not only provides quality insight into what could be the likely cause of poor conversions, but also shows customers (and potential customers) that you’re making an effort to improve your site based on their feedback.

9. What does the data say?

Whenever possible, you want to make changes based on the data you’ve accumulated. Don’t focus solely on the conversion metrics of your website; analyze the data from your social ads and insights, visitor flow, bounce rates, time spent on page and more. Let the data drive your actions; otherwise you’re just firing wildly into the dark and hoping to hit your target.

Whether we’re talking about the ROI for content marketing or boosting ecommerce sales, data always matters. When you make changes, measure the new data and monitor those changes against the original. It’s the only way to know if you’re headed in the right direction.

10. How are my competitors selling this?

While I always warn people not to blindly follow their competitors, you should still be aware of what they’re doing, and leverage the competitive insights garnered from their market research.

If your conversions are plummeting for specific products or services, look to the competition. How are they positioning their products? What are they doing differently to hook and engage the target audience? Draw comparisons and see how they align with the insights you’ve gleaned from your data to determine which elements you should test and improve upon.

Over to you for the questions

Now it’s time to look at your funnel and start asking the tough questions:

  • Do you need to re-verify product/market fit?
  • How accurate is your audience targeting?
  • Does your audience trust you?
  • Do your customers understand the benefits and value?
  • What’s the purchase experience like for the customer?
  • Where are the leaks in the funnel?
  • Are there major friction points killing conversions?
  • What feedback can customers offer about the process?
  • What does your data say about the conversion process?
  • What are your competitors doing right?

Remember to pay close attention to the numbers and make your changes based on data — not assumptions.


Learn How Experts Derive Insights from A/B Test Results

You conducted an A/B test—great! But what next?

How would you derive valuable insights from the A/B test results? And more importantly, how would you incorporate those insights into subsequent tests?

As Deloitte University Press Research on Industrialized Analytics suggests, acquiring information is just the first step of any robust data analysis program. Transforming that information into insights and eventually, the insights into actions is what yields successful results.


This post talks about why and how you should derive insights from your A/B test results and eventually apply them to your conversion rate optimization (CRO) plan.

Analyzing Your A/B Test Results

No matter how the overall result of your A/B test turned out—positive, negative, or inconclusive—it is imperative to delve deeper and gather insights. Not only does this help you aptly measure the success (or failure) of your A/B test, it also provides you with validations specific to your users.

As Bryan Clayton, CEO of GreenPal puts it, “It amazes me how many organizations conflate the value of A/B testing. They often fail to understand that the value of testing is to get not just a lift but more of learning.

Sure 5% and 10% lifts in conversion are great; however, what you are trying to find out is the learning about what makes your customers say ‘yes’ to your offer.
Only with A/B testing can you close the gap between customer logic and company logic and, gradually, over time, match the internal thought sequence that is going on in your customers’ heads when they are considering your offer on your landing page or within your app.”

Here is what you need to keep in mind while analyzing your A/B test results:

Tracking the Right Metric(s)

When you are analyzing A/B test results, check whether you are looking at the correct metric. If multiple metrics are involved (secondary metrics along with the primary), you need to analyze each of them individually.

Ideally, you should track both micro and macro conversions.

Brandon Seymour, founder of Beymour Consulting rightly points out: “It’s important to never rely on just one metric or data source. When we focus on only one metric at a time, we miss out on the bigger picture. Most A/B tests are designed to improve conversions. But what about other business impacts such as SEO?

It’s important to make an inventory of all metrics that matter to your business, before and after every test that you run. In the case of SEO, it may require you to wait for several months before the impacts surface. The same goes for data sources. Reporting and analytics platforms aren’t accurate 100 percent of the time, so it helps to use different tools to measure performance and engagement. It’s easier to isolate reporting inaccuracies and anomalies when you can compare results across different platforms.”

Most A/B testing platforms have built-in analytics sections to track all the relevant metrics. Moreover, you can also integrate these testing platforms with the most popular website analytics tools such as Google Analytics. To track A/B test results via Google Analytics, you can also refer to this article by ConversionXL.

Conducting Post-Test Segmentation

You should also create different segments from your A/B tests and analyze them separately to get a clearer picture of what may be happening. Generic, nonsegmented results can be illusory and lead to skewed actions.

There are broad types of segments that you can create to divide your audience. Here is a set of segmentation approaches from Chadwick Martin Bailey:

  • Demographic
  • Attitudinal
  • Geographical
  • Preferential
  • Behavioral
  • Motivational

Post-test segmentation allows you to deploy a variation for a specific user segment. For instance, if you notice that a particular test affected new and returning users differently (and notably), you may want to apply your variation only to that particular user segment.

However, searching through lots of different segments after a test all but guarantees that some will look positive by random chance alone. To avoid that, define your goal, and the segments you care about, clearly before the test.
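
As an illustration, here is a hypothetical sketch of post-test segmentation on raw results exported from a testing platform; the data shape and field names are made up:

```javascript
// Each row: which variation a visitor saw, their segment, and the outcome.
const rows = [
  { variation: "B", segment: "new", converted: true },
  { variation: "A", segment: "returning", converted: false },
  // ...thousands more rows from your platform's export
];

// Tally conversion rates for every segment/variation pair.
function ratesBySegment(data) {
  const buckets = {};
  for (const { variation, segment, converted } of data) {
    const key = segment + "/" + variation;
    const b = (buckets[key] = buckets[key] || { visitors: 0, conversions: 0 });
    b.visitors += 1;
    if (converted) b.conversions += 1;
  }
  for (const key of Object.keys(buckets)) {
    const { visitors, conversions } = buckets[key];
    console.log(key, ((100 * conversions) / visitors).toFixed(1) + "%");
  }
}

ratesBySegment(rows);
```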

Delving Deeper into Visitor Behavior Analysis

You should also use visitor behavior analysis tools, such as heatmaps, scrollmaps, and visitor recordings, to gather further insights into A/B test results. For example, consider the search bar on an e-commerce website. An A/B test on the search bar works only if users actually use it. Visitor recordings can reveal whether users find the search bar friendly and engaging; if the bar itself is hard to understand, all variations of it can fail to influence users.

Apart from giving insights on specific pages, visitor recordings can also help you understand user behavior across your entire website (or conversion funnel). You can learn how critical the page you are testing is to your conversion funnel.

Maintaining a Knowledge Repository

After analyzing your A/B tests, it is imperative to document the observations from them. This helps you not only in transferring knowledge within the organization but also in using the observations as a reference later.

For instance, say you are developing a hypothesis for your product page and want to test the product image size. Using a structured repository, you can easily find similar past tests, which could help you spot patterns for that part of the page.

To maintain a good knowledge base of your past tests, you need to structure it appropriately. You can organize past tests and the associated learning in a matrix, differentiated by funnel stage (ToFu, MoFu, or BoFu) and by the elements that were tested. You can add other customized factors as well to enhance the repository.

Look at how Sarah Hodges, co-founder of Intelligent.ly, kept track of A/B test results: “At a previous company, I tracked tests in a spreadsheet on a shared drive that anyone across the organization could access. The document included fields for:

  • Start and end dates
  • Hypotheses
  • Success metrics
  • Confidence level
  • Key takeaways

Each campaign row also linked to a PDF with a full summary of the test hypotheses, campaign creative, and results. This included a high-level overview, as well as detailed charts, graphs, and findings.

At the time of deployment, I sent out a launch email to key stakeholders with a summary of the campaign hypothesis and test details, and attached the PDF. I followed up with a results summary email at the conclusion of each campaign.

In my experience, concise email summaries were well received; few users ever took a deep dive into the more comprehensive document.
Earlier, I created PowerPoint decks for each campaign I deployed, but ultimately found that this was time-consuming and impeded the agility of our testing program.”

Applying the Learning to Your Next A/B Test

After you have analyzed the tests and documented them according to a predefined theme, make sure that you visit the knowledge repository before conducting any new test.

The results from past tests shed light on user behavior on a website. With a better understanding of that behavior, your CRO team has a stronger basis for building hypotheses. It can also help the team create on-page surveys that are contextual to a particular set of site visitors.

Moreover, results from past tests can help your team come up with new hypotheses quickly. The team can identify the areas where the win from a past A/B test can be duplicated. It can also look at failed tests, understand why they failed, and steer clear of repeating the same mistakes.

Your Thoughts

How do you analyze your A/B test results? Do you base your new test hypothesis on past learning? Write to us in the comments below.


The post Learn How Experts Derive Insights from A/B Test Results appeared first on VWO Blog.


A Step-by-Step Approach to Building a Strong A/B Testing Hypothesis

Coming up with an idea that you want to test isn’t tough. Coming up with one that you should test can be.

More often than not, optimizers base their testing on intuition and best practices, which eventually yields unfavorable results. Others adopt a myopic approach, keeping only a single aspect of the marketing funnel (acquisition/behavior/outcome) in mind and losing sight of long-term goals.

[Image: how winning companies and siloed companies approach optimization]

To overcome these common optimization mistakes, it is important to structure your optimization as a process: conducting thorough research, asking the right questions, digging for answers in the problem areas, running smart tests, and eventually deriving valuable results.

According to Econsultancy’s 2015 CRO Report, companies with a structured approach to improving conversions were twice as likely to see a large increase in sales.

For testing to make any sense (and for the result to have any value), you first need to clearly determine what to test and why. In this post, we’re going to walk you through precisely that.

Let’s begin.

Determining What to Test

Instead of randomly testing ideas that you ‘feel’ are good, the focus should be on building a solid hypothesis that maximizes your chances of winning.

A hypothesis is a proposed explanation of, or solution to, a problem. Think of it as the glue that ties a problem to a solution. For instance, you could hypothesize that adding trust badges to your payment page would address the problem of low conversion rates on that page.

As you’ll notice, the hypothesis is made up of two variables: the cause (the action we want to test) and the effect (the outcome we expect).

[Image: independent, dependent, and control variables]

A formalized hypothesis makes for a strong experiment and is likely to produce a highly actionable (positive or negative) result. Conversely, experimentation lacking a well-constructed hypothesis puts you at risk of spending time and energy in the wrong direction.
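
If it helps to see the cause and the effect spelled out, here is one hypothetical way to record a hypothesis in a structured form (the field names are illustrative, not a standard):

```javascript
// A hypothesis record that makes the cause -> effect structure explicit.
const hypothesis = {
  problem: "High exit rate on the payment page",
  cause: "Adding trust badges near the card form",  // the action we test
  effect: "Checkout conversion rate increases",     // the outcome we expect
  metric: "payment page conversion rate",
  expectedResult: "relative lift at 95% confidence",
};
```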

But how does one start formulating a hypothesis?

Theoretically, there could be two approaches.

You could follow the deductive approach: begin by brainstorming a set of ideas, then look at the data to validate those ideas and form a hypothesis.

Or you could follow the inductive approach: look for patterns in your observations first, and then derive a hypothesis for testing.

Either way, the most crucial part of forming a strong hypothesis is the research that goes into it. Let’s get to the meat and potatoes: the patterns to observe while forming a strong hypothesis.

Formulating a Hypothesis Using Observations

Observing Data

You may have all the data lurking in your records, but that data has to be distilled into a logical hypothesis. The graph below explains this.

[Graph: converting an idea into a hypothesis]

Evidently, putting data into context with a certain level of understanding is the key. “Simple objective facts,” or ideas, can be transformed into a well-structured hypothesis by understanding your website analytics and aligning them with your business objectives. Here’s how:

Analyzing your Website Analytics

Your analytics data is your first port of call when formulating a hypothesis. With the wealth of data that gets tracked, you can get answers to the most obvious questions about the current state of your website.

For instance, the “adding trust badges on the payment page” hypothesis we formed earlier could have stemmed from a high exit rate on that page. The exit rate of the page, along with other metrics, can be found in your website analytics.

Website analytics tools like Google Analytics and Kissmetrics can show you quantitative data on how visitors navigate your website at a site-architecture level. Some of the important metrics that you could track to validate an idea and build a hypothesis are:

  • Traffic Report: Metrics like total traffic and total number of visitors (overall and on individual pages) help you gauge how many people the test will impact and how long it will take to finish.
  • Acquisition Report: This could help you determine where your visitors are coming from (your best traffic sources) and how the performance differs between different channels.
  • Landing Page Report: Your top landing and exit pages show how visitors enter and leave the site.
  • Funnel Report: This gives you insight into where your visitors enter or exit your marketing funnel, and how they navigate between the different pages. You could look at this conversion funnel guide by Kissmetrics to set up and analyze your funnel reports.
  • Device Type: This helps you decide whether to prioritize optimizing the experience on a particular device.

For any observation that you come across while analyzing these, ask yourself enough “whys” to form a solid hypothesis.

[Image: asking enough whys while analyzing website analytics]

Aligning Hypotheses with Business Objectives

Once you’ve set up SMART goals for your business (Specific, Measurable, Attainable, Relevant, and Timely), you should make sure that your hypothesis complies with them too.

Start by figuring out what the most important goals in your business or organization are, and then tie them to a realistic hypothesis.

Observing Behaviors

Now that you’ve gained an understanding of what visitors are doing on your website, you next need to know why they’re doing it. A number of factors, like indistinct design, unclear copy, or asking for too much information too early, could contribute to low conversions. Below are two practices that can help you identify and eliminate the problem:

Heuristic Analysis

This is when usability experts review your website to identify common usability and design issues. Each review is based on a set of usability best-practice principles and/or design consistencies. One of the most popular heuristic analysis frameworks was defined by Jakob Nielsen.


Visitor Behavior Analysis

Next, examining the behavior of current visitors can help you identify the specific details of the most pressing problem in your conversion process.

Speaking of the trust badge hypothesis that we formed earlier: while the web analytics data showed us how many people were dropping off from the payment page, visitor behavior analysis tools like heatmaps, clickmaps, and mouse recordings could tell us which specific parts of the page visitors spent the most time on (or ignored completely).

You can have a look at these use cases for various visitor behavior analysis tools for further understanding.

Observing Opinions

Your analysis (web analysis or visitor behavior analysis) runs the risk of narrative fallacy or confirmation bias while you form a hypothesis. That is where collecting real-time feedback via customer surveys can help. Surveys primarily come in two forms:

On-site surveys

On-site surveys enable you to receive feedback from your users via a popup or layer that the visitor is prompted to fill in. The survey can also be triggered by certain user actions (e.g., using the product finder or opening a product detail page) to collect feedback on specific functions. Here’s how it looks:

[GIF: a VWO on-page survey]

It is a great mechanism for finding out more about your actual users and validating your hypothesis. You can gather information about interests, attitudes, and preferences straight from the horse’s mouth.

In general, there are three things to think about when planning to use on-site surveys:

  • Why ask the question: Clearly outline the end goal you’re conducting the survey for. For instance, decide whether you want feedback on website design, content, relevance, etc.
  • When to ask the question: Asking the right question at the right time is important. You could look at your average time on site and/or page-view metrics and ask questions only of visitors who have engaged enough with your website or page (for qualification reasons); see the sketch after this list.
  • Which questions to ask: This depends heavily on your end goals. If you’re doing voice-of-customer research to gain insights on copy or design, open-ended questions are gold. If you’re trying to quantify customer experience, measuring Net Promoter Score (NPS) could do the trick.
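
For the “when” question, here is a hypothetical sketch of an engagement-based trigger; showSurvey() is a stand-in for whatever your survey tool actually exposes:

```javascript
// Stand-in for the popup your survey/feedback tool would open.
function showSurvey() {
  console.log("Survey popup would open here.");
}

// Only prompt visitors who have spent 60+ seconds on the page,
// and only once per browser.
setTimeout(function () {
  if (!localStorage.getItem("surveyShown")) {
    showSurvey();
    localStorage.setItem("surveyShown", "1");
  }
}, 60 * 1000);
```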

Here are some additional tips that would make your website surveys shine.

Off-site Surveys

Off-site surveys serve the same purpose of gathering feedback, but via email or third-party survey websites.

Apart from SurveyMonkey, you could also use Fluid Surveys and Confirmit for designing off-site feedback surveys.

To Sum Up

Forming a well-structured hypothesis is a critical piece of the conversion optimization puzzle. It helps you identify and remove friction along your conversion funnel.

Do you build your tests upon formalized hypotheses? What factors do you typically base your hypotheses on? Tell us in the comments below.


The post A Step-by-Step Approach to Building a Strong A/B Testing Hypothesis appeared first on VWO Blog.


Tips and tactics for A/B testing on AngularJS apps

Reading Time: 8 minutes

Alright, folks, this week we’re getting technical.

This post is geared toward web developers who are working in conversion optimization, specifically those who are testing on AngularJS (or who are trying to test on AngularJS).

Angular, while allowing for more dynamic web applications, presents a problem for optimization on the development side.

It basically throws a wrench in the whole “I’m trying to show you a variation instead of the original webpage without you knowing it’s a variation” thing, for reasons I’ll get into in a minute.

At WiderFunnel, our Dev team tackles technical obstacles daily: many different clients mean many different frameworks and tools to master.

Recently, the topic of how the heck do you test on Angular came up, and Tom Davis, WiderFunnel Front End Developer, was like, “I can help with that.”

So here we go. Here are the tips, tricks, and workarounds we use to test on AngularJS.

Let’s start with the basics:

What is AngularJS?

Angular acts as a Javascript extension to HTML, running in most cases on the client side (in the browser). Because HTML isn’t a scripting language (it doesn’t run code), it’s limited. Angular provides functionality that HTML doesn’t have, along with a framework for developing apps that are maintainable and extendable, while allowing for features such as single-page navigation, rich content, and dynamic functionality.

Note: You can mimic Angular with plain Javascript; however, Angular provides a lot of functionality that a developer would otherwise have to build themselves.

Why is AngularJS popular?

The real question here is why JS front-end frameworks and libraries are popular. Angular isn’t the only framework you can use, of course: there are EmberJS, React, Backbone, etc., and different developers prefer different frameworks.

But frameworks, in general, are popular because they offer a means of providing a rich user experience that is both responsive and dynamic. Without Angular, a user clicks a button or submits a form on your site, the browser communicates with the server, and the server provides entirely new HTML content that then loads in the browser.

When you’re using Angular, however, a user clicks a button or submits a form and the browser is able to build that content itself, while simultaneously performing server tasks (like database submissions) in the background.

For example, let’s think about form validations.

No Angular:

A user submits a form to create an account on a site. The browser talks to the server and the server says, “There’s a problem. We can’t validate this form because this username already exists.” The server then has to serve up entirely new HTML content and the browser re-renders all of that new content.

This can lead to a laggy, cumbersome user experience, where changes only happen on full page reloads.

With Angular:

A user submits a form to create an account on a site. The browser talks to the server via JSON (a collection of data) and the server says, “There’s a problem. We can’t validate this form because this username already exists.” The browser has already loaded the necessary HTML (on the first load) and then simply fills in the blanks with the data it gets back from the server.

Disclaimer: If you don’t have a basic understanding of web development, the rest of this post may be tough to decipher. There is a Glossary at the end of this post, if you need a quick refresher on certain terms.

Why it can be tricky to test on Angular apps

As mentioned above, Angular acts as an HTML extension. This means that the normal behaviors of the DOM* are being manipulated.

Angular manipulates the DOM using two-way data binding. This means that the content in the DOM is bound to a model. Take a look at the example below:

[Screenshot: two-way data binding in AngularJS]

The class “ng-binding” indicates that the H1 element is bound to a model, in this case $scope.helloWorld. In Angular, model data is referred to in an object called $scope. Any changes to the input field value will change helloWorld in the $scope object. This value is then propagated down to the H1 text.

This means that, if you make any changes to the H1 element through jQuery or native JS, they will essentially be overridden by $scope. This is not good in a test environment: you cannot guarantee that your changes will show up when you intend them to, without breaking the original code.

Layman’s terms: $scope.helloWorld is bound to the H1 tag, meaning that if anything in the helloWorld variable changes, the H1 element changes, and vice versa. That’s the power of Angular.
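
For reference, here is a minimal, self-contained sketch of the binding described above (AngularJS 1.x; the module and controller names are made up):

```html
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.8.2/angular.min.js"></script>

<div ng-app="demoApp" ng-controller="DemoCtrl">
  <!-- Typing here updates $scope.helloWorld... -->
  <input type="text" ng-model="helloWorld">
  <!-- ...and Angular immediately re-renders the H1 (adding the
       ng-binding class you saw in the Inspector). -->
  <h1>{{ helloWorld }}</h1>
</div>

<script>
  angular.module('demoApp', []).controller('DemoCtrl', function ($scope) {
    $scope.helloWorld = 'Hello World';
  });
</script>
```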

Typically, when you’re testing, you’re making changes to the DOM by injecting Javascript after all of the other content has already loaded.

A developer will wait until the page has loaded, hide the content, change elements in the background, and show everything to the user post-change. (Because the page is hidden while these changes are made, the user is none the wiser.)


We’re trying to do this switcheroo without anyone seeing it.

– Thomas Davis, Front End Developer, WiderFunnel
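
In plain JavaScript, that switcheroo looks roughly like the following sketch; the .promo-banner selector and the new copy are hypothetical:

```js
// Hide the page up front so the visitor never sees the original content.
document.documentElement.style.visibility = 'hidden';

// Poll until the target element exists, apply the change, then reveal.
var poll = setInterval(function() {
  var banner = document.querySelector('.promo-banner'); // hypothetical target
  if (banner) {
    clearInterval(poll);
    banner.textContent = 'Variation B headline';     // the change
    document.documentElement.style.visibility = '';  // the reveal
  }
}, 50);
```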

In Angular apps, there’s no way to guarantee that all of the content has been rendered before that extra JavaScript is injected. At this point, Angular has already initialized the app, meaning any code running after this is outside of Angular’s execution context. This makes it complicated to figure out when and how to run the changes that make up your test.

When you’re running a test, the changes that make up Variation A (or B or C) are loaded when the page loads. You can only manipulate what’s in the DOM already. If you can’t guarantee that the content is loaded, how do you ensure that your added JavaScript runs at the right time, and how do you do this without breaking the code and functionality?

Tom explained that, as a dev trying to do conversion optimization on an Angular application, you find yourself constantly trying to answer this question:

How can I make this change without directly affecting my (or my client’s) built-in functionality? In other words, how can I make sure I don’t break this app?

How to influence Angular through the DOM

Angular makes for a complicated testing environment, but there are ways to test on Angular. Here are a few that we use at WiderFunnel (straight from Tom’s mouth to your eyeballs).

Note: In the examples below, we are working in the Inspector. This is just to prove that the changes are happening outside the context of the app and, therefore, an external script would be able to produce the same results.

1. Use CSS wherever possible

When you’re running a test on Angular, use CSS whenever possible to make styling changes.

CSS is simply a set of styling rules that the browser applies to matching elements. Styling will always be applied on repaints, regardless of how the DOM is bound to Angular: every time something changes within the browser, the browser goes through its list of styling rules and reapplies them to the correct elements.

Let’s say, in a variation, you want to hide a banner. You can find the element you want to hide and append a style rule that sets it to display: none. CSS will always apply this styling, and that element will never be displayed.
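
For example, a variation script might inject the rule itself; the .promo-banner selector here is a hypothetical stand-in for your banner:

```js
// Append a stylesheet from the variation code. Because the browser
// reapplies CSS rules on every repaint, Angular re-renders can't undo it.
var style = document.createElement('style');
style.textContent = '.promo-banner { display: none !important; }';
document.head.appendChild(style);
```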

Of course, you can’t rely on CSS all of the time. It isn’t a scripting language, so you can’t do logic. For instance, CSS can’t say “If [blank] is true, make the element color green. If [blank] is false, make the element color red.”

In other cases, you may want to try $apply.

2. Use $scope/$apply in the DOM

We’ve established that Angular’s two-way data binding makes it difficult to develop consistent page changes outside of the context of Angular. Difficult…but not impossible.

Say you want to change the value of $scope.helloWorld. You need a way to tell Angular, “Hey, a value has changed — you need to propagate this change throughout the app.”

Angular checks $scope variables for changes whenever an event happens. A directive like ng-click or ng-model will force Angular to run the Digest Loop*, where a process called dirty checking* updates the whole app with any new values.

If you want to change the value of $scope.helloWorld and have it propagated throughout the app, you need to trick Angular into thinking an event has occurred.

But, how?

First step: You’ll need to access the model in the $scope object. You can do this simply by querying it in the DOM.

[Screenshot: querying the $scope object in the Inspector]

In this example, you’re inspecting the $scope object that contains all models available to the H1 element, with the helloWorld variable exposed.
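
In code, that query is a one-liner. This sketch assumes Angular’s debug info is enabled (the default), which is what makes .scope() available:

```js
// Ask Angular for the scope attached to the H1 element.
var scope = angular.element(document.querySelector('h1')).scope();
console.log(scope.helloWorld); // the model bound to the H1 text
```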

Once you have access to helloWorld, you can reassign it. But wait! You’ve probably noticed that the text hasn’t changed in the window… That’s because your code is running outside the context of Angular: Angular doesn’t know that a change has actually been made. You need to tell Angular to run the digest loop, which will apply the change within its context.

Fortunately, Angular comes equipped with an $apply function that can force a $digest, as shown below.

[Screenshot: forcing a $digest with $apply]
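
Put together, a minimal sketch of the whole flow, run from the console or from injected test code, looks like this:

```js
// Grab the scope bound to the H1, reassign the model inside $apply, and
// Angular will run a digest and propagate the change into the view.
var scope = angular.element(document.querySelector('h1')).scope();

scope.$apply(function() {
  scope.helloWorld = 'Hello from the variation!';
});
```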

3. Watch for changes

This workaround is a little manual, but very important. If the source code changes a variable or calls a function bound to $scope, you’ll need to be able to detect this change in order to keep your test functional.

That’s where Angular’s $watch function comes in. You can use $watch to listen to $scope and provide a callback when changes happen.

In the example below, $watch is listening to $scope.helloWorld. If helloWorld changes, Angular will run a callback that provides the new value and the old value of helloWorld as parameters.

[Screenshot: listening to $scope.helloWorld with $watch]
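
A minimal sketch of that pattern looks like this (again relying on .scope() being available):

```js
// Listen to the H1's scope so the variation can react, for example by
// re-applying its changes, whenever the app updates helloWorld.
var scope = angular.element(document.querySelector('h1')).scope();

scope.$watch('helloWorld', function(newValue, oldValue) {
  if (newValue !== oldValue) {
    console.log('helloWorld changed from "' + oldValue + '" to "' + newValue + '"');
    // Re-apply variation changes here.
  }
});
```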

Custom directives and dependency injection

It’s important that you don’t default to writing jQuery when testing on Angular apps. Remember, you have access to all the functionality of Angular, so use it. For complex experiments, you can use custom directives to manage code structure and make it easy to debug.

To do this, you can retrieve the app’s injector and use it to run components in the context of the app that you’re testing on. Here’s a simple example that will alert you if your helloWorld variable changes:
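
The embedded example did not survive here, so the following is a reconstructed sketch of the same idea: grab the app’s injector, invoke a function with Angular services injected, and alert when helloWorld changes. It assumes the app is bootstrapped on or above <body>, and the selectors and model name follow the earlier examples:

```js
// Retrieve the injector from the running app.
var injector = angular.element(document.body).injector();

// invoke() runs the function inside the app's context and injects any
// services you list, here $rootScope.
injector.invoke(['$rootScope', function($rootScope) {
  // Watchers registered on $rootScope are checked on every digest that
  // $apply triggers from the root.
  $rootScope.$watch(
    function() {
      var scope = angular.element(document.querySelector('h1')).scope();
      return scope && scope.helloWorld;
    },
    function(newValue, oldValue) {
      if (newValue !== oldValue) {
        alert('helloWorld changed to: ' + newValue);
      }
    }
  );
}]);
```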

For more details on how to use an injector, click here.

—–

These are just a few of the tactics that the WiderFunnel Dev team uses to run successful conversion optimization on Angular apps. That said, we would love to hear from all of you about how you do CRO on Angular!

Do you use the same tactics described here? Do you know of other workarounds not mentioned here? How do you test successfully on Angular apps? Let us know in the comments!

Glossary

DOM: The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents.

$scope: Scope is an object that refers to the application model. It is an execution context for expressions. Scopes are arranged in a hierarchical structure that mimics the DOM structure of the application. Scopes can watch expressions and propagate events.

$apply: Apply is used to execute an expression in Angular from outside of the Angular framework (for example, from browser DOM events, setTimeout, XHR, or third-party libraries).

JSON: JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. It is based on a subset of the JavaScript programming language (Standard ECMA-262, 3rd Edition, December 1999).

Two-way data binding: Data-binding in Angular apps is the automatic synchronization of data between the model and view components. The way that Angular implements data-binding allows you to treat the model as the single source of truth in your application.

Digest Loop: An internal cycle called $digest runs through the application, executing watch expressions and comparing each returned value with the previous value; if the values do not match, a listener is fired. The $digest cycle keeps looping until no more listeners are fired.

Dirty Checking: Dirty checking is a simple process that boils down to a very basic concept: it checks whether a value has changed but hasn’t yet been synchronized across the app.


[Infographic] Why a Website Redesign Doesn’t Always Work

(This is a guest post, contributed by PRWD.)


The website redesign.

It is often a big-ticket project for a business: one upon which a lot of faith is placed, and from which improved numbers are expected across the board.

Ideally, it is expected to deliver a significant improvement in sales and lead figures, based on a modern, seamless, and intuitive visitor experience that is future-proofed for a constantly changing marketplace.

Unfortunately, the numbers from PRWD’s survey show that this is not always the case. In fact, some businesses even see a decline in sales after a website redesign.

So if you’re currently in the middle of a website redesign (or planning one in the near future), have a quick look at the infographic below.

Following on from PRWD’s last infographic on the effectiveness of website traffic acquisition strategies, this infographic provides insight into why website redesigns at many of the UK’s biggest businesses don’t deliver a jump in sales and leads.

At the end of the infographic, there are key takeaways that can help you get the most out of a website redesign.

Click here to get the full infographic

[Infographic: Why a Website Redesign Doesn’t Always Work]

What Do You Think?

Have you ever done a website redesign? What would you do to ensure that a redesigned website improves the bottom-line?

We’d love to know your thoughts. Post them in the comments section below.
