All posts by Natasha Wahid

How to create *emotionally relevant* marketing experiences for your shoppers

Marketers have more data than ever before. But even with all of this data, we still aren’t seeing a complete…


How to use pricing psychology to motivate your shoppers: Two test results just in time for Black Friday

Reading Time: 8 minutes

Black Friday, Cyber Monday, holiday sales, and post-Christmas blow-outs: We’re coming up to the biggest buying season of the year…


Your frequently asked conversion optimization questions, answered!

Reading Time: 28 minutes

Got a question about conversion optimization?

Chances are, you’re not alone!

This summer, WiderFunnel participated in several virtual events. And each one, from full-day summits to hour-long webinars, ended with a TON of great questions from all of you.

So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.

As you’ll notice, many experts and thought-leaders weighed in on your questions, including Chris Goward, Nick So, Hudson Arnold, André Morys, John Ekman, David Darmanin, and Jamie Elgie.

Now, without further introduction…

Your conversion optimization questions

Optimization Strategy

  1. What do you see as the most common mistake people make that has a negative effect on website conversion?
  2. What are the most important questions to ask in the Explore phase?
  3. Is there such a thing as too much testing and / or optimizing?

Personalization

  1. Do you get better results with personalization or A/B testing or any other methods you have in mind?
  2. Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
  3. With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
  4. How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

SEO versus CRO

  1. How do you avoid harming organic SEO when doing conversion optimization?

Getting Buy-in for Experimentation

  1. When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?
  2. Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

CRO for Low Traffic Sites

  1. Do you have any suggestions for success with lower traffic websites?
  2. What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
  3. How far can I go with funnel optimization and testing when it comes to small local business?

Tips from an In-House Optimization Champion

  1. How do you get buy-in from major stakeholders, like your CEO, to go with a conversion optimization strategy?
  2. What has surprised you or stood out to you while doing CRO?

Optimization Across Industries

  1. Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?
  2. When you have a longer sales process, getting them to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
  3. You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
  4. Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
  5. For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
  6. Do you find that testing strategies differ cross-culturally?

Experiment Design & Setup

  1. How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
  2. I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure that the test you wanted to win seems like the winner?
  3. When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
  4. Is there a way to conclusively tell why a test lost or was inconclusive?
  5. How many visits do you need to get to statistically relevant data from any individual test?
  6. We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
  7. When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Testing and Technology

  1. There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

Your questions, answered

Q: What do you see as the most common mistake people make that has a negative effect on website conversion?

Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.

I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.

And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.

It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.


Back to Top

Q: What are the most important questions to ask in the Explore phase?

Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason). In it, we are looking for insights about what questions to ask and validate. We are trying to identify…

  • What are the barriers to conversion?
  • What are the motivational triggers for your audience?
  • Why are people buying from you?

And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.

Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.

Further reading:

  • Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process, here.

Back to Top

Q: Is there such a thing as too much testing and / or optimizing?

Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website…they can’t improve forever. The question many marketers have is: how do I know how long to do this? Will there be diminishing returns? By putting in the same effort, will I get smaller and smaller results?

But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.

So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.

Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.

Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.

Back to Top

Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?

Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.

I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.

At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.

Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.

Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.

When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.

We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step-by-step process that makes sure we are only creating overhead where it’s necessary and will create value.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?

Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.

But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. And it’s practically impossible for a marketing department to create infinite marketing messages.

Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.

What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.

Back to Top

Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

A PANEL RESPONSE

André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.

Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.

John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common. Because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques plays out.

Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.

One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.

The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.

David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most things that are complex, I think personalization is most valuable when you have a high ticket price or very high value, where that touch of personalization has a big impact.

With Hotjar, we’re much more volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. And having as many common threads as possible is important, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you avoid harming organic SEO when doing conversion optimization?

Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they support optimization fully. They do not penalize companies for running A/B tests, if they are set up properly and the company is using a proper tool.

On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.

If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.

Google looks at pogo-sticking, where people land on the SERP, find a result, and then return back to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.

Video Resource: Watch SEO Wizard Rand Fishkin’s talk from CTA Conf 2017, “Why We Can’t Do SEO without CRO”

Back to Top

Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?

Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under-the-radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.

This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.

A sample PIE prioritization analysis using the PIE framework.

The three factors are Potential, Importance, and Ease, and one of the important aspects within Ease is political ease. So you want to look for areas that have political ease, which means there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, and create momentum, and then you can start sharing that throughout the organization to build that buy-in.
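To make the prioritization concrete, here is a minimal sketch (in Python, with hypothetical pages and scores) of how a PIE analysis can be tallied. The 1-10 scale and the simple average are assumptions; your own worksheet may weight the factors differently.

```python
# Hypothetical PIE prioritization sketch: score each candidate test area on
# Potential, Importance, and Ease (1-10), then rank by the simple average.
# Page names and scores are made up for illustration.

candidate_areas = {
    # page:           (potential, importance, ease)
    "Product page":   (8, 9, 6),
    "Checkout":       (7, 9, 4),
    "Homepage":       (6, 7, 3),  # low Ease partly reflects political sensitivity
    "Landing page A": (8, 5, 9),
}

def pie_score(potential: int, importance: int, ease: int) -> float:
    """Unweighted PIE score: the average of the three factors."""
    return round((potential + importance + ease) / 3, 1)

ranked = sorted(
    ((page, pie_score(*scores)) for page, scores in candidate_areas.items()),
    key=lambda item: item[1],
    reverse=True,
)

for page, score in ranked:
    print(f"{page:<15} PIE score: {score}")
```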

Further Reading: Marketers from ASICS’ global e-commerce team weigh in on evangelizing optimization at a global organization in this post, “A day in the life of an optimization champion”

Back to Top

Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

Nick So: Besides the obvious senior leadership and key decision-makers as you mention, we find getting buy-in from related departments like branding, marketing, design, copywriters and content managers, etc., can be very helpful.

Having these teams on board can not only help with the overall approval process, but also help ensure that winning tests and strategies are aligned with your overall business and marketing strategy.

You should also consider involving more tangentially-related teams like customer support. Not only does this make them a part of the process and testing culture, but your customer-facing teams can also be a great source of business insights and test ideas!

Back to Top

Q: Do you have any suggestions for success with lower traffic websites?

Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.

We take those yet-to-be-validated insights found in the Explore phase, and build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where we could validate those insights easily through multiple tests.)

Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.

If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or lead toward the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However, with this strategy, stakeholders have to be aware that increases in this secondary goal may not be tied directly to increases of the primary goal at the same rate.
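As a rough illustration of that caveat, here is a small sketch with made-up funnel numbers: a 20% lift in the upstream goal (add to cart) translates into a much smaller lift in the primary goal (transactions) when the downstream rate shifts.

```python
# Hypothetical funnel numbers illustrating the caveat above: a lift in an
# upstream goal (add to cart) may not carry through to the primary goal
# (transactions) at the same rate.

visitors = 10_000
control   = {"add_to_cart": 800, "transactions": 200}  # 8.0% ATC, 25.0% cart->txn
variation = {"add_to_cart": 960, "transactions": 216}  # 9.6% ATC, 22.5% cart->txn

for name, funnel in [("Control", control), ("Variation", variation)]:
    atc_rate = funnel["add_to_cart"] / visitors
    txn_rate = funnel["transactions"] / visitors
    cart_to_txn = funnel["transactions"] / funnel["add_to_cart"]
    print(f"{name}: add-to-cart {atc_rate:.1%}, "
          f"cart-to-transaction {cart_to_txn:.1%}, transactions {txn_rate:.2%}")

# Add-to-cart is up 20% in the variation, but transactions are only up 8%.
```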

Back to Top

Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?

Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.

But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand the opportunities and the barriers your visitors might be facing, and drilling deeper into their perspective. Then you’ll have a more powerful test setup.

You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.

In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?

Back to Top

 

Q: How far can I go with funnel optimization and testing when it comes to small local business?

A PANEL RESPONSE

David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.

John Ekman:

If you can’t test to significance, you should turn to qualitative research.

That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.

André Morys: Testing with too small a sample size is really dangerous because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming 70% uplift, when the numbers are 64 versus 27 conversions. And this is really dangerous because that result is bull sh*t.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?

Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.

For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.

It really sells itself though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as helping that overall strategy of making our display media spend, and all of our media spend, more targeted.

Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.

Back to Top

Q: What has surprised you or stood out to you while doing CRO?

Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. Things that we are all convinced we should change on our website, or that we should change in our messaging in general, we’ll test them and actually find out.

We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.

One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.

If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.

It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.

Case Study: Read the details of how Jamie’s company, weBoost, saw a 100% lift in year-over-year conversion rate with an optimization program.

Back to Top

Q: Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?

Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to

  1. Track your leads and opportunities to the variation,
  2. Then, track them through to the sale,
  3. And then look at whether average order value changes between the variations, which indicates the quality of the leads.

It’s easy to measure lead volume between variations, but if lead quality changes, that makes a big impact.
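Here is a minimal sketch of that analysis, with hypothetical CRM numbers: each lead is attributed to the variation that generated it, and the comparison is made on close rate, average order value, and revenue per visitor rather than lead volume alone.

```python
# Hypothetical lead-quality comparison. Lead volume alone would favor
# Variation B, but close rate, average order value, and revenue per visitor
# tell a different story.

variations = {
    # name:        (visitors, leads, closed_deals, total_revenue)
    "Control":     (20_000, 400, 60, 90_000),
    "Variation B": (20_000, 520, 55, 66_000),
}

print(f"{'Variation':<12}{'Lead rate':>10}{'Close rate':>12}{'AOV':>8}{'Rev/visitor':>13}")
for name, (visitors, leads, closed, revenue) in variations.items():
    lead_rate = leads / visitors
    close_rate = closed / leads
    aov = revenue / closed
    rpv = revenue / visitors
    print(f"{name:<12}{lead_rate:>10.1%}{close_rate:>12.1%}{aov:>8.0f}{rpv:>13.2f}")
```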

We actually have a case study about this with Magento. We asked the question, “Which of these calls-to-action is actually generating the most valuable leads?” and ran an experiment to find out. We tracked the leads all the way through to sale. This helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?

Case Study: Discover how Magento increased lead volume and lead quality in the full case study.

Back to Top

Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?

Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.

However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.

If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.
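Here is a back-of-the-envelope version of that calculation. The goal values (say, $200 in expected pipeline per demo request and $10 per ebook download) are hypothetical; plug in your own.

```python
# Back-of-the-envelope net-benefit check for the tradeoff described above.
# The per-goal values are hypothetical.

demo_value, ebook_value = 200, 10

control   = {"demos": 40, "ebooks": 400}
variation = {"demos": 60, "ebooks": 100}  # demos +50%, ebooks -75%

def total_value(counts):
    return counts["demos"] * demo_value + counts["ebooks"] * ebook_value

net = total_value(variation) - total_value(control)
print(f"Control value:   ${total_value(control):,}")
print(f"Variation value: ${total_value(variation):,}")
print(f"Net benefit:     ${net:,}")  # positive here, but it flips if ebooks are worth more
```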

Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.

Back to Top

Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?

Nick So: When we help a client determine conversion metrics…

…we always suggest following the money.

Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.

For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?

Back to Top

Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?

Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.

We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.

What we’ve found is that people are people: regardless of whether they’re shopping for a server, shopping for socks, or donating to third-world countries, they go through the same mental process in each case.

The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.

Most marketers spend most of their time thinking about their near-field competitors rather than about different industries, because it’s overwhelming to look at all of the other opportunities. But we are often able to look at an experience in a completely different way, because we are able to look at it through the lens of a different industry. That is very powerful.

Back to Top

Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?

Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.

In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).

When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact plus the internal company benefits, and negotiate prioritization and scheduling between teams.

I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.

If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).

Back to Top

 

Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?

Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, the UK, Canada, Japan, Korea, Spain, and Italy, and for the Olympics store, which is itself a global e-commerce experience in one site!

There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.

Back to Top

Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?

Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.

And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. So, we’ve developed some frameworks to do that. The PIE Framework allows you to prioritize ideas and test areas based on Potential, Importance, and Ease: the potential for improvement, the importance to the business, and the ease of implementation. And sometimes these are a little subjective, but the more data you can have to back these up, the better your focus and effort will be in delivering results.


Back to Top

Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure that the test you wanted to win seems like the winner?

Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.

There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. And if the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance, will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.

But you should be looking at one metric to tell you whether or not a test won.
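A small sketch of that discipline, assuming revenue-per-visitor is the primary metric: the win/lose call is made on RPV alone, while the secondary metrics are only reported for learning. (Numbers are hypothetical, and a real decision would also require statistical significance on the primary metric.)

```python
# Sketch of deciding on one primary metric (revenue per visitor) while logging
# secondary metrics for learning only. Numbers are hypothetical.

results = {
    "Control":   {"visitors": 50_000, "orders": 1_500, "revenue": 120_000, "downloads": 900},
    "Variation": {"visitors": 50_000, "orders": 1_450, "revenue": 130_500, "downloads": 1_100},
}

def rpv(r):
    return r["revenue"] / r["visitors"]

winner = max(results, key=lambda name: rpv(results[name]))
print("Primary metric (RPV): " +
      ", ".join(f"{name} {rpv(r):.2f}" for name, r in results.items()))
print(f"Winner on the primary metric: {winner}")

# Secondary metrics inform follow-up hypotheses, but don't change the call above.
for name, r in results.items():
    print(f"{name}: conversion rate {r['orders'] / r['visitors']:.2%}, downloads {r['downloads']}")
```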

Further Reading: Learn more about proper design of experiments in this blog post.

Back to Top

 

Q: When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.

Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.

So, run your tests a little bit longer and aim for statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. You should aim for that high level of statistical significance, and iron out that seasonality. And sometimes you’ll want to look at monthly seasonality as well, and retest questionable things within high and low urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.

Further Reading: You can’t make business decisions based on misleading A/B test results. Learn how to avoid the top 3 mistakes that make your A/B test results invalid in this post.

Back to Top

Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?

Chris Goward: Developing powerful hypotheses is dependent on having workable theories. Seeking to determine the “Why” behind the results is one of the most interesting parts of the work.

The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.

As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!

Back to Top

Q: How many visits do you need to get to statistically relevant data from any individual test?

Chris Goward: The number of visits is just one of the variables that determines statistical significance. The conversion rate of the Control and conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e. sample size), enough conversions, and the conversion rate delta is great enough.
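For readers who want to see the mechanics, here is a minimal two-proportion z-test in Python, one common way to estimate significance for conversion-rate tests. The numbers are hypothetical, and in practice your testing tool does this for you.

```python
# Minimal two-proportion z-test for an A/B result. Hypothetical numbers;
# testing tools calculate this automatically, as noted above.
from math import sqrt, erf

def ab_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    p_a, p_b = conversions_a / visitors_a, conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_significance(10_000, 300, 10_000, 345)
print(f"Control {p_a:.2%}, Variation {p_b:.2%}, z = {z:.2f}, p = {p:.3f}")
# The baseline rate, the delta, and the sample size all feed into the result:
# the same 15% relative lift would reach significance with more traffic.
```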

Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!)

When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule of thumb, you should plan for smaller isolation tests to run longer, as the impact on conversion rate may be less. The test may require more conversions to potentially achieve confidence.
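As a rough companion to the calculator linked above, here is a sketch of a rule-of-thumb duration estimate using a standard two-proportion sample-size approximation (roughly 95% confidence and 80% power). The baseline rate, minimum detectable effect, and daily traffic are hypothetical inputs.

```python
# Rough test-duration estimate from a standard two-proportion sample-size
# approximation (~95% confidence, ~80% power). Inputs are hypothetical.
from math import ceil

def estimated_duration_days(baseline_cr, relative_mde, daily_visitors,
                            variations=2, z_alpha=1.96, z_power=0.84):
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    n_per_variation = ((z_alpha + z_power) ** 2 * 2 * p_bar * (1 - p_bar)) / (p2 - p1) ** 2
    return ceil(n_per_variation * variations / daily_visitors)

# Example: 3% baseline conversion rate, hoping to detect a 10% relative lift,
# with 4,000 visitors per day split across two variations.
print(estimated_duration_days(0.03, 0.10, 4_000), "days (roughly)")
```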

Larger, more drastic cluster changes would typically run for a shorter period of time, as they have more potential to have a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t hinder you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.

Often, people that are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and one variation is still not statistically significant over another, then your test may not ever yield a clear winner and you should revise your test plan or reduce the number of variations being tested.

Further Reading: Do you have to wait for each test to reach statistical significance? Learn more in this blog post: “The more tests, the better!” and other A/B testing myths, debunked

Back to Top

Q: We are new to optimization (had a few quick wins with A/B testing and working toward a geo targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?

Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.

Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.

If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.

Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:

  • You have other tests ready to run with the same traffic sources
  • The test results are showing high volatility or have stabilized
  • The test insights will be important for the organization

There’s an opportunity cost to every test. You could always be testing something else and need to constantly be asking whether this is the best test to be running now vs. the cost and potential benefit of the next test in your conversion strategy.

Back to Top

 

Q: There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

A PANEL RESPONSE

John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look into the minds of whoever is in front of the screen, I think these methods are quite dangerous.

Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff. A tradeoff between understanding the why and just getting growth. If you want to understand the why infinitely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few have enough traffic to multivariate test everything.

But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.

There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works…if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things to ask yourself: Well, I wonder why that won, and start from there.

The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.

David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap, that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.

Creating a unique customer experience is going to become more and more important.

If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.

Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere, and does everything differently. That isn’t fully ‘optimized’ and is more human.

There is a danger in optimization itself being too optimized. If we eliminate the human aspect, we’re kind of screwed.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

What conversion optimization questions do you have?

Add your questions in the comments section below!


Your mobile website optimization guide (or, how to stop frustrating your mobile users)

Reading Time: 15 minutes

One lazy Sunday evening, I decided to order Thai delivery for dinner. It was a Green-Curry-and-Crispy-Wonton kind of night.

A quick google search from my iPhone turned up an ad for a food delivery app. In that moment, I wanted to order food fast, without having to dial a phone number or speak to a human. So, I clicked.

From the ad, I was taken to the company’s mobile website. There was a call-to-action to “Get the App” below the fold, but I didn’t want to download a whole app for this one meal. I would just order from the mobile site.

Dun, dun, duuuun.

Over the next minute, I had one of the most frustrating ordering experiences of my life. Label-less hamburger menus, the inability to edit my order, and an overall lack of guidance through the ordering process led me to believe I would never be able to adjust my order from ‘Chicken Green Curry’ to ‘Prawn Green Curry’.

After 60 seconds of struggling, I gave up, utterly defeated.

I know this wasn’t a life-altering tragedy, but it sure was an awful mobile experience. And I bet you have had a similar experience in the last 24 hours.

Let’s think about this for a minute:

  1. This company paid good money for my click
  2. I was ready to order online: I was their customer to lose
  3. I struggled for about 30 seconds longer than most mobile users would have
  4. I gave up and got a mediocre burrito from the Mexican place across the street.

Not only was I frustrated, but I didn’t get my tasty Thai. The experience left a truly bitter taste in my mouth.

10 test ideas for optimizing your mobile website!

Get this checklist of 10 experiment ideas you should test on your mobile website.




Why is mobile website optimization important?

In 2017, every marketer ‘knows’ the importance of the mobile shopping experience. Americans spend more time on mobile devices than on any other device. But we are still failing to meet our users where they are on mobile.

Americans spend 54% of online time on mobile devices. Source: KPCB.

For most of us, it is becoming more and more important to provide a seamless mobile experience. But here’s where it gets a little tricky…

“Conversion optimization”, and the term “optimization” in general, often imply improving conversion rates. But a seamless mobile experience does not necessarily mean a high-converting mobile experience. It means one that meets your user’s needs and propels them along the buyer journey.

I am sure there are improvements you can test on your mobile experience that will lift your mobile conversion rates, but you shouldn’t hyper-focus on a single metric. Instead, keep in mind that mobile may just be a step within your user’s journey to purchase.

So, let’s get started! First, I’ll delve into your user’s mobile mindset, and look at how to optimize your mobile experience. For real.

You ready?

What’s different about mobile?

First things first: let’s acknowledge that your user is the same human being whether they are shopping on a mobile device, a desktop computer, a laptop, or in-store. Agreed?

So, what’s different about mobile? Well, back in 2013, Chris Goward said, “Mobile is a state of being, a context, a verb, not a device. When your users are on mobile, they are in a different context, a different environment, with different needs.”

Your user is the same person when she is shopping on her iPhone, but she is in a different context. She may be in a store comparing product reviews on her phone, or she may be on the go looking for a good cup of coffee, or she may be trying to order Thai delivery from her couch.

Your user is the same person on mobile, but in a different context, with different needs.

This is why many mobile optimization experts recommend having a mobile website versus using responsive design.

Responsive design is not an optimization strategy. We should stop treating mobile visitors as ‘mini-desktop visitors’. People don’t use mobile devices instead of desktop devices, they use it in addition to desktop in a whole different way.

– Talia Wolf, Founder & Chief Optimizer at GetUplift

Step one, then, is to understand who your target customer is, and what motivates them to act in any context. This should inform all of your marketing and the creation of your value proposition.

(If you don’t have a clear picture of your target customer, you should re-focus and tackle that question first.)

Step two is to understand how your user’s mobile context affects their existing motivation, and how to facilitate their needs on mobile to the best of your ability.

Understanding the mobile context

To understand the mobile context, let’s start with some stats and work backwards.

  • Americans spend more than half (54%) of their online time on mobile devices (Source: KPCB, 2016)
  • Mobile accounts for 60% of time spent shopping online, but only 16% of all retail dollars spent (Source: ComScore, 2015)

Insight: Americans are spending more than half of their online time on their mobile devices, but there is a huge gap between time spent ‘shopping’ online, and actually buying.

  • 29% of smartphone users will immediately switch to another site or app if the original site doesn’t satisfy their needs (Source: Google, 2015)
  • Of those, 70% switch because of lagging load times and 67% switch because it takes too many steps to purchase or get desired information (Source: Google, 2015)

Insight: Mobile users are hypersensitive to slow load times, and too many obstacles.

So, why the heck are our expectations for immediate gratification so high on mobile? I have a few theories.

We’re reward-hungry

Mobile devices provide constant access to the internet, which means a constant expectation for reward.

“The fact that we don’t know what we’ll find when we check our email, or visit our favorite social site, creates excitement and anticipation. This leads to a small burst of pleasure chemicals in our brains, which drives us to use our phones more and more.” – TIME, “You asked: Am I addicted to my phone?”

If non-stop access has us primed to expect non-stop reward, is it possible that having a negative mobile experience is even more detrimental to our motivation than a negative experience in another context?

When you tap into your Facebook app and see three new notifications, you get a burst of pleasure. And you do this over, and over, and over again.

So, when you tap into your Chrome browser and land on a mobile website that is difficult to navigate, it makes sense that you would be extra annoyed. (No burst of fun reward chemicals!)

A mobile device is a personal device

Another facet to mobile that we rarely discuss is the fact that mobile devices are personal devices. Because our smartphones and wearables are with us almost constantly, they often feel very intimate.

In fact, our smartphones are almost like another limb. According to research from dscout, the average cellphone user touches his or her phone 2,617 times per day. Our thumbprints are built into them, for goodness’ sake.

Just think about your instinctive reaction when someone grabs your phone and starts scrolling through your pictures…

It is possible, then, that our expectations are higher on mobile because the device itself feels like an extension of us. Any experience you have on mobile should speak to your personal situation. And if the experience is cumbersome or difficult, it may feel particularly dissonant because it’s happening on your mobile device.

User expectations on mobile are extremely high. And while you can argue that mobile apps are doing a great job of meeting those expectations, the mobile web is failing.

If yours is one of the millions of organizations without a mobile app, your mobile website has got to work harder. Because a negative experience with your brand on mobile may have a stronger effect than you can anticipate.

Even if you have a mobile app, you should recognize that not everyone is going to use it. You can’t completely disregard your mobile website. (As illustrated by my extremely negative experience trying to order food.)

You need to think about how to meet your users where they are in the buyer journey on your mobile website:

  1. What are your users actually doing on mobile?
  2. Are they just seeking information before purchasing from a computer?
  3. Are they seeking information on your mobile site while in your actual store?

The great thing about optimization is that you can test to pick off low-hanging fruit, while you are investigating more impactful questions like those above. For instance, while you are gathering data about how your users are using your mobile site, you can test usability improvements.

Usability on mobile websites

If you are looking to get a few quick wins to prove the importance of a mobile optimization program, usability is a good place to begin.

The mobile web presents unique usability challenges for marketers. And given your users’ ridiculously high expectations, your mobile experience must address these challenges.

This image represents just a few mobile usability best practices.

Below are four of the core mobile limitations, along with recommendations from the WiderFunnel Strategy team around how to address (and test) them.

Note: For this section, I relied heavily on research from the Nielsen Norman Group. For more details, click here.

1. The small screen struggle

No surprise here: compared to desktop and laptop screens, even the biggest smartphone screen is smaller, which means it displays less content.

“The content displayed above the fold on a 30-inch monitor requires 5 screenfuls on a small 4-inch screen. Thus mobile users must (1) incur a higher interaction cost in order to access the same amount of information; (2) rely on their short-term memory to refer to information that is not visible on the screen.” – Nielsen Norman Group, “Mobile User Experience: Limitations and Strengths”

Strategist recommendations:

Consider persistent navigation and calls-to-action. Because of the smaller screen size, your users often need to do a lot of scrolling. If your navigation and main call-to-action aren’t persistent, you are asking your users to scroll down for information, and scroll back up for relevant links.

Note: Anything persistent takes up screen space as well. Make sure to test this idea before implementing it to make sure you aren’t stealing too much focus from other important elements on your page.
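As a concrete illustration, here is a minimal sketch of how a persistent call-to-action variation might be injected for a test. The `.product-cta` selector is a hypothetical placeholder; the point is that the change is small enough to A/B test before committing to it.

```typescript
// Pin the primary call-to-action to the bottom of the viewport for a test variation.
// '.product-cta' is a placeholder selector; swap in your own element.
function makeCtaPersistent(): void {
  const cta = document.querySelector<HTMLElement>('.product-cta');
  if (!cta) return; // fail quietly if the page structure differs

  Object.assign(cta.style, {
    position: 'fixed',
    bottom: '0',
    left: '0',
    right: '0',
    zIndex: '1000',
  });

  // Reserve space so the pinned bar does not cover the final content block.
  document.body.style.paddingBottom = `${cta.offsetHeight}px`;
}

makeCtaPersistent();
```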

2. The touchy touchscreen

Two main issues with the touchscreen (an almost universal trait of today’s mobile devices) are typing and target size.

Typing on a soft keyboard, like the one on your user’s iPhone, requires them to constantly divide their attention between what they are typing, and the keypad area. Not to mention the small keypad and crowded keys…

Target size refers to a clickable target, which needs to be a lot larger on a touchscreen than it does when your user has a mouse.

So, you need to make space for larger targets (bigger call-to-action buttons) on a smaller screen.

Strategist recommendations:

Test increasing the size of your clickable elements. Google provides recommendations for target sizing:

You should ensure that the most important tap targets on your site—the ones users will be using the most often—are large enough to be easy to press, at least 48 CSS pixels tall/wide (assuming you have configured your viewport properly).

Less frequently-used links can be smaller, but should still have spacing between them and other links, so that a 10mm finger pad would not accidentally press both links at once.
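If you want a quick way to find offenders before designing a test, a small audit script can flag tap targets that fall below the sizing guidance quoted above. This is a rough sketch under my own assumptions: the 48-pixel threshold comes straight from the recommendation, but the selector list is something you would tune for your own site.

```typescript
// Flag interactive elements rendered smaller than 48 CSS pixels in either dimension.
const MIN_TAP_SIZE = 48; // CSS pixels, per the guidance quoted above

function auditTapTargets(): void {
  const targets = document.querySelectorAll<HTMLElement>('a, button, input, select, [role="button"]');

  targets.forEach((el) => {
    const { width, height } = el.getBoundingClientRect();
    const visible = width > 0 && height > 0;

    if (visible && (width < MIN_TAP_SIZE || height < MIN_TAP_SIZE)) {
      console.warn(`Tap target below ${MIN_TAP_SIZE}px: ${Math.round(width)}x${Math.round(height)}`, el);
    }
  });
}

auditTapTargets();
```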

You may also want to test improving the clarity around what is clickable and what isn’t. This can be achieved through styling, and is important for reducing ‘exploratory clicking’.

When a user has to click an element to 1) determine whether or not it is clickable, and 2) determine where it will lead, this eats away at their finite motivation.

Another simple tweak: Test your call-to-action placement. Does it match with the motion range of a user’s thumb?

3. Mobile shopping experience, interrupted

As the term mobile implies, mobile devices are portable. And because we can use ‘em in many settings, we are more likely to be interrupted.

“As a result, attention on mobile is often fragmented and sessions on mobile devices are short. In fact, the average session duration is 72 seconds […] versus the average desktop session of 150 seconds.” – Nielsen Norman Group

Strategist recommendations:

You should design your mobile experience for interruptions, prioritize essential information, and simplify tasks and interactions. This goes back to meeting your users where they are within the buyer journey.

According to research by SessionM (published in 2015), 90% of smartphone users surveyed used their phones while shopping in a physical store to 1) compare product prices, 2) look up product information, and 3) check product reviews online.

You should test adjusting your page length and messaging hierarchy to facilitate your user’s main goals. This may be browsing and information-seeking versus purchasing.

4. One window at a time

As I’m writing this post, I have 11 tabs open in Google Chrome, split between two screens. If I click on a link that takes me to a new website or page, it’s no big deal.

But on mobile, your user is most likely viewing one window at a time. They can’t split their screen to look at two windows simultaneously, so you shouldn’t ask them to. Mobile tasks should be easy to complete in one app or on one website.

The more your user has to jump from page to page, the more they have to rely on their memory. This increases cognitive load, and decreases the likelihood that they will complete an action.

Strategist recommendations:

Your navigation should be easy to find and it should contain links to your most relevant and important content. This way, if your user has to travel to a new page to access specific content, they can find their way back to other important pages quickly and easily.

In e-commerce, we often see people “pogo-sticking”—jumping from one page to another continuously—because they feel that they need to navigate to another page to confirm that the information they have provided is correct.

A great solution is to ensure that your users can view key information that they may want to confirm (prices / products / address) on any page. This way, they won’t have to jump around your website and remember these key pieces of information.

Implementing mobile website optimization

As I’m sure you’ve noticed by now, the phrase “you should test” is peppered throughout this post. That’s because understanding the mobile context and reviewing usability challenges and recommendations are only the first steps.

If you can, you should test any recommendation made in this post. Which brings us to mobile website optimization. At WiderFunnel, we approach mobile optimization just like we would desktop optimization: with process.

You should evaluate and prioritize mobile web optimization in the context of all of your marketing. If you can achieve greater Return on Investment by optimizing your desktop experience (or another element of your marketing), you should start there.

But assuming your mobile website ranks high within your priorities, you should start examining it from your user’s perspective. The WiderFunnel team uses the LIFT Model framework to identify problem areas.

The LIFT Model allows us to identify barriers to conversion, using the six factors of Value Proposition, Clarity, Relevance, Anxiety, Distraction, and Urgency. For more on the LIFT Model, check out this blog post.

A LIFT illustration

I asked the WiderFunnel Strategy team to do a LIFT analysis of the food delivery website that gave me so much grief that Sunday night. Here are some of the potential barriers they identified on the checkout page alone:

This wireframe is based on the food delivery app’s checkout page. Each of the numbered LIFT points corresponds with the list below.
  1. Relevance: There is valuable page real estate dedicated to changing the language, when a smartphone will likely detect your language on its own.
  2. Anxiety: There are only 3 options available in the navigation: Log In, Sign Up, and Help. None of these are helpful when a user is trying to navigate between key pages.
  3. Clarity: Placing the call-to-action at the top of the page creates disjointed eyeflow. The user must scan the page from top to bottom to ensure their order is correct.
  4. Clarity: The “Order Now” call-to-action and “Allergy & dietary information” links are very close together. Users may accidentally tap one when they want to tap the other.
  5. Anxiety: There is no confirmation of the delivery address.
  6. Anxiety: There is no way to edit an order within the checkout. A user has to delete items, return to the menu and add new items.
  7. Clarity: Font size is very small, making the content difficult to read.
  8. Clarity: The “Cash” and “Card” icons have no context. Is a user supposed to select one, or are these just the payment options available?
  9. Distraction: The dropdown menus in the footer include many links that might distract a user from completing their order.

Needless to say, my frustrations were confirmed. The WiderFunnel team ran into the same obstacles I had run into, and identified dozens of barriers that I hadn’t.

But what does this mean for you?

When you are first analyzing your mobile experience, you should try to step into your user’s shoes and actually use your experience. Give your team a task and a goal, and walk through the experience using a framework like LIFT. This will allow you to identify usability issues within your user’s mobile context.

Every LIFT point is a potential test idea that you can feed into your optimization program.
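One way to keep those LIFT points organized is to capture each one as a structured test idea. The sketch below is purely illustrative: the factor names come from the LIFT Model described above, while the fields and the simple 1-to-5 priority score are my own assumptions, not WiderFunnel's internal format.

```typescript
// Capture each LIFT observation as a structured, prioritizable test idea.
// The factor names come from the LIFT Model; everything else is illustrative.
type LiftFactor =
  | 'Value Proposition'
  | 'Clarity'
  | 'Relevance'
  | 'Anxiety'
  | 'Distraction'
  | 'Urgency';

interface TestIdea {
  factor: LiftFactor;
  observation: string; // the barrier spotted during the walkthrough
  hypothesis: string;  // the change you believe will remove the barrier
  priority: 1 | 2 | 3 | 4 | 5; // a simple placeholder score
}

const backlog: TestIdea[] = [
  {
    factor: 'Anxiety',
    observation: 'No confirmation of the delivery address in the checkout',
    hypothesis: 'Showing the delivery address above the CTA will increase completed orders',
    priority: 4,
  },
];

// Highest-priority ideas move into the experiment queue first.
backlog.sort((a, b) => b.priority - a.priority);
console.table(backlog);
```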

Case study examples

This wouldn’t be a WiderFunnel blog post without some case study examples.

This is where we put ‘best mobile practices’ to the test. Because the smallest usability tweak may make perfect sense to you, yet be off-putting to your users.

In the following three examples, we put our recommendations to the test.

Mobile navigation optimization

In mobile design in particular, we tend to assume our users understand ‘universal’ symbols.

The ‘Hamburger Menu’ (shown here on Aritzia’s mobile site) is a fixture on mobile websites. But does that mean it’s a universally understood symbol?

But, that isn’t always the case. And it is certainly worth testing to understand how you can make the navigation experience (often a huge pain point on mobile) easier.

You can’t just expect your users to know things. You have to make it as clear as possible. The more you ask your user to guess, the more frustrated they will become.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

This example comes from an e-commerce client that sells artwork. In this experiment, we tested two variations against the original.

In the first, we increased font and icon size within the navigation and menu drop-down. This was a usability update meant to address the small, difficult to navigate menu. Remember the conversation about target size? We wanted to tackle the low-hanging fruit first.

With variation B, we dug a little deeper into the behavior of this client’s specific users.

Qualitative Hotjar recordings had shown that users were trying to navigate the mobile website using the homepage as a homebase. But this site actually has a powerful search functionality, and it is much easier to navigate using search. Of course, the search option was buried in the hamburger menu…

So, in the second variation (built on variation A), we removed Search from the menu and added it right into the main Nav.

Wireframes of the control navigation versus our variations.
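To make the change concrete, here is a minimal sketch of how a variation like variation B might be injected: pull the existing search field out of the hamburger menu and drop it into the always-visible header. The selectors are hypothetical placeholders, not the client's actual markup.

```typescript
// Relocate the search field from the hamburger menu into the persistent header.
// Both selectors are placeholders; adapt them to your own page structure.
function promoteSearchToMainNav(): void {
  const search = document.querySelector<HTMLElement>('#menu-drawer .search-form');
  const nav = document.querySelector<HTMLElement>('header .main-nav');
  if (!search || !nav) return; // bail out safely if the structure differs

  nav.appendChild(search);            // move the existing field, don't duplicate it
  search.classList.add('nav-search'); // styling hook for the variation
}

promoteSearchToMainNav();
```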

Results

Both variations beat the control. Variation A led to a 2.7% increase in transactions, and a 2.4% increase in revenue. Variation B decreased clicks to the menu icon by 24%, increased transactions by 8.1%, and lifted revenue by 9.5%.

Never underestimate the power of helping your users find their way on mobile. But be wary! Search worked for this client’s users, but it is not always the answer, particularly if what you are selling is complex, and your users need more guidance through the funnel.

Mobile product page optimization

Let’s look at another e-commerce example. This client is a large sporting goods store, and this experiment focused on their product detail pages.

On the original page, our Strategists noted a worst mobile practice: The buttons were small and arranged closely together, making them difficult to click.

There were also several optimization blunders:

  1. Two calls-to-action were given equal prominence: “Find in store” and “+ Add to cart”
  2. “Add to wishlist” was also competing with “Add to cart”
  3. Social icons were placed near the call-to-action, which could be distracting

We had evidence from an experiment on desktop that removing these distractions, and focusing on a single call-to-action, would increase transactions. (In that experiment, we saw transactions increase by 6.56%).

So, we tested addressing these issues in two variations.

In the first, we de-prioritized competing calls-to-action, and increased the size of the ‘Size’ and ‘Qty’ fields. In the second, we wanted to address usability issues, making the color options, size options, and quantity field bigger and easier to click.

The control page versus our variations.

Results

Both of our variations lost to the Control. I know what you’re thinking…what?!

Let’s dig deeper.

Looking at the numbers, users responded in the way we expected, with significant increases to the actions we wanted, and a significant reduction in the ones we did not.

Visits to “Reviews”, “Size”, “Quantity”, “Add to Cart” and the Cart page all increased. Visits to “Find in Store” decreased.

And yet, although the variations were more successful at moving users through to the next step, there was not a matching increase in motivation to actually complete a transaction.

It is hard to say for sure why this result happened without follow-up testing. However, it is possible that this client’s users have different intentions on mobile: Browsing and seeking product information vs. actually buying. Removing the “Find in Store” CTA may have caused anxiety.

This example brings us back to the mobile context. If an experiment wins within a desktop experience, this certainly doesn’t guarantee it will win on mobile.

I was shopping for shoes the other day, and was actually browsing the store’s mobile site while I was standing in the store. I was looking for product reviews. In that scenario, I was information-seeking on my phone, with every intention to buy…just not from my phone.

Are you paying attention to how your unique users use your mobile experience? It may be worthwhile to take the emphasis off of ‘increasing conversions on mobile’ in favor of researching user behavior on mobile, and providing your users with the mobile experience that best suits their needs.

Note: When you get a test result that contradicts usability best practices, it is important that you look carefully at your experiment design and secondary metrics. In this case, we have a potential theory, but would not recommend any large-scale changes without re-validating the result.
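If you want a rough way to sanity-check a result like this before re-validating it with a follow-up test, a two-proportion z-test is a common starting point. The sketch below is a simplified illustration with made-up numbers, not the statistics engine behind any particular testing tool.

```typescript
// Two-proportion z-test: is the variation's transaction rate different from the control's?
// All counts below are invented for illustration.
function zTest(convControl: number, nControl: number, convVariation: number, nVariation: number): number {
  const pControl = convControl / nControl;
  const pVariation = convVariation / nVariation;
  const pooled = (convControl + convVariation) / (nControl + nVariation);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / nControl + 1 / nVariation));
  return (pVariation - pControl) / standardError;
}

// |z| of roughly 1.96 or more corresponds to about 95% confidence (two-sided).
const z = zTest(480, 12000, 520, 12000); // hypothetical visitor and transaction counts
console.log(`z = ${z.toFixed(2)}`);
```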

Mobile checkout optimization

This experiment was focused on one WiderFunnel client’s mobile checkout page. It was an insight-driving experiment, meaning the focus was on gathering insights about user behavior rather than on increasing conversion rates or revenue.

Evidence from this client’s business context suggested that users on mobile may prefer alternative payment methods, like Apple Pay and Google Wallet, to the standard credit card and PayPal options.

To make things even more interesting, this client wanted to determine the desire for alternative payment methods before implementing them.

The hypothesis: By adding alternative payment methods to the checkout page in an unobtrusive way, we can determine by the percent of clicks which new payment methods are most sought after by users.

We tested two variations against the Control.

In variation A, we pulled the credit card fields and call-to-action higher on the page, and added four alternative payment methods just below the CTA: PayPal, Apple Pay, Amazon Payments, and Google Wallet.

If a user clicked on one of the four alternative payment methods, they would see a message:

“Google Wallet coming soon!
We apologize for any inconvenience. Please choose an available deposit method.
Credit Card | PayPal”

In variation B, we flipped the order. We featured the alternative payment methods above the credit card fields. The focus was on increasing engagement with the payment options to gain better insights about user preference.

The control against variations testing alternative payment methods.

Note: For this experiment, iOS devices did not display the Google Wallet option, and Android devices did not display Apple Pay.
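For readers wondering how a ‘fake door’ setup like this gets instrumented, here is a minimal sketch: render the not-yet-available options, record which one gets tapped, and show the fallback message. The data attributes and the tracking call are placeholders, not the actual experiment code.

```typescript
// Track interest in payment methods that are not implemented yet.
// The data-payment attributes and the tracking call are placeholders.
const comingSoonMethods = ['apple-pay', 'google-wallet', 'amazon-payments'];

comingSoonMethods.forEach((method) => {
  const button = document.querySelector<HTMLElement>(`[data-payment="${method}"]`);

  button?.addEventListener('click', (event) => {
    event.preventDefault();
    console.log('payment_method_interest', { method }); // swap in your analytics event
    alert('This payment method is coming soon! Please choose an available deposit method.');
  });
});
```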

Results

On iOS devices, Apple Pay received 18% of clicks, and Amazon Pay received 12%. On Android devices, Google Wallet received 17% of clicks, and Amazon Pay also received 17%.

The client can use these insights to build the best experience for mobile users, offering Apple Pay and Google Wallet as alternative payment methods rather than PayPal or Amazon Pay.

Unexpectedly, both variations also increased transactions! Variation A led to an 11.3% increase in transactions, and variation B led to an 8.5% increase.

Because your user’s motivation is already limited on mobile, you should try to create an experience with the fewest possible steps.

You can ask someone to grab their wallet, decipher their credit card number, expiration date, and CVV code, and type it all into a small form field. Or, you can test leveraging the digital payment options that may already be integrated with their mobile devices.
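One way to do that on the web is the Payment Request API, which asks the browser to surface payment methods already configured on the device. The sketch below is a bare-bones illustration; the Google Pay method identifier is real, but the amounts are placeholders and the provider-specific `data` configuration (required in practice) is omitted.

```typescript
// Ask the browser to surface a wallet already set up on the device.
// Amounts are placeholders; the provider-specific `data` config is omitted here
// but required for a real integration.
async function payWithDeviceWallet(): Promise<void> {
  const request = new PaymentRequest(
    [{ supportedMethods: 'https://google.com/pay', data: { /* provider-specific config */ } }],
    { total: { label: 'Order total', amount: { currency: 'USD', value: '24.99' } } }
  );

  if (await request.canMakePayment()) {
    const response = await request.show();
    // Hand response.details to your payment processor, then close the sheet.
    await response.complete('success');
  }
}
```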

The future of mobile website optimization

Imagine you are in your favorite outdoor goods store, and you are ready to buy a new tent.

You are standing in front of piles of tents: 2-person, 3-person, 4-person tents; 3-season and extreme-weather tents; affordable and pricey tents; light-weight and heavier tents…

You pull out your smartphone, and navigate to the store’s mobile website. You are looking for more in-depth product descriptions and user reviews to help you make your decision.

A few seconds later, a store employee asks if they can help you out. They seem to know exactly what you are searching for, and they help you choose the right tent for your needs within minutes.

Imagine that while you were browsing products on your phone, that store employee received a notification that you are 1) in the store, 2) looking at product descriptions for tent A and tent B, and 3) standing by the tents.

Mobile optimization in the modern era is not about increasing conversions on your mobile website. It is about providing a seamless user experience. In the scenario above, the in-store experience and the mobile experience are inter-connected. One informs the other. And a transaction happens because of each touch point.

Mobile experiences cannot live in a vacuum. Today’s buyer switches seamlessly between devices [and] your optimization efforts must reflect that.

– Yonny Zafrani, Mobile Product Manager, Dynamic Yield

We wear the internet on our wrists. We communicate via chat bots and messaging apps. We spend our leisure time on our phones: streaming, gaming, reading, sharing.

And while I’m not encouraging you to shift your optimization efforts entirely to mobile, you must consider the role mobile plays in your customers’ lives. The online experience is mobile. And your mobile experience should be an intentional step within the buyer journey.

What does your ideal mobile shopping experience look like? Where do you think mobile websites can improve? Do you agree or disagree with the ideas in this post? Share your thoughts in the comments section below!

The post Your mobile website optimization guide (or, how to stop frustrating your mobile users) appeared first on WiderFunnel Conversion Optimization.

More:

Your mobile website optimization guide (or, how to stop frustrating your mobile users)

‘Get past personas’, and other takeaways from CTA Conf 17

Reading Time: 7 minutes

This week, I spent two jam-packed days at Unbounce’s fourth-ever Call To Action Conference. The one-track event featured some of today’s most influential digital marketing speakers like Mitch Joel, Kindra Hall, and Rand Fishkin.

The WiderFunnel team and I having a ball at the CTA Conf afterparty.

Session topics ranged from integrity in marketing, to performance marketing success, to the marriage of SEO and conversion optimization. But most shared a common theme: Don’t forget about the real person behind that click.

Knowledge bombs were dropped, important conversations were had, and actionable insights were shared. So, in today’s post, I’m going to share some of my most important takeaways from CTA Conf.

If you attended the conference, please share your favorite takeaways in the comments below!

1. Don’t be trendy, be data-driven

Featured Speaker: Oli Gardner

Unbounce Co-Founder, Oli Gardner, kicked things off on the first day.

Fun fact: Due to technical difficulties, Oli ended up acting out his entire opening video sequence (and most of the subsequent videos in his presentation). He handled the hiccup like a pro, of course, and launched into a great session on data-driven design.

One of the strongest points that Oli made was that digital marketing trends self-perpetuate, regardless of whether or not they are helpful to a user.

I know we, as data-driven marketers, ‘know’ this fact. We complain about ‘best practices’, and buzzwords, and yet we still get totally caught up in trends.

Remember when explainer videos became the end-all, be-all for homepages?

Oli pointing out the flaws in an old Unbounce explainer video at CTA Conf.

What happened? Hundreds of blog posts were written about explainer videos, and hundreds of explainer videos were produced to talk about how great explainer videos are. And then, every homepage on the internet featured an explainer video.

But…were all of those explainer videos really what customers needed? In some cases, but certainly not in all.

Instead, Oli spoke about the need to “mend trends”, and make design decisions based on data, rather than the most popular trend at the time.

We hold the same view at WiderFunnel. You can A/B test explainer video after explainer video. But to create truly impactful experiences, you have to go back to the research phase.

Use the data you have to drill into what you think are your most important business problems. And test hypotheses that attempt to solve for those problems.

2. Choose people, not personas

I’m not a big fan of personas. I’ve never kicked it with a persona.

– Wil Reynolds

But, without personas, how do I write the right copy for my customers at the right time?!

Don’t panic.

Focus on motivation instead

Featured Speaker: Joel Klettke

Conversion copywriter extraordinaire, Joel Klettke, spoke about how to read your customer’s mind. He emphasized the need to get past user personas and keywords, and focus on customer motivation instead.

Joel Klettke on stage at CTA Conf.

We get stuck behind our screens, and start writing about ‘synergies’ and features that our customers really don’t care about.

– Joel Klettke

He outlined a framework for getting your customers to tell you about their pain points, anxieties, desired outcomes, and priorities, in their own words:

  1. Ask
  2. Record
  3. Analyze
  4. Feed
  5. Measure

Note: I didn’t dig too deeply into the framework, here. But Joel put together a resource for CTA Conf attendees, and graciously gave me the green light to share it. Check it out here!

Jobs To Be Done vs. Personas

Featured Speaker: Claire Suellentrop

On Day 2, Claire Suellentrop built on this idea of the dated persona.

She explained that marketers collect many data points about our prospects, like…

  • Gender, age, location
  • Title, company, industry
  • Married, no kids, one puppy

…but asked whether or not all of that data actually helps us determine why a real human being just bought a new backpack from Everlane.

As an alternative, she suggested the Jobs To Be Done framework. JTBD refers to your customer’s struggle to make progress on something. When your customer overcomes that struggle, the job is done, and they have made progress.

The framework looks a little something like this:

“When ____________ (event that triggers the struggle), help me ______ (struggle / job) so I can __________ (better life / done).”

To identify your customers’ struggle, Claire suggests actually asking your customers. She outlined several sample questions:

  • “Take me back to life before [product]. What was it like?”
  • “What happened that compelled you to start looking for something different?”
  • “What happened when you tried [product] that made you confident it was right for you?”
  • “What can you do now that you couldn’t do before?”

3. Tell the story, don’t just allude to it

Featured Speaker: Kindra Hall

One of my favorite speakers on Day 1 of CTA Conf was Kindra Hall. (Not surprising, as she is the storytelling expert).

Kindra dug into strategic storytelling in marketing. According to her, you should use a story every time you need to communicate value in your marketing.

Kindra Hall sets out to define storytelling in marketing.

Storytelling is powerful because real-life humans are attracted to great stories. (And marketers talk to people, after all.)

Stories, according to Kindra, stick with us and make us do stuff because storytelling is a co-creative process.

“As I am telling you my story, you are creating your own in your mind. I am giving you my words, but you are meeting me half way, and we are creating a shared memory,” Kindra explained.

The most powerful moment in her talk came when she challenged the audience with the biggest storytelling mistake:

Too often, we allude to the story, but don’t actually tell it.

– Kindra Hall

She showed two example videos to illustrate her point. In the first, a company founder almost told her compelling story about losing both of her parents, but glossed over the details. The result was a pretty video, with pretty music that almost created feeling.

In the second video, the founder told her full story, explaining how losing her parents shaped her company and product. The difference in emotional impact was kind of incredible.

And making your customers feel is a huge part of making your customers act. Because we — consumers, people, humans — don’t buy products or services…we buy feelings.

4. Pay attention to people signals

For goodness’ sake, solve the searcher’s problem

Featured Speaker: Wil Reynolds

Founder of Seer Interactive, Wil Reynolds, danced his way onto the stage, and delivered a really strong talk on SEO, conversion optimization, and the importance of people signals.

Wil Reynolds at CTA Conf
Wil remembers when he f*ed up, and forgot about the HUMAN element.

He didn’t mince words, explaining that marketers too often put conversions before customers. We ask “how do I get?” when we should be asking, “how do I help my customer get what they need?”

When you do an amazing job on search, you get to help people who are lost solve their problems.

– Wil Reynolds

Wil painted a picture of how we, as marketers, are letting our own wants override solving our customers’ problems. In the world of search, Wil pointed out that Google rewards pages that solve the searcher’s query. So solve the searcher’s query!

Much like we allude to stories, but often don’t tell them, we talk about listening to our customers, but often don’t really listen.

Instead of showing them product comparisons when they search “best CRM platform”, we pay to show them a landing page that claims “My product is the best! Get in my funnel!”

This isn’t just an issue in search or performance. In conversion optimization, there is an emphasis on velocity over user research. There is pressure to test more, and test faster.

But we must take the time to do the research: to get as close as possible to our customers’ problems, and to tailor our marketing experiences to their needs.

Win at SEO and CRO with a long-term vision

Featured Speaker: Rand Fishkin

Building on Wil’s session on Day 1, SEO wizard Rand Fishkin gave the audience actionable tips on how to optimize for searcher intent.

Rand pointed to conversion optimization.

At its core, conversion optimization is about getting into your customers’ minds, and testing changes to get closer to the best possible customer experience. To give your customer what they need, you must soothe their pain points, and provide a solution.

You can apply this same concept to SEO: If you 1) gain a deep understanding of what searchers are seeking, and 2) determine why some searchers come away unsatisfied, you can optimize for searcher task accomplishment.

Rand Fishkin at CTA Conf
Rand demonstrates how establishing trust leads to ROI.

Unfortunately, Rand pointed out, there is still a conflict between SEO and CRO, because conversion rate and searcher satisfaction are sometimes in direct opposition.

For example, let’s say you want to get more blog subscriptions, so you add a pop-up to your blog post. This may lead to a higher conversion rate on the page, but lower searcher satisfaction. Some readers might bounce, which may lead to lower organic traffic.

But, Rand ended on a high note:

You can win with long-term thinking. By always asking, ‘are we building a brand that’s helping people succeed?’

– Rand Fishkin

5. Don’t fear disruption. Own it.

Featured Speaker: Mitch Joel

One of the final speakers on Day 1 was marketing thought-leader, Mitch Joel, who shook things up a bit. Mitch spoke about what it means to be disruptive (and how to not fear disruption).

Mitch Joel at CTA Conf
“For a seed to achieve its greatest expression, it must come completely undone.”

When I ask C-Suite marketers to define disruption, the definition is never consistent. In fact, I often don’t get a definition of disruption, I get a definition of destruction.

– Mitch Joel

He asked, if disruption is the big bad wolf, who are the heroes in this marketing story?

Well, like the three little pigs, Mitch discussed three ways to be disruptive rather than be disrupted:

  1. Transformation: Business transformation isn’t about your products or services. It works from the inside out, and it starts with technology: you need to be using the same technology and the same forms of communication your customers are using.
  2. Innovative marketing: Innovation is not a re-allocation of resources; it isn’t investing more in Google AdWords versus another channel. Real innovation is about creating new products and experiences you can market with.
  3. Micro-transactions: Marketers and businesses get caught up in the macro transaction, the purchase. But we live in a world of micro-transactions. This is the customer journey, and it is extremely important to understand.

Mitch Joel emphasized that if you can apply these ‘three little pigs’ to your business model, you will be in a great place, though he recognized that it’s not always easy.

But nothing great is ever easy.

6. Be bold enough to be wrong

Featured Speaker: Michael Aagaard

Senior Conversion Optimizer at Unbounce, Michael Aagaard, closed out the two-day conference. His message was a simple but powerful warning against the trap of confirmation bias.

We, as humans, are not interested in information, but confirmation.

– Michael Aagaard

Confirmation bias refers to our tendency to search for and recall information in ways that confirm our existing beliefs, hypotheses, and expectations. And it is a threat to data-driven marketing.

Michael Aagaard at CTA Conf
Michael takes us back to ye olde London to make a point about the enduring power of confirmation bias.

When you A/B test, you are searching for objectivity. You are trying to figure out which variation your users prefer, outside of your own opinions and beliefs about what works best.

But it’s rarely that simple, even if you are a pro.

Michael showed us a landing page that he analyzed for a client, featuring a stock photo hero image. He said he had railed against the photo, and shown the client examples of the hundreds of other stock photos featuring the same model.

But, when he tested the landing page, he found that the original version, featuring the ‘terrible’ stock photo, was the clear winner.

“Maybe,” he said, “users don’t spend hours scouring the internet for stock photo sinners like I do.”

He urged the audience to be bold enough to be wrong, to challenge our hypotheses, and get out of the marketing bubble when we are trying to solve problems.

If we don’t get out of the marketing bubble, we end up making assumptions, and designing experiences for ourselves.

– Michael Aagaard

Go hang out with your customer success teams and sales teams; get outsider input on your ‘great’ ideas. Go find your own natural skeptic, and challenge your hypotheses.

Were you at CTA Conf 17? What were your most important takeaways? Who were your favorite speakers, and why? Let us know in the comments!

The post ‘Get past personas’, and other takeaways from CTA Conf 17 appeared first on WiderFunnel Conversion Optimization.

More – 

‘Get past personas’, and other takeaways from CTA Conf 17

How do ad agencies win a Cannes Lion award?

Reading Time: 2 minutes

As the Cannes Lions Festival is wrapping up this week, we’re seeing the annual breathless, self-congratulatory statements coming out of agencies with photos of their awards and sun-tanned creative teams sipping champagne.

Cannes Lions
Thanks for the trip to the south of France, clients!
We’d like to thank the little people who made this possible.

They should feel proud. They’ve achieved a huge accomplishment that has been the recognized stamp of credibility for advertising creativity since 1954.

How do agencies win at the Cannes Lions festival?

When I worked at the big ad agencies, I was often shocked at how they used clients’ budgets for the purpose of winning awards and self-promotion.

I’ve seen ad agency executives planning how to maximize their billings for minimal work and use their clients’ budgets to submit campaigns for awards.

I vividly remember, shortly before I walked away from my ad agency career, being part of a team that created a poster to promote a lightbulb.

It involved an elaborate set rental, professional photography shoot, intensive image editing, and ultimately cost the client $17,000. For a poster.

It did nothing to communicate the benefits of the lightbulb for consumers. And there was not a single conversation at the agency about how we should measure results, or even what the goal was for the poster.

Was it a failed poster campaign?

It certainly didn’t achieve the goals in the official creative brief.

But, it did win a prestigious award for that agency and the creative director.

It was certainly a clever (if not esoteric) concept with beautiful, subtle photography, but it was entirely useless as an ad.

I watched as the client contacts turned a blind eye to the waste, knowing that they would be repaid with lavish expense account dinners in exchange for handing over their company’s cash.

CMOs are turning against award-obsessed agencies

That’s why today’s CMOs are rejecting traditional award-seeking agencies. They know those agencies don’t care about their clients. Much less their clients’ customers.

Today’s CMOs know award-seeking agencies don’t care about their clients. Much less their clients’ customers.

They know that too-clever ads often don’t achieve results. Their digital transformation is changing their priorities. Data-informed ad campaigns are now revealing how ineffective the old gut-feeling approach can be.

They are seeking alternatives, and finding them in the Zen Marketing approach that balances intuition with data, big ideas with bold experiments, inspiration with rigorous validation.

The alternative to cleverness is customer insights that are validated by robust data.

The alternative to awards for cleverness is measurable results lift.

I firmly believe that creativity is still required for advertising. And a rigorous experimentation program is enabling today’s marketing innovation.

I’m reminded again, in this Cannes Lions Festival season, of why I started WiderFunnel to be the “anti-agency.” And again, why we will never make a recommendation if we haven’t tested its ability to lift the client’s revenue.

So, the next time you’re in an agency pitch where they’re bragging about their awards, don’t walk; run away from hiring them. They’re telling you they don’t care about you.

Why we will never win a Cannes Lion award

Short answer: Because we will never submit for one.

The post How do ad agencies win a Cannes Lion award? appeared first on WiderFunnel Conversion Optimization.

View original post here – 

How do ad agencies win a Cannes Lion award?

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

Reading Time: 2 minutes

What would you do with 21% more sales this month?

I bet you’d walk into your next meeting with your boss with an extra spring in your step, right?

Well, when you implement a strategic marketing optimization program, results like this are not only possible, they are probable.

In this new case study, you’ll discover how e-commerce software supplier, Ecwid, ran one experiment for four weeks, and saw a 21% increase in paid upgrades.

Get the full Ecwid case study now!

Download a PDF version of the Ecwid case study, featuring experiment details, supplementary takeaways and insights, and a testimonial from Ecwid’s Sr. Director, Digital Marketing.




A little bit about Ecwid

Ecwid provides easy-to-use online store setup, management, and payment solutions. The company was founded in 2009, with the goal of enabling business-owners to add online stores to their existing websites, quickly and without hassle.

The company has a freemium business model: Users can sign up for free, and unlock more features as they upgrade to paid packages.

Ecwid’s partnership with WiderFunnel

In November 2016, Ecwid partnered with WiderFunnel with two primary goals:

  1. To increase initial signups for their free plan through marketing optimization, and
  2. To increase the rate of paid upgrades, through platform optimization

This case study focuses on a particular experiment cycle that ran on Ecwid’s step-by-step onboarding wizard.

The methodology

Last Winter, the WiderFunnel Strategy team did an initial LIFT Analysis of the onboarding wizard, and identified several potential barriers to conversion. (Both in terms of completing steps to setup a new store, and in terms of upgrading to a paid plan.)

The lead WiderFunnel Strategist for Ecwid, Dennis Pavlina, decided to create an A/B cluster test to 1) address the major barriers simultaneously, and 2) get a major lift for Ecwid quickly.

The overarching goal was to make the onboarding process smoother. The WiderFunnel and Ecwid optimization teams hoped that enhancing the initial user experience, and exposing users to the wide range of Ecwid’s features, would result in more users upgrading to paid plans.

Dennis Pavlina

Ecwid’s two objectives ended up coming together in this test. We thought that if more new users interacted with the wizard and were shown the whole ‘Ecwid world’ with all the integrations and potential it has, they would be more open to upgrading. People needed to be able to see its potential before they would want to pay for it.

Dennis Pavlina, Optimization Strategist, WiderFunnel

The Results

This experiment ran for four weeks, at which point the variation was determined to be the winner with 98% confidence. The variation resulted in a 21.3% increase in successful paid account upgrades for Ecwid.

Read the full case study for:

  • The details on the initial barriers to conversion
  • How this test was structured
  • Which secondary metrics we tracked, and
  • The supplementary takeaways and customer insights that came from this test

The post [Case Study] Ecwid sees 21% lift in paid plan upgrades in one month appeared first on WiderFunnel Conversion Optimization.

See original article:

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

Capturing supermarket magic and providing the ideal customer experience

Reading Time: 6 minutes

The customer-centric focus

Over the past few years, one message has been gaining momentum within the marketing world: customer experience is king.

“Customer experience” (CX) refers to your customer’s perception of her relationship with your brand—both conscious and subconscious—based on every interaction she has with your brand during her customer life cycle.

Customer experience is king
How do your customers feel about your brand?

Companies are obsessing over CX, and for good reason(s):

  • It is 6-7x more expensive to attract a new customer than it is to retain an existing customer
  • 67% of consumers cite ‘bad experiences’ as a reason for churn
  • 66% of consumers who switch brands do so because of poor service

Across sectors, satisfied customers spend more, exhibit deeper loyalty to companies, and create conditions that allow companies to have lower costs and higher levels of employee engagement.

As conversion optimization specialists, we test in pursuit of the perfect customer experience, from that first email subject line, to the post-purchase conversation with a customer service agent.

We test because it is the best way to listen, and create ideal experiences that will motivate consumers to choose us over our competitors in the saturated internet marketplace.

Create the perfect personalized customer experience!

Your customers are unique, and their ideal experiences are unique. Create the perfect customer experience with this 4-step guide to building the most effective personalization strategy.





Which leads me to the main question of this post: Which companies are currently providing the best customer experiences, and how can you apply their strategies in your business context?

Each year, the Tempkin Group releases a list of the best and worst US companies, by customer experience rating. The list is based on survey responses from 10,000 U.S. consumers, regarding their recent experiences with companies.

And over the past few years, supermarkets have topped that list: old school, brick-and-mortar, this-model-has-been-around-forever establishments.

Customer experience - brick-mortar vs. ecommerce
What are supermarkets doing so right, and how can online retailers replicate it?

In the digital world, we often focus on convenience, usability, efficiency, and accessibility…but are there elements at the core of a great customer experience that we may be missing?

A quick look at the research

First things first: Let’s look at how the Tempkin Group determines their experience ratings.

Tempkin surveys 10,000 U.S. consumers, asking them to rate their recent (past 60 days) interactions with 331 companies across 20 industries. The survey questions cover Tempkin’s three components of experience:

  1. Success: Were you, the consumer, able to accomplish what you wanted to do?
  2. Effort: How easy was it for you to interact with the company?
  3. Emotion: How did you feel about those interactions?

Respondents answer questions on a scale of 1 (worst) to 7 (best), and researchers score each company accordingly. For more details on how the research was conducted, you can download the full report here.

In this post, I am going to focus on one supermarket that has topped the list for the past three years: Publix. Not only does Publix top the Tempkin ratings, it also often tops the supermarket rankings compiled by the American Customer Satisfaction Index.

Long story short: Publix is winning the customer experience battle.

WiderFunnel Customer Experience Ratings Tempkin 2017
2017 Customer Experience ratings from Tempkin.
WiderFunnel Customer Experience Ratings Tempkin 2016
2016 Customer Experience ratings from Tempkin.

So, what does Publix do right?

Publix growth - WiderFunnel customer experience
Publix growth trends (Source).

If you don’t know it, Publix Super Markets, Inc. is an American supermarket chain headquartered in Florida. Founded in 1930, Publix is a private corporation that is wholly owned by present and past employees; it is considered the largest employee-owned company in the world.

In an industry that has seen recent struggles, Publix has seen steady growth over the past 10 years. So, what is this particular company doing so very right?

1. World-class customer service

Publix takes great care to provide the best possible customer service.

From employee presentation (no piercings, no unnatural hair color, no facial hair), to the emphasis on “engaging the customer”, to the bread baked fresh on-site every day, the company’s goal is to create the most pleasurable shopping experience for each and every customer.

When you ask “Where is the peanut butter?” at another supermarket, an employee might say, “Aisle 4.” But at Publix, you will be led to the peanut butter by a friendly helper.

The store’s slogan: “Make every customer’s day a little bit better because they met you.”

2. The most motivated employees

Publix associates are famously “pleased-as-punch, over-the-moon, [and] ridiculously contented”.

Note the term “associates”: Because Publix is employee-owned, employees are not referred to as employees, but associates. As owners, associates share in the store’s success: If the company does well, so do they.

“Our culture is such that we believe if we take care of our associates, they in turn will take care of our customers. Associate ownership is our secret sauce,” said Publix spokeswoman Maria Brous. “Our associates understand that their success is tied to the success of our company and therefore, we must excel at providing legendary service to our customers.”

3. Quality over quantity

While Publix is one of the largest food retailers in the country by revenue, they operate a relatively small number of stores: 1,110 stores across six states in the southeastern U.S. (For context, Wal-Mart operates more than 4,000 stores).

Each of Publix’s store locations must meet a set of standards. From the quality of the icing on a cake in the bakery, to the “Thanks for shopping at Publix. Come back and see us again soon!” customer farewell, customers should have a delightful experience at every Publix store.

4. An emotional shopping experience

In the Tempkin Experience Ratings, emotion was the weakest component for the 331 companies evaluated. But, Publix was among the few organizations to receive an “excellent” emotion rating. (In fact, they are ranked top 3 in this category.)

widerfunnel customer delight
Are you creating delight for the individuals who are your customers?

They are able to literally delight their customers. And, as a smart marketer, I don’t have to tell you how powerful emotion is in the buying process.

Great for Publix. What does this mean for me?

As marketers, we should be changing the mantra from ‘always be closing’ to ‘always be helping’.

– Jonathan Lister, LinkedIn

In the digital marketing world, it is easy to get lost in acronyms: UX, UI, SEO, CRO, PPC…and forget about the actual customer experience. The experience that each individual shopper has with your brand.

Beyond usability, beyond motivation tactics, beyond button colors and push notifications, are you creating delight?

To create delight, you need to understand your customer’s reality. It may be time to think about how much you spend on website traffic, maintenance, analytics, and tools vs. how much you spend to understand your customers…and flip the ratio.

It’s important to understand the complexity of how your users interact with your website. We say, ‘I want to find problems with my website by looking at the site itself, or at my web traffic’. But that doesn’t lead to results. You have to understand your user’s reality.

– André Morys, Founder & CEO, WebArts

Publix is winning with their customer-centric approach because they are fully committed to it. While the tactics may be different with a brick-and-mortar store and an e-commerce website, the goals overlap:

1. Keep your customer at the core of every touch point

From your Facebook ad, to your product landing page, to your product category page, checkout page, confirmation email, and product tracking emails, you have an opportunity to create the best experience for your customers at each step.

customer service and customer experience
Great customer service is one component of a great customer experience.

2. Make your customers feel something.

Humans don’t buy things. We buy feelings. What are you doing to make your shoppers feel? How are you highlighting the intangible benefits of your value proposition?

3. Keep your employees motivated.

Happy, satisfied employees deliver happy, satisfying customer experiences, whether they’re creating customer-facing content for your website or speaking to customers on the phone. For more on building a motivated, high-performance marketing team, read this post!

Testing to improve your customer experience

Of course, this wouldn’t be a WiderFunnel blog post if I didn’t recommend testing your customer experience improvements.

If you have an idea for how to inject emotion into the shopping experience, test it. If you believe a particular tweak will make the shopping experience easier and your shoppers more successful, test it.

Your customers will show you what an ideal customer experience looks like with their actions, if you give them the opportunity.

Here’s an example.

During our partnership with e-commerce platform provider, Magento, we ran a test on the product page for the company’s Enterprise Edition software, meant to improve the customer experience.

The main call-to-action on this page was “Get a free demo”—a universal SaaS offering. The assumption was that potential customers would want to experience and explore the platform on their own (convenient, right?) before purchasing.

Magento_CTA_Get
The original Magento Enterprise Edition homepage, featuring the “Get a free demo” call-to-action.

Looking at click map data, however, our Strategists noticed that visitors to this page were engaging with informational tabs lower on the page. It seemed that potential customers needed more information to successfully accomplish their goals on the page.

Unfortunately, once visitors had finished browsing tabs, they had no option other than trying the demo, whether they were ready or not.

So, our Strategists tested adding a secondary “Talk to a specialist” call-to-action. Potential customers could connect directly with a Magento sales representative, and get answers to all of their questions.

Magento_CTA
Today’s Magento Enterprise Edition homepage features a “Talk to a specialist” CTA.

This call-to-action hadn’t existed prior to this test, so the (literally) infinite conversion rate lift Magento saw in qualified sales calls was not surprising.

What was surprising was the phone call we received six months later: Turns out the “Talk to a specialist” leads were 8x more valuable than the “Get a free demo” leads.

After several subsequent test rounds, “Talk to a specialist” became the main call-to-action on that product page. Magento’s most valuable prospects had demonstrated that the ideal customer experience included the opportunity to get more information from a specialist.

While Publix’s success reminds us of the core components of a great customer experience, actually creating a great customer experience can be tricky.

You might be wondering:

  • What is most important to my customers: Success, Effort, or Emotion?
  • What improvements should I make first?
  • How will I know these improvements are actually working?

A test-and-learn strategy will help you answer these questions, and begin working toward a truly great customer experience.

Don’t get lost in the guesswork of tweaks, fixes, and best practices. Get obsessed with understanding your customer, instead.

How do you create the ideal customer experience?

Please share your thoughts in the comments section below!

The post Capturing supermarket magic and providing the ideal customer experience appeared first on WiderFunnel Conversion Optimization.

Visit source: 

Capturing supermarket magic and providing the ideal customer experience

How to do server-side testing for single page app optimization

Reading Time: 5 minutes

Gettin’ technical.

We talk a lot about marketing strategy on this blog. But today, we are getting technical.

In this post, I team up with WiderFunnel front-end developer, Thomas Davis, to cover the basics of server-side testing from a web development perspective.

The alternative to server-side testing is client-side testing, which has arguably been the dominant testing method for many marketing teams, due to ease and speed.

But modern web applications are becoming more dynamic and technically complex. And testing within these applications is becoming more technically complex.

Server-side testing is a solution to this increased complexity. It also allows you to test much deeper. Rather than being limited to testing images or buttons on your website, you can test algorithms, architectures, and re-brands.

Simply put: If you want to test on an application, you should consider server-side testing.

Let’s dig in!

Note: Server-side testing is a tactic that is linked to single page applications (SPAs). Throughout this post, I will refer to web pages and web content within the context of a SPA. Applications such as Facebook, Airbnb, Slack, BBC, Codecademy, eBay, and Instagram are SPAs.


Defining server-side and client-side rendering

In web development terms, “server-side” refers to “occurring on the server side of a client-server system.”

The client refers to the browser, and client-side rendering occurs when:

  1. A user requests a web page,
  2. The server finds the page and sends it to the user’s browser,
  3. The page is rendered on the user’s browser, and any scripts run during or after the page is displayed.
Static app server
A basic representation of server-client communication.

The server is where the web page and other content live. With server-side rendering, the requested web page is sent to the user’s browser in final form:

  1. A user requests a web page,
  2. The server interprets the script in the page, and creates or changes the page content to suit the situation,
  3. The page is sent to the user in final form and then cannot be changed using server-side scripting.

To talk about server-side rendering, we also have to talk a little bit about JavaScript. JavaScript is a scripting language that adds functionality to web pages, such as a drop-down menu or an image carousel.

Traditionally, JavaScript has been executed on the client side, within the user’s browser. However, with the emergence of Node.js, JavaScript can be run on the server side. All JavaScript executing on the server is running through Node.js.

*Node.js is an open-source, cross-platform JavaScript runtime environment, used to execute JavaScript code server-side. It uses the Chrome V8 JavaScript engine.

In layman’s (ish) terms:

When you visit a SPA web application, the content you are seeing is either being rendered in your browser (client-side), or on the server (server-side).

If the content is rendered client-side, JavaScript builds the application HTML content within the browser, and requests any missing data from the server to fill in the blanks.

Basically, the page is incomplete upon arrival, and is completed within the browser.

If the content is being rendered server-side, your browser receives the application HTML, pre-built by the server. It doesn’t have to fill in any blanks.
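To make the difference concrete, here is a minimal sketch (not from the original post) of both approaches using Node.js and Express. The routes, markup, and product data are hypothetical placeholders, and a real SPA would use a framework rather than hand-built HTML strings.

// Minimal illustration of server-side vs. client-side rendering with Express.
// Routes, markup, and data are hypothetical placeholders.
const express = require('express');
const app = express();

const products = [{ name: 'Basic', price: 15 }, { name: 'Pro', price: 35 }];

// Server-side rendering: the server builds the final HTML before responding.
app.get('/ssr', (req, res) => {
  const rows = products.map(p => `<li>${p.name}: $${p.price}</li>`).join('');
  res.send(`<html><body><h1>Plans</h1><ul>${rows}</ul></body></html>`);
});

// Client-side rendering: the server sends a near-empty shell, and JavaScript
// in the browser fetches the data and builds the HTML itself.
app.get('/csr', (req, res) => {
  res.send(`<html><body><div id="app"></div>
    <script>
      fetch('/api/products')
        .then(function (r) { return r.json(); })
        .then(function (items) {
          document.getElementById('app').innerHTML =
            '<ul>' + items.map(function (p) { return '<li>' + p.name + ': $' + p.price + '</li>'; }).join('') + '</ul>';
        });
    </script>
  </body></html>`);
});

app.get('/api/products', (req, res) => {
  res.json(products);
});

app.listen(3000);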

Why do SPAs use server-side rendering?

There are benefits to both client-side rendering and server-side rendering, but render performance and page load time are two huge pros for the server side.

(A 1 second delay in page load time can result in a 7% reduction in conversions, according to Kissmetrics.)

Server-side rendering also enables search engine crawlers to find web content, improving SEO; and social crawlers (like the crawlers used by Facebook) do not evaluate JavaScript, making server-side rendering beneficial for social searching.

With client-side rendering, the user’s browser must download all of the application JavaScript, and wait for a response from the server with all of the application data. Then, it has to build the application, and finally, show the complete HTML content to the user.

All of which to say, with a complex application, client-side rendering can lead to sloooow initial load times. And, because client-side rendering relies on each individual user’s browser, the developer only has so much control over load time.

Which explains why some developers are choosing to render their SPAs on the server side.

But, server-side rendering can disrupt your testing efforts, if you are using a framework like Angular or React.js. (And the majority of SPAs use these frameworks).

The disruption occurs because the version of your application that exists on the server becomes out of sync with the changes being made by your test scripts in the browser.

NOTE: If your web application uses Angular, React, or a similar framework, you may have already run into client-side testing obstacles. For more on how to overcome these obstacles, and successfully test on AngularJS apps, read this blog post.


Testing on the server side vs. the client side

Client-side testing involves making changes (the variation) within the browser by injecting Javascript after the original page has already loaded.

The original page loads, the content is hidden, the necessary elements are changed in the background, and the ‘new’ version is shown to the user post-change. (Because the page is hidden while these changes are being made, the user is none the wiser.)
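As a rough sketch of that flow (not any vendor’s actual code; the selector and button copy below are hypothetical), a client-side variation script looks something like this:

// Generic illustration of how a client-side test applies a variation.
// Not Optimizely or VWO code; the selector and button copy are hypothetical.
// Assumes the snippet is loaded in the <head>, before the page finishes rendering.
(function () {
  // 1. Hide the page immediately, so the visitor never sees the original version.
  var hide = document.createElement('style');
  hide.textContent = 'body { opacity: 0 !important; }';
  document.head.appendChild(hide);

  // 2. Once the DOM is available, apply the variation changes in the background.
  document.addEventListener('DOMContentLoaded', function () {
    var cta = document.querySelector('.hero .cta'); // hypothetical element under test
    if (cta) {
      cta.textContent = 'Talk to a specialist';     // the variation change
    }
    // 3. Reveal the modified page.
    document.head.removeChild(hide);
  });
})();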

As I mentioned earlier, the advantages of client-side testing are ease and speed. With a client-side testing tool like VWO, a marketer can set up and execute a simple test using a WYSIWYG editor without involving a developer.

But for complex applications, client-side testing may not be the best option: Layering more JavaScript on top of an already-bulky application means even slower load time, and an even more cumbersome user experience.

A Quick Hack

There is a workaround if you are determined to do client-side testing on a SPA. Web developers can take advantage of features like Optimizely’s conditional activation mode to make sure that testing scripts are only executed when the application reaches a desired state.

However, this can be difficult, as developers will have to take many variables into account, like location changes performed by the $routeProvider, or triggering interaction-based goals.

To avoid flicker, you may need to hide content until the front-end application has initialized in the browser, voiding the performance benefits of using server-side rendering in the first place.

WiderFunnel - client side testing activation mode
Activation Mode waits until the framework has loaded before executing your test.
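The general idea behind this kind of conditional activation is to wait until your framework has finished rendering the element you want to test, and only then run your test code. The sketch below is a generic illustration of that pattern, not Optimizely’s actual activation API; the selector and timeout are hypothetical.

// Generic sketch of conditional activation for a SPA: poll for the state the
// framework reaches when it has rendered the target element, then run the test.
// This is not Optimizely's API; '.pricing-table' and the timeout are placeholders.
function whenAppReady(selector, runTest, timeoutMs) {
  var started = Date.now();
  (function poll() {
    if (document.querySelector(selector)) {
      runTest();                            // the app has reached the desired state
    } else if (Date.now() - started < (timeoutMs || 5000)) {
      setTimeout(poll, 50);                 // keep checking until the timeout
    }
  })();
}

whenAppReady('.pricing-table', function () {
  // Activate the experiment / apply the variation changes here.
});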



When you do server-side testing, there are no modifications being made at the browser level. Rather, the parameters of the experiment variation (‘User 1 sees Variation A’) are determined at the server route level, and hooked straight into the JavaScript application through a service provider.

Here is an example where we are testing a pricing change:
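The original code sample isn’t reproduced in this archive, so here is a minimal sketch of the idea, assuming Express and the Optimizely Full Stack Node SDK; the experiment key, prices, route, and datafile path are placeholders rather than the actual test:

// Sketch of a server-side pricing test: the variation is decided at the route
// level, and the page arrives in the browser already in its final form.
// Assumes the Optimizely Full Stack Node SDK; keys, prices, and the datafile
// path are hypothetical placeholders.
const express = require('express');
const optimizelySdk = require('@optimizely/optimizely-sdk');

const app = express();
const optimizely = optimizelySdk.createInstance({ datafile: require('./datafile.json') });

app.get('/pricing', (req, res) => {
  const userId = String(req.query.uid || 'anonymous');            // your real, unique user ID goes here
  const variation = optimizely.activate('pricing_test', userId);  // bucket the user on the server
  const price = variation === 'variation_b' ? 29 : 39;            // hypothetical price points

  res.send(`<html><body><h1>Pro Plan</h1><p>$${price}/month</p></body></html>`);
});

app.listen(3000);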

“Ok, so, if I want to do server-side testing, do I have to involve my web development team?”

Yep.

But this means that testing gets folded into your development team’s workflow. And it means that it will be easier to integrate winning variations into your code base in the end.

If yours is a SPA, server-side testing may be the better choice, despite the work involved. Not only does server-side testing embed testing into your development workflow, it also broadens the scope of what you can actually test.

Rather than being limited to testing page elements, you can begin testing core components of your application’s usability like search algorithms and pricing changes.

A server-side test example!

For web developers who want to do server-side testing on a SPA, Tom has put together a basic example using the Optimizely SDK. This example is an illustration, and is not functional.

In it, we are running a simple experiment that changes the color of a button. The example is built using Angular Universal and Express.js. A global service provider is being used to fetch the user variation from the Optimizely SDK.
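That example isn’t included in this archive either, so here is a simplified stand-in: plain Express in place of the full Angular Universal setup, with the variation handed straight to the rendered page rather than through an Angular service provider. The experiment key, datafile path, and hard-coded user ID are placeholders.

// Simplified stand-in for the Angular Universal + Express illustration:
// the server fetches the user's variation from the Optimizely SDK and uses it
// while rendering. Keys, the datafile path, and the user ID are placeholders.
const express = require('express');
const optimizelySdk = require('@optimizely/optimizely-sdk');

const app = express();
const optimizely = optimizelySdk.createInstance({ datafile: require('./datafile.json') });

app.get('*', (req, res) => {
  const userId = 'user123'; // hard-coded for the illustration; see the note below
  const variation = optimizely.activate('button_color_test', userId);
  const buttonColor = variation === 'variation_green' ? 'green' : 'blue';

  // In the real example, this value would be injected into the Angular application
  // through a global service provider during server-side rendering.
  res.send(`<html><body><button style="background:${buttonColor}">Sign up</button></body></html>`);
});

app.listen(3000);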

Here, we have simply hard-coded the user ID. However, Optimizely requires that each user have a unique ID. Therefore, you may want to use the user ID that already exists in your database, or store one in a cookie through Express’s cookie middleware.

Are you currently doing server-side testing?

Or, are you client-side testing on a SPA? What challenges (if any) have you faced? How have you handled them? Do you have any specific questions? Let us know in the comments!

The post How to do server-side testing for single page app optimization appeared first on WiderFunnel Conversion Optimization.

Continue reading – 

How to do server-side testing for single page app optimization