All posts by Nitin Deshdeep

How Agencies Should Approach Conversion Optimization for eCommerce | An Interview with AWA Digital

Continuing our interview series, this time we are in conversation with Johann Van Tonder from AWA Digital.

Johann is the COO at AWA Digital, a leading international conversion rate optimization (CRO) agency specializing in eCommerce.

He is also the co-author of the book E-commerce Optimization, out in January 2017. In this interview, he speaks about how AWA Digital practices conversion optimization across the different verticals of its eCommerce clients.

After reading this post, agencies will learn the nuances of CRO when applied to different eCommerce verticals such as Fashion, Homeware, and Consumer Electronics. They’ll learn CRO strategies that will help them make a stronger case for adopting CRO for their prospective eCommerce clients.

Introduction

1) How important do you think Conversion Optimization (CRO) is for eCommerce enterprises? Why?

CRO is one of the best growth strategies available to eCommerce firms. Turnover is not something you influence directly; it is the outcome of activities performed in other areas. The rate at which people buy from you, and how much they spend when they buy, are within your control. Improving these, in turn, increases revenue and ultimately profit. This is what CRO is about.

On average, for every £92 spent on getting traffic to UK websites, only £1 is spent on improving the conversion rate. If you improve the ability of your site to generate money, your acquisition spend stretches further, as more of your visitors are converted into buyers.

2) Is there a difference in your approach to CRO for different eCommerce verticals? If yes, how?

Not really. We always follow an evidence-led approach informed by research, data analysis, and testing. That said, our implementation will not be the same on two projects as we are guided by the opportunities specific to that particular website.

As long as you follow the scientific method, which we outline in our book E-commerce Optimization (Kogan Page), the same approach can generally be applied across different verticals. Broadly speaking, it’s a system of generating and prioritizing relevant ideas, and a mechanism by which to test those ideas.

3) Which are the major eCommerce verticals that you have worked with?

We have extensive experience in the fashion retail industry, having worked with top clothing and footwear brands from different countries. Furniture and homeware are two other categories we are well-known for.

Other big verticals for us include consumer electronics, flowers, gardening, gifting, health products, and outdoor travel gear. Our entire portfolio ranges from bathroom fittings to wearable technology.

Conversion Optimization for Different eCommerce Verticals

4) Do your CRO goals (micro and macro) differ for Fashion, Homeware and Consumer electronics based eCommerce businesses?

Our philosophy is to optimize for revenue, so in almost all cases, the primary metric is Revenue Per Visitor (RPV). If it’s an eCommerce business, Conversion Rate simply doesn’t give you the complete picture.
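To make the distinction concrete, here is a small illustrative sketch in Python (the `funnel_metrics` helper and all numbers are invented, not AWA's tooling): two variations with identical conversion rates can have very different RPV.

```python
def funnel_metrics(visitors, orders, revenue):
    """Return conversion rate, average order value, and revenue per visitor."""
    conversion_rate = orders / visitors
    aov = revenue / orders
    rpv = revenue / visitors
    return conversion_rate, aov, rpv

# Two hypothetical variations with identical conversion rates:
# B earns more per order, which only RPV surfaces.
cr_a, aov_a, rpv_a = funnel_metrics(visitors=10_000, orders=300, revenue=15_000)
cr_b, aov_b, rpv_b = funnel_metrics(visitors=10_000, orders=300, revenue=18_000)

print(f"A: CR={cr_a:.1%}, AOV={aov_a:.2f}, RPV={rpv_a:.2f}")  # A: CR=3.0%, AOV=50.00, RPV=1.50
print(f"B: CR={cr_b:.1%}, AOV={aov_b:.2f}, RPV={rpv_b:.2f}")  # B: CR=3.0%, AOV=60.00, RPV=1.80
```

Judged on conversion rate alone, A and B are a tie; judged on RPV, B is 20% better.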

Secondary metrics, aligned with micro goals, vary widely. These are typically determined by the context of the experiment rather than the vertical. For example, on a product detail page (PDP), you might want to track clicks on “add to basket” and engagement with product images. These metrics help interpret the outcome of the test.

Sometimes we track key performance indicators (KPIs) outside of the testing environment. For example, experimenting with free delivery for a fashion client, we tracked product returns and married this data manually with test results.

5) What are the main “conversion funnels” for these different eCommerce websites? Do you see a difference in major traffic sources for the websites?

It’s not uncommon to see organic search being the major source of traffic for known brands. Often, the lion’s share of that is branded search terms, so in a way, it’s an extension of direct traffic. When a business is still establishing its brand, you’d expect to see more from paid search and other channels.

Many agencies limit optimization efforts to the website, which is a mistake. Social is an exciting area for some businesses, often rich with opportunities. Email consistently delivers good results for many of our clients and therefore, any gains in this arena can have a significant impact on overall business results.

Omni-channel, where we have a lot of experience, adds different dynamics. Not only do you see more direct traffic at the top of the funnel, but a large group of website visitors tend to browse with the intention to buy in-store. Or they may buy online, but only after visiting a store to “touch and feel” the product.

It’s important for the optimizer to take into consideration the entire journey, mapping out how the various touch points contribute to the purchase decision.

6) Which persuasion principles (scarcity, social proof, reciprocity, etc.) do you use in optimizing different eCommerce vertical websites?

We regularly use social proof, liking, authority, and scarcity. It depends entirely on the situation. We don’t actively look for ways to plug them in. Instead, we examine the data and use a principle if it seems like a relevant solution. For example, one of our clients sells plants by catalogue and online. A common sales objection was whether the flowers would look as good in the customer’s garden as they do in the product images. This prompted us to invite customers to submit pictures of products in their gardens, invoking the social proof principle.

Once we’ve decided to use a principle, we may run a few tests to find the best implementation.

If a principle is already present on the website, there could be ways of making it more persuasive. In some cases, a message can be distracting in one part of the funnel yet very effective in another area of the site.

7) Which are the common conversion killers for these different eCommerce enterprises?

Some are universal, for example, delivery. Not only do consumers generally resist paying for shipping, but long waiting periods put them off. If you charge for it, you have to treat it like a product with its own compelling value proposition.

In the fashion industry, it’s size and fitting. Will these boots fit me? How will this shirt hang on me? Is your size 8 the same as another manufacturer’s size 8? These are the common themes. Typical concerns in the furniture and homeware space are material composition, dimensions, and perspective.

Sometimes we’re surprised by what we uncover. One of our clients, a gifting site, had a great returns policy. Obviously this was messaged clearly on the website. However, we discovered that it actually turned out to be a conversion killer for them. Why? Many of the buyers were grandparents who didn’t want to contemplate the prospect of their grandchildren returning their gifts.

8) The buying cycle for each eCommerce vertical website varies. Does your CRO strategy take this into account?

Definitely. The buying cycle is something we map out carefully.

For us, it is crucial to get under the skin of the customer. We want to understand exactly what goes into a sale being made or lost.

It can also inform our approach with testing. Normally we’d run an experiment for two weeks. However, if the purchase cycle is longer than that for the majority of customers, we may extend the test duration.
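The two-week default is not arbitrary: a test has to collect enough conversions to detect the lift you care about. A back-of-the-envelope way to sanity-check duration (a sketch, not AWA's method; the traffic and effect-size numbers are invented) is to compare the sample size needed for a two-proportion test against weekly traffic:

```python
import math

def weeks_needed(baseline_cr, relative_lift, weekly_visitors_per_arm):
    """Rough weeks required per test arm (normal approximation,
    fixed at alpha=0.05 two-sided, power=0.8)."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    z_alpha, z_power = 1.96, 0.84  # z-scores for alpha=0.05, power=0.8
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n / weekly_visitors_per_arm)

# Hypothetical: 3% baseline conversion, hoping to detect a 10% relative lift,
# 5,000 visitors per variation per week.
print(weeks_needed(0.03, 0.10, 5_000))  # → 11
```

If the answer comes out far longer than the customers' purchase cycle (or your patience), the honest options are to test a bolder change or a higher-traffic page, not to stop the test early.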

9) Does seasonality have an effect on your CRO strategy for different eCommerce verticals?

Clearly, some verticals are affected by it more than others. Seasonality is partly a reflection of customer needs. It is easier to deal with if you have a solid understanding of the core needs. In some verticals like gardening, it might be a good idea to conduct user research in low and high seasons.

Some clients are loath to run tests during peak trading periods like Christmas sales. Our view is that it is essential to optimize the site for those periods, especially if they contribute to annual turnover in any significant way.

10) On which eCommerce websites do you employ upsell/cross-sell strategies mostly?

Because our primary metric is usually Revenue Per Visitor rather than Conversion Rate, a driving question for us is how to increase Average Order Value. Upselling and cross-selling strategies are, therefore, almost always on our radar. We have had great success, for example, by optimizing the presentation and algorithms of popular product recommendation tools.

Upselling and cross-selling may be thought of as “tricking” the customer into spending more money. However, we’ve seen how frustrated customers become, having to hunt for items related to the one they are considering. It actually improves user experience, which is then reflected in an increase in revenue.

11) What CRO strategies do you apply on product pages of different eCommerce vertical websites (for instance, on product descriptions, product images, etc.)?

On most eCommerce sites, the product detail pages (PDPs) have the highest drop-off rates.

Exit rates in the region of 90%, and even higher, are not uncommon. It is where the visitor is asked to make the decision. This is where questions about the product itself, as well as the buying process, are often most prominent.

We don’t have a checklist of tactics to apply to PDPs. Our test ideas emerge from research and analysis. If you understand the customer and what goes into the purchase decision, you’ll know what to test. Think of it as optimizing the sales conversation. It’s all about how you influence what plays out in the visitor’s mind.

  • Product description

If visitors engage with the product description, they may be closer to making a buying decision. Often this decision is based on the image, and reading the copy serves only to rationalize the purchase. Perhaps they are checking a few details or looking to answer a question about the product. The starting point for writing good copy is knowing the customers and understanding their motivations and sales objections in relation to the product.

  • Product images

Likely to be the focus of most attention on the PDP. Often a substitute for reading product descriptions, so make sure you have a good selection of images that will answer key questions. On a lantern page, customers might wonder about the light patterns on their wall. Show them! Images appeal to System 1 thinking: fast, intuitive judgments made without deliberation. Good images help the customer imagine themselves using the item, which can be quite persuasive.

Over to You

Do you have anything to add or suggest to this interview? Share with us what you think in the comments below.


The post How Agencies Should Approach Conversion Optimization for eCommerce | An Interview with AWA Digital appeared first on VWO Blog.


How RuneScape Leveled Up Revenue Through Process-Driven CRO

The following is a case study about how RuneScape followed a structured conversion optimization (CRO) program to increase revenue on its website.

About RuneScape

RuneScape is a fantasy massively multiplayer online role-playing game (MMORPG). It was developed by Jagex and launched in January 2001.

The popularity of the game is enormous. RuneScape has welcomed over 250 million players to its world since its release. More than 2 million users play every month, and millions more watch avidly through social channels.

RuneScape has consistently strived to deliver a great experience to its users—not just limited to the game but also on its website. After all, it’s the website where users find forums and game guides, and buy in-game items.

The CRO Team

Rob Marfleet, UX Specialist at Jagex, takes care of User Experience and CRO across the payment flow on the website (the payment gateway and its preceding pages). Dave Parrott, Payments Services Director at Jagex, and Nastassja Gilmartin, Payments Manager at Jagex, help Rob in identifying testing opportunities and analyzing test results.


Rob works with teams of designers and developers that help facilitate implementation of winning test variants on the RuneScape website.

Additionally, Rob takes help from Disha Ahuja, Client Success Manager at VWO, to utilize the VWO platform to its full potential.

About the Case

About 50% of users on the RuneScape website arrive as direct traffic. The other half of the traffic consists of users from referrals, social media, and email marketing campaigns.

Rob adds, “This is mainly down to RuneScape enjoying a very loyal user base, with many players having played for several years.”

The CRO team aims to optimize high-potential pages, that is, pages that are closest to the payment gateway and require minimum effort in optimization. The Treasure Hunter page on the website is one such high-potential page that the team chose to optimize.

The Treasure Hunter page lets users buy keys to unlock treasure chests in the game. The treasure chests contain items that can be used within RuneScape.

Rob explains, “Treasure Hunter activity is an optional mini-game within RuneScape; keys are earned through play, but can also be gathered in bundles that are purchasable on the site.”

This is how the original page looked:

RuneScape Treasure Hunter control page for the A/B test

On clicking Continue on the Treasure Hunter page, users are directed to a Payment page where they can choose from multiple treasure chest packages.

RuneScape payment page
Payment page

The RuneScape CRO team thoroughly analyzed the Treasure Hunter page and identified optimization opportunities. Next, the team used VWO to capitalize on these opportunities.

Optimization Process

The CRO team followed this process to improve conversions on the RuneScape website:

  • Setting a Goal
  • Finding Opportunities for Optimization
  • Creating a Hypothesis
  • Developing a Variation
  • Analyzing Test Results

Setting a Goal

The goal of the optimization campaign was to grow revenue by increasing the number of purchases.

Finding Opportunities for Optimization

The team at RuneScape studied a heatmap of the Treasure Hunter page. The heatmap showed that a significant number of users were clicking the Get Keys section on the page—a section which was not clickable. Users perhaps either wanted direct access to the keys or were searching for further information.

Heatmap of RuneScape original page before the A/B test
Heatmap of the original page

Next, the team watched visitor recording sessions on the page and observed that a lot of visitors on the Payment page returned to the Treasure Hunter page. The team realized that the Treasure Hunter page probably did not offer sufficient information about the treasure chest packages to users.

Creating a Hypothesis

The team hypothesized that providing details about treasure chest packages on the Treasure Hunter page would lead to greater conversions on the Payment page.

Developing a Variation

Based on the hypothesis, the team created a variation of the Treasure Hunter page. The variation included a new section highlighting four treasure chest packages. Here’s how it looked:

RuneScape Treasure Hunter variation page

An A/B test was run to find the better-performing version between the original page and the variation.

Analyzing Test Results

The test ran for a month, from August 15 to September 13, 2016. The variation outperformed the control and increased the number of purchases by almost 10 percent.

RuneScape A/B Test analysis - Report
Test result report on VWO

Rob shares his learning from the A/B test:

I think one of the more important aspects to take note of here is that the page variation actually resulted in less traffic to the payment page, but increased the amount of purchases made. Effectively, we can say pretty confidently that by giving the users package information upfront, we created higher-quality traffic to the next stage, simply through transparency, and informed the user before going forward; users who went to the purchase page already knew what they were after.

This is incredibly useful when considering other areas of the payment flow; if the effect can be replicated, it can potentially translate to more wins.
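Rob's observation can be expressed as step-wise funnel rates. A small Python sketch with invented numbers shaped like the result he describes (fewer payment-page visits, yet roughly 10% more purchases):

```python
def funnel_rates(sessions, payment_visits, purchases):
    """Per-step conversion rates through a two-step payment funnel."""
    to_payment = payment_visits / sessions
    payment_to_purchase = purchases / payment_visits
    overall = purchases / sessions
    return to_payment, payment_to_purchase, overall

control   = funnel_rates(sessions=20_000, payment_visits=4_000, purchases=400)
variation = funnel_rates(sessions=20_000, payment_visits=3_400, purchases=440)

# Fewer visitors reach the payment page, but those who do convert far better,
# so overall purchases still rise ~10%.
for name, (step1, step2, overall) in [("control", control), ("variation", variation)]:
    print(f"{name}: to-payment={step1:.1%}, purchase-rate={step2:.1%}, overall={overall:.2%}")
```

Breaking the funnel into steps like this is what lets you distinguish "more traffic to the next page" from "better-qualified traffic to the next page".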

Next Steps

The CRO team did not stop after it found success with the A/B test. The team felt that the variation could be optimized even further.

The team realized that the offer of four treasure chest packages could leave users spoiled for choice. The team hypothesized that recommending one of the packages would help users choose faster and, consequently, increase conversions.

Based on this hypothesis, the following variation was created:

RuneScape follow-up A/B Test variation

The variation featured a Recommended package. This variation was pitted against the winning page from the first A/B test.

The variation won and further increased the number of purchases by almost 6%.

Experience Using VWO

Rob shares, “As a hands-on user of VWO, I’ve personally experienced how quickly it allows prototyping and testing of new ideas, features and content. The ability to push changes, without having to involve multiple teams to relaunch areas of the site can’t be praised highly enough, and the ability to reverse those same changes instantaneously is equally as useful. It’s allowed me to run a number of campaigns straight away that would normally have to be scheduled further down the line, at a more opportune moment, and that’s pretty invaluable.

Using the actual software is very straightforward and easy to understand—campaigns can be built in a short period of time, and having Disha available any time to help determine the best testing practices has definitely helped me find wins—she’s super friendly and eager to help, and I’ve already implemented several testing campaigns that have been borne out of collaboration between her and myself, one of which, is in the process of being fully implemented on the site.”

What Do You Think?

Do you have any recommendations on how RuneScape can further improve user experience and conversions on its website? Did you get any conversion optimization ideas for your own online enterprise? Tell us using the comments section below.


The post How RuneScape Leveled Up Revenue Through Process-Driven CRO appeared first on VWO Blog.


Key Pillars of a Successful CRO Program | Latest eBook by VWO

Conversion Rate Optimization (CRO) has gradually become a well-known concept across enterprises in recent years. Its popularity can be attributed to its ability to have a direct and significant impact on the bottom line.

However, the application of CRO in most organizations has been far from optimal, which causes them to leave money on the table. This money can be reclaimed with a structured approach to CRO. Further, suboptimal optimization programs lead organizations to draw wrong conclusions, or make statistically unsupported decisions, from their A/B tests and damage what’s not broken. Add to that the time and resources spent on CRO, and the real cost of an ill-structured CRO program begins to pinch where it hurts—ROI.

An optimal or structured CRO program requires certain key elements that ensure its success. The elements include tools, skills, processes, goals, and culture.

VWO’s latest eBook “Key Pillars of a Successful Conversion Optimization Program” covers all these elements in detail. This eBook features insights from industry experts and provides actionable takeaways to help you set up a CRO program or improve an existing one.


The eBook contains the following chapters:

1) Culture of CRO

The culture of CRO can be established in an organization with a two-step process. It starts with getting complete buy-in for a CRO program from the top management. Next, the process involves acceptance of the CRO approach by teams across the organization.

The chapter offers the following takeaways:

  • How to get top management buy-in for a CRO program
  • How to ensure that CRO is accepted by multiple teams in an organization

2) Competent Tools

A big part of the success of an organization’s CRO program depends on the set of tools being used. The tools that are essential for a CRO program can be grouped into the following categories:

  • Website Analytics
  • User Behavior Analysis
  • A/B and Multivariate Testing

The chapter lists these tools and explores how they can be used to their full potential.

3) Key Goals

The effectiveness of a CRO program is heavily dependent on the goals you set. Without proper goals, a CRO program can lack direction.

This chapter offers the following points of learning:

  • The importance of different goals in a CRO program—micro and macro conversions
  • The appropriate time to use micro or macro conversions as the goal of a CRO campaign

4) People with the Relevant Skill Set

A CRO program involves a wide range of tools and activities. It demands a team of professionals that can coordinate effectively and make the most out of available resources.

This chapter highlights the following:

  • The skills that are critical for proper functioning of a CRO program
  • Job descriptions of different members of a CRO team

5) Documented Process

Last but not least, a successful CRO program requires a well-defined process. It helps enterprises in identifying the most critical issues in their conversion funnel and treating those issues on priority.

This chapter offers insights on the following:

  • Setting up a long-term calendar for a CRO program
  • Prioritizing A/B testing hypotheses based on key attributes
  • Building a knowledge repository of learning from past CRO campaigns
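Prioritizing hypotheses by key attributes is often done with a simple scoring model such as ICE (Impact, Confidence, Ease). A minimal sketch (the hypotheses and 1–10 scores are invented for illustration; the eBook's exact attributes may differ):

```python
# Minimal ICE (Impact, Confidence, Ease) scoring sketch; hypotheses and
# 1-10 scores are invented for illustration.
hypotheses = [
    {"name": "Show package details on landing page", "impact": 8, "confidence": 7, "ease": 6},
    {"name": "Add trust badges to checkout",         "impact": 5, "confidence": 6, "ease": 9},
    {"name": "Redesign navigation menu",             "impact": 7, "confidence": 4, "ease": 3},
]

# Score each hypothesis as the mean of its three attributes,
# then sort the backlog from highest to lowest score.
for h in hypotheses:
    h["ice"] = (h["impact"] + h["confidence"] + h["ease"]) / 3

backlog = sorted(hypotheses, key=lambda h: h["ice"], reverse=True)
for h in backlog:
    print(f'{h["ice"]:.1f}  {h["name"]}')
```

The point of any such model is not precision but a shared, documented ranking the whole team can debate and revisit.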

Conclusion

For a CRO program to be successful, it needs to be structured and process-driven. There are different key elements that contribute toward it—people, tools, process, goals, and culture.

When all these elements are fully optimized, a CRO program can run at its full potential.


The post Key Pillars of a Successful CRO Program | Latest eBook by VWO appeared first on VWO Blog.


Why CRO and UX Are a Match Made in Heaven


As a practice, Conversion Rate Optimization (CRO) and User Experience (UX) have a lot in common. Both CRO and UX aim to help users get things done with minimal effort. Both also rely on common techniques such as user feedback and usability testing.

The question “Is CRO a part of UX, or UX a part of CRO?” can still spark debate among designers and marketers.

However, this post is not about that debate.

This post talks about how CRO and UX complement each other. It highlights how CRO and UX teams can achieve business goals effectively while working together.

UX Provides Long-Term Benefits to CRO

The approach to UX is simple: you ensure that every task on your website is intuitive and that the complete flow across the website matches users’ expectations. Moreover, you actively identify areas of friction on your website and try to fix them.

Here is the definition of UX design from Wikipedia: “It is the process of enhancing user satisfaction by improving the usability, accessibility, and pleasure provided in the interaction between the user and the product.”

This approach assists CRO in the following ways:

CRO Team Doesn’t Test Random Ideas

Following the UX approach, the CRO team studies user behavior on a website extensively and creates A/B testing hypotheses based on that learning. The team steers clear of building hypotheses based on generic “best practices” or the HiPPO (highest paid person’s opinion).

The UX way of studying user behavior on a website includes a range of methods. The popular methods include user surveys, eye-tracking studies, usability lab studies, and customer feedback. For instance, the iconic eye-tracking study “F-Shaped Pattern For Reading Web Content” by NN Group delivered highly actionable insights on web user behavior. The study concluded that the gaze of web users travels in an F-shaped pattern.

F-shaped reading pattern to uncover User experience insights
Eye-tracking study that reveals the F-shaped reading pattern of web users

Based on this study, a CRO team could build a hypothesis that “placing critical elements of a website on the left side would lead to higher engagement and conversions.”

Similarly, CRO teams can take help of website surveys to gather user feedback and create hypotheses based on what users want.

A/B tests run on ideas generated from UX research methods have a higher chance of delivering results than tests based on a spray-and-pray approach.

CRO Team Prioritizes Major Issues

The approach to UX allows a CRO team to focus on areas that have a profound impact on the conversions on a website. The team can also treat this as one of the factors while prioritizing multiple testing hypotheses.

Sure, optimizing an element such as an add-to-cart button on an eCommerce product page can increase conversions. But if there is a bigger issue, it should be addressed first. What if the product page requires larger product images? Or what if the home page doesn’t send enough visitors to the product page in the first place, and it’s the home page that needs optimization?

Kuba Koziej, CEO and Co-founder at Uptowork, explains how the UX approach helped him in conversion optimization:

“CRO is all about user experience. Personally, I have never been able to make a huge impact by making changes that were only visual by nature.

You can increase sign-ups by tweaking your copy or “making a bigger promise.” But you will never be able to make a significant impact if user experience is not your primary focus.

We recently restructured the sign-up process for our “resume builder” web app. Originally, the user was asked to sign up at the beginning of the process. We spent some time tweaking the sign-up form; we tried a modal window, welcome screen, and other similar features. The results were modest. But that was until we focused on user experience.

Before getting personal information from the users, we provided them with something—the fun part. Users could customize their resume by choosing the template and color from a range of choices. Moving these two steps to the beginning of the funnel increased our sign-ups by over 90% and the sales went up by 17%.”

CRO Team Understands the User Journey

UX is not just about improving the usability of web pages but of the entire path that users take to reach the goal on a website—the user journey.

The UX approach involves recognizing the various user journey paths, or conversion funnels, available on a website. The next step is analyzing how users navigate these paths. This helps a CRO team identify friction areas and exit points across user journeys on a website.

The friction often exists on certain pages because users do not find what they expect. Consider a travel website for example: When users search for hotel rooms and encounter high prices and low vacancy across all hotels, there exists friction. A CRO team could recognize this and build hypotheses aimed at removing (or at least reducing) the friction. Travelocity, for instance, gets around this issue by offering a helpful suggestion to users right on the search results page.

Improving user experience on Travel website (CRO)

Similar to the above example, CRO teams can study user journey paths and build hypotheses to eliminate friction points. The hypotheses should be built considering users’ anticipations and what motivates them.

Of course, the hypotheses then need to be put to test through A/B testing.

CRO Makes UX Team More Effective

CRO can help UX teams do better user behavior research and validate their ideas.

UX Team Gets Useful Tools

UX teams have traditionally carried out research on user behavior through a certain set of tools. (A few of them have been mentioned earlier in this post.) An elaborate list of such tools has been compiled by the NN Group.

User experience research methods

Interestingly, with the advent of CRO, numerous other user behavior research tools have surfaced. These tools have found their place in the suite of CRO tools and can be effectively used by UX teams as well.

Some of the tools worth mentioning are visitor recordings, form analysis, and website funnels.

Visitor recordings, for instance, let you play back sessions of users on a website. Compared with usability lab studies, visitor recordings let you monitor the behavior of real website users, who have a higher probability of converting into customers.

Form Analysis, on the other hand, lets you analyze how users interact with your web forms. You can monitor and compare how each form field performs and identify the fields that create maximum friction for users. The “refill rate” of a form field tells you whether it confuses users, and the “drop rate” highlights the exit point on the form.

Example of form analysis by VWO (CRO)
Example of form analysis using VWO
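Under the hood, these per-field rates come from aggregating raw interaction events. A sketch assuming a simple event log (the field names, event vocabulary, and counts are all invented, not VWO's actual data model):

```python
from collections import Counter

def field_stats(events):
    """events: list of (field, action) where action is 'focus', 'refill', or 'abandon'.
    Returns per-field refill rate (confusion signal) and drop rate (exit point)."""
    focus = Counter(f for f, a in events if a == "focus")
    refill = Counter(f for f, a in events if a == "refill")
    abandon = Counter(f for f, a in events if a == "abandon")
    return {
        f: {"refill_rate": refill[f] / focus[f], "drop_rate": abandon[f] / focus[f]}
        for f in focus
    }

# Hypothetical log: many users re-enter 'phone' (confusing format?) and
# abandon the form at 'card'.
log = ([("email", "focus")] * 100 + [("email", "refill")] * 5
       + [("phone", "focus")] * 90 + [("phone", "refill")] * 40
       + [("card", "focus")] * 80 + [("card", "abandon")] * 30)
stats = field_stats(log)
print(stats["phone"]["refill_rate"])  # ≈ 0.44 -> likely confusing
print(stats["card"]["drop_rate"])     # ≈ 0.38 -> main exit point
```

A high refill rate suggests fixing the field's label, placeholder, or validation; a high drop rate suggests the field itself (or what it asks for) is driving users away.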

UX Teams Can Validate Their Ideas

When a UX team has ideas to improve the usability of a website, how does the team know if the ideas would work? One way is to conduct usability testing on the newer version of the website. The other is A/B Testing.

A/B testing lies at the heart of CRO, but it also proves to be an effective tool for validating UX ideas. A/B testing helps UX teams understand how users respond to website changes. Whether it is a winning A/B test or a failed one, it always leads to insights on user behavior.

Furthermore, a UX team can learn about past A/B tests from the CRO team. Results from past A/B tests can prove to be a gold mine of insights into user behavior on a website. Additionally, UX teams can avoid proposing ideas that have already been tested by CRO teams.
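To judge whether a UX change actually moved the metric, rather than fluctuated by chance, teams typically check statistical significance. A sketch using a two-proportion z-test (the numbers are invented; testing platforms such as VWO compute significance for you):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: 300/10,000 conversions on control vs 360/10,000 on variation.
z = two_proportion_z(300, 10_000, 360, 10_000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level (two-sided)
```

A failed test here is still a data point: knowing a change did not move the needle saves the next team from re-proposing it.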

CRO and UX Teams Need to Work in a Collaborative Manner

It is an ideal situation when both CRO and UX teams assist each other and share their learning.

Kieron Woodhouse, head of UX at MVF, adds to this:
“UX and CRO are intrinsically linked. As both disciplines grow in breadth, it is impossible to champion one over the other. Instead of making one a part of the other, the best approach is to have an open dialogue among teams and ensure that each department is learning from the other all the time and passing on learning and new developments. At MVF, our teams have open Slack channels to discuss findings as they go and UX and CRO are very much seen as complementary disciplines which work together to get the most out of each other’s expertise.”

Kieron shared five tips on how a UX team can maintain synergy with a CRO team in his post “The Importance of CRO as a Research and Validation Tool to UX”:

  1. Become a part of each other’s processes and share knowledge regularly.
  2. Learn how to read CRO data and results.
  3. Use this data to enhance your designs and proposals.
  4. Stay away from universal “best practices.”
  5. Base your UX spec on a hypothesis-driven process.

What Do You Think?

How closely do the CRO and UX teams work in your organization? Would you like to add anything to this post? Please use the comments section below.


The post Why CRO and UX Are a Match Made in Heaven appeared first on VWO Blog.


All You Need to Know About eCommerce Conversion Optimization | An Interview with Tomasz Mazur

The following is an interview with Tomasz Mazur, a Conversion Rate Optimization expert, currently working as a consultant with Peaks & Pies.

Tomasz has several years of experience working on eCommerce conversion optimization and UX. He has previously worked with Zalando, Europe’s leading online fashion website.

Tomasz Mazur, Conversion Rate Optimization expert at Peaks & Pies

Tomasz answers our questions about conversion rate optimization from the perspective of eCommerce professionals.

Regarding the CRO Team and Sponsors

1) What does your ideal eCommerce CRO team consist of?

The team composition varies with the company. However, there are a few key members that every CRO team consists of (or should consist of):

  • A CRO manager who oversees the entire program—from strategy creation to website analysis and A/B testing
  • A design professional
  • A developer who can create functional test variations

The scale of a CRO team’s work depends mainly on website traffic. The objective is always to make use of all the available traffic; therefore, the greater the traffic, the larger the CRO team needs to be. Zalando, for instance, had a sizeable CRO team. I cannot disclose the exact number, but a large number of CRO managers worked on the entire website traffic. The web analytics team was also huge, working on the minutest details of the website, such as product-sorting algorithms. There was also a user research team in charge of qualitative research; it used methods like usability labs, prototyping, focus groups, and user interviews. As Zalando operated in 15 markets and had a dedicated mobile website, the large size of the CRO team was justified.

2) Who should be the owner of a CRO program in an eCommerce organization?

Ownership of a CRO program should remain with someone in higher management who understands the business impact of CRO. This person is the sponsor of the program. As long as they understand the business impact, the sponsor doesn’t need to know all the nuances of CRO. The sponsor could be the VP of Marketing, or the VP of Product.

Sponsorship is necessary to ensure proper delivery of resources to a CRO team.

Regarding Coordination with Other Teams

3) Does a CRO team require help from other teams in an eCommerce organization?

One team that works closely with a CRO team is Marketing (especially, the customer acquisition department). The KPIs of the marketing and CRO teams are often aligned. The input of the marketing team is quite valuable—they know the product and the behavior of the customers.

The customer support team also plays a crucial role in highlighting pain points across a website. This team acts as a channel for customer feedback and helps a great deal in developing hypotheses.

A CRO team also needs to have the product team on its side. The product ownership mostly lies with the product team and their buy-in is essential. I know a few organizations where the product team has a “Conversion Lead” who acts as a bridge with the CRO team.

Regarding the CRO Process

4) What is the CRO process that you follow?

The process typically consists of the following steps:

  • Studying the quantitative and qualitative data of a website
  • Identifying problem areas on the website
  • Developing hypotheses that aim to address the problem areas
  • Creating variations per the hypotheses
  • A/B testing the variations
  • Analyzing the test results and sharing them with the team

Here is a graphic illustrating the process we follow at Peaks & Pies:


5) How do you build a hypothesis?

I use website analytics tools such as Google Analytics to look for optimization opportunities across a website, for example, on catalog pages, product pages, and the checkout page.

When certain webpages are identified for optimization, they undergo rigorous analysis. I first check whether a webpage has the essential eCommerce features (such as a recommendation engine on the home page). If a feature is missing, we have an opportunity for optimization. I use heatmaps and click-tracking tools to find elements or functionality that require optimization. Gathering user feedback, running usability testing labs, and checking out competitors’ websites are other ways of creating strong hypotheses.

I first check if a webpage has the essential eCommerce features (such as a recommendation engine on the home page) or not.

6) How do you prioritize A/B testing hypotheses?

I think a simple prioritization model like PIE—Potential, Importance, Ease—can work well for beginners. I personally like the model based on Potential, Confidence, and Ease, which gives the CRO team a chance to factor in its past experience. A more sophisticated model can also include “political impact” as a criterion.

Moreover, prioritization needs to be collaborative to be successful. To estimate the “potential” of an A/B test, you might require help from your marketing team. Similarly, developers and designers can help you estimate the “ease” part.
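For illustration, here is a minimal sketch of the Potential/Confidence/Ease model in Python. The hypotheses and 1–10 scores are hypothetical placeholders; in practice, a team would fill them in from research and past experience:

```python
# A sketch of Potential/Confidence/Ease (PCE) prioritization.
# All hypothesis names and scores below are hypothetical examples.

def pce_score(potential, confidence, ease):
    """Average the three 1-10 ratings into a single priority score."""
    return (potential + confidence + ease) / 3

hypotheses = [
    # (name, potential, confidence, ease)
    ("Add trust badges to checkout", 8, 6, 9),
    ("Redesign product page layout", 9, 5, 3),
    ("Shorten signup form", 6, 8, 8),
]

# Highest-scoring hypotheses get tested first.
ranked = sorted(hypotheses, key=lambda h: pce_score(*h[1:]), reverse=True)
for name, p, c, e in ranked:
    print(f"{pce_score(p, c, e):.1f}  {name}")
```

A real scoring sheet would likely live in a shared spreadsheet, but the ranking logic is the same.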

7) What is the next step after hypothesis creation?

The next step is the design and development of a variation per the hypothesis.

However, before the variation is tested against a control, it must go through a thorough “quality assurance.” I think quality assurance is crucial for highly effective testing, but it is not emphasized enough in the CRO industry. You must make sure that all variations of an A/B test are free of bugs and issues.

Consider this: You are trying to improve a page that is already well optimized, aiming for a humble 5% improvement in the conversion rate. If the variation doesn’t work for 5% of your traffic (because you forgot to optimize for mobile or Firefox users), your test will invariably fail.

I think quality assurance is crucial for highly effective testing, but it is not emphasized enough in the CRO industry.
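The 5% example can be verified with quick arithmetic. The sketch below assumes a hypothetical 3% baseline conversion rate and a variation that is broken for 5% of traffic:

```python
# Aim for a 5% lift, but let the variation break for 5% of traffic
# (e.g. Firefox users), and the net effect is roughly zero or even
# negative. All numbers here are illustrative assumptions.

baseline_cr = 0.030          # assumed baseline conversion rate (3%)
target_lift = 0.05           # the 5% relative improvement you hoped for
broken_share = 0.05          # share of traffic where the variation is broken

working = (1 - broken_share) * baseline_cr * (1 + target_lift)
broken = broken_share * 0.0  # the broken variation converts no one
variation_cr = working + broken

print(f"baseline : {baseline_cr:.4%}")
print(f"variation: {variation_cr:.4%}")  # lands slightly below baseline
```

The variation’s blended conversion rate ends up below the control’s, so the test cannot win despite the underlying design being better.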

8) Is sales or revenue always the primary goal of your tests? Or do you look at micro conversions?

I would say the metrics or KPIs you are tracking should always answer your hypothesis.

In some cases, it is quite simple to track the revenue metrics (for instance, while adding new payment options on the checkout page). But in most cases, you have to track a combination of micro and macro metrics.

9) How do you analyze your A/B test results?

To derive valuable learning from a test, we need to conduct a thorough analysis.

I look at both micro and macro goals to get better context for the results. I also dig deeper by analyzing the test results for different traffic segments. For instance, I compare test results for new visitors versus returning visitors. You also need to account for the novelty effect: returning visitors who encounter a test variation will recognize the changes on the page and might hold strong feelings about them (similar to how people react strongly to major changes on Facebook or Instagram), whereas new visitors interact with the variation without any prejudice. Another pair of segments relevant to eCommerce is mobile versus desktop visitors; the behavior of the two can vary significantly.

I dig deeper by analyzing the test results for different traffic segments.

An analysis is always followed by summarizing the test results and recommending a plan of action. You check whether the hypothesis was valid and whether to implement the winning variation or run a follow-up test.

10) If you find that conversions increased for new visitors but decreased for returning visitors, what do you do?

You need to derive learning from the test and understand why the difference in user behavior exists. For example, this could be happening because of an offer aimed at your new visitors, such as “10% off for new visitors.” While new visitors are encouraged to shop on the website, returning visitors might feel they are not being offered the best deal and feel cheated. Your next step would be to set up a system that identifies new visitors and displays the “10% off on first order” deal exclusively to them. You can additionally have a loyalty program in place for your returning visitors.

The goal of CRO is not just to have winning A/B tests, but also to gain more knowledge about your users so that you can provide them a superior experience. Think about the bigger picture—the entire eCommerce ecosystem. If you find that your close-up product pictures work better than zoomed-out ones, you can communicate this idea to your display marketing team and help them create better ads. If you find that the “free delivery” offer works better than a “$10 free voucher,” you can share the knowledge with other teams such as customer support and product.

11) How important is a long-term A/B testing calendar for high-traffic eCommerce websites?

I think a long-term A/B testing calendar is essential for any kind of website. You need a prudent approach: clearly define all the tests you plan to conduct, along with the time each test will take.

Think of time as a resource. If you are not testing all the time, you are wasting opportunities to optimize your website (time your competitors might capitalize on to move ahead of you). With a long-term calendar, you can easily identify slots for quick A/B tests and utilize all your resources effectively.

Here is a sample template that I would use as my CRO roadmap (you can access the template here):


12) How important is a knowledge repository of past A/B tests?

A knowledge repository is crucial to make your CRO program effective. It helps you know what works for your users (and what doesn’t) and helps you create better tests in the future. It is also used to introduce newcomers to the testing culture of an organization.

With every test you perform, it is important to document the details. You can start with a simple Google doc and later move to a comprehensive spreadsheet. Share the document with the entire CRO team so that everyone is on the same page and avoids repeating mistakes. When the number of stakeholders is large, you can even consider a weekly or daily newsletter.

I usually archive my test results for every quarter.

Regarding the Nuances of eCommerce Conversion Optimization

13) Which eCommerce webpages do you think are key to a CRO program?

All of them. It’s important to go through the complete customer journey and find optimization opportunities. The most common customer journey path is from the home page to the catalog page to the product page to the cart page, and finally to the checkout page. Of course, there are other customer journey paths as well.

It’s important to go through the complete customer journey and find optimization opportunities.

However, a good CRO manager should always be able to identify low-hanging fruit. I have seen that the home page is the most tested page on eCommerce websites, mostly because it attracts the most traffic. Next on the list are product pages, which get a lot of traffic from channels such as affiliate partners and display ads. Third on the list are the register and login pages.

14) With a large amount of traffic, how long do you run a test?

It is necessary to run an A/B test until it delivers a statistically significant result. However, with eCommerce websites, it is also important to consider the business cycle. For instance, the business cycle for fashion eCommerce is about one to two weeks. So even when a test delivers results in a few days, you need to run it for an entire business cycle to derive useful learning.
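As a rough illustration of the significance check, here is a minimal two-proportion z-test using only the Python standard library. The visitor and conversion counts are hypothetical, sized to resemble a week of traffic per variation:

```python
# A sketch of a two-proportion z-test (normal approximation) for an
# A/B test. Counts below are hypothetical example figures.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Roughly one business cycle of traffic per variation.
z, p = two_proportion_z(conv_a=900, n_a=30_000, conv_b=1_020, n_b=30_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Even when such a test reports p < 0.05 after a few days, the interview’s advice still applies: let the test run through a full business cycle before drawing conclusions.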

15) How many A/B tests do large eCommerce enterprises roughly run on a monthly basis?

That completely depends on the scale at which an eCommerce website is operating.

Large eCommerce enterprises like Zalando easily attract millions of visitors in a single day. Zalando, for example, had traffic from numerous markets and could run as many as 100 tests at a time.

16) How does a festival season or a “sale period” affect a CRO process?

I have a good amount of experience working with fashion eCommerce, and I’d say that the sale period definitely affects the CRO process. One of the main goals in a sale period is to clear out the inventory. This is when the principle of “urgency” is deployed heavily in CRO campaigns.

This is when the principle of “urgency” is deployed heavily in CRO campaigns.

For other eCommerce websites, the heavy-traffic period can be markedly different. For example, I am working with an eCommerce enterprise that sells premium alcohol. Its on-season is winter, when people tend to stay at home more and buy alcohol (for personal consumption or as a gift). As a result, this enterprise focuses more on optimization activities during the winter season.

Your Turn

What do you think about this interview? Do you have questions of your own that you would like to ask Tomasz? Post them in the comments section below.


The post All You Need to Know About eCommerce Conversion Optimization | An Interview with Tomasz Mazur appeared first on VWO Blog.


What Does a CRO Program Consist of? | An Interview with Avast’s Michal Parizek

The following is an interview with Michal Parizek, Senior eCommerce & Optimization Specialist at Avast (a leading antivirus software company). Michal is a Conversion Rate Optimization (CRO) expert with over seven years of experience across multiple industries.

Michal has created the popular Conversion Rate Optimization Maturity Model, where he illustrates the core assets of a successful CRO program for organizations operating at different scales. Using the model, organizations can understand the current state of their CRO efforts and identify ways to improve their programs.

This is what the CRO maturity model looks like:

Conversion Rate Optimization Maturity Model

The questions in this interview aim to bring out actionable tips for organizations trying to bring structure to their CRO program.

Let’s begin.

Regarding the People and Culture Required for CRO

1) What are the essential skills or capabilities needed in a CRO team?

CRO is a complex discipline. In my opinion, it is a combination of a few quite different skills.

  • One of them is analytics. To have a successful CRO program, you need to understand your data and derive insights.
  • Another is user experience design; being able to prototype great user experiences is important.
  • Then marketing and copywriting—working on pricing strategies, composing an effective value proposition, and other relevant activities.
  • Other useful skills include statistics, consumer psychology, email marketing, and so on.

Because CRO draws on such a range of skills, great CRO consultants are very difficult to find.

When you are building a CRO team, make sure you hire at least an analyst, a UX designer, and a marketer/copywriter. I see these three as key to driving an effective CRO program and delivering results.

2) What does Avast do to create and spread a culture of CRO within the organization?

CRO has a good standing in Avast’s culture. We are keen on A/B testing every major change in our sales flows or website. Data-driven decisions outweigh gut-driven ones, even though there is room for improvement. When I think about the reasons, three things stand out:

  • No data silos: Everyone has access to pretty much all the data in the company.
  • Sharing knowledge: We have been practicing A/B testing in the company for over four years, and the practice is now deeply rooted in our eCommerce department. Senior employees share their knowledge with the newcomers and help to spread the CRO culture.
  • Effectiveness: Particularly when Avast was much smaller than today, we counted literally every penny we spent. (Was the initiative worth the cost? How much did that $1,000 investment in the partnership return?) Being small is an advantage: you need to monitor your spend and earnings closely, and that naturally forces you to be more data-driven.

Regarding the Importance of a Sponsor

3) How does the absence of a sponsor affect an organization’s CRO program?

It makes things more difficult. It is not just about the budget, but also about time and resources. You are fighting two enemies at the same time: a low conversion rate and your boss. It is not easy to practice CRO in such circumstances. In the end, to be successful, you need buy-in from a sponsor—either your boss or the management.

4) How can you convince the top management to back your CRO program?

From my experience, the strongest argument for a CRO program was always the “results.”

Speak in terms of dollars. Prove how much money your organization gained because of the CRO efforts. And explain how much money your company can gain in the future if you get more resources, time, or money. Don’t forget that tests with negative results are equally powerful, since you can argue how much money you would have lost had you not practiced A/B testing.

Regarding the Research Methodology in CRO

5) What is the importance of pre-test analysis or research?

It is absolutely key. If you just throw ideas at your A/B testing tool, your success rate will suffer and you will waste time and resources. Arriving at hypotheses scientifically is essential. (In my previous job at Liberty Global, we did not pay much attention to research, and we were not very successful in our CRO activities.)

Also, the pre-test analysis helps you assess test feasibility: whether you can get results in a meaningful time and how many variants you can afford to run.

6) What are the essential tools required for the research?

You can segregate them as quantitative and qualitative:

  • The quantitative tools include analytics, reports, heatmaps, and session recordings. They help you identify where the problem is.
  • On the other hand, the qualitative tools help you find out why people take certain actions on a website. These include usability testing, card sorting, surveys, interviews, focus groups, and so on.

7) What are some of the basic mistakes people often make with pre-test analysis?

The biggest (and most common) mistake is skipping the pre-test analysis altogether. Then there are the other common analysis mistakes: misinterpreting metrics, ignoring sampling issues, abandoning common sense, skipping the test feasibility analysis, and more.

Every test specification should contain the research part—explaining why a particular test should be executed and what insights led you to the test idea.

From my experience, tests with solid research behind them have a higher success rate than tests without any research.

Regarding A/B Testing Practices

8) How important is it to keep a long-term calendar for testing experiments?

A test calendar helps ensure that important tests are launched on time. It is also vital for resource planning and for keeping all stakeholders in the loop. We usually do a quarterly overview of the tests we’d like to run and then specify and add details on a monthly basis.

9) How do you prioritize tests?

I often use a rather simple formula. First, I list all the possible tests we could run in a certain timeframe. Then, to every test I add two estimates:

  1. Effort: How many hours/days are required to execute the test? (It’s even better if you can translate that into monetary values.)
  2. Impact: How much money will the test return if it is successful? Rather than wondering whether the test can increase the conversion rate by 10% or 15%, pay attention to where the test is running and which element you are changing. Do a pre-test analysis and calculate how many conversions the page generates. Using research or common sense, estimate the importance of the changes. For example, in most cases, changing prices will have a bigger impact than changing button colors on a homepage. After you have executed several A/B tests, you will get an idea of what matters and what does not, and this will help you set better expectations.

When all your tests are listed with effort and impact estimates, congratulate yourself: the rest is easy. Execute the tests with the highest impact and least effort first. In the long term, focus on the tests with high impact but great effort. In the meantime, run the tests that don’t have a huge impact but are easy to launch. And avoid ideas that require a lot of effort but have almost no impact.
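This effort/impact ranking can be sketched in a few lines. The test names, monetary impacts, and effort estimates below are hypothetical placeholders:

```python
# A sketch of effort/impact prioritization: rank candidate tests by
# estimated impact per unit of effort. All figures are hypothetical.

tests = [
    # (name, estimated impact in $, effort in person-days)
    ("Simplify pricing page", 40_000, 5),
    ("New checkout flow", 60_000, 30),
    ("Change button color", 1_000, 1),
    ("Rewrite legal footer", 500, 20),
]

# High-impact, low-effort tests rise to the top; high-effort,
# low-impact ideas sink to the bottom and are best avoided.
ranked = sorted(tests, key=lambda t: t[1] / t[2], reverse=True)
for name, impact, effort in ranked:
    print(f"{impact / effort:>8.0f} $/day  {name}")
```

The ratio reproduces the quadrant logic from the interview: quick wins first, big bets next, easy fill-ins in between, and effort-heavy, low-impact ideas last.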

10) Can failed/inconclusive tests still provide value?

Yes, they can! If a test is designed and executed correctly (variants differ in one element, flawless measurement, sufficient data, no bugs in any variant, and so on), it provides great value regardless of the results. As long as you can learn from a test, it is a good test. Failed tests show you which way not to go. Inconclusive tests (again, if done correctly) tell you that the element you tested perhaps does not matter much and you should test something else.

I really like what Ron Kohavi says:

“A valuable test is when the real results differ from your expectations. You learn the most in these cases and the learning matters the most in the long term.”

11) What are the common post-test analysis mistakes?

One common mistake is not checking whether the test results are trustworthy and statistically significant. We sometimes tend not to analyze them thoroughly. An XX% lift always sounds appealing and we want to believe it (particularly when we also designed the test; that’s why it is wise to always have somebody else take a second look). But do we have enough data? Are the results consistent over time? Is the traffic split balanced? Is the conversion lift driven by the change in design or just by chance? If we skip these checks, we can easily implement changes that have no effect or, even worse, decrease our performance.

The other common issue is not monitoring post-test results in the long term. Do we still see the XX% lift after the winning variation has been implemented? Once we have a successful test, we tend to switch our attention to the next issue and the next test, and we forget to monitor the effect of the previous one.

Regarding CRO Tools

12) What are the key attributes based on which a CRO tool is chosen?

Usually, costs and benefits are the main attributes we look at, and you must consider both from a wider perspective. Costs are not only the money you pay for the tool; you also need to include implementation and maintenance costs (including any extra staff you need to hire to work with the tools).

Estimating the business impact of these tools’ benefits is often challenging. Many CRO tools focus primarily on driving insights, which are difficult to evaluate in dollar terms. Fortunately, many tools offer free trials, so you can get an idea of how useful they would be for you.

13) When would you say an organization should invest in developing in-house CRO tools?

When a tool is key to your business, and the cost of developing (and maintaining) the in-house tool outweighs the cost of having a third-party tool.

Regarding Coordination Between Teams

14) Does the CRO team need to coordinate closely with any other team in an organization?

It does need to cooperate with several teams. From my perspective, the following two are key.

  • First, they need to have a close relationship with the business intelligence department to get correct data.
  • Second, they need to have a dedicated team of developers to execute the ideas that CRO team creates.
  • Then, there are many other vital collaborations. A support team has always been a great source of customer feedback, so it is wise for a CRO team to stay in touch with them. The product itself is a conversion asset, so stay close to the product management team. How the product is marketed often defines the quality (and quantity) of leads and website visitors; marketing is therefore another team I recommend staying close to.

Your Thoughts

What do you think about the essential assets of a successful CRO program? Do you have any more questions for Michal? Post them in the comments section below.

Free-trial CTA

The post What Does a CRO Program Consist of? | An Interview with Avast’s Michal Parizek appeared first on VWO Blog.


How to Get Top Management Buy-in for a Conversion Rate Optimization Program

Conversion Rate Optimization (CRO) is gradually becoming a known concept across enterprises, globally.

And, as more and more enterprises become aware of the benefits of CRO, you would expect that most of them are putting CRO into practice.

Unfortunately, that is not the case.

Many organizations still haven’t included CRO in their “growth activities” suite. Even among the few organizations that are practicing CRO, more than half don’t have a structured or documented process in place. Moreover, a lot of these organizations have little or no budget allocated for CRO.

This clearly indicates that top management at enterprises is not yet convinced of the effectiveness of CRO.

If you are trying to introduce a CRO program in your organization, this post is for you.

We have listed five key ways through which you can influence your top management to buy into CRO.

1) Highlight Improved User Experience as a Double Win

Improving user experience is one of the top objectives of many organizations. Framing CRO in those terms can win your management’s interest.

Conversion rate optimization, in essence, is all about improving user experience. The underlying principle is that if a website offers an unmatched user experience, visitors will convert more. (For example, nameOn improved the user experience on its cart page by removing distracting CTAs, and saw conversions increase by 11.40%.)

CRO aims to simplify every task users have to complete on a website. Creating prominent call-to-action buttons, removing distracting elements, and streamlining the navigation flow all help users complete tasks quickly and improve the user experience.

A superior user experience helps an enterprise in many ways. Some of them are mentioned here:

  • Better customer acquisition and retention because of a higher satisfaction level of visitors and existing customers.
  • Greater word-of-mouth publicity because of a higher satisfaction level.
  • Reduced overhead costs; minimal spend on support because visitors face fewer issues navigating the website.

Now, we know that measuring user experience can be difficult; it’s not a quantifiable unit. Still, there are metrics that can indicate an improvement in user experience. One such metric is the Net Promoter Score, or NPS. It is derived from users’ answers to a single question: “How likely is it that you would recommend us to a friend or colleague?” You can monitor NPS for your business before and after the implementation of your CRO program. If NPS increases, it’s safe to assume that the user experience also improved.
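For illustration, here is a minimal sketch of the standard NPS calculation (respondents scoring 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors). The survey responses below are made up:

```python
# A sketch of computing NPS from 0-10 answers to "How likely is it
# that you would recommend us?". Sample responses are hypothetical.

def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

before = [10, 9, 8, 7, 6, 5, 9, 3, 8, 10]   # survey before the CRO program
after  = [10, 9, 9, 8, 9, 6, 10, 8, 9, 10]  # survey after

print(f"NPS before: {net_promoter_score(before):+.0f}")
print(f"NPS after : {net_promoter_score(after):+.0f}")
```

Comparing the score before and after the CRO program gives the directional signal described above; real surveys would of course use far larger samples.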

2) Present a Competitive Analysis

Sometimes, “the competitors are leveraging it” can be the one argument that persuades top management to start a CRO program.

Or better, you can convince your bosses by showing that industry leaders (or companies they admire) practice and win with CRO.

You can start by building a list of companies that your bosses look up to. Next, analyze the tools these companies use for website analytics and optimization. Tracking them should not be difficult: you can use Built With or Datanyze (or similar software) to see which tools the companies in your list run. (You can even employ browser extensions such as Ghostery to identify tags of the web apps active on a company’s website or app.) Shortlist the companies that use tools for website analytics, A/B testing, user behavior analysis, and so on. The software mentioned above can also give you an idea of how much these companies spend on CRO technologies; the higher the spend, the more involved they are with CRO.

Detailed technology usage report of bbc.com given by Built With

Additionally, you can search for CRO-related case studies involving these companies. In fact, case studies that highlight CRO activities in any company from your industry can be effective in influencing the bosses.

3) Stress the Gaps in Your Current Approach

Often, top management at enterprises is under the impression that its teams are already working on optimization. They may see CRO as part of the marketing team’s responsibility. The truth, however, is that a CRO program relies as much on marketing as it does on IT.

The essential members of a CRO team typically consist of the following:

  1. Strategist: Focuses on managing the program and deciding the goals. Usually knows the most about conversion journeys, personas, and persuasion design. Owns the KPIs.
  2. Analyst: Looks at the data before and after an A/B test, connects it with other important data sources, and helps everyone understand the test outcomes.
  3. Conversion-Centered Designer: Designs pages and test variations around conversion goals.
  4. Copywriter: Someone who’s great with the written word and can write to reduce anxieties, ease friction, and persuade and delight visitors.
  5. Developer: To help you run your tests with optimized front-end code and send events or record goals in your analytics software.

A single team member can often take on more than one of these roles. Nonetheless, a team lacking any of these skills can end up with a substandard CRO program, which may or may not deliver favorable results, that is, increased conversions. Equally important is timely and transparent coordination among these team members, which a documented CRO process helps ensure.

A documented process helps you create a long-term calendar of CRO activities and sort them by priority. Without one, you might end up conducting low-priority A/B tests that have little effect on the bottom line. Another major risk is missing out on the learning from one CRO activity and, therefore, not applying it to later activities. (For instance, without a structured process, you might not archive the learning from a failed A/B test, and as a result you might continue creating A/B tests built on hypotheses similar to the one that failed.)

Get your bosses’ attention to all such gaps (and possible pitfalls) in the existing strategy and practice regarding CRO.

What a CRO program consists of – a maturity model

4) Show Them the Money

For many enterprises, the marketing budget is majorly spent on traffic acquisition through channels such as pay-per-click (PPC), social media, and SEO.

However, it’s often difficult to generate a satisfactory return on investment (ROI) from such channels.

  • It’s hard to ensure the desired quality of traffic from these sources. Users outside your target audience can also reach your website through these channels. (Targeting options are precise only up to a point.)
  • How these channels perform for you over the long term is not under your control. For instance, a change in a search engine algorithm or an increase in cost-per-click on a PPC platform can spoil the ROI of your traffic-acquisition efforts.

CRO, on the other hand, lets you improve your sales figures by working on something that is always within your control: your website. CRO aims to increase the number of conversions from your website’s existing traffic. (Often, even a small increase in conversion rate affects revenue significantly. For example, BrookdaleLiving.com increased its monthly revenue by $106,000 through a modest 3.92% improvement in conversion rate.)

Example of conversion rate optimization across a funnel

Now, while pitching the idea of CRO to top management, it is essential to present a forecast of such improvements in sales and revenue.

Kieron Woodhouse, UX Head at MVF Global, shares how financial projections helped them start their CRO program: “It’s important to speak to different parts of the business in a language that’s appropriate and tailored to their requirements. The commercial part of the business will expect a detailed analysis of how following a CRO program could positively impact the business. At MVF, before our CRO team was created we put forward a detailed financial projection of all of our top performing landing pages and estimated potential uplifts of 1%, 5% or 10% as a result of our actions.

This approach made the decision very easy for the commercial part of the business to understand and meant we had their complete support before the project began. Of course, it’s also important to have multiple scenarios to help pre-empt performance and manage expectation.”
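A projection of the kind Kieron describes can be sketched in a few lines. Everything below (visits, conversion rate, order value) is a hypothetical illustration, not data from MVF or any real page:

```python
# Sketch: project extra monthly revenue for a landing page under
# several relative conversion-rate uplift scenarios (1%, 5%, 10%).
# All input figures are hypothetical.

def project_uplift(visitors, conversion_rate, avg_order_value,
                   uplifts=(0.01, 0.05, 0.10)):
    """Return a dict mapping each relative uplift to projected extra revenue."""
    baseline = visitors * conversion_rate * avg_order_value
    return {u: visitors * conversion_rate * (1 + u) * avg_order_value - baseline
            for u in uplifts}

# A landing page with 50,000 visits/month, 2% conversion rate, £80 AOV:
scenarios = project_uplift(50_000, 0.02, 80)
for uplift, extra in sorted(scenarios.items()):
    print(f"{uplift:.0%} uplift -> £{extra:,.0f} extra per month")
```

Presenting a small table like this per top-performing page, as MVF did, lets the commercial side weigh the program against its cost at a glance.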

5) Show Them the Data

It is always easier to convince top management when you back your pitch with data. And with CRO, you get all the data you need.

The success of a CRO program can always be measured on the basis of quantitative metrics — sales and revenue, or micro and macro conversions.

Performance of a CRO program measured on the basis of data

This is in contrast to some of the brand-related campaigns that run on other channels such as PPC and social media. Often, those campaigns work on “improving the brand value,” the ROI of which can seldom be measured.

Further, the whole process of conversion rate optimization is scientific and is based on informed decision-making. There is little room for guesswork.

You use website data and user behavior patterns to identify the pain points on your website (and areas where the conversion rate can be further improved):

  • When traffic on your website is not converting, you fix the pages with the highest bounce and exit rates.
  • When visitors don’t click a specific CTA button, you review visitor recordings and heatmaps to identify the elements proving to be a distraction.
  • When not enough visitors fill in your website forms, you run a form analysis to check which form fields cause friction.
  • When you aren’t sure what stops visitors from converting on a web page, you deploy an on-page survey to find out.

The use cases of CRO are endless.

Arguably, the best way to convince your bosses of the benefit of CRO is by showing them a quick but effective A/B test. (Don’t have an A/B testing tool? Start with VWO’s free trial.) Use methods like those above to identify the optimization opportunities with the highest potential. When you succeed, share the results, emphasizing the improvement in the bottom line. This is bound to get your bosses’ attention.

Angie Schottmuller of Three Deep Marketing suggests, “Tests with interesting results quickly get management attention, and an addictive demand for more testing invariably follows.”

Nate Shurilla, CRO consultant at iProspect Japan, reveals a similar approach they follow: “We typically do a trial run before full-out testing.  We’ll do a little analysis of one page and come up with a small test or two, and then show the results in our full pitch.  Explaining how we can then replicate and expand upon those results site-wide usually gets them on board, especially when they see how even a little jump in CVR has huge effects on their revenue.”

Final Thoughts

Most importantly, keep building your case for CRO.

In the words of Jacob Baadsgaard, founder and CEO of Disruptive Advertising, “Don’t pitch once and back off if the idea isn’t immediately accepted. Challenge yourself to view this situation as your own CRO effort on converting your manager/boss! Those that are persistent and back it up with good data will win in the long run so don’t give up.”

Over to You

What do you think will invariably convince top bosses in an organization to run a structured CRO program? We’d love to know your thoughts. Use the comments section below.


The post How to Get Top Management Buy-in for a Conversion Rate Optimization Program appeared first on VWO Blog.


[Infographic] Why a Website Redesign Doesn’t Always Work

(This is a guest post, contributed by PRWD.)


The website redesign.

It is often the big-ticket project for a business; one on which a lot of faith is placed, with improved numbers expected across the board.

Ideally, it is expected to deliver a significant improvement in sales and lead figures, built on a modern, seamless, and intuitive visitor experience that is future-proofed for a constantly changing marketplace.

Unfortunately, the numbers from PRWD‘s survey show that this is not always the case. In fact, some businesses even see a decline in sales after a website redesign.

So if you’re currently in the middle of a website redesign (or planning one in the near future), have a quick look at the infographic below.

Following on from PRWD’s last infographic on the effectiveness of website traffic-acquisition strategies, this one provides insight into why website redesigns at many of the UK’s biggest businesses don’t produce a jump in sales and leads.

At the end of the infographic, there are key takeaways that can help you get the most out of a website redesign.

Click here to get the full infographic

[Infographic] Why a website redesign doesn’t always work

What Do You Think?

Have you ever done a website redesign? What would you do to ensure that a redesigned website improves the bottom-line?

We’d love to know your thoughts. Post them in the comments section below.

The post [Infographic] Why a Website Redesign Doesn’t Always Work appeared first on VWO Blog.


A Definitive Guide to Converting Failed A/B Tests Into Wins


As a marketer, there aren’t many things sweeter than running successful A/B tests and improving your website conversion rate.

It’s sweet because getting a winning A/B test is hard work.

To carry out a successful A/B test, marketers need to follow a robust process. They need to develop data-driven hypotheses, create appropriate website variations, and test on a targeted audience. And even when following such a structured process, marketers tend to win just one out of three A/B tests.

[What’s more worrying is that the overall percentage of winning A/B tests is only 14% (one in seven). That’s largely because most marketers still don’t follow a documented process for A/B testing (and CRO as a whole). For instance, only 13% of eCommerce businesses base their testing on extensive historical data.]

But here is some good news: your failed A/B tests can still be of value.

By analyzing the A/B tests that didn’t win, you can highlight flaws in your approach, improve the tests, and even identify hidden winners.

This post talks about the key things you can do after encountering an unsuccessful test.

For convenience’s sake, we’ve divided unsuccessful tests into two categories: inconclusive tests and tests with negative results.

When A/B Tests Give Inconclusive Results

An inconclusive result is when an A/B test is unable to declare a winner between variations. Here’s what you need to do with such a test:

Finding Hidden Winners

Even when your A/B test hasn’t found a winner among the variations, you may still uncover wins by slicing and dicing your test audience.

What if the A/B test produced results for specific segments of your traffic (segmented on the basis of traffic source, device type, etc.)?

This scenario is similar to Simpson’s paradox. Let’s understand it with a simple example.

A 1973 study of gender bias in UC Berkeley admissions showed that men had a higher chance of being admitted than women.

Simpson's paradox example

However, department-specific data showed that women had a higher admission rate in most departments. In fact, a large number of women had applied to departments with low admission rates (in contrast to a small number of men).

Simpson's paradox example 2

We can see how multiple micro-trends skewed the overall study result.

Likewise, an A/B test may work for some traffic segments and not for others, leading to an inconclusive overall result.
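A toy version of this effect, using invented numbers, shows how per-segment wins can vanish in the aggregate:

```python
# Toy illustration of Simpson's paradox in A/B test data (numbers are invented).
# Variation B wins in BOTH segments, yet loses in the aggregate, because most
# of B's traffic came from the low-converting mobile segment.

segments = {
    # segment: (A_visitors, A_conversions, B_visitors, B_conversions)
    "desktop": (800, 80, 200, 22),   # A: 10.0%, B: 11.0% -> B wins
    "mobile":  (200, 4,  800, 20),   # A:  2.0%, B:  2.5% -> B wins
}

def rate(conversions, visitors):
    return conversions / visitors

# Aggregate each variation across all segments.
a_vis = sum(s[0] for s in segments.values())
a_conv = sum(s[1] for s in segments.values())
b_vis = sum(s[2] for s in segments.values())
b_conv = sum(s[3] for s in segments.values())

print(f"Overall A: {rate(a_conv, a_vis):.1%}")   # 8.4%
print(f"Overall B: {rate(b_conv, b_vis):.1%}")   # 4.2% -- A "wins" overall
for name, (av, ac, bv, bc) in segments.items():
    print(f"{name}: A {rate(ac, av):.1%} vs B {rate(bc, bv):.1%}")
```

The aggregate verdict reverses only because traffic was unevenly split across segments, which is exactly why segment-level analysis matters.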

You can reveal hidden winners (traffic segments where an A/B test did deliver results) with post-result segmentation.

For instance, you can find out whether your website conversion rate improved specifically for new or returning visitors, for paid or organic traffic, or for desktop or mobile traffic.

The analysis can help you identify segments that have the most potential. For example, your inconclusive A/B test might have increased conversions for “returning visitors.” You can run a new (or the same old) test targeting only the returning visitors.

Post Result Segmentation for an A/B Test

Nonetheless, it’s essential to check the number of visitors in each segment. The conversion rate and other data points for a segment can be trusted only if that segment’s traffic is large enough.
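Before trusting a segment-level result, you can run a quick two-proportion z-test as a sanity check. This is only a sketch with hypothetical numbers, not a substitute for your testing tool’s built-in statistics:

```python
# Sketch: check whether a segment's A/B difference is statistically meaningful
# using a two-proportion z-test. The traffic figures below are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# "Returning visitors" segment: 4,000 visitors per arm,
# 120 vs 152 conversions (3.0% vs 3.8%).
z, p = two_proportion_z(120, 4000, 152, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) suggests the segment’s difference is unlikely to be noise; with only a few hundred visitors per arm, the same difference would not clear that bar.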

Tracking the Right Metric(s)

How informative an A/B test’s result is depends largely on the metric you’re tracking.

Often, A/B tests aim at improving only a website’s micro conversions, usually because the test is run at an early stage of the conversion funnel or on less-critical web pages. Such tests don’t track changes in the website’s macro conversions and so fail to register any rise in the bottom line (sales/revenue).

When your A/B test is inconclusive, you need to check if you’re optimizing for the correct metric. If multiple metrics are involved, you need to analyze all of them individually.

Suppose you run an eCommerce store. You create a variation of your product description page that mentions “free shipping,” with the objective of increasing add-to-cart actions (a micro conversion). You A/B test the variation against the control page, which gives no information on shipping. To your surprise, the test can’t produce a clear winner. Now you need to check whether the variation boosted your revenue (the macro conversion). If it did, the reason may be simple: the “free shipping” variation might have sent only high-purchase-intent users to the checkout page, thus increasing the number of completed purchases.

If you realize that you weren’t tracking the most relevant metric in your A/B test, edit the test with new goals. With the new metrics in place, you can run the test for a while longer and look for improvements.

It’s advisable to keep your eyes on both micro and macro conversions.

Micro and macro conversions
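Tracking both levels can be as simple as recording more than one goal per variation. A minimal sketch, with entirely hypothetical data:

```python
# Sketch: evaluate both a micro conversion (add-to-cart rate) and a macro
# conversion (revenue per visitor) for each variation. Data is hypothetical.

variations = {
    "control":       {"visitors": 5000, "add_to_cart": 600,
                      "orders": 150, "revenue": 12000.0},
    "free_shipping": {"visitors": 5000, "add_to_cart": 610,
                      "orders": 180, "revenue": 14400.0},
}

for name, v in variations.items():
    atc_rate = v["add_to_cart"] / v["visitors"]   # micro conversion
    rpv = v["revenue"] / v["visitors"]            # macro conversion
    print(f"{name}: add-to-cart {atc_rate:.1%}, revenue/visitor ${rpv:.2f}")
```

Here the add-to-cart rate barely moves (12.0% vs 12.2%) while revenue per visitor rises from $2.40 to $2.88: a win that a micro-only goal would have missed.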

Analyzing Visitors’ Behavior

Using on-site analysis tools, you can uncover insights that plain numbers can’t offer. With heatmaps, scrollmaps, and visitor recordings, you can observe the behavior of your users (the A/B test participants) and find probable causes of an inconclusive test.

Heatmaps can tell you if the element you’re testing is going unnoticed by most users. For instance, if you’re testing a variation of a CTA button that sits deep below the fold, heatmaps/scrollmaps can show how many users actually reach it. An A/B test may remain inconclusive if only a handful of users ever see the button.

Here’s how a scroll map looks:

Scroll Map - VWO Pricing Page

In the same case, visitor recordings can show you how users interact with the content and elements above the CTA. If engagement above the CTA is high, users may already have made up their minds about their next action (a conversion or an exit). In that case, changes to the CTA won’t affect users, and the A/B test will be unsuccessful.

Apart from giving insights on specific pages, visitor recordings help you understand user behavior across your entire website (or conversion funnel). You can learn how critical the page you’re testing is within your conversion funnel. Consider a travel website where users can find holiday destinations using a search box and a drop-down navigation bar. An A/B test on the navigation bar will only work if users actually engage with it. Visitor recordings can reveal whether users find the bar friendly and engaging. If the bar itself is too complex, every variation of it may fail to influence users.

Double Checking Your Hypothesis

Whenever an A/B test fails to produce a result, fingers invariably point at the hypothesis behind it.

With an inconclusive A/B test, the first thing to check is the credibility of the test hypothesis.

Start by reviewing the basis of your hypothesis. Ideally, every test hypothesis should be backed by either website data analysis or user feedback. If that’s not the case, backtrack and validate your hypothesis with one of the two.

When your hypothesis is, in fact, supported by website data or feedback, assess whether your variation closely reflects it. You can also use on-site analysis tools to find ways to improve your variations.

Funnel data analysis
Sample website data that can be used to create a hypothesis (Source)

Here’s an example: suppose you have a form on your website, and data analysis tells you that a majority of users drop off at the form. You hypothesize that reducing friction on the form will increase submissions, so you cut down the number of form fields and run an A/B test. If the test remains inconclusive, check whether you actually removed the friction-inducing fields. Form analysis can pinpoint exactly which fields cause the majority of drop-offs.

Reviewing the Variations

One of the biggest reasons A/B tests remain inconclusive is that the difference between test variations is minuscule.

Now, I know there are numerous case studies boasting double- or triple-digit improvements in conversion rate from just “changing a button color.” But what we don’t see are all the tests that failed to achieve the same feat. There are probably tens or hundreds of such failed tests for every winning one.

For instance, Groove (a help desk software company) ran six different A/B tests with trivial changes. All of them proved inconclusive. Have a look:

CTA button color change A/B test

CTA Text change A/B test

Keeping this in mind, go through your test variations and check whether they contain genuinely noticeable changes.

If you’ve been testing minor elements, start being more radical. Radical or bold A/B tests are usually backed by strong hypotheses and tend to deliver results more often.

(Interestingly, testing radical changes is also advisable when you have a low traffic website.)

Deriving Further Learnings from the Tests

So you’ve finished a thorough analysis of your inconclusive A/B test using the above-mentioned points. You now know what went wrong and where you need to improve. But, there’s more.

You also learn which elements (probably) don’t influence users toward conversion.

If your inconclusive test had no hidden winners, you tracked the correct metrics, your hypothesis was sound, and your variations were sufficiently distinct, you can reasonably conclude that the tested element simply doesn’t matter much to your users, and move it down your criticality list.

This will help you create a priority list of elements for your future A/B testing.

When A/B Tests Give Negative Results

A negative result for an A/B test means that the control beat the variation. Even from such a failed test, you can gain insights that make future tests more effective.

Finding What Went Wrong

There could be many reasons why your A/B test returned a negative result, a flawed hypothesis and a poorly executed variation among them.

A negative result will make you question the test hypothesis. Did you follow a data-driven approach to come up with the hypothesis? Did you blindly follow a “best practice?”

Unbounce highlights a few cases where A/B tests performed against “common expectations.”

Example: ”Privacy assurance with form” best practice failed

These tests again underline the importance of a data-driven process behind A/B testing and CRO. A negative A/B test result can be a wake-up call to start practicing one.

Knowing Your Users’ Preferences

Negative A/B test results help you understand your users’ preferences better. Specifically, you learn what your users dislike (in the form of the changes you made to the losing variation).

Since you know what your users don’t like about your website, you can build hypotheses about what they might like. In other words, you can use negative test results to create better tests in the future.

Let’s return to the Unbounce example above. The A/B test was run on a form, where the variation featured a privacy assurance: “100% privacy – we will never spam you.” The variation couldn’t beat the control; it reduced conversions by 17.80%. Upon analyzing the result, the team deduced that users disliked the mention of the word “spam.” Knowing what users hated, they ran the next test with a different variation. The form still carried a privacy assurance, but this time it read “We guarantee 100% privacy. Your information will not be shared.” (No mention of the dreaded “spam.”) This time the result changed: the variation increased signups by 19.47%.

Learning from a failed A/B test turned into a win

What’s Your Take?

How often do you encounter failed A/B tests? We’d love to know your thoughts on how to tackle them. Post them in the comments section below.


The post A Definitive Guide to Converting Failed A/B Tests Into Wins appeared first on VWO Blog.


VWO eCommerce Cart Abandonment Report 2016

Cart abandonment has long been one of the biggest pain points for the eCommerce industry. Numerous studies have reported the average cart abandonment rate to be over 60% across all websites.

In other words, more than six out of every 10 “add-to-cart” actions on eCommerce sites do not result in sales!

To alleviate this situation, eCommerce marketers need to understand their users’ behavior better. They need to know the following:

  • Why do users abandon carts?
  • What measures can be taken to prevent cart abandonment?
  • What really makes users return to an eCommerce site and check out an abandoned cart?
  • How can window shoppers be converted into customers?

VWO surveyed online shoppers to gain critical insights into common cart abandonment issues. The more than 1,000 respondents were US residents aged 18 to 65.

You can download the full report here.

Note: This report does not claim to represent the full picture of the state of cart abandonment worldwide. It attempts to answer some of the most important questions regarding cart recovery, and eCommerce optimization as a whole.

Here are the key findings from the report:

Why Do Shoppers Abandon Carts?

The biggest reason users don’t complete a purchase is “hidden item costs.” Not being shown the total bill amount upfront doesn’t sit well with them.

One-fourth of respondents will not check out if they encounter “unexpected shipping costs.” Other crucial concerns include “having to create a new account for checkout” and “payment security.”

Why shoppers abandon cart

eCommerce sites can reduce cart abandonment by providing the estimated cost of shipping and additional taxes on the product description page itself.

If Shoppers Choose to Buy the Product They Abandoned, How Would They Do It?

About a fifth of respondents (22%) would buy the product they initially abandoned from another website if offered a better deal.

This user preference was followed by “buying the product from a physical store” and “buying from a more trustworthy website.”

Interestingly, a total of 41% of respondents would go on to purchase the product from a different site.

How do shoppers buy back a product they abandoned earlier

eCommerce sites can display popular trust seals and employ social proof to nudge users toward completing a purchase.

What Convinces Shoppers To Buy From an Unfamiliar Website?

A user’s perception of a new website is greatly influenced by other shoppers’ experiences with it. 30% of users would buy products from an unfamiliar website based on the customer reviews presented there.

Trust seals prove to be the second biggest factor, followed by “a deal too good to pass.”

What Convinces Shoppers To Buy From an Unfamiliar Website

eCommerce sites should encourage users to write reviews for the products they’ve bought. After users complete a purchase, they can be targeted with emails soliciting product reviews, with incentives offered if required.

Do Shoppers Add Products to Cart Without the Intention of Buying? If Yes, Why?

Almost half (45%) of the users who add a product to their cart without intending to buy it do so to learn its final price. Users want to be aware of all the costs that might show up on the checkout page, such as shipping and taxes.

35% use the add-to-cart function as a wishlist, and 10% hope to get a discount on the product later.

Why users add product to cart without an intention of buying

This again emphasizes the importance of transparency in product costs.

What Motivates Shoppers to Buy the Product They Abandoned?

A whopping 58% of respondents said that emails and ads offering a discount on an abandoned product bring them back to the website. This demonstrates the effectiveness of retargeting in reducing cart abandonment.

Applying the scarcity principle to retargeted emails/ads also works with some shoppers: 9% of respondents said they return to buy a product if it’s about to go out of stock.

What Motivates Shoppers to Buy the Product They Abandoned

eCommerce sites need to master retargeting to recover lost carts. (Here’s a post that lists popular retargeting tools and tips.)

Do Shoppers Abandon a Purchase When Their Bank Rejects the Transaction?

25% of respondents said they’ve abandoned a purchase at least once because the bank rejected the transaction.

Do Shoppers Abandon a Purchase When Their Bank Rejects the Transaction

eCommerce sites need to offer users multiple payment options, including digital wallets and cash on delivery.

What Motivates Window Shoppers To Make a Purchase?

99% of users who don’t actively look for products to buy admit that time-bound offers encourage them to make a purchase.

Almost half of respondents (48%) would buy a product if offered a limited-time discount, even when they’re just window shopping.

What Motivates Window Shoppers To Make a Purchase?

eCommerce sites should run time-based offers (promoting urgency) to attract window shoppers.

Which Part of the Checkout Process Frustrates Online Shoppers the Most?

Nothing frustrates shoppers more than filling in the same information twice; 32% of respondents said so.

A close second was “too many form fields,” with votes from 26% of respondents. 14% said that “not being able to change/modify the order” is what they hate most in a checkout process.

Which Part of the Checkout Process Frustrates Online Shoppers the Most

eCommerce sites must ensure their checkout process has minimal friction, asking users for only the most relevant and necessary details. Here are a few other actionable tips to optimize web forms.

Do Online Shoppers Abandon Carts If They’re Asked for Personal Information?

Almost 60% of respondents have at least once abandoned a purchase because the website asked for information they didn’t want to share.

34% of respondents admitted to having abandoned a purchase when asked to provide their social security number. Other sensitive information points include gender, phone number, and date of birth.

Do Online Shoppers Abandon Carts If They’re Asked for Personal Information

eCommerce sites should ask users only for information they’ll actively use.

Conclusion

Cart abandonment remains a major bane for eCommerce businesses. This report attempts to shed light on the common reasons shoppers abandon carts, and how eCommerce marketers can counter them. The key strategies for alleviating cart abandonment include retargeting, employing scarcity and urgency, maintaining price transparency, and minimizing form friction.


What Do You Think?

How useful did you find our report? Do you want any additional questions to be included in our next survey? Tell us in the comments section below.

The post VWO eCommerce Cart Abandonment Report 2016 appeared first on VWO Blog.

