The State of Buyer Personas 2016 study found that approximately 60% of survey respondents took their first-ever buyer persona development initiative within the last 2 years—a result similar to the previous year's survey on personas.
It has been almost two decades since Alan Cooper introduced the term “persona” in his book “The Inmates Are Running the Asylum.” However, organizations still struggle to develop personas effectively. As a result, the gap between what consumers want and what companies provide has widened.
Look at this survey graph for a quick look into the mistakes that can taint customer-business relations, when the latter does not know its ideal customer well:
In this blog post, we walk you through the process of creating effective personas, explain how your business can benefit from them, and discuss why they should be a part of your conversion optimization strategy. Let’s begin:
How to Create Effective User Personas
To create personas that are effective, it is important to first understand what personas should not be:
A demographic profile
A market segment
A document of behavior based on research that lacks data
Having listed what personas are not, let’s deep-dive into what effective personas comprise and how to develop them. Research, both qualitative and quantitative, is the foundation of personas. When based on research, personas unveil:
What do your users want to accomplish?
What drives your users’ behaviors?
What do your users think?
What are their expectations?
What will make them buy?
What could be their reasons for hesitation?
What obstacles could they face?
To develop personas that can answer these questions, and a few tougher ones, we advise you to:
Use Qualitative Research.
Use A/B Testing.
Perform Competitor Analysis.
Using Qualitative Research
Qualitative research tools such as on-page surveys, in-person interviews, and so on can help you uncover the expectations and motivation of a user.
We list some use cases for on-page surveys to help you understand how they can be used wisely to gather the information required for developing effective personas:
Use Case 1. Understanding Purchase Decision
Understanding customer motivation for buying a product plays a significant role in replicating the buying behavior. Once you know precisely what motivated one visitor to buy from you, the next step is to motivate other visitors in the same direction.
What could you ask?
Did you find what you were looking for?
What motivated you to complete your purchase?
What triggers to use?
Goal Completion: As soon as a user completes a signup form or makes payment for items in the cart, this survey should pop up to understand the true motivation behind the purchase.
Use Case 2. Determining Purchase Satisfaction
It is important to know customers’ purchase satisfaction levels to determine whether anything could stop them from buying or push them to buy elsewhere. If you can observe a pattern, it can also help you categorize people with high or low purchase satisfaction levels.
What could you ask?
On a scale of 0–10, 0 being the lowest and 10 being the highest, how satisfied are you with your last purchase?
What is the reason behind your rating? What do you think is good or bad about buying from us?
Analyzing the information that you have from your survey:
With regard to the question on purchase satisfaction levels, the information that your users reveal can be smartly analyzed to create user personas. Let’s say that you have an online apparel business. Running such surveys can help you:
Identify users who are not at all interested in your product (those who rate you between 0–3), users who do not have a firm opinion on your product (4–7), and users who have high purchase satisfaction levels (8–10).
Understand the reasons behind high and low purchase satisfaction levels for all categories of users mentioned in the previous point.
Identify patterns, if any, in those rating your product high/low. For example, do those who rate the product on a scale of 8–10 buy the product because of “fresh styles and patterns,” do most of these people fall in the age group (20–25), and so on.
Build user personas based on this information.
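The bucketing and pattern-spotting above can be sketched as a small script. The ratings, reasons, and ages here are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical survey responses: (rating on a 0-10 scale, stated reason, age)
responses = [
    (9, "fresh styles and patterns", 23),
    (10, "fresh styles and patterns", 21),
    (8, "fast delivery", 24),
    (5, "average quality", 30),
    (2, "sizing was off", 41),
]

def bucket(rating):
    """Map a 0-10 purchase-satisfaction rating to a segment."""
    if rating <= 3:
        return "not interested"   # 0-3
    if rating <= 7:
        return "no firm opinion"  # 4-7
    return "highly satisfied"     # 8-10

segments = defaultdict(list)
for rating, reason, age in responses:
    segments[bucket(rating)].append((reason, age))

# Look for patterns within each segment: shared reasons, age ranges, etc.
for name, members in sorted(segments.items()):
    reasons = {reason for reason, _ in members}
    ages = [age for _, age in members]
    print(f"{name}: {len(members)} users, ages {min(ages)}-{max(ages)}, reasons: {reasons}")
```

With real survey exports, a pattern like “most 8–10 raters cite fresh styles and fall in the 20–25 age group” becomes the seed of a persona.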
What triggers to use?
Time spent on a page: Show the survey after visitors spend “X” seconds on the first webpage they visit. Target the survey using custom targeting to those who have made an online purchase earlier from you.
Asking these questions at the right time can help you fetch actionable information and uncover user motivations as well as apprehensions.
Similarly, exit-intent pop-ups and phone surveys also help you find out whether your product/service is providing the value that your users and/or customers expect from it.
Your qualitative research findings can then be dissected to create personas. Consider an example:
You are an eCommerce business selling antiallergic bedding. Your phone interviews with customers and on-page surveys help you define one of your personas, “Jane,” with the following attributes:
Aged 32, she has very sensitive skin, which is prone to allergies.
She is willing to pay a little more if the product quality is good.
She also cares about the product being eco-friendly.
Your qualitative research would further help establish:
Jane’s motivation to buy your product: The bedding suits her needs, is priced at what she considers fair, and can be found easily online.
Jane’s mindset while making a buying decision: She cares about her health and skin. She will not risk investing in any product that can cause allergies. She is also quality-conscious.
Jane’s bottlenecks to buying: She might return the product if she does not find it comfortable and of the quality she expects. Style and comfort go hand in hand for her.
When you have conducted qualitative research and listed motivations, bottlenecks, and mindset, you need to gather insights on what your user/customer is doing online. The next logical step, then, is to unveil Jane’s onsite behavior.
For example, using form analysis can help you identify the form fields that lead to customer hesitation or customers abandoning the form.
Using A/B Testing
Let’s say that you have listed a few findings about your personas after conducting in-depth research. However, you want to be as sure as possible. The following attributes can be put to the test:
Comfort vs. Style
Discount vs. Buy One, Get One Free
Value of free shipping and free returns
A/B testing can help you narrow down to the attributes that most closely match your real users. Whatever assumptions, observations, and opinions you have about your users, you can A/B test them to find out what your ideal users identify with most.
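Deciding whether a test result is real or noise comes down to a significance check on the two conversion rates. A minimal sketch using a standard two-proportion z-test; the visitor and conversion counts are hypothetical:

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variation B's conversion rate
    significantly different from variation A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test of "Comfort" vs. "Style" messaging:
# A: 120 conversions out of 4,000 visitors; B: 168 out of 4,000
z, p = conversion_z_test(120, 4000, 168, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference between the two attributes is unlikely to be chance; testing tools run this kind of check for you.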
Performing Competitor Research
Digital intelligence tools can help you dig deeper into competitor data and analyze their traffic. Using such tools, you can find out where your competitors are putting their effort—social media, mobile, content, email marketing, and so on.
Once you have an idea of where your competitors’ major efforts go, you can work backward to identify the audience they are targeting and use it to create user personas. This elaborate and well-researched post on Medium will tell you how you can crack competitor research to create user personas for your business.
Benefits of Personas for Your Business
Mathilde Boyer, Customer Experience Director at the House of Kaizen, lists 5 ways in which every business can benefit from using personas.
“Personas shouldn’t only be created to trigger user empathy within an organization. They should be built with a practical application in mind so that they can be instrumental in a Conversion Optimization Strategy. Validating personas through actual user data and connecting them to target audiences increases their ability to drive business strategies.
Creating and leveraging user personas brings 5 key benefits to Marketers and Product Owners.
Connect research insights
Develop a unified view of your customers and prospects by identifying commonalities and unique attributes to provide a deep understanding of motivations, anxieties, decision making styles and moments when users find inspiration.
Strategically manage marketing budget
User personas allow you to prioritize target audiences and shift spend based on channel performance for individual audiences. Maximize your marketing investment by focusing your efforts and budget on the profitable leads.
Develop powerful brand and product storytelling
User personas can be leveraged to tailor storylines and bring your value proposition to life. They are key to understand aspirations, desires and perceptions of your customers. They are also crucial to strike the right note with unique content created to move buyers from interests to purchase.
Go beyond marketing silos
User personas allow you to ensure continuity and complementarity of messaging and creative across all user touchpoints (ads, website, emails, offline campaigns, customer service script, sales pitch, etc.).
Prioritize product roadmap
User personas should be a valuable lever to inform your product development cycles and ensure that new features are developed to solve evolving prospects’ problems and needs.”
Beyond the benefits that Mathilde talks about, personas also help bring every department of the business into agreement on who the customer is. From customer service representatives to sales, marketing, and administrators, everyone is aligned to consumer goals. This alignment helps everyone across the business keep their ideal customers happy, and thus increases overall satisfaction as well as retention.
Why Should Personas Be a Part of Your CRO Program
Protocol80 compiles some interesting facts on why personas are awesome. We list 2 of them here as evidence of why personas should be a part of your conversion optimization program.
“In the case of Intel, buyer personas surpassed campaign benchmarks by 75%. They were more cost efficient than the average campaign by 48% (DemandGen Report).
In the case of Thomson Reuters, buyer personas contributed to a 175% increase in revenue attributed to marketing, a 10% increase in leads sent to sales, and a 72% reduction in lead conversion time.”
Personas can help you improve conversions by:
Improving your personalization efforts.
Helping enhance product user experience.
Improving Personalization – Content
Personas help bring in more clarity on crafting tailored content that appeals to the target audience of the business. Consider an example:
You are an eCommerce business. One of your user personas is, say, “Mary the Loyal,” with some of the following characteristics:
Visits your website frequently
Makes a purchase every month or two
Does not purchase expensive products
Does not buy more than 2 or 3 products in a single visit
Is fashion-conscious, but does not compromise on quality
Once you understand the buying behavior of this user persona, you can run campaigns with content specifically focused on converting these users. For example, when Mary the Loyal visits your website again, you can personalize recommendations based on her last purchase, which might nudge her into making another purchase.
Enhancing User Experience – Design and Development
At the design and development level, personas work as a research tool for businesses intending to enhance the browsing/buying experience for their online users. Personas based on usage goals, browsing and exploring behavior, and pain points tell you the why behind the actions that users take on a website.
Such information is critical for designing any product or service. Understand, relate to, and remember the ideal user, Mary the Loyal, throughout the entire product development process. The following design and development problems can be solved by making user personas a part of the process:
When design teams do not have an understanding of which design elements on the website to prioritize. In this case, design and development teams end up wasting time developing or optimizing features that their ideal customer, Mary the Loyal, does not use.
When design teams find it difficult to pitch their proposal to the management. This is where they can use actual data to support their idea and show the actual problem they are trying to solve by making the proposed changes.
Mathilde adds how personas help enhance user experience:
“From a UX perspective, user personas are crucial to prevent self-referential design as they allow to focus the efforts on the needs of the customers and help be mindful of designing experiences as if we, marketers, were the end users.
Data-driven personas are also the foundation to map out customer journeys and ensure full alignment between user needs or perceived needs and the relevancy and length of the experience they have to go through to achieve them.
Personas become extremely powerful when they are taken beyond their naturally descriptive focus and provide a predictive view on how your product or service improves your ideal customers’ lives once they’ve used it for a certain time. The predictive side of personas is a key asset to design future-proof products and experiences.”
To Wrap It Up
When you make personas a part of your strategy, you are trying to maximize value for your ideal users. Here’s how Alan Cooper explains this concept in The Inmates Are Running The Asylum:
“The broader a target you aim for, the more certainty you have of missing the bull’s-eye. If you want to achieve a product-satisfaction level of 50%, you cannot do it by making a large population 50% happy with your products. You can only accomplish it by singling out 50% of the people and striving to make them 100% happy. It goes further than that. You can create an even bigger success by targeting 10% of your market and working to make them 100% ecstatic. It might seem counterintuitive, but designing for a single user is the most effective way to satisfy a broad population.”
Ultimately, filling the gap between the product value as perceived by your ideal user and the actual value that your product provides will help you convince your users and convert them into buyers.
We are barely through the first month of the year, and the internet is already overwhelmed with advice and trend pieces on eCommerce.
In this post, however, we specifically focus on the trends that can influence eCommerce conversion rates this year. It is important to keep a watch on such trends to stay ahead of the game.
Let’s read through what eCommerce experts are saying.
On-Site Search Optimization
Effective site search is well known to increase website conversion rates. Weblink’s internal study for 2016 points out that shoppers who use internal site search convert at a 216% higher rate than those who do not.
60% of e-commerce websites do not support searches with symbols and abbreviations.
While 82% of websites have autocomplete suggestions, 36% of implementations do more harm than good.
According to Paul Rogers, 2017 will see more and more eCommerce businesses fix and optimize their on-site search in order to increase their conversion rates.
Paul Rogers, eCommerce consultant
I think an area of eCommerce that more and more merchants are starting to address, with a view to optimizing conversion metrics, is on-site search. Many of the clients I work with have upped their game in this area this year, making use of things like self-learning capabilities (via a third-party solution, supporting merchandising), natural language processing (to better understand more complex queries), product / category / attribute boosting and also promoting the use of the function.
In my experience, users who complete a search are considerably more likely to convert. I’ve seen positive results from making search boxes more prominent and more of a core navigation focus (through encouraging more complex queries like ‘search for product, SKU, brand or help’ for example). There are some really good, advanced solutions available for eCommerce stores now that can handle far more complex queries and drive more trade — I really like Klevu for the NLP and catalog enrichment side of things, but Algolia is very strong too.
Using a third-party solution is generally the best route for optimizing search, as the majority of the eCommerce platforms on the market (with the exception of enterprise systems like Oracle Commerce Cloud and IBM Websphere) have weak search technology, some of which are unable to process even the simplest queries.
Amazon Rise Continues
A survey conducted by BloomReach in 2015 revealed that approximately half of online consumers conduct their first product search on Amazon. The survey gives some interesting insights into how and why Amazon continues to dominate the American eCommerce market year after year.
In fact, the percentage of people who search for a product first on Amazon has gone up from 30% in 2012 to 44% in 2015. Check the graph for numbers on first searches made on Amazon vs. search engines vs. retailer websites.
Andrew Youderian believes that the same trend will continue well into 2017 unless other players are able to build a brand connection with customers.
Andrew Youderian, eCommerce entrepreneur
I think many merchants in 2017—especially those in the U.S.—will see continued downward pressure on their website conversion rates due to Amazon. As Amazon continues to gobble up market share, they are increasingly becoming the go-to place for consumers looking to purchase online. Unless merchants are selling something unique or have a strong brand connection with their customers, it will be difficult to win this battle, and it’s a transition that many merchants haven’t yet made.
The main trend for 2017 is the widespread maturing of the conversion rate optimization industry. It is reminiscent of the usability and analytics industries a decade or so ago. Budgets are on the rise, and companies are adopting a structured approach to optimization and hiring in-house staff to run it.
Chris backs his statement with an interesting study by eConsultancy, according to which over half of companies plan to increase their conversion optimization budgets in 2017. The whole CRO industry will attract attention from the C-suite, he adds further.
The eCommerce industry has been talking about personalization for a while, without much data or fruition. In 2017, I think personalization is going to be the key to more sales from your already existing customers –– i.e. driving up AOV and retention. With so many channels for customers to check out on (and most brands being at least multi if not omnichannel), what will make them checkout on *your* webstore? VIP programs, special discounts, and early access will help to foster loyalty and drive up repeat sales. Plus, you can use this same type of segmentation to sell B2B and wholesale without having to take every single call. 2017 will be about efficiency, and there’s nothing more efficient than getting people who have already purchased from you to buy again, and again, and again.
Throughout the day, the one device that consumes most of our time is mobile. comScore reports that digital media time in the U.S. has exploded recently – growing nearly 50 percent in the past two years, with more than three-fourths of that growth directly attributable to the mobile app.
Since mobile plays a critical role in significantly increasing reach, awareness, and engagement, it is time that eCommerce players start giving it the due attention. Look at the following graph to see how mobile and tablet usage has been increasing over time.
Google has already shown its inclination towards mobile by announcing a “mobile-first” culture. As a result, Accelerated Mobile Pages (AMP) is being talked about a lot.
Look to see more merchants adopting AMP and pushing mobile conversion rates even higher (especially as AMP gets better and more flexible).
Smarter Buy Buttons
Busy consumers are looking for smarter ways to shop. They browse on mobile to make a mental to-buy list, then compare the best deals on a desktop to make an informed purchase.
For retailers, there lies an opportunity in this challenge. With the help of buy buttons, social commerce has enabled eCommerce players to convert the buyer at the first point of contact—mobile, tablet, desktop, email, Facebook, Pinterest, or anywhere else.
The buyer journey will always evolve and as a result, retailers must, as well. Among the ways I believe eCommerce, in particular, will see change in the year ahead is by the introduction of smart buy buttons. Such buy buttons do not need as many steps to purchase as they have in the past. This will undoubtedly help conversion rates, as well as connect consumers to brands more efficiently and more quickly than ever before. Through the introduction of buy buttons via social media, email, video platforms and other digital avenues, I believe that customers will be able to skip steps they have not been able to in the past. And, as a result, retailers will benefit with stronger sales and customer engagement.
To Wrap Up
Personalization, on-site optimization, the continuous rise of Amazon, conversion rate optimization, buy buttons—eCommerce businesses can use these trends to their advantage in 2017.
Have any of the trends listed above had any impact on your business? Tell us and our readers in the comments section below.
The following is a case study about how Tough Mudder achieved a 9% session value uplift by optimizing for mobile. With the help of altima° and VWO, the company identified and rectified pain points for its mobile users to provide seamless event identification and sign-ups.
About the Company
Tough Mudder offers a series of mud and obstacle courses designed to test physical strength, stamina, and mental grit. Events aren’t timed races, but team activities that promote camaraderie and accomplishment as a community.
Tough Mudder wanted to ensure that enrolment on their mobile website was smooth and easy for their users. They partnered with altima°, a digital agency specializing in eCommerce, and VWO to ensure seamless event identification and sign-ups.
Research on Mobile Users
The agency first analyzed Tough Mudder’s Google Analytics data to identify pain points across participants’ paths to enrollment. They analyzed existing conversion rates from the Event List, which showed that interested shoppers were not able to identify the events appropriate for them. The agency began to suspect that customers on mobile might not be discovering events easily enough.
On the mobile version of the original page, the most relevant pieces of information, like the event location and date, were being pushed too far down below the fold. In addition, less relevant page elements were possibly distracting users from the mission at hand. This is how it looked:
The agency altima° decided to make the following changes in the variation:
Simplified header: Limiting the header copy to focus on the listed events. The following image shows how this looked.
List redesign: Redesigning the filter and event list to prominently feature the events themselves. The following image shows the same:
Additionally, an Urgency Message was added to encourage interested users to enroll in events nearing their deadline. See the following image to know how it was done:
From these three changes, seven different combinations were created, and a multivariate test was run using VWO. The test recorded over 2,000 event sign-ups across 4 weeks. The combinations of variations are shown below:
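The seven combinations correspond to every non-empty subset of the three changes (the control page has none of them). A quick sketch; the numbering here is illustrative, not the study's actual variation labels:

```python
from itertools import combinations

changes = ["simplified header", "list redesign", "urgency message"]

# Every non-empty subset of the three changes: 2^3 - 1 = 7 variations,
# each tested against the unchanged control page.
variations = [
    combo
    for r in range(1, len(changes) + 1)
    for combo in combinations(changes, r)
]

for i, combo in enumerate(variations, start=1):
    print(f"Variation {i}: {' + '.join(combo)}")
```

This is why multivariate tests need meaningful traffic: each added change doubles the number of combinations competing for the same visitors.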
After 4 weeks, Variation 2, which included the redesigned event list, proved to be the winning variation. This is not to say that other test variations were not successful. Variation 2 was just the MOST successful:
The winning variation produced a session value uplift of 9%! Combined with the next 2 rounds of optimization testing, altima° helped Tough Mudder earn a session value uplift of over 33%!
Why Did Variation 2 Win?
altima° prefers to let the numbers speak for themselves and not dwell on subjective observations. After all, who needs opinions when you’ve got data-backed results? altima°, however, draws the following conclusions on why Variation 2 won:
Social proof has demonstrated itself to be a worthy component of conversion optimization initiatives. These often include customer reviews and/or indications of popularity across social networks.
In fact, Tough Mudder experienced a significant lift in session value due to the following test involving the addition of Facebook icons. It’s likely that the phrase “Our Events Have Had Over 2 Million Participants Across 3 Continents” warranted its own kind of social proof.
The most ambitious testing element to design and develop was also the most successful.
It appeared that an unnecessary amount of real estate was being afforded to the location filter. This was resolved by decreasing margins above and below the filter, along with removing the stylized blue graphic.
The events themselves now carried a more prominent position relative to the fold on mobile devices. Additionally, the list itself was made to be more easily read, with a light background and nondistracting text.
The underperformance of the urgency message came as a surprise. It was believed that this element would prove to be valuable, further demonstrating the importance of testing with VWO.
Something to consider is that not every event included an urgency message. After all, not every enrolment period was soon to close. Therefore, it could be the case that some customers were less encouraged to click through and enroll in an individually relevant event if they felt that they had more time to do so later.
They might have understood that their event of interest wasn’t promoting urgency and was, therefore, not a priority. It also might have been the case that an urgency message was introduced too early in the steps to event enrolment.
How did you find this case study? There are more testing theories to discuss! Please reach out to altima° and VWO, or drop a line in the comments section below.
When it comes to overlays, everyone’s a critic — especially your prospects. Image via Shutterstock.
These days, cyberspace is about as cluttered as my closet.
And in that deep sea of endless streams and notifications and other dopamine-releasing distractions, getting your offer seen can be challenging to say the least.
Luckily, overlays can help mute some of that background noise by focusing your visitor’s attention on one (hopefully) compelling offer.
But your job doesn’t end there.
Once you get your prospect’s attention with an overlay, it’s your job to use design and copywriting best practices to keep their interest.
What are these best practices I speak of? Let’s take a look at some overlay examples we spotted in the wild for some concrete examples of what you should — and shouldn’t — do.
Be immediately clear on the value of your offer
I have to admit that when I first saw this overlay, I found the tongue-in-cheek copywriting delightful.
The headline was clever and had me nodding my head:
And while the self-aware overlay is a cute idea, you know what’s less cute? Just how quickly your prospect will look for that “x” button if the value of the offer isn’t abundantly clear.
Don’t make readers work to find out what your offer is. It’s fine to be cutesy, as long as you’re explaining what’s in it for them. See how Groove clearly explains the benefit of signing up for their newsletter?
The transparency of this offer makes it appealing, and the specificity of Groove’s current monthly revenue adds credibility.
Pro tip: When you’re pushing a subscription, your copy has to do a lot of work because there’s no immediate value. Test including a tangible offer like a free ebook.
It’s not about you!
This overlay by the Chive has personality, but not much persuasive power:
The headline – “the best newsletter in the world” – is playful (if a little cocky), but it fails to communicate what makes the newsletter great and why readers should care.
They’re so caught up in self-praise that they forget to explain what’s in it for the reader. How will signing up for this newsletter impact the reader’s life?
This overlay by GetResponse is guilty of a similar infraction, and to be frank, the tone is a little desperate:
This overlay uses “I” and “us” language without ever explaining the benefits of the offering — not to mention it never really explains what GetResponse is.
This is problematic, because the overlay appears on a page giving away an ebook only marginally related to their core offering — so it’s safe to assume that not everyone will know what GetResponse is.
I’d test an overlay that includes a compelling, customer-focused unique value proposition and a clear hero shot so people can quickly understand what they’re dealing with at a glance.
Lead with what’s in it for them
So what does customer-focused copy look like? Preneur Marketing’s overlay leads with a headline that explains in detail what the reader will get when they sign up:
So much specificity!
But Preneur Marketing doesn’t stop there. They lay the persuasion on thick using a number of trusted devices, such as a UVP, a hero shot, a list of benefits, social proof and a single conversion goal (do these elements sound familiar?).
A great thing to test would be a hero shot representative of the actual offering, like the one in this overlay by Acquire Convert:
Use overlays to counter objections
No matter which stage of the buyer journey your prospect is at, their inner monologue will include some objections to your offer. Overlays are a great way to counter them.
For example, have a look at this overlay by Gr8fires, which appeared for visitors to their ecommerce store. They knew visitors to that page were likely shopping around for the best deals and were likely already thinking, “I don’t know how much stove installation is going to cost.”
To counter that objection, Gr8fires created an overlay with an “installation calculator” that detailed the costs associated with installing their product. See how the headline mirrors the conversation in the prospect’s head?
The results of Gr8fires’ overlay campaign were incredible: a 300% increase in monthly sales leads and a 48.54% lift in sales.
This example is particularly wonderful because it accomplishes something for both the marketer and the prospect. On the prospect’s end, it delivers great value in exchange for a very small commitment (entering name and email). On the marketer’s end, it helps to educate prospects on a larger-ticket item that typically requires more convincing.
A real win-win scenario. Beautiful, isn’t it?
Don’t be a negative Nelly
If you’ve seen overlays across the web, you’ve likely noticed that “yes” button text is often paired with “no” hyperlink text close by. And you’ve likely noticed that the “no” text is often sassy.
I see this everywhere online — marketers resorting to language like:
Nobody thinks this.
Or this one:
Don’t forget this one:
Or finally, this example, which borders on offensive:
This is getting out of hand.
It should go without saying, but you should never talk down to prospects simply because they might not want your offering.
Not only does that create friction to completing the form, it can also damage your brand’s image and credibility.
This example by Narcity misses the mark for a different reason:
This overlay forces a lie in order to opt-out: “I’m already subscribed.”
This is problematic for two reasons:
If people are already subscribed, they shouldn’t be seeing this overlay in the first place.
It creates cognitive dissonance, forcing prospects to stop and think.
In short, it creates a jarring experience that doesn’t make you want to fill in the form.
So what should you be doing?
Mirror the voice in your prospect’s head
Don’t talk down to your visitors with “I can’t stand exclusive offers” opt-out copy.
Stop and reflect on what they’re likely thinking when they click that “no” button. The folks at TVLiftCabinet.com keep it classy:
When at a loss, stick with a straightforward, “No thanks, I’m not interested.”
Make it easy to say yes
There are tons of other things you can test to make your overlay offers irresistible to visitors.
Test fewer form fields to reduce perceived friction on your forms:
Adding too many form fields can have a negative impact on conversion rates.
Make visitors feel like they’re being offered something exclusive:
Whatever you do, never forget that your prospect’s attention is a valuable commodity.
And once you have it, you should respect it by doing everything you can to deliver meaningful value.
He is also the coauthor of the book E-commerce Optimization, out in January 2017. In this interview, he speaks about how his agency practices conversion optimization across the different verticals of its eCommerce clients.
After reading this post, agencies will learn the nuances of CRO when applied to different eCommerce verticals such as Fashion, Homeware, and Consumer Electronics. They’ll learn CRO strategies that will help them make a stronger case for adopting CRO for their prospective eCommerce clients.
1) How important do you think Conversion Optimization (CRO) is for eCommerce enterprises? Why?
CRO is one of the best growth strategies available to eCommerce firms. Turnover is not something you influence directly; it is the outcome of activities performed in other areas. The rate at which people buy from you, and how much they spend when they buy, are within your control. Improving these increases revenue and, ultimately, profit. That is what CRO is about.
On average, for every £92 spent on getting traffic to UK websites, only £1 is spent on improving the conversion rate. If you improve the ability of your site to generate money, your acquisition spend stretches further, as more of your visitors are converted into buyers.
2) Is there a difference in your approach to CRO for different eCommerce verticals? If yes, how?
Not really. We always follow an evidence-led approach informed by research, data analysis, and testing. That said, our implementation will not be the same on two projects as we are guided by the opportunities specific to that particular website.
As long as you follow the scientific method, which we outline in our book E-commerce Optimization (Kogan Page), the same approach can generally be applied across different verticals. Broadly speaking, it’s a system of generating and prioritizing relevant ideas, and a mechanism by which to test those ideas.
3) Which are the major eCommerce verticals that you have worked with?
We have extensive experience in the fashion retail industry, having worked with top clothing and footwear brands from different countries. Furniture and homeware are two other categories we are well-known for.
Other big verticals for us include consumer electronics, flowers, gardening, gifting, health products, and outdoor travel gear. Our entire portfolio ranges from bathroom fittings to wearable technology.
Conversion Optimization for Different eCommerce Verticals
4) Do your CRO goals (micro and macro) differ for Fashion, Homeware and Consumer electronics based eCommerce businesses?
Our philosophy is to optimize for revenue, so in almost all cases, the primary metric is Revenue Per Visitor (RPV). If it’s an eCommerce business, Conversion Rate simply doesn’t give you the complete picture.
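The difference between optimizing for Conversion Rate and for Revenue Per Visitor is easy to see with numbers. Here is a minimal sketch using hypothetical figures (not from the interview): a variation can win on conversion rate yet lose on RPV if it attracts smaller orders.

```python
# Hypothetical A/B test results: conversion rate alone vs. Revenue Per Visitor.
def summarize(visitors, orders, revenue):
    conversion_rate = orders / visitors
    rpv = revenue / visitors   # Revenue Per Visitor
    aov = revenue / orders     # Average Order Value
    return conversion_rate, rpv, aov

# Variation A: fewer orders, but bigger baskets.
cr_a, rpv_a, aov_a = summarize(visitors=10_000, orders=300, revenue=24_000)
# Variation B: more orders, but smaller baskets.
cr_b, rpv_b, aov_b = summarize(visitors=10_000, orders=330, revenue=23_100)

print(f"A: CR={cr_a:.1%}  AOV={aov_a:.0f}  RPV={rpv_a:.2f}")  # A: CR=3.0%  AOV=80  RPV=2.40
print(f"B: CR={cr_b:.1%}  AOV={aov_b:.0f}  RPV={rpv_b:.2f}")  # B: CR=3.3%  AOV=70  RPV=2.31
```

Judged on conversion rate alone, B wins; judged on RPV, A makes more money per visitor, which is the point the interviewee is making.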
Secondary metrics, aligned with micro goals, vary widely. These are typically determined by the context of the experiment rather than the vertical. For example, on a product detail page (PDP), you might want to track clicks on “add to basket” and engagement with product images; these metrics help to interpret the outcome of the test.
Sometimes we track key performance indicators (KPIs) outside of the testing environment. For example, experimenting with free delivery for a fashion client, we tracked product returns and married this data manually with test results.
5) What are the main “conversion funnels” for these different eCommerce websites? Do you see a difference in major traffic sources for the websites?
It’s not uncommon to see organic search being the major source of traffic for known brands. Often, the lion’s share of that is branded search terms, so in a way, it’s an extension of direct traffic. When a business is still establishing its brand, you’d expect to see more from paid search and other channels.
Many agencies limit optimization efforts to the website, which is a mistake. Social is an exciting area for some businesses, often rich with opportunities. Email consistently delivers good results for many of our clients and therefore, any gains in this arena can have a significant impact on overall business results.
Omni-channel, where we have a lot of experience, adds different dynamics. Not only do you see more direct traffic at the top of the funnel, but a large group of website visitors tend to browse with the intention to buy in-store. Or they may buy online, but only after visiting a store to “touch and feel” the product.
It’s important for the optimizer to take into consideration the entire journey, mapping out how the various touch points contribute to the purchase decision.
6) Which persuasion principles (scarcity, social proof, reciprocity, etc.) do you use in optimizing different eCommerce vertical websites?
We regularly use social proof, liking, authority, and scarcity. It depends entirely on the situation. We don’t actively look for ways to plug them in. Instead, we examine the data and use a principle if it seems like a relevant solution. For example, one of our clients sells plants by catalogue and online. A common sales objection was whether the flowers would look as good in the customer’s garden as they are in the product images. This prompted us to invite customers to submit pictures of products in their gardens, invoking the social proof principle.
Once we’ve decided to use a principle, we may run a few tests to find the best implementation.
If a principle is already present on the website, there could be ways of making it more persuasive. In some cases, a message can be distracting in one part of the funnel yet very effective in another area of the site.
7) Which are the common conversion killers for these different eCommerce enterprises?
Some are universal, for example, delivery. Not only do consumers generally resist paying for shipping, but long waiting periods put them off. If you charge for it, you have to treat it like a product with its own compelling value proposition.
In the fashion industry, it’s size and fitting. Will these boots fit me? How will this shirt hang on me? Is your size 8 the same as another manufacturer’s size 8? These are the common themes. Typical concerns in the furniture and homeware space are material composition, dimensions, and perspective.
Sometimes we’re surprised by what we uncover. One of our clients, a gifting site, had a great returns policy. Obviously this was messaged clearly on the website. However, we discovered that it actually turned out to be a conversion killer for them. Why? Many of the buyers were grandparents who didn’t want to contemplate the prospect of their grandchildren returning their gifts.
8) The buying cycle for each eCommerce vertical website varies. Does your CRO strategy take this into account?
Definitely. The buying cycle is something we map out carefully.
For us, it is crucial to get under the skin of the customer. We want to understand exactly what goes into a sale being made or lost.
It can also inform our approach with testing. Normally we’d run an experiment for two weeks. However, if the purchase cycle is longer than that for the majority of customers, we may extend the test duration.
9) Does seasonality have an effect on your CRO strategy for different eCommerce verticals?
Clearly, some verticals are affected by it more than others. Seasonality is partly a reflection of customer needs. It is easier to deal with if you have a solid understanding of the core needs. In some verticals like gardening, it might be a good idea to conduct user research in low and high seasons.
Some clients are loath to run tests during peak trading periods like Christmas sales. Our view is that it is essential to optimize the site for those periods, especially if they contribute to annual turnover in any significant way.
10) On which eCommerce websites do you employ upsell/cross-sell strategies mostly?
Because our primary metric is usually Revenue Per Visitor rather than Conversion Rate, a driving question for us is how to increase Average Order Value. Upselling and cross-selling strategies are, therefore, almost always on our radar. We have had great success, for example, by optimizing the presentation and algorithms of popular product recommendation tools.
Upselling and cross-selling may be thought of as “tricking” the customer into spending more money. However, we’ve seen how frustrated customers become when they have to hunt for items related to the one they are considering. Done well, cross-selling actually improves the user experience, which is then reflected in an increase in revenue.
11) What CRO strategies do you apply on product pages of different eCommerce vertical websites (for instance, on product descriptions, product images, etc.)?
On most eCommerce sites, the product detail pages, or PDPs, have the highest drop-off rates on the site.
Exit rates in the region of 90%, and even higher, are not uncommon. The PDP is where the visitor is asked to make the decision, and it is where questions about the product itself, as well as the buying process, are often most prominent.
We don’t have a checklist of tactics to apply to PDPs. Our test ideas emerge from research and analysis. If you understand the customer and what goes into the purchase decision, you’ll know what to test. Think of it as optimizing the sales conversation. It’s all about how you influence what plays out in the visitor’s mind.
Product descriptions
If visitors engage with the product description, they may be closer to making a buying decision. Often that decision is based on the image, and reading the copy serves only to rationalize the purchase. Perhaps they are checking a few details or looking to answer a question about the product. The starting point for writing good copy is knowing your customers and understanding their motivations and sales objections in relation to the product.
Product images
Images are likely to be the focus of most attention on the PDP. They are often a substitute for reading product descriptions, so make sure you have a good selection of images that answer key questions. On a lantern page, customers might wonder about the light patterns on their wall. Show them! Images appeal to System 1 thinking: fast, intuitive judgments made without deliberation. Good images help the customer imagine themselves using the item, which can be quite persuasive.
Over to You
Do you have anything to add or suggest to this interview? Share with us what you think in the comments below.
To A/A test or not is a question that invites conflicting opinions. Enterprises faced with the decision of implementing an A/B testing tool often lack the context to know whether they should A/A test. Knowing the benefits and loopholes of A/A testing can help organizations make better decisions.
In this blog post, we explore why some organizations practice A/A testing and the things they need to keep in mind while doing so. We also discuss other methods that can help enterprises decide whether or not to invest in a certain A/B testing tool.
Why Some Organizations Practice A/A Testing
A/A testing is typically done when organizations are implementing a new A/B testing tool. Running an A/A test at that time can help them with:
Checking the accuracy of an A/B Testing tool
Setting a baseline conversion rate for future A/B tests
Deciding a minimum sample size
Checking the Accuracy of an A/B Testing Tool
Organizations that are about to purchase an A/B testing tool, or want to switch to new testing software, may run an A/A test to ensure that the new software works correctly and has been set up properly.
Tomasz Mazur, an eCommerce Conversion Rate Optimization expert, explains further: “A/A testing is a good way to run a sanity check before you run an A/B test. This should be done whenever you start using a new tool or go for a new implementation. A/A testing in these cases will help check if there is any discrepancy in data, let’s say, between the number of visitors you see in your testing tool and the web analytics tool. Further, this helps ensure that your hypotheses are verified.”
In an A/A test, a web page is A/B tested against an identical variation. When there is absolutely no difference between the control and the variation, the expected result is inconclusive. However, when an A/A test declares a winner between two identical variations, there is a problem. The reasons could be the following:
The tool has not been set up properly.
The test hasn’t been conducted correctly.
The testing tool is inefficient.
Here’s what Corte Swearingen, Director, A/B Testing and Optimization at American Eagle, has to say about A/A testing: “I typically will run an A/A test when a client seems uncertain about their testing platform, or needs/wants additional proof that the platform is operating correctly. There really is no better way to do this than to take the exact same page and test it against itself with no changes whatsoever. We’re essentially tricking the platform and seeing if it catches us! The bottom line is that while I don’t run A/A tests very often, I will occasionally use it as a proof of concept for a client, and to help give them confidence that the split testing platform they are using is working as it should.”
Determining the Baseline Conversion Rate
Before running any A/B test, you need to know the conversion rate that you will be benchmarking the performance results against. This benchmark is your baseline conversion rate.
An A/A test can help you set the baseline conversion rate for your website. Let’s explain this with an example. Suppose you run an A/A test in which the control gives 303 conversions out of 10,000 visitors, and the identical variation B gives 307 conversions out of 10,000 visitors. The conversion rate for A is 3.03% and for B is 3.07%, even though there is no difference between the two variations. The range 3.03–3.07% can therefore serve as a benchmark for future A/B tests: if a later A/B test shows an uplift within this range, the result may not be significant.
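As a sanity check on the arithmetic above, a standard two-proportion z-test (a textbook sketch, not any particular tool’s implementation) confirms that a 3.03% vs. 3.07% split on 10,000 visitors per arm is statistically indistinguishable:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(303, 10_000, 307, 10_000)
print(f"z = {z:.3f}, p = {p:.3f}")  # p is far above 0.05: the A/A difference is noise
```

A p-value this large means the 3.03% vs. 3.07% gap is exactly the kind of random wobble an A/A test is expected to produce.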
Deciding a Minimum Sample Size
A/A testing can also help you get an idea of the minimum sample size you need from your website traffic. A small sample would not include sufficient traffic from all segments; you might miss a few segments that can potentially impact your test results. With a larger sample size, you have a greater chance of accounting for all the segments that influence the test.
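For a rough feel of the sample sizes involved, the textbook formula for comparing two proportions can be sketched as follows (an approximation at 95% confidence and 80% power; real testing tools may use different formulas):

```python
from math import ceil

def sample_size_per_variation(baseline_rate, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect a relative lift
    (minimum detectable effect, mde) at 95% confidence and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# On a 3% baseline, detecting a small lift takes vastly more traffic
# than detecting a large one:
print(sample_size_per_variation(0.03, 0.20))  # 20% relative lift: thousands of visitors
print(sample_size_per_variation(0.03, 0.05))  # 5% relative lift: orders of magnitude more
```

The smaller the effect you want to detect reliably, the larger the sample you need, which is exactly why an underpowered test can crown a false “winner.”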
Corte says, “A/A testing can be used to make a client understand the importance of getting enough people through a test before assuming that a variation is outperforming the original.” He explains this with an A/A testing case study done on Sales Training Program landing pages for one of his clients, Dale Carnegie. Early results from the A/A test, run on two identical landing pages, indicated that one variation was producing an 11.1% improvement over the control. The reason: the sample size tested was still too small.
After having run the A/A test for a period of 19 days and with over 22,000 visitors, the conversion rates between the two identical versions were the same.
Michal Parizek, Senior eCommerce & Optimization Specialist at Avast, shares similar thoughts. He says, “At Avast, we did a comprehensive A/A test last year. And it gave us some valuable insights and was worth doing it!” According to him, “It is always good to check the statistics before final evaluation.”
At Avast, they ran an A/A test on two main segments—customers using the free version of the product and customers using the paid version. They did so to get a comparison.
The A/A test had been live for 12 days, and they managed to get quite a lot of data. Altogether, the test involved more than 10 million users and more than 6,500 transactions.
In the “free” segment, they saw a 3% difference in the conversion rate and 4% difference in Average Order Value (AOV). In the “paid” segment, they saw a 2% difference in conversion and 1% difference in AOV.
“However, all uplifts were NOT statistically significant,” says Michal. He adds, “Particularly in the ‘free’ segment, the 7% difference in sales per user (combining the differences in the conversion rate and AOV) might look trustworthy enough to a lot of people. And that would be misleading. Given these results from the A/A test, we have decided to implement internal A/B testing guidelines/lift thresholds. For example, if the difference in the conversion rate or AOV is lower than 5%, be very suspicious that the potential lift is not driven by the difference in the design but by chance.”
Michal sums up his opinion by saying, “A/A testing helps discover how A/B testing could be misleading if they are not taken seriously. And it is also a great way to spot any bugs in the tracking and setup.”
Problems with A/A Testing
In a nutshell, the two main problems inherent in A/A testing are:
The ever-present element of randomness in any experimental setup
The requirement of a large sample size
We will consider these one by one:
Element of Randomness
As pointed out earlier in the post, checking the accuracy of a testing tool is the main reason for running an A/A test. However, what if you find a difference in conversions between the control and an identical variation? Should you always attribute it to a bug in the A/B testing tool?
The problem (for lack of a better word) with A/A testing is that there is always an element of randomness involved. In some cases, the experiment reaches statistical significance purely by chance, which means the measured difference in conversion rate between A and its identical copy is probabilistic and does not denote absolute certainty.
Tomasz Mazur explains randomness with a real-world example: “Suppose you set up two absolutely identical stores in the same vicinity. It is likely, purely by chance or randomness, that there is a difference in the results reported by the two. And it doesn’t always mean that the A/B testing platform is inefficient.”
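The identical-stores analogy can be simulated directly. In this self-contained sketch (illustrative only, not tied to any testing platform), many A/A tests are run at a 95% significance threshold, and roughly 5% of them declare a “winner” by chance alone:

```python
import random
from math import sqrt, erf

def aa_test_significant(n=5_000, rate=0.03, alpha=0.05):
    """Simulate one A/A test: both arms share the same true conversion rate."""
    conv_a = sum(random.random() < rate for _ in range(n))
    conv_b = sum(random.random() < rate for _ in range(n))
    p_a, p_b = conv_a / n, conv_b / n
    p_pool = (conv_a + conv_b) / (2 * n)
    if p_pool in (0, 1):
        return False  # degenerate sample, cannot compute a z-score
    se = sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = abs(p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return p_value < alpha

random.seed(42)
trials = 400
false_positives = sum(aa_test_significant() for _ in range(trials))
print(f"{false_positives / trials:.1%} of identical A/A tests came out 'significant'")
# Expect a figure in the neighborhood of 5%, purely by chance.
```

That handful of false “winners” is not a broken tool; it is the 5% error rate baked into the significance threshold itself.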
Requirement of a Large Sample Size
Following the example/case study provided by Corte above, one problem with A/A testing is that it can be time-consuming. When testing identical versions, you need a very large sample size to establish that there is no meaningful difference between them, and collecting that sample takes a long time.
As explained in one of ConversionXL’s posts, “The amount of sample and data you need to prove that there is no significant bias is huge by comparison with an A/B test. How many people would you need in a blind taste testing of Coca-Cola (against Coca-Cola) to conclude that people liked both equally? 500 people, 5000 people?” Experts at ConversionXL explain that the entire purpose of an optimization program is to reduce wastage of time, resources, and money. They believe that even though running an A/A test is not wrong, there are better ways to use your testing time. In the post, they mention, “The volume of tests you start is important but even more so is how many you *finish* every month and from how many of those you *learn* something useful from. Running A/A tests can eat into the ‘real’ testing time.”
VWO’s Bayesian Approach and A/A Testing
VWO uses a Bayesian-based statistical engine for A/B testing. This allows VWO to deliver smart decisions: it tells you which variation will minimize your potential loss.
Chris Stucchio, Director of Data Science at VWO, shares his viewpoint on how A/A testing is different in VWO than typical frequentist A/B testing tools.
Most A/B testing tools are seeking truth. When running an A/A test in a frequentist tool, an erroneous “winner” should only be reported 5% of the time. In contrast, VWO’s SmartStats is attempting to make a smart business decision. We report a smart decision when we are confident that a particular variation is not worse than all the other variations, that is, we are saying “you’ll leave very little money on the table if you choose this variation now.” In an A/A test, this condition is always satisfied—you’ve got nothing to lose by stopping the test now.
The correct way to evaluate a Bayesian test is to check whether the credible interval for lift contains 0% (the true value).
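That check can be sketched with Beta posteriors and Monte Carlo sampling. This is an illustrative approximation of the Bayesian approach, not VWO’s actual SmartStats implementation:

```python
import random

def lift_credible_interval(conv_a, n_a, conv_b, n_b, draws=20_000, cred=0.95):
    """Sample relative lift (B vs. A) from Beta(1 + conversions, 1 + non-conversions)
    posteriors and return the central credible interval."""
    lifts = []
    for _ in range(draws):
        p_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        lifts.append(p_b / p_a - 1)
    lifts.sort()
    lo = lifts[int(draws * (1 - cred) / 2)]
    hi = lifts[int(draws * (1 + cred) / 2) - 1]
    return lo, hi

random.seed(7)
lo, hi = lift_credible_interval(303, 10_000, 307, 10_000)
print(f"95% credible interval for lift: [{lo:.1%}, {hi:.1%}]")
# For an A/A test, the interval should straddle 0%.
```

Using the 303 vs. 307 figures from the baseline example earlier, the interval comfortably contains 0%, which is exactly what a healthy A/A test should show.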
He also says that the simplest and most likely reason for an A/A test to report a winner is pure chance.
Other Methods and Alternatives to A/A Testing
A few experts believe that A/A testing is inefficient, as it consumes time that could otherwise be spent running actual A/B tests. Others say that it is essential to run a health check on your A/B testing tool. That said, A/A testing alone is not sufficient to establish whether one testing tool should be preferred over another. When making a critical business decision such as buying a new tool/software application for A/B testing, there are a number of other things to consider.
Corte points out that though there is no replacement or alternative to A/A testing, there are other things that must be taken into account when a new tool is being implemented. These are listed as follows:
Will the testing platform integrate with my web analytics program so that I can further slice and dice the test data for additional insight?
Will the tool let me isolate specific audience segments that are important to my business and just test those audience segments?
Will the tool allow me to immediately allocate 100% of my traffic to a winning variation? This feature can be an important one for more complicated radical redesign tests where standardizing on the variation may take some time. If your testing tool allows immediate 100% allocation to the winning variation, you can reap the benefits of the improvement while the page is built permanently in your CMS.
Does the testing platform provide ways to collect both quantitative and qualitative information about site visitors that can be used for formulating additional test ideas? These would be tools like heatmaps, scrollmaps, visitor recordings, exit surveys, page-level surveys, and visual form funnels. If the testing platform does not have these integrated, does it allow integration with third-party tools for these services?
Does the tool allow for personalization? If test results are segmented and it is discovered that one type of content works best for one segment and another type works better for a second segment, does the tool allow you to permanently serve these different experiences to the different audience segments?
That said, some experts opt for alternatives to A/A testing, such as triangulating data. This means you have two sets of performance data to cross-check against each other: use one analytics platform as the base against which to compare all other outcomes, to check whether something is wrong or needs fixing.
And then there is the argument: why just A/A test when you can get more meaningful insights by running an A/A/B test? Doing this, you can still compare two identical versions while also testing some changes in the B variant.
When businesses face the decision of implementing a new testing software application, they need to run a thorough check on the tool. A/A testing is one method that some organizations use to check the accuracy of the tool. Along with the personalization and segmentation capabilities and other pointers mentioned in this post, this technique can help determine whether the software application is a good fit.
Did you find the post insightful? Drop us a line in the comments section with your feedback.