
Your mobile website optimization guide (or, how to stop pissing off your mobile users)

Reading Time: 15 minutes

One lazy Sunday evening, I decided to order Thai delivery for dinner. It was a Green-Curry-and-Crispy-Wonton kind of night.

A quick google search from my iPhone turned up an ad for a food delivery app. In that moment, I wanted to order food fast, without having to dial a phone number or speak to a human. So, I clicked.

From the ad, I was taken to the company’s mobile website. There was a call-to-action to “Get the App” below the fold, but I didn’t want to download a whole app for this one meal. I would just order from the mobile site.

Dun, dun, duuuun.

Over the next minute, I had one of the most frustrating ordering experiences of my life. Unlabeled hamburger menus, the inability to edit my order, and an overall lack of guidance through the ordering process led me to believe I would never be able to adjust my order from ‘Chicken Green Curry’ to ‘Prawn Green Curry’.

After 60 seconds of struggling, I gave up, utterly defeated.

I know this wasn’t a life-altering tragedy, but it sure was an awful mobile experience. And I bet you have had a similar experience in the last 24 hours.

Let’s think about this for a minute:

  1. This company paid good money for my click
  2. I was ready to order online: I was their customer to lose
  3. I struggled for about 30 seconds longer than most mobile users would have
  4. I gave up and got a mediocre burrito from the Mexican place across the street.

Not only was I frustrated, but I didn’t get my tasty Thai. The experience left a truly bitter taste in my mouth.

Why is mobile website optimization important?

In 2017, every marketer ‘knows’ the importance of the mobile shopping experience. Americans spend more time on mobile devices than on any other device. But we are still failing to meet our users where they are on mobile.

Americans spend 54% of online time on mobile devices. Source: KPCB.

For most of us, it is becoming more and more important to provide a seamless mobile experience. But here’s where it gets a little tricky…

“Conversion optimization”, and the term “optimization” in general, often imply improving conversion rates. But a seamless mobile experience does not necessarily mean a high-converting mobile experience. It means one that meets your user’s needs and propels them along the buyer journey.

I am sure there are improvements you can test on your mobile experience that will lift your mobile conversion rates, but you shouldn’t hyper-focus on a single metric. Instead, keep in mind that mobile may just be a step within your user’s journey to purchase.

So, let’s get started! First, I’ll delve into your user’s mobile mindset, and look at how to optimize your mobile experience. For real.

You ready?

What’s different about mobile?

First things first: let’s acknowledge that your user is the same human being whether they are shopping on a mobile device, a desktop computer, a laptop, or in-store. Agreed?

So, what’s different about mobile? Well, back in 2013, Chris Goward said, “Mobile is a state of being, a context, a verb, not a device. When your users are on mobile, they are in a different context, a different environment, with different needs.”

Your user is the same person when she is shopping on her iPhone, but she is in a different context. She may be in a store comparing product reviews on her phone, or she may be on the go looking for a good cup of coffee, or she may be trying to order Thai delivery from her couch.

Your user is the same person on mobile, but in a different context, with different needs.

This is why many mobile optimization experts recommend having a mobile website versus using responsive design.

Responsive design is not an optimization strategy. We should stop treating mobile visitors as ‘mini-desktop visitors’. People don’t use mobile devices instead of desktop devices, they use it in addition to desktop in a whole different way.

– Talia Wolf, Founder & Chief Optimizer at GetUplift

Step one, then, is to understand who your target customer is, and what motivates them to act in any context. This should inform all of your marketing and the creation of your value proposition.

(If you don’t have a clear picture of your target customer, you should re-focus and tackle that question first.)

Step two is to understand how your user’s mobile context affects their existing motivation, and how to facilitate their needs on mobile to the best of your ability.

Understanding the mobile context

To understand the mobile context, let’s start with some stats and work backwards.

  • Americans spend more than half (54%) of their online time on mobile devices (Source: KPCB, 2016)
  • Mobile accounts for 60% of time spent shopping online, but only 16% of all retail dollars spent (Source: ComScore, 2015)

Insight: Americans are spending more than half of their online time on their mobile devices, but there is a huge gap between time spent ‘shopping’ online, and actually buying.

  • 29% of smartphone users will immediately switch to another site or app if the original site doesn’t satisfy their needs (Source: Google, 2015)
  • Of those, 70% switch because of lagging load times and 67% switch because it takes too many steps to purchase or get desired information (Source: Google, 2015)

Insight: Mobile users are hypersensitive to slow load times, and too many obstacles.

So, why the heck are our expectations for immediate gratification so high on mobile? I have a few theories.

We’re reward-hungry

Mobile devices provide constant access to the internet, which means a constant expectation for reward.

“The fact that we don’t know what we’ll find when we check our email, or visit our favorite social site, creates excitement and anticipation. This leads to a small burst of pleasure chemicals in our brains, which drives us to use our phones more and more.” – TIME, “You asked: Am I addicted to my phone?”

If non-stop access has us primed to expect non-stop reward, is it possible that having a negative mobile experience is even more detrimental to our motivation than a negative experience in another context?

When you tap into your Facebook app and see three new notifications, you get a burst of pleasure. And you do this over, and over, and over again.

So, when you tap into your Chrome browser and land on a mobile website that is difficult to navigate, it makes sense that you would be extra annoyed. (No burst of fun reward chemicals!)

A mobile device is a personal device

Another facet to mobile that we rarely discuss is the fact that mobile devices are personal devices. Because our smartphones and wearables are with us almost constantly, they often feel very intimate.

In fact, our smartphones are almost like another limb. According to research from dscout, the average cellphone user touches his or her phone 2,617 times per day. Our thumbprints are built into them, for goodness’ sake.

Just think about your instinctive reaction when someone grabs your phone and starts scrolling through your pictures…

It is possible, then, that our expectations are higher on mobile because the device itself feels like an extension of us. Any experience you have on mobile should speak to your personal situation. And if the experience is cumbersome or difficult, it may feel particularly dissonant because it’s happening on your mobile device.

User expectations on mobile are extremely high. And while you can argue that mobile apps are doing a great job of meeting those expectations, the mobile web is failing.

If yours is one of the millions of organizations without a mobile app, your mobile website has got to work harder. Because a negative experience with your brand on mobile may have a stronger effect than you can anticipate.

Even if you have a mobile app, you should recognize that not everyone is going to use it. You can’t completely disregard your mobile website. (As illustrated by my extremely negative experience trying to order food.)

You need to think about how to meet your users where they are in the buyer journey on your mobile website:

  1. What are your users actually doing on mobile?
  2. Are they just seeking information before purchasing from a computer?
  3. Are they seeking information on your mobile site while in your actual store?

The great thing about optimization is that you can test to pick off low-hanging fruit, while you are investigating more impactful questions like those above. For instance, while you are gathering data about how your users are using your mobile site, you can test usability improvements.

Usability on mobile websites

If you are looking to get a few quick wins to prove the importance of a mobile optimization program, usability is a good place to begin.

The mobile web presents unique usability challenges for marketers. And given your users’ ridiculously high expectations, your mobile experience must address these challenges.

mobile website optimization - usability
This image represents just a few mobile usability best practices.

Below are four of the core mobile limitations, along with recommendations from the WiderFunnel Strategy team around how to address (and test) them.

Note: For this section, I relied heavily on research from the Nielsen Norman Group. For more details, click here.

1. The small screen struggle

No surprise here. Compared to desktop and laptop screens, even the biggest smartphone screen is smaller, which means it displays less content.

“The content displayed above the fold on a 30-inch monitor requires 5 screenfuls on a small 4-inch screen. Thus mobile users must (1) incur a higher interaction cost in order to access the same amount of information; (2) rely on their short-term memory to refer to information that is not visible on the screen.” – Nielsen Norman Group, “Mobile User Experience: Limitations and Strengths”

Strategist recommendations:

Consider persistent navigation and calls-to-action. Because of the smaller screen size, your users often need to do a lot of scrolling. If your navigation and main call-to-action aren’t persistent, you are asking your users to scroll down for information, and scroll back up for relevant links.

Note: Anything persistent takes up screen space as well. Make sure to test this idea before implementing it to make sure you aren’t stealing too much focus from other important elements on your page.
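
To illustrate, here is a minimal, hypothetical sketch of how a variation might make the main navigation persistent by injecting a CSS rule with JavaScript, the way most testing tools apply variation code. The `.site-nav` selector is a placeholder for your own markup.

```javascript
// Hypothetical sketch: keep the main navigation visible while the user scrolls
// by injecting a CSS rule from variation code. ".site-nav" is a placeholder.
var style = document.createElement('style');
style.textContent =
  '.site-nav {' +
  '  position: sticky;' + // stays pinned to the top of the viewport
  '  top: 0;' +
  '  z-index: 1000;' +
  '}';
document.head.appendChild(style);
```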

2. The touchy touchscreen

Two main issues with the touchscreen (an almost universal trait of today’s mobile devices) are typing and target size.

Typing on a soft keyboard, like the one on your user’s iPhone, requires them to constantly divide their attention between what they are typing, and the keypad area. Not to mention the small keypad and crowded keys…

Target size refers to a clickable target, which needs to be a lot larger on a touchscreen than it does when your user has a mouse.

So, you need to make space for larger targets (bigger call-to-action buttons) on a smaller screen.

Strategist recommendations:

Test increasing the size of your clickable elements. Google provides recommendations for target sizing:

You should ensure that the most important tap targets on your site—the ones users will be using the most often—are large enough to be easy to press, at least 48 CSS pixels tall/wide (assuming you have configured your viewport properly).

Less frequently-used links can be smaller, but should still have spacing between them and other links, so that a 10mm finger pad would not accidentally press both links at once.
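
As a rough illustration, a variation could apply that guidance with a few injected CSS rules. This is only a sketch; the `.product-actions a` selector and the exact values are placeholders for whichever tap targets you are testing.

```javascript
// Hypothetical sketch: enforce a minimum tap target size and spacing on the
// links a variation targets. Selector and values are placeholders to adapt.
var style = document.createElement('style');
style.textContent =
  '.product-actions a {' +
  '  display: inline-block;' +
  '  min-width: 48px;' +  // Google's suggested minimum in CSS pixels
  '  min-height: 48px;' +
  '  margin: 8px;' +      // spacing so neighbouring links are not tapped together
  '}';
document.head.appendChild(style);
```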

You may also want to test improving the clarity around what is clickable and what isn’t. This can be achieved through styling, and is important for reducing ‘exploratory clicking’.

When a user has to click an element to 1) determine whether or not it is clickable, and 2) determine where it will lead, this eats away at their finite motivation.

Another simple tweak: Test your call-to-action placement. Does it match with the motion range of a user’s thumb?

3. Mobile shopping experience, interrupted

As the term mobile implies, mobile devices are portable. And because we can use ‘em in many settings, we are more likely to be interrupted.

“As a result, attention on mobile is often fragmented and sessions on mobile devices are short. In fact, the average session duration is 72 seconds […] versus the average desktop session of 150 seconds.” – Nielsen Norman Group

Strategist recommendations:

You should design your mobile experience for interruptions, prioritize essential information, and simplify tasks and interactions. This goes back to meeting your users where they are within the buyer journey.

According to research by SessionM (published in 2015), 90% of smartphone users surveyed used their phones while shopping in a physical store to 1) compare product prices, 2) look up product information, and 3) check product reviews online.

You should test adjusting your page length and messaging hierarchy to facilitate your user’s main goals. This may be browsing and information-seeking versus purchasing.

4. One window at a time

As I’m writing this post, I have 11 tabs open in Google Chrome, split between two screens. If I click on a link that takes me to a new website or page, it’s no big deal.

But on mobile, your user is most likely viewing one window at a time. They can’t split their screen to look at two windows simultaneously, so you shouldn’t ask them to. Mobile tasks should be easy to complete in one app or on one website.

The more your user has to jump from page to page, the more they have to rely on their memory. This increases cognitive load, and decreases the likelihood that they will complete an action.

Strategist recommendations:

Your navigation should be easy to find and it should contain links to your most relevant and important content. This way, if your user has to travel to a new page to access specific content, they can find their way back to other important pages quickly and easily.

In e-commerce, we often see people “pogo-sticking”—jumping from one page to another continuously—because they feel that they need to navigate to another page to confirm that the information they have provided is correct.

A great solution is to ensure that your users can view key information that they may want to confirm (prices / products / address) on any page. This way, they won’t have to jump around your website and remember these key pieces of information.

Implementing mobile website optimization

As I’m sure you’ve noticed by now, the phrase “you should test” is peppered throughout this post, because understanding the mobile context and reviewing usability challenges and recommendations are only the first steps.

If you can, you should test any recommendation made in this post. Which brings us to mobile website optimization. At WiderFunnel, we approach mobile optimization just like we would desktop optimization: with process.

You should evaluate and prioritize mobile web optimization in the context of all of your marketing. If you can achieve greater Return on Investment by optimizing your desktop experience (or another element of your marketing), you should start there.

But assuming your mobile website ranks high within your priorities, you should start examining it from your user’s perspective. The WiderFunnel team uses the LIFT Model framework to identify problem areas.

The LIFT Model allows us to identify barriers to conversion, using the six factors of Value Proposition, Clarity, Relevance, Anxiety, Distraction, and Urgency. For more on the LIFT Model, check out this blog post.

A LIFT illustration

I asked the WiderFunnel Strategy team to do a LIFT analysis of the food delivery website that gave me so much grief that Sunday night. Here are some of the potential barriers they identified on the checkout page alone:

Mobile website LIFT analysis
This wireframe is based on the food delivery app’s checkout page. Each of the numbered LIFT points corresponds with the list below.
  1. Relevance: There is valuable page real estate dedicated to changing the language, when a smartphone will likely detect your language on its own.
  2. Anxiety: There are only 3 options available in the navigation: Log In, Sign Up, and Help. None of these are helpful when a user is trying to navigate between key pages.
  3. Clarity: Placing the call-to-action at the top of the page creates disjointed eyeflow. The user must scan the page from top to bottom to ensure their order is correct.
  4. Clarity: The “Order Now” call-to-action and “Allergy & dietary information links” are very close together. Users may accidentally tap one, when they want to tap the other.
  5. Anxiety: There is no confirmation of the delivery address.
  6. Anxiety: There is no way to edit an order within the checkout. A user has to delete items, return to the menu and add new items.
  7. Clarity: Font size is very small, making the content difficult to read.
  8. Clarity: The “Cash” and “Card” icons have no context. Is a user supposed to select one, or are these just the payment options available?
  9. Distraction: The dropdown menus in the footer include many links that might distract a user from completing their order.

Needless to say, my frustrations were confirmed. The WiderFunnel team ran into the same obstacles I had run into, and identified dozens of barriers that I hadn’t.

But what does this mean for you?

When you are first analyzing your mobile experience, you should try to step into your user’s shoes and actually use your experience. Give your team a task and a goal, and walk through the experience using a framework like LIFT. This will allow you to identify usability issues within your user’s mobile context.

Every LIFT point is a potential test idea that you can feed into your optimization program.

Case study examples

This wouldn’t be a WiderFunnel blog post without some case study examples.

This is where we put ‘best mobile practices’ to the test. Because the smallest usability tweak may make perfect sense to you, but be off-putting to your users.

In the following three examples, we put our recommendations to the test.

Mobile navigation optimization

In mobile design in particular, we tend to assume our users understand ‘universal’ symbols.

Aritzia Hamburger Menu
The ‘Hamburger Menu’ is a fixture on mobile websites. But does that mean it’s a universally understood symbol?

But, that isn’t always the case. And it is certainly worth testing to understand how you can make the navigation experience (often a huge pain point on mobile) easier.

You can’t just expect your users to know things. You have to make it as clear as possible. The more you ask your user to guess, the more frustrated they will become.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

This example comes from an e-commerce client that sells artwork. In this experiment, we tested two variations against the original.

In the first, we increased font and icon size within the navigation and menu drop-down. This was a usability update meant to address the small, difficult to navigate menu. Remember the conversation about target size? We wanted to tackle the low-hanging fruit first.

With variation B, we dug a little deeper into the behavior of this client’s specific users.

Qualitative Hotjar recordings had shown that users were trying to navigate the mobile website using the homepage as a homebase. But this site actually has a powerful search functionality, and it is much easier to navigate using search. Of course, the search option was buried in the hamburger menu…

So, in the second variation (built on variation A), we removed Search from the menu and added it right into the main Nav.

Mobile website optimization - navigation
Wireframes of the control navigation versus our variations.

Results

Both variations beat the control. Variation A led to a 2.7% increase in transactions, and a 2.4% increase in revenue. Variation B decreased clicks to the menu icon by 24%, increased transactions by 8.1%, and lifted revenue by 9.5%.

Never underestimate the power of helping your users find their way on mobile. But be wary! Search worked for this client’s users, but it is not always the answer, particularly if what you are selling is complex, and your users need more guidance through the funnel.

Mobile product page optimization

Let’s look at another e-commerce example. This client is a large sporting goods store, and this experiment focused on their product detail pages.

On the original page, our Strategists noted a worst mobile practice: The buttons were small and arranged closely together, making them difficult to click.

There were also several optimization blunders:

  1. Two calls-to-action were given equal prominence: “Find in store” and “+ Add to cart”
  2. “Add to wishlist” was also competing with “Add to cart”
  3. Social icons were placed near the call-to-action, which could be distracting

We had evidence from an experiment on desktop that removing these distractions, and focusing on a single call-to-action, would increase transactions. (In that experiment, we saw transactions increase by 6.56%).

So, we tested addressing these issues in two variations.

In the first, we de-prioritized competing calls-to-action, and increased the ‘Size’ and ‘Qty’ fields. In the second, we wanted to address usability issues, making the color options, size options, and quantity field bigger and easier to click.

mobile website optimization - product page variations
The control page versus our variations.

Results

Both of our variations lost to the Control. I know what you’re thinking…what?!

Let’s dig deeper.

Looking at the numbers, users responded in the way we expected, with significant increases to the actions we wanted, and a significant reduction in the ones we did not.

Visits to “Reviews”, “Size”, “Quantity”, “Add to Cart” and the Cart page all increased. Visits to “Find in Store” decreased.

And yet, although the variations were more successful at moving users through to the next step, there was not a matching increase in motivation to actually complete a transaction.

It is hard to say for sure why this result happened without follow-up testing. However, it is possible that this client’s users have different intentions on mobile: Browsing and seeking product information vs. actually buying. Removing the “Find in Store” CTA may have caused anxiety.

This example brings us back to the mobile context. If an experiment wins within a desktop experience, this certainly doesn’t guarantee it will win on mobile.

I was shopping for shoes the other day, and was actually browsing the store’s mobile site while I was standing in the store. I was looking for product reviews. In that scenario, I was information-seeking on my phone, with every intention to buy…just not from my phone.

Are you paying attention to how your unique users use your mobile experience? It may be worthwhile to take the emphasis off of ‘increasing conversions on mobile’ in favor of researching user behavior on mobile, and providing your users with the mobile experience that best suits their needs.

Note: When you get a test result that contradicts usability best practices, it is important that you look carefully at your experiment design and secondary metrics. In this case, we have a potential theory, but would not recommend any large-scale changes without re-validating the result.

Mobile checkout optimization

This experiment was focused on one WiderFunnel client’s mobile checkout page. It was an insight-driving experiment, meaning the focus was on gathering insights about user behavior rather than on increasing conversion rates or revenue.

Evidence from this client’s business context suggested that users on mobile may prefer alternative payment methods, like Apple Pay and Google Wallet, to the standard credit card and PayPal options.

To make things even more interesting, this client wanted to determine the desire for alternative payment methods before implementing them.

The hypothesis: By adding alternative payment methods to the checkout page in an unobtrusive way, we can determine by the percent of clicks which new payment methods are most sought after by users.

We tested two variations against the Control.

In variation A, we pulled the credit card fields and call-to-action higher on the page, and added four alternative payment methods just below the CTA: PayPal, Apple Pay, Amazon Payments, and Google Wallet.

If a user clicked on one of the four alternative payment methods, they would see a message:

“Google Wallet coming soon!
We apologize for any inconvenience. Please choose an available deposit method.
Credit Card | PayPal”

In variation B, we flipped the order. We featured the alternative payment methods above the credit card fields. The focus was on increasing engagement with the payment options to gain better insights about user preference.

mobile website optimization - checkout page
The control against variations testing alternative payment methods.

Note: For this experiment, iOS devices did not display the Google Wallet option, and Android devices did not display Apple Pay.
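
Mechanically, this kind of test (sometimes called a “painted door”) can be simple to build. Below is a hedged sketch of how placeholder payment buttons might record interest and reveal the “coming soon” message; `trackEvent` stands in for whatever click-tracking your testing tool provides, and all selectors are hypothetical rather than taken from the actual experiment.

```javascript
// Hypothetical sketch of a "painted door" payment option. Clicking a
// not-yet-implemented method records the interest and reveals the
// "coming soon" message instead of starting a real payment flow.
// trackEvent() is a stand-in for your analytics or testing tool's API.
document.querySelectorAll('.alt-payment-option').forEach(function (button) {
  button.addEventListener('click', function (event) {
    event.preventDefault();
    var method = button.getAttribute('data-method'); // e.g. "apple-pay"
    trackEvent('alternative-payment-click', { method: method });
    document.getElementById('coming-soon-message').style.display = 'block';
  });
});
```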

Results

On iOS devices, Apple Pay received 18% of clicks, and Amazon Pay received 12%. On Android devices, Google Wallet received 17% of clicks, and Amazon Pay also received 17%.

The client can use these insights to build the best experience for mobile users, offering Apple Pay and Google Wallet as alternative payment methods rather than PayPal or Amazon Pay.

Unexpectedly, both variations also increased transactions! Variation A led to an 11.3% increase in transactions, and variation B led to an 8.5% increase.

Because your user’s motivation is already limited on mobile, you should try to create an experience with the fewest possible steps.

You can ask someone to grab their wallet, decipher their credit card number, expiration date, and CVV code, and type it all into a small form field. Or, you can test leveraging the digital payment options that may already be integrated with their mobile devices.

The future of mobile website optimization

Imagine you are in your favorite outdoor goods store, and you are ready to buy a new tent.

You are standing in front of piles of tents: 2-person, 3-person, 4-person tents; 3-season and extreme-weather tents; affordable and pricey tents; light-weight and heavier tents…

You pull out your smartphone, and navigate to the store’s mobile website. You are looking for more in-depth product descriptions and user reviews to help you make your decision.

A few seconds later, a store employee asks if they can help you out. They seem to know exactly what you are searching for, and they help you choose the right tent for your needs within minutes.

Imagine that while you were browsing products on your phone, that store employee received a notification that you are 1) in the store, 2) looking at product descriptions for tent A and tent B, and 3) standing by the tents.

Mobile optimization in the modern era is not about increasing conversions on your mobile website. It is about providing a seamless user experience. In the scenario above, the in-store experience and the mobile experience are inter-connected. One informs the other. And a transaction happens because of each touch point.

Mobile experiences cannot live in a vacuum. Today’s buyer switches seamlessly between devices [and] your optimization efforts must reflect that.

– Yonny Zafrani, Mobile Product Manager, Dynamic Yield

We wear the internet on our wrists. We communicate via chat bots and messaging apps. We spend our leisure time on our phones: streaming, gaming, reading, sharing.

And while I’m not encouraging you to shift your optimization efforts entirely to mobile, you must consider the role mobile plays in your customers’ lives. The online experience is mobile. And your mobile experience should be an intentional step within the buyer journey.

What does your ideal mobile shopping experience look like? Where do you think mobile websites can improve? Do you agree or disagree with the ideas in this post? Share your thoughts in the comments section below!


Eating Our Own Dogfood – How To Optimize For Revenue As A SaaS Business

It wouldn’t be an exaggeration to say that we at VWO are very passionate about experimentation.

Not only have we built a product around A/B testing and conversion optimization, but we are always looking for ways to run experiments on our website.

Recently, we got our entire team to actively research and contribute ideas for optimization on our website and ran multiple tests. This post is a narrative of what we did next.

Who Is This Post for?

This post will help SaaS growth hackers, marketers, and optimization experts predict the business value of a test.

The aim of this post is not only to share the tests we ran on our website, but also to introduce a revenue-based framework that predicts the business impact of an A/B test and lets you prioritize tests on that basis.

Revenue-Based Optimization

Need for a Model

After we prompted our team to suggest ideas for testing, we had more than 30 hypotheses looking at us, but no clear way of knowing which of these to take up first. Of course, there is a range of prioritization frameworks available, but we particularly wanted one that looked directly at impact on revenue.

This framework helped us project the potential revenue impact of each test. Here’s what we did:

Step 1

We decided to identify high-impact pages and winnow out the pages that were not as important for our business, that is, pages where no goal conversions take place. We looked at Google Analytics for pages with the:

  • Highest Amount of Traffic
    (We used “New Users” to nullify visits by existing customers.)
  • Highest Number of Goal Conversions
    (Goal conversion, which contributes to your overall business goal, is the main goal for your website. In our case, this meant all qualified lead-generating forms. A free trial or request a demo qualifies a visitor as a lead with a genuine interest in our product; or, as the industry popularly refers to it, a Marketing Qualified Lead.)

This gave us a list of pages that were high-value in terms of either traffic generation or last touch before conversion.

We identified the following key pages:

  • Free-trial page
  • Request-a-demo page
  • Homepage
  • Pricing page
  • Features page
  • Blog pages (All)
  • Contact-us page

Step 2

Our main objective was to project the estimated increase in revenue due to a particular test. If your test increases the conversion rate by, say, 20%, what would this mean for your business and, in turn, your revenue?

This is what our marketing funnel looked like:

VWO Marketing Funnel

Note: You should use data from the most recent 3–6 months, and the average (mean) of each step, so that the model accurately reflects what to expect from your testing and stays relevant to your business.

For each of the “Key Pages” we identified in the first step, we also dug out the corresponding numbers at each funnel stage. We’ve explained each stage of the funnel and how it is calculated:

a) Key Page Traffic: The total number of pageviews per Key Page (new users in our case). You can find the data in Google Analytics.

b) Total Conversions: The total number of leads generated from each particular page. If there is an additional qualification your company follows, source this data from your preferred CRM or Marketing Automation software. For example, at VWO, we use Clearbit to qualify our leads in Salesforce.

c) Opportunities: The total number of opportunities generated for your sales team. This data will be available in your CRM; make sure to count qualified opportunities only.

d) Customers: The total number of new customers created in a month.

e) MRR (New): New monthly recurring revenue, that is, revenue booked on a monthly basis; you can use this to estimate annual recurring revenue (ARR) as well.

Step 3

Now that we had all the numbers needed in our arsenal, I decided to calculate some more internal benchmarks. This gave us the performance of our marketing and/or sales funnel.

  1. We computed the conversion rate of a particular page, using the following formula:
    Existing conversion rate = (Total Conversions ÷ Key Page Traffic) × 100, represented as %
  2. The conversion rate of your leads into opportunities:
    (Opportunities ÷ Total Conversions) × 100, represented as %
  3. The conversion rate of opportunities into customers:
    (Customers ÷ Opportunities) × 100, represented as %
  4. The average revenue per user, or ARPU:
    Total MRR ÷ Total number of paying customers

Now all you have to do is input these numbers into this template.
Revenue-based Testing Model
The model uses all of that data and projects how much revenue increase or decrease you can estimate based on your test results. This estimate can give you a good idea of where to begin or prioritize your testing.
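
To make the arithmetic concrete, here is a minimal sketch of the projection in code. The funnel numbers are illustrative placeholders, not VWO’s actual data, and the function only captures the basic version of the model (without the Step 4 cohort split).

```javascript
// Illustrative sketch of the revenue-based model. All numbers are placeholders.
var funnel = {
  keyPageTraffic: 10000, // new-user pageviews on the key page per month
  totalConversions: 400, // leads generated from that page
  opportunities: 80,     // qualified opportunities created
  customers: 20,         // customers won
  newMRR: 10000          // new monthly recurring revenue ($)
};

// Step 3: internal benchmarks
var pageConversionRate    = funnel.totalConversions / funnel.keyPageTraffic; // 4%
var leadToOpportunity     = funnel.opportunities / funnel.totalConversions;  // 20%
var opportunityToCustomer = funnel.customers / funnel.opportunities;         // 25%
var arpu                  = funnel.newMRR / funnel.customers;                // $500

// Project the change in new MRR for a relative lift in the page's conversion rate.
function projectedMRRChange(relativeLift) {
  var projectedLeads = funnel.keyPageTraffic * pageConversionRate * (1 + relativeLift);
  var projectedCustomers = projectedLeads * leadToOpportunity * opportunityToCustomer;
  return (projectedCustomers - funnel.customers) * arpu;
}

// e.g. a 20% lift in conversion rate projects to $2,000 in additional new MRR
console.log(projectedMRRChange(0.20));
```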

Step 4 (Optional)

This is where it may get tricky. At VWO, we sell both Enterprise plans and Standard plans. So to be fair, we must estimate each cohort with separate data and individual conversion rates.

For example, Opportunity creation % for an Enterprise plan may be lower, but a Standard plan is easier to convert. You may want to decide what type of plan you want to focus on.

We, for instance, used website traffic and Alexa rank as the benchmark for lead qualification. We attributed more value to the leads that came in through key pages and prioritized them.

This led us to the next step: determining the qualification rate of these high-value leads. This rate may fall in the range of 30–50%, depending on your definition.

It was interesting to note that each page had a different qualification rate. For example, we get better quality leads from our Request a demo page than we do from our free trial or blog post page.

Tests Conducted:

After we had the model in place, we played around with the increase or decrease in our conversion rates. This was to identify our best optimization opportunities.

The free trial pages and the home page were among the high-priority pages in terms of impact on revenue. (Unfortunately, I can’t share the exact numbers with you.) We first looked at the hypotheses on the free trial page:

Test 1 – Free Trial Page

Our hypothesis was “Illustrating VWO features and social proof on the free trial page will compel users to sign up for the free trial.”

Here is a screenshot of what it looks like in VWO.
hypothesis-free-trial

Bonus tip: VWO has recently launched a new capability called PLAN that lets you manage and prioritize your testing hypotheses. To learn more about this capability, visit the VWO evolution page.

This is what the control looked like:

Free Trial Control

Our heatmap data also showed a lot of users clicking the features page after accessing the free trial page.

Screenshot of heatmap data:

Heatmap Screenshot for test

To address this, we created a variation that included the features we offer. Here’s a screenshot of the same.

This is our current free trial page:

Free Trial Page(New)(Variation)

We ran the test for over 2 months. The result was an increase of 6% in our conversion rate, which led to increased revenues.

Test 2 – Request a Demo CTA (A/B Test)

The main CTA on the homepage has been the free trial CTA. The headline on the homepage was “A/B Testing Software for Marketers.”

The hypothesis for the test was “We will get more qualified leads through a request a demo CTA on the homepage.”

This is what the control looked like:

Homepage Control

We came up with a more targeted copy and changed the existing CTA to Request A Demo. Here is what the variation looked like:

Homepage variation

We also wanted to change our positioning because of our foray into conversion optimization. The result: our variation beat the control, with more than a 31% improvement in conversion rate.

Based on the first test, we have already implemented the new free-trial page as our main free-trial page. Based on the second, we updated our home page.

All in all, this model helped us correctly predict the best optimization opportunities and make our testing better and more strategically aligned with business goals.

Let me know your experience with this model and how you go about testing.

Would love to hear your feedback on this!



How to get evergreen results from your landing page optimization

Reading Time: 7 minutes

Landing page optimization is old news.

Seriously. A quick google will show you that Unbounce, QuickSprout, Moz, Qualaroo, Hubspot, Wordstream, Optimizely, CrazyEgg, VWO (and countless others), have been writing tips and guides on how to optimize your landing pages for years.

Not to mention the several posts we have already published on the WiderFunnel blog since 2008.

And yet. This conversation is so not over.

Warning: If your landing page optimization goals are short-term, or completely focused on conversion rate lift, this post may be a waste of your time. If your goal is to continuously have the best-performing landing pages on the internet, keep reading.



Marketers are funnelling more and more money into paid advertising, especially as Google allocates more and more SERP space to ads.

In fact, as an industry, we are spending upwards of $92 billion annually on paid search advertising alone.

landing-page-optimization-SERP-space
The prime real estate on a Google search results page often goes to paid.

And it’s not just search advertising that is seeing an uptick in spend, but social media advertising too.

It makes sense that marketers are still obsessing over their landing page conversion rates: this traffic is costly and curated. These are visitors that you have sought out, that share characteristics with your target market. It is extremely important that these visitors convert!

But, there comes a time in every optimizer’s life, when they face the cruel reality of diminishing returns. You’ve tested your landing page hero image. You’ve tested your value proposition. You’ve tested your form placement. And now, you’ve hit a plateau.

So, what next? What’s beyond the tips and guides? What is beyond the optimization basics?

1) Put on your customer’s shoes.

First things first: Let’s do a quick sanity check.

When you test your hero image, or your form placement, are you testing based on tips and recommended best practices? Or, are you testing based on a specific theory you have about your page visitors?

landing-page-optimization-customer-shoes
Put on your customer’s shoes.

Tips and best practices are a fine place to start, but the insight behind why those tactics work (or don’t work) for your visitors is where you find longevity.

The best way to improve experiences for your visitors is to think from their perspective. And the best way to do that is to use frameworks, and framework thinking, to get robust insights about your customers.

– Chris Goward, Founder & CEO, WiderFunnel

Laying the foundation

It’s very difficult to think from a different perspective. This is true in marketing as much as it is in life. And it’s why conversion optimization and A/B testing have become so vital: We no longer have to guess at what our visitors want, but can test instead!

That said, a test requires a hypothesis. And a legitimate hypothesis requires a legitimate attempt to understand your visitor’s unique perspective.

To respond to this need for understanding, WiderFunnel developed the LIFT Model® in 2008: our foundational framework for identifying potential barriers to conversion on a page from the perspective of the page visitor.


The LIFT Model attempts to capture the idea of competing forces in communication, narrowing them down to the most salient aspects of communication that marketers should consider.

I wanted to apply the principles of Relevance, Clarity, Distraction, Urgency and Anxiety to what we were delivering to the industry and not just to our clients. And the LIFT Model is a part of that: making something as simple as possible but no simpler.

– Chris Goward

When you look at your page through a lens like the LIFT Model, you are forced to question your assumptions about what your visitors want when they land on your page.

landing-page-optimization-LIFT-Model
View your landing pages through a framework lens.

You may love an interactive element, but is it distracting your visitors? You may think that your copy creates urgency, but is it really creating anxiety?

If you are an experienced optimizer, you may have already incorporated a framework like the LIFT Model into your optimization program. But, after you have analyzed the same page multiple times, how do you continue to come up with new ideas?

Here are a few tips from the WiderFunnel Strategy team:

  1. Bring in fresh eyes from another team to look at and use your page
  2. User test, to watch and record how actual users are using your page
  3. Sneak a peek at your competitors’ landing pages: Is there something they’re doing that might be worth testing on your site?
  4. Do your page analyses as a team: many heads are better than one
  5. Brainstorm totally new, outside-the-box ideas…and test one!

You should always err on the side of “This customer experience could be better.” After all, it’s a customer-centric world, and we’re just marketing in it.

2) Look past the conversion rate.

“Landing page optimization”, like “conversion rate optimization”, is a limiting term. Yes, on-page optimization is key, but mature organizations view “landing page optimization” as the optimization of the entire experience, from first to last customer touchpoint.

Landing pages are only one element of a stellar, high-converting marketing campaign. And focusing all of your attention on optimizing only one element is just foolish.

From testing your featured ads, to tracking click-through rates of Thank You emails, to tracking returns and refunds, to tracking leads through the rest of the funnel, a better-performing landing page is about much more than on-page conversion rate lift.

landing-page-optimization-big-picture
On-page optimization is just one part of the whole picture.

An example is worth 1,000 words

One of our clients is a company that provides an online consumer information service—visitors type in a question and get an Expert answer. One of the first zones (areas on their website) that we focused on was a particular landing page funnel.

Visitors come from an ad, and land on page where they can ask their question. They then enter a 4-step funnel: Step 1: Ask the question > Step 2: Add more information > Step 3: Pick an Expert > Step 4: Get an answer (aka the checkout page)

Our primary goal was to increase transactions, meaning we had to move visitors all the way through the funnel. But we were also tracking refunds and chargebacks, as well as revenue per visitor.

More than pushing a visitor to ‘convert’, we wanted to make sure those visitors went on to be happy, satisfied customers.

In this experiment, we focused on the value proposition statements. The control landing page exclaimed, “A new question is answered every 9 seconds!”. Our Strategy team had determined (through user testing) that “speed of answers” was the 8th most valuable element of the service for customers, and that “peace of mind / reassurance” was the most important.

So, they tested two variations, featuring two different value proposition statements meant to create more peace of mind for visitors:

  • “Join 6,152,585 satisfied customers who got professional answers…”
  • “Connect One on One with an Expert who will answer your question”

Both of these variations ultimately increased transactions, by 6% and 9.4% respectively. But! We also saw large decreases in refunds and chargebacks with both variations, and large increases in net revenue per visitor for both variations.

By following visitors past the actual conversion, we were able to confirm that these initial statements set an impactful tone: visitors were more satisfied with their purchases, and comfortable investing more in their expert responses.

3) Consider the big picture.

As you think of landing page optimization as the optimization of a complete digital experience, you should also think of landing page optimization as part of your overall digital optimization strategy.

When you discover an insight about visitors to your product page, feed it into a test on your landing page. When you discover an insight about visitor behavior on your landing page, feed it into a test on your website.

It’s true that your landing pages most likely cater to specific visitor segments, who may behave totally differently than your organic visitors. But, it is also true that landing page wins may be overall wins.

Plus, landing page insights can be very valuable, because they are often new visitor insights. And now, a little more advice from Chris Goward, optimization guru:

“Your best opportunities for testing your value proposition are with first impression visitors. These are usually new visitors to your high traffic landing pages or your home page […]

By split testing your alternative value propositions with new visitors, you’ll reduce your exposure to existing customers or prospects who are already in the consideration phase. New prospects have a blank canvas for you to present your message variations and see what sticks.

Then, from the learning gained on landing pages, you can validate insights with other target audience groups and with your customers to leverage the learning company-wide.

Landing page testing can do more than just improve conversion rates on landing pages. When done strategically, it can deliver powerful, high-leverage marketing insights.”



Just because your landing pages are separate from your website does not mean that your landing page optimization should be separate from your other optimization efforts. A landing page is just another zone, and you are free to (and should) use insights from one zone when testing on another zone.

4) Go deeper, explore further.

A lot of marketers talk about landing page design: how to build the right landing page, where to position each element, what color scheme and imagery to use, etc.

But when you dig into the why behind your test results, it’s like breaking into a piñata of possibilities, or opening a box of idea confetti.

landing-page-optimization-ideas
Discovering the reason behind the result is like opening a box of idea confetti!

Why do your 16-25 year old, mobile users respond so favorably to a one-minute video testimonial from a past-purchaser? Do they respond better to this indicator of social proof than another?

Why do your visitors prefer one landing page under normal circumstances, and a different version when external factors change (like a holiday, or a crisis)? Can you leverage this insight throughout your website?

Why does one type of urgency phrasing work, while slightly different wording decreases conversions on your page? Are your visitors sensitive to overly salesy copy? Why or why not?

Not only are there hundreds of psychological principles to explore within your landing page testing, but landing page optimization is also intertwined with your personalization strategy.

For many marketers, personalized landing pages are becoming more normal. And personalization opens the door to even more potential customer insights. Assuming you already have visitor segments, you should test the personalized experiences on your landing pages.

For example, imagine you have started using your visitors’ first names in the hero banner of your landing page. Have you validated that this personalized experience is more effective than another, like moving a social proof indicator above the fold? Both can be deemed personalization, but they tap into very different motivations.

From psychological principles, to validating your personalized experiences, the possibilities for testing on your landing pages are endless.

Just keep testing, Dory-style

Your landing page(s) will never be “optimized”. That is the beauty and cruelty of optimization: we are always chasing unattainable perfection.

But your landing pages can definitely be better than they are now. Even if you have a high-converting page, even if your page is listed by Hubspot as one of the 16 best designed landing pages, even if you’ve followed all of the rules…your landing page can be better.

Because I’m not just talking about conversions, I’m talking about your entire customer experience. If you give them the opportunity, your new users will tell you what’s wrong with your page.

They’ll tell you where it is unclear and where it is distracting.

They’ll tell you what motivates them.

They’ll tell you how personal you should get.

They’ll tell you how to set expectations so that they can become satisfied customers or clients.

A well-designed landing page is just the beginning of landing page optimization.


Tips and tactics for A/B testing on AngularJS apps

Reading Time: 8 minutes

Alright, folks, this week we’re getting technical.

This post is geared toward Web Developers who’re working in conversion optimization, specifically those who are testing on AngularJS (or who are trying to test on AngularJS).

Angular, while allowing for more dynamic web applications, presents a problem for optimization on the development side.

It basically throws a wrench in the whole “I’m trying to show you a variation instead of the original webpage without you knowing it’s a variation”-thing for reasons I’ll get into in a minute.

At WiderFunnel, our Dev team has to tackle technical obstacles daily: many different clients means many different frameworks and tools to master.

Recently, the topic of “How the heck do you test on Angular?” came up, and Tom Davis, WiderFunnel Front End Developer, was like, “I can help with that.”

So here we go. Here are the tips, tricks, and workarounds we use to test on AngularJS.

Let’s start with the basics:

What is AngularJS?

Angular acts as a Javascript extension to HTML, running in most cases on the client-side (through the browser). Because HTML isn’t a scripting language (it doesn’t run code), it’s limited. Angular allows for more functionality that HTML doesn’t have. It provides a framework to develop apps that are maintainable and extendable, while allowing for features such as single page navigation, rich content, and dynamic functionality.

Note: You can mimic Angular with plain Javascript, however, Angular provides a lot of functionality that a Developer would otherwise have to build themselves.

Why is AngularJS popular?

The real question here is why are JS front-end frameworks and libraries popular? Angular isn’t the only framework you can use, of course: there’s EmberJS, React.js, BackBone etc., and different Developers prefer different frameworks.

But frameworks, in general, are popular because they offer a means of providing a rich user experience that is both responsive and dynamic. Without Angular, a user clicks a button or submits a form on your site, the browser communicates with the server, and the server provides entirely new HTML content that then loads in the browser.

When you’re using Angular, however, a user clicks a button or submits a form and the browser is able to build that content itself, while simultaneously performing server tasks (like database submissions) in the background.

For example, let’s think about form validations.

No Angular:

A user submits a form to create an account on a site. The browser talks to the server and the server says, “There’s a problem. We can’t validate this form because this username already exists.” The server then has to serve up entirely new HTML content and the browser re-renders all of that new content.

This can lead to a laggy, cumbersome user experience, where changes only happen on full page reloads.

With Angular:

A user submits a form to create an account on a site. The browser talks to the server via JSON (a collection of data) and the server says, “There’s a problem. We can’t validate this form because this username already exists.” The browser has already loaded the necessary HTML (on the first load) and then simply fills in the blanks with the data it gets back from the server.
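
To make that concrete, here is a hedged sketch of what the Angular version might look like. The endpoint, response shape, and module/controller names are all hypothetical; the point is that only the bound parts of the page update, with no full reload.

```javascript
// Hypothetical sketch: asynchronous username validation in AngularJS.
// Template (for reference):
//   <form ng-controller="SignupCtrl" ng-submit="signup()">
//     <input type="text" ng-model="username">
//     <span ng-show="usernameTaken">That username already exists.</span>
//     <button type="submit">Create account</button>
//   </form>
angular.module('demoApp', []).controller('SignupCtrl', function ($scope, $http) {
  $scope.signup = function () {
    // "/api/check-username" and its response shape are placeholders
    $http.post('/api/check-username', { username: $scope.username })
      .then(function (response) {
        // Only the elements bound to usernameTaken re-render; no page reload
        $scope.usernameTaken = response.data.taken;
      });
  };
});
```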

Disclaimer: If you don’t have a basic understanding of web development, the rest of this post may be tough to decipher. There is a Glossary at the end of this post, if you need a quick refresher on certain terms.

Why it can be tricky to test on Angular apps

As mentioned above, Angular acts as an HTML extension. This means that the normal behaviors of the DOM* are being manipulated.

Angular manipulates the DOM using two-way data binding. This means that the content in the DOM is bound to a model. Take a look at the example below:

Testing on Angular_2-way-data-binding

The class “ng-binding” indicates that the H1 element is bound to a model, in this case $scope.helloWorld. In Angular, model data is referred to in an object called $scope. Any changes to the input field value will change helloWorld in the $scope object. This value is then propagated down to the H1 text.

This means that, if you make any changes to the H1 element through jQuery or native JS, they will essentially be overridden by $scope. This is not good in a test environment: you cannot guarantee that your changes will show up when you intend them to, without breaking the original code.

Laymen’s terms: $scope.helloWorld is bound to the H1 tag, meaning if anything in the variable helloWorld changes, the H1 element will change and vice versa. That’s the power of Angular.
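
For reference, a minimal sketch of the kind of markup and controller being described might look like the following (module and controller names are assumptions, not the code in the screenshot):

```javascript
// Sketch of the helloWorld binding described above.
// Template (for reference); Angular adds the "ng-binding" class itself:
//   <div ng-app="demoApp" ng-controller="HelloCtrl">
//     <input type="text" ng-model="helloWorld">
//     <h1>{{ helloWorld }}</h1>
//   </div>
angular.module('demoApp', []).controller('HelloCtrl', function ($scope) {
  // Typing in the input updates $scope.helloWorld, and the H1 text follows;
  // changing $scope.helloWorld in code updates the input and the H1 too.
  $scope.helloWorld = 'Hello, world!';
});
```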

Typically, when you’re testing, you’re making changes to the DOM by injecting Javascript after all of the other content has already loaded.

A developer will wait until the page has loaded, hide the content, change elements in the background, and show everything to the user post-change. (Because the page is hidden while these changes are being made, the user is none-the-wiser.)

Tom-Davis

We’re trying to do this switcheroo without anyone seeing it.

– Thomas Davis, Front End Developer, WiderFunnel

In Angular apps, there’s no way to guarantee that all of the content has been rendered before that extra Javascript is injected. At this point, Angular has already initialized the app, meaning any code running after this is outside of Angular’s execution context. This makes it complicated to try to figure out when and how to run the changes that make up your test.

When you’re running a test, the changes that make up Variation A (or B or C) are loaded when the page loads. You can only manipulate what’s in the DOM already. If you can’t guarantee that the content is loaded, how do you ensure that your added Javascript runs at the right time and how do you do this without breaking the code and functionality?

Tom explained that, as a dev trying to do conversion optimization on an Angular application, you find yourself constantly trying to answer this question:

How can I make this change without directly affecting my (or my client’s) built-in functionality? In other words, how can I make sure I don’t break this app?

How to influence Angular through the DOM

Angular makes for a complicated testing environment, but there are ways to test on Angular. Here are a few that we use at WiderFunnel (straight from Tom’s mouth to your eyeballs).

Note: In the examples below, we are working in the Inspector. This is just to prove that the changes are happening outside the context of the app and, therefore, an external script would be able to render the same results.

1. Use CSS wherever possible

When you’re running a test on Angular, use CSS whenever possible to make styling changes.

CSS is simply a set of styling rules that the browser applies to matching elements. Styling will always be applied on repaints, regardless of how the DOM is bound to Angular. Every time something changes within the browser, the browser goes through its list of styling rules and reapplies them to the correct elements.

Let’s say, in a variation, you want to hide a banner. You can find the element you want to hide and add a style rule that sets display: none on it. CSS will always apply this styling, and that element will never be displayed.
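As a sketch, the variation’s stylesheet might contain nothing more than this (the .promo-banner selector is made up):

  /* Hide the banner in this variation; the rule is reapplied on every
     repaint, no matter what Angular does to the DOM */
  .promo-banner {
    display: none !important;
  }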

Of course, you can’t rely on CSS all of the time. It isn’t a scripting language, so you can’t do logic. For instance, CSS can’t say “If [blank] is true, make the element color green. If [blank] is false, make the element color red.”

In other cases, you may want to try $apply.

2. Using $scope/$apply in the DOM

We’ve established that Angular’s two-way data binding makes it difficult to develop consistent page changes outside of the context of Angular. Difficult…but not impossible.

Say you want to change the value of $scope.helloWorld. You need a way to tell Angular, “Hey, a value has changed — you need to propagate this change throughout the app.”

Angular checks $scope variables for changes whenever an event happens. An event attribute like ng-click or ng-model will force Angular to run the Digest Loop*, where a process called dirty checking* is used to update the whole of the app with any new values.

If you want to change the value of $scope.helloWorld and have it propagated throughout the app, you need to trick Angular into thinking an event has occurred.

But, how?

First step: You’ll need to access the model in the $scope object. You can do this simply by querying the bound element in the DOM and reading its scope.

[Image: inspecting the $scope object in the browser console]
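In a browser console, that query might look like this (it assumes the H1 from the earlier example, and it relies on Angular’s debug info, which is on by default in development builds):

  // Grab the H1 element and ask Angular for the scope it's bound to
  var h1Scope = angular.element(document.querySelector('h1')).scope();

  console.log(h1Scope.helloWorld); // "Hello, world!"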

In this example, you’re looking at the $scope object that contains all of the models available to the H1 element, with the helloWorld variable exposed.

Once you have access to helloWorld, you can reassign it. But wait! You’ve probably noticed that the text hasn’t changed in the window… That’s because your code is running outside the context of Angular — Angular doesn’t know that a change has actually been made. You need to tell Angular to run the digest loop, which will apply the change within its context.

Fortunately, Angular comes equipped with an $apply function that can force a $digest, as shown below.

[Image: using $apply to propagate a model change]
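A minimal sketch, continuing with the same hypothetical helloWorld model:

  var h1Scope = angular.element(document.querySelector('h1')).scope();

  // Reassign the model inside $apply so Angular runs a digest
  // and propagates the new value to the bound H1
  h1Scope.$apply(function () {
    h1Scope.helloWorld = 'Hello from the variation!';
  });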

3. Watch for changes

This workaround is a little manual, but very important. If the source code changes a variable or calls a function bound to $scope, you’ll need to be able to detect this change in order to keep your test functional.

That’s where Angular’s $watch function comes in. You can use $watch to listen to $scope and provide a callback when changes happen.

In the example below, $watch is listening to $scope.helloWorld. If helloWorld changes, Angular will run a callback that provides the new value and the old value of helloWorld as parameters.

[Image: using $watch to listen for model changes]
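Sketched out with the same hypothetical model, that looks roughly like this:

  var h1Scope = angular.element(document.querySelector('h1')).scope();

  // Re-run your variation code whenever the app itself changes helloWorld
  h1Scope.$watch('helloWorld', function (newValue, oldValue) {
    console.log('helloWorld changed from', oldValue, 'to', newValue);
    // e.g. re-apply the variation's text change here
  });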

Custom directives and dependency injection

It’s important that you don’t default to writing jQuery when testing on Angular apps. Remember, you have access to all the functionality of Angular, so use it. For complex experiments, you can use custom directives to manage code structure and make it easy to debug.

To do this, you can implement an injector to apply components in the context of the app that you’re testing on. Here’s a simple example that will alert you if your helloWorld variable changes:
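Something along these lines — the [ng-app] selector, the H1 element, and the helloWorld model are all assumptions about the app under test:

  // Get the app's own injector so the code runs with the app's services
  var appElement = document.querySelector('[ng-app]');
  var $injector = angular.element(appElement).injector();

  $injector.invoke(['$rootScope', function ($rootScope) {
    // Find the scope that owns helloWorld (via the bound H1), falling
    // back to $rootScope if the element can't be found
    var h1 = document.querySelector('h1');
    var scope = (h1 && angular.element(h1).scope()) || $rootScope;

    scope.$watch('helloWorld', function (newValue, oldValue) {
      if (newValue !== oldValue) {
        alert('helloWorld changed to: ' + newValue);
      }
    });
  }]);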

For more details on how to use an injector, take a look at the AngularJS documentation.

—–

These are just a few of the tactics that the WiderFunnel Dev team uses to run successful conversion optimization on Angular apps. That said, we would love to hear from all of you about how you do CRO on Angular!

Do you use the same tactics described here? Do you know of other workarounds not mentioned here? How do you test successfully on Angular apps? Let us know in the comments!

Glossary

DOM: The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents.

$scope: Scope is an object that refers to the application model. It is an execution context for expressions. Scopes are arranged in a hierarchical structure that mimics the DOM structure of the application. Scopes can watch expressions and propagate events.

$apply: Apply is used to execute an expression in Angular from outside of the Angular framework (for example, from browser DOM events, setTimeout, XHR, or third-party libraries).

JSON: JavaScript Object Notation is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. It is based on a subset of the JavaScript programming language (Standard ECMA-262, 3rd Edition, December 1999).

Two-way data binding: Data binding in Angular apps is the automatic synchronization of data between the model and view components. The way that Angular implements data binding lets you treat the model as the single source of truth in your application.

Digest Loop: $digest is an internal cycle that runs through the application, executes watch expressions, and compares each returned value with its previous value; if the values do not match, a listener is fired. The $digest cycle keeps looping until no more listeners are fired.

Dirty Checking: Dirty checking is a simple process that boils down to a very basic concept: it checks whether any value has changed that hasn’t yet been synchronized across the app.

The post Tips and tactics for A/B testing on AngularJS apps appeared first on WiderFunnel Conversion Optimization.


How to succeed at segmentation and personalization

I’m terrible at remembering names.

I was in high school when I first read How to Win Friends and Influence People by Dale Carnegie. One of the six ways he said to make people like you is “Remember that a person’s name is, to that person, the sweetest and most important sound in any language.”

[Image: How to Win Friends and Influence People]
According to Dale Carnegie, people love to hear their own names.

It’s true. People love the sound of their own name.

So, I’ve tried. And I’ll keep trying. In the meantime, please forgive me if I ask you to remind me of your name.

Luckily, marketing personalization is much more reliable than my brain. And it can be powerful.

Personalization, segmentation, and the broader concept of web customization can have a great impact on web sales.

Understandably, it’s a hot topic and becoming more popular among retailers and lead generators. From simple message segmentation to programmatic ad buying and individual-level website customization, the combination of big data and ad technology is transforming the possibilities of personalization.

With so much content swirling around, marketers are scrambling to create messages that are more relevant to consumers. Segmentation and personalization can be seen as saviors in a content-shocked world.

More and more often, WiderFunnel is testing and implementing personalization and segmentation methods for our clients, too. It’s an approach that we always test.

Does your webpage deliver on what your prospects expect? Does the content match their specific needs and feelings? Segmentation and personalization allow you to deliver more relevant messaging to each of your prospects.

Without a strategic approach, though, many marketers waste a lot of time and effort (and money on unnecessary tools) trying to find the holy grail of personalization. There are good reasons why segmentation without a proper strategy can hurt your business. I’ve also previously outlined the 8 steps to creating and testing segmentation and personalization.

3 ways to create personalization and segmentation hypotheses to test

Today, I’m going to share with you the 3 ways WiderFunnel creates segmentation hypotheses. Together, these three approaches form a framework you can use to guide your segmentation and personalization efforts.

First: what are personalization and segmentation?

Personalization and segmentation are often used interchangeably, and are arguably similar. Both use information gathered about the shopper to customize their experience.

I define segmentation in the context of conversion optimization as the process of putting structures in place to deliver appropriate messages to audiences with distinct needs and expectations. While segmentation attempts to bucket prospects into similar aggregate groups, personalization dynamically changes the prospects’ experience based on behavior, and usually uses individual details like the person’s name.

You can think of them as points along a spectrum of customized messaging.

[Image: The customized marketing spectrum.]

As you customize your mass marketing messages based on target audience groups, they become segmented. As more and more data allow you to create smaller segments, you eventually arrive at a personalized experience: a segment of one.

Why should you use web personalization?

Unfortunately, too many people customize their websites for the wrong reasons, like…

  • More and more tools enable segmentation and personalization, so why not?
  • It sounds complex (which is attractive to marketers looking to be on the cutting edge)
  • ‘Segmentation’ and ‘personalization’ are powerful buzzwords in the industry
  • It gives marketers something to do with their personas

Those aren’t good reasons to use segmentation. But there is one good reason: segmentation helps solve a problem. The Relevance problem, to be specific.

Segmentation and personalization create Relevance

Ultimately, segmentation and personalization are aimed at improving Relevance (one of the 6 factors in the LIFT Model®).

[Image: Relevance is one of the 6 conversion factors detailed in the LIFT Model.]

Barriers to Relevance often show up in landing page bounce rates. The average landing page bounce rate is between 70% and 90%. Your overall bounce rate doesn’t matter all that much, though; what matters is the bounce rate among your target audience. For your target audience, bounce rates are vital.

The question then becomes:

How do I make experiences on my website sticky for each of my target audience segments?

This brings us back to Relevance. Segmentation and personalization are one set of tactics to improve the Relevance of what you’re presenting, provided you know how to communicate in a meaningful way for your visitors.

At this point, most marketers begin to think, “Great, what tools should I use to do that?” That’s a good question, but it’s not the only question you should ask.

Watch out for the tool-centric approach to segmentation

Technology is getting smarter and smarter. Most marketing automation tools boast some sort of segmentation capability. The leading A/B testing tool, Optimizely, recently launched their Personalization tool. And while it’s true that tools enable segmentation, they can’t design a great segmentation plan.

Personalization requires the same inputs and workflow as testing; sound technical implementation, research-driven ideation, a clear methodology for translating concepts into test hypotheses, and tight technical execution. In this sense, personalization is really just an extension of a/b testing and normal CRO activities. What’s more, successful personalization campaigns are the result of testing and iteration.

– Hudson Arnold, Strategy Consultant, Optimizely

In recent years, marketers have put a lot of trust in tools and algorithms to create marketing success. But, computers aren’t creative: they can detect patterns and facilitate solutions, but they can’t create new ideas.

Trusting your segmentation and personalization efforts to a machine means risking variation diarrhea, wherein the computer has to test everything. When you have too many variations, decisions end up being based on small, spurious data points – decision-making without statistical significance.

Some tool vendors use a lot of buzzwords and hand-waving while over-promising what the tool alone can deliver.

[Image: Buzzwords > Other words = Fluff Marketing]

As a rule of thumb, when buzzwords outnumber other words, there’s probably a lot of fluff marketing at work.

Just as importantly, computers can’t conduct insight-seeking results analysis. They can make small improvements using automated optimization algorithms, but they can’t glean the why behind the what.

Machine learning isn’t real learning. Real learning happens when you dig into the data, unearth insights and embed those insights into hypotheses, then test and validate the insights.

So, how do you do segmentation effectively? Below are 3 methods you can start using right now.

Best-guess segmentation

As you might’ve guessed, best-guess segmentation starts with your assumptions, but it doesn’t end there. In best-guess segmentation, you feed those intuitive ideas into experiments that either validate or disprove your segmentation approach.

Start with questions like: How do I believe my customers will respond? How should my segments work? Ideally, you should have some data or rationale to back up your intuition. Then, test your segments with a segmentation split test.

Step 1. Maintain a non-segmented control group that sees your mass messaging.

Step 2. Create test groups based on various segmentation and personalization methods.

For example, here are some common methods for creating customized experiences:

  • Traffic source to landing pages
  • Inbound ad group
  • Browsing history
  • Cart contents leading to cross sells
  • Device and OS combination
  • Customer demographics and/or personas
  • Returning or new visitors
  • Time of day, week, or season

Step 3. Create your segmentation approaches, your actual segments, and specific messages and campaigns. It’s important to be very selective about the segments you test. Adding just 2-3 data sources can create many segment groups, which all need customized messages. The KISS principle applies.

Step 4. Test and track conversion rates across the segmentation hypothesis, not just across the segments. In other words, rather than just testing different messages for your segments, you should test different segmentation approaches to see which type of data shows response rate elasticity.

Step 5. Determine whether or not your hypothesis is validated (with 95% significance) and whether or not this hypothesis justifies supporting these segments on an on-going basis.

While best-guess segmentation begins with assumptions, iterative testing is the only way to be sure that your segmentation is actually effective and a worthwhile pursuit.

Be warned, the more you segment, the longer it takes to validate your test results. Try to find the largest possible relevant segments that can be aggregated. Read 6 reasons your over-segmentation is hurting your marketing for more on this risk.

User-selected segmentation

With user-selected segmentation, you’re asking your users to self-identify, segmenting themselves; specific messaging is then triggered based on how they self-identified.

Here’s an example to help you visualize what I mean. WiderFunnel worked with a healthcare company for a few years and they used user-selected segmentation to drive more relevant content and offers. When visitors landed on any page throughout the website, they could self-identify in the following ways:

  • I am a physician treating [the disease].
  • I work at a hospital treating [the disease].
  • I manage [disease] while working.
  • I have late stage [disease].
  • I have early stage [disease].
[Image: This healthcare company asked users to segment themselves.]

Once a user self-identified, the offers and messaging that were featured on the page were adjusted accordingly.

You might be thinking, “He’s going to start saying ‘but you should test that’ pretty soon…” And you’re right. In user-selected segmentation, there are two big places you can test:

1) What are the best segments?
2) Is this the best messaging for each segment?

For this healthcare company, we didn’t simply assume that those 5 segments were the best segments, or that the messages and offers triggered were the best messages and offers. Instead, we tested both.

A series of A/B tests within their segmentation and personalization efforts resulted in a doubling of this company’s conversion rate.

Post-test segmentation

If you’re already optimizing your site, chances are you’ve seen segments naturally emerge through A/B testing. I call this post-test segmentation.

Post-test segmentation is driven by insights from your existing A/B test data. As you test, you discover insights that point you toward segmentation hypotheses.

Here’s an example from one of WiderFunnel’s e-commerce clients that manufactures and sells weather technology products. We ran a test on this client’s product pages, pitting the original page against our variation, which emphasized visual clarity.

Our variation lost to the original, decreasing order completions by 6.8%.

WiderFunnel Optimization Strategist, Nick So, was perplexed by the result and he dug into the data to find an insight. He found that the original page had more pages per session while our variation saw a 7.4% higher average time on page. This could imply that shoppers on the original page were browsing more, while shoppers on our variation spent more time on fewer pages.

Nick recalled an article published by the Nielsen Norman Group about teen-targeted websites: the research suggests that younger users enjoy searching but are impatient, while older users also enjoy searching and are much more patient when browsing.

[Image: Teen usability findings, originally published by the Nielsen Norman Group.]

So, Nick dug in further and found that our variation actually won among older visitors to our client’s site, but lost among younger users. So, what’s the takeaway? There are potentially new ways of customizing the shopping experience for different age segments, such as:

  1. Reducing distractions and adding clarity for older visitors
  2. Providing multiple products in multiple tabs for younger visitors

This client can use these insights to inform their age-group segmentation efforts across their site.

The best strategic testing process builds segmentation on insights

Always think of the bigger picture.

At WiderFunnel, segmentation and personalization are just a part of the 2 phases within the Infinity Optimization Process™: Explore and Validate. Insights gained from A/B testing inform future segments and personalized messaging, while insights derived from segmentation and personalization inform future A/B testing hypotheses. And on and on.

Don’t assume that insights gained during segmentation testing are only valid for those segments. Segmentation wins may be overall wins. The best practice when it comes to segmentation is to take the insights you validate within your segmentation test and use them to inform your hypotheses in your general optimization strategy.

The post How to succeed at segmentation and personalization appeared first on WiderFunnel Conversion Optimization.
