Lessons Learned From 2,345,864 Exit Overlay Visitors


Back in 2015, Unbounce launched its first ever exit overlay on this very blog.

Did it send our signup rate skyrocketing 4,000%? Nope.

Did it turn our blog into a conversion factory for new leads? Not even close — our initial conversion rate was barely over 1.25%.

But what it did do was start us down the path of exploring the best ways to use this technology; of furthering our goals by finding ways to offer visitors relevant, valuable content through overlays.

Overlays are modal lightboxes that launch within a webpage and focus a visitor’s attention on a single offer.

In this post, we’ll break down all the wins, losses and “holy smokes!” moments from our first 2,345,864 exit overlay viewers.

Psst: Towards the end of these experiments, Unbounce launched Convertables, and with it a whole toolbox of advanced triggers and targeting options for overlays.

Goals, tools and testing conditions

Our goal for this project was simple: Get more people to consume more Unbounce content — whether it be blog posts, ebooks, videos, you name it.

We invest a lot in our content, and we want it read by as many marketers as possible. All our research, everything we know about that elusive thing called conversion, lives in our content.

Our content also allows readers to find out whether Unbounce is a tool that can help them. We want more customers, but only if they can truly benefit from our product. Those who experience ‘lightbulb’ moments when reading our content definitely fit the bill.

As for tools, the first four experiments were conducted using Rooster (an exit-intent tool purchased by Unbounce in June 2015). It was a far less sophisticated version of what is now Unbounce Convertables, which we used in the final experiment.

Testing conditions were as follows:

  1. All overlays were triggered on exit, meaning they launched only when abandoning visitors were detected.
  2. For the first three experiments, we compared sequential periods to measure results. For the final two, we ran makeshift A/B tests.
  3. When comparing sequential periods, testing conditions were isolated by excluding new blog posts from showing any overlays.
  4. A “conversion” was defined as either a completed form (lead gen overlay) or a click (clickthrough overlay).
  5. All experiments were conducted between January 2015 and November 2016.

Experiment #1: Content Offer vs. Generic Signup

Our first exit overlay had a simple goal: Get more blog subscribers. It looked like this.

blog-subscriber-overlay

It was viewed by 558,488 unique visitors over 170 days, 1.27% of whom converted to new blog subscribers. A decent start, but not good enough.

To improve the conversion rate, we posed the following hypothesis.

HYPOTHESIS
Because online marketing offers typically convert better when a specific, tangible offer is made (versus a generic signup), we expect that by offering a free ebook to abandoning visitors, we will improve our conversion rate beyond the current 1.27% baseline.

Whereas the original overlay asked visitors to subscribe to the blog for “tips”, the challenger overlay offered visitors The 23 Principles of Attention-Driven Design.

add-overlay

After 96 days and over 260,000 visitors, we had enough conversions to call this experiment a success. The overlay converted at 2.65%, and captured 7,126 new blog subscribers.

overlay-experiment-1-results

Since we didn’t A/B test these overlays, our results were merely observations. Seasonality is one of many factors that can sway the numbers.

We couldn’t take it as gospel, but we were seeing double the subscribers we had previously.
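For readers who want to sanity-check a before/after comparison like this, a two-proportion z-test is a reasonable first pass (though, as noted, it can’t rule out seasonality or other confounds in a sequential comparison). The helper function below is our own illustration; the subscriber counts are derived from the rates and visitor totals reported above.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # blended rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Period 1: 558,488 viewers at 1.27% (~7,093 subscribers)
# Period 2: ~268,900 viewers, 7,126 subscribers (2.65%)
z = two_proportion_z(7093, 558488, 7126, 268905)
print(z > 1.96)  # True: far beyond the usual 95% significance threshold
```

A huge z-score tells you the gap isn’t sampling noise, but it says nothing about confounds like seasonality, which is why a true A/B split is still the gold standard.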

Observations

  • Offering tangible resources (versus non-specific promises, like a blog signup) can positively affect conversion rates.

Stay in the loop and get all the juicy test results from our upcoming overlay experiments

Learn from our overlay wins, losses and everything in between.
By entering your email you’ll receive weekly Unbounce Blog updates and other resources to help you become a marketing genius.

Experiment #2: Four-field vs. Single-field Overlays

Data people always spoil the party.

The early success of our first experiment caught the attention of Judi, our resident marketing automation whiz, who wisely reminded us that collecting only an email address on a large-scale campaign was a missed opportunity.

For us to fully leverage this campaign, we needed to find out more about the individuals (and organizations) who were consuming our content.

Translation: We needed to add three more form fields to the overlay.

overlay-experiment-2

Since filling out forms is a universal bummer, we safely assumed our conversion rate would take a dive.

But something else happened that we didn’t predict. Notice a difference (besides the form fields) between the two overlays above? Yup, the new version was larger: 900x700px vs. 750x450px.

Adding three form fields made our original 750x450px design feel too cramped, so we arbitrarily increased the size — never thinking there may be consequences. More on that later.

Anyway, we launched the new version and, as expected, the results sucked.

overlay-experiment-2-results
Things weren’t looking good after 30 days.

For business reasons, we decided to end the test after 30 days, even though that fell well short of the 96-day run we’d measured the original against.

Overall, the conversion rate for the 30-day period was 48% lower than the previous 96-day period. I knew it was for good reason: Building our data warehouse is important. Still, a small part of me died that day.

Then it got worse.

It occurred to us that for a 30-day period, the sample of viewers for the new overlay (53,460) looked awfully small.

A closer inspection revealed that our previous overlay averaged 2,792 views per day, while this new version was averaging 1,782. So basically our 48% conversion drop was served à la carte with a 36% plunge in overall views. Fun!

But why?

It turns out increasing the size of the overlay wasn’t so harmless. The new dimensions were too large for many visitors’ browser windows, so the overlay fired on only two out of every three visits, even when targeting rules matched.
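A minimal sketch of what went wrong: the overlay script loads on every matching exit, but the modal only renders (a “view”) when it fits the browser window. The viewport-check logic here is our own illustration, not Unbounce’s actual implementation; the daily-view figures are from the article.

```python
def fits(viewport_w, viewport_h, overlay_w=900, overlay_h=700):
    """An overlay 'load' only becomes a 'view' if it fits the window."""
    return viewport_w >= overlay_w and viewport_h >= overlay_h

print(fits(1280, 650))  # False: a common laptop window; the overlay never renders

# Observed view ratio for the 900x700 design (1,782 of 2,792 daily loads):
print(round(1782 / 2792, 2))  # ~0.64, i.e. roughly two of every three visits

# Combined with the 48% conversion drop, net daily leads fell to roughly:
print(round(0.64 * 0.52, 2))  # ~0.33 of the original volume
```

Multiplying the two losses together is what made the damage so severe: a third fewer views times half the conversion rate left only about a third of the original lead flow.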

We conceded, and redesigned the overlay in 800x500px format.

overlay-experiment-redesign

Daily views rose back to their normal numbers, and our new baseline conversion rate of 1.25% remained basically unchanged.

loads-vs-views

Large gap between “loads” and “views” on June 4th; narrower gap on June 5th.

Observations

  • Increasing the number of form fields in overlays can cause friction that reduces conversion rates.
  • Overlay sizes exceeding 800×500 can be too large for some browsers and reduce load:view ratio (and overall impressions).

Experiment #3: One Overlay vs. 10 Overlays

It seemed like such a great idea at the time…

Why not get hyper relevant and build a different exit overlay to each of our blog categories?

With our new baseline conversion rate reduced to 1.25%, we needed an improvement that would help us overcome “form friction” and get us back to that healthy 2%+ range we enjoyed before.

So with little supporting data, we hypothesized that increasing “relevance” was the magic bullet we needed. It works on landing pages, so why not overlays?

HYPOTHESIS  
Since “relevance” is key to driving conversions, we expect that by running a unique exit overlay on each of our blog categories — whereby the free resource is specific to the category — we will improve our conversion rate beyond the current 1.25% baseline.

blog-categories

We divide our blog into categories according to the marketing topic they cover (e.g., landing pages, copywriting, design, UX, conversion optimization). Each post is tagged by category.

So to increase relevance, we created a total of 10 exit overlays (each offering a different resource) and assigned each overlay to one or two categories, like this:

category-specific-overlays

Creating all the new overlays would take some time (approximately three hours), but since we already had a deep backlog of resources on all things online marketing, finding a relevant ebook, course or video to offer in each category wasn’t difficult.

And since our URLs contain category tags (e.g., all posts on “design” start with root domain unbounce.com/design), making sure the right overlay ran on the right post was easy.

unbounce-targeting

URL Targeting rule for our Design category; the “include” rule automatically excludes the overlay from running in other categories.
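Conceptually, the category targeting boils down to a URL-prefix lookup. The sketch below is our own illustration of the idea, not Unbounce’s targeting engine; the path slugs and overlay names are hypothetical, modeled on the unbounce.com/design example above.

```python
# Hypothetical mapping of URL path prefixes to category overlays
CATEGORY_OVERLAYS = {
    "/design": "attention-driven-design-ebook",
    "/landing-pages": "landing-page-course",
    "/copywriting": "copywriting-ebook",
}

def overlay_for(path):
    """Return the overlay assigned to the first matching category prefix."""
    for prefix, overlay in CATEGORY_OVERLAYS.items():
        if path.startswith(prefix):
            return overlay
    return None  # no category match: show no overlay

print(overlay_for("/design/ux-principles"))  # attention-driven-design-ebook
```

Because each post lives under exactly one category prefix, an “include” rule on that prefix doubles as an “exclude” rule for every other category.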

But there was a problem: We’d established a strict rule that our readers would only ever see one exit overlay… no matter how many blog categories they browsed. It’s part of our philosophy on using overlays in a way that respects the user experience.

When we were just using one overlay, that was easy — a simple “Frequency” setting was all we needed.

unbounce-frequency

…but not so easy with 10 overlays running on the same blog.

We needed a way to exclude anyone who saw one overlay from seeing any of the other nine.

Cookies were the obvious answer, so we asked our developers to build a temporary solution that could:

  • Pass a cookie from an overlay to the visitor’s browser
  • Exclude that cookie in our targeting settings

They obliged.
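The one-overlay-per-visitor rule reduces to a check-then-set on a single shared cookie: every overlay checks for the cookie before firing, and sets it when shown. A sketch of the logic (the cookie name and shape are our invention):

```python
SEEN_COOKIE = "ub_exit_overlay_seen"  # hypothetical cookie name

def should_show_overlay(cookies):
    """Fire an exit overlay only if no overlay has been seen before."""
    return SEEN_COOKIE not in cookies

def mark_shown(cookies, overlay_id):
    """Record which overlay fired so the other nine stay suppressed."""
    cookies[SEEN_COOKIE] = overlay_id
    return cookies

cookies = {}
if should_show_overlay(cookies):
    cookies = mark_shown(cookies, "design-overlay")
print(should_show_overlay(cookies))  # False: the visitor is now excluded everywhere
```

Because all ten overlays share the same cookie, seeing any one of them suppresses the rest, which is exactly the exclusion behavior described above.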

unbounce-advanced-targeting

We used “incognito mode” to repeatedly test the functionality, and after that we were go for launch.

Then this happened.

rooster-dashboard
Ignore the layout… the Convertables dashboard is much prettier now :)

After 10 days of data, our conversion rate was a combined 1.36%, 8.8% higher than the baseline. It eventually crept its way to 1.42% after an additional 250,000 views. Still nowhere near what we’d hoped.

So what went wrong?

We surmised that just because an offer is “relevant” doesn’t mean it’s compelling. Admittedly, not all of the 10 resources were on par with The 23 Principles of Attention-Driven Design, the ebook we originally offered in all categories.

That said, this experiment provided an unexpected benefit: we could now see our conversion rates by category instead of just one big number for the whole blog. This would serve us well on future tests.

Observations

  • Just because an offer is relevant doesn’t mean it’s good.
  • Conversion rates vary considerably between categories.

Experiment #4: Resource vs. Resource

“Just because it’s relevant doesn’t mean it’s good.”

This lesson inspired a simple objective for our next task: Improve the offers in our underperforming categories.

We decided to test new offers across five categories that had low conversion rates and high traffic volume:

  1. A/B Testing and CRO (0.57%)
  2. Email (1.24%)
  3. Lead Gen and Content Marketing (0.55%)

Note: We used the same overlay for the A/B Testing and CRO categories, as well as the Lead Gen and Content Marketing categories.

HYPOTHESIS
Since we believe the resources we’re offering in the categories of A/B testing, CRO, Email, Lead Gen and Content Marketing are less compelling than resources we offer in other categories, we expect to see increased conversion rates when we test new resources in these categories.

In the previous experiments in this post, we compared sequential periods. For this one, we took things a step further and jury-rigged an A/B testing system using Visual Website Optimizer and two Unbounce accounts.

And after finding what we believed to be more compelling resources to offer, the new test was launched.

topic-experiment

We saw slightly improved results in the A/B Testing and CRO categories, although the change wasn’t statistically significant. For the Email category, we saw a large drop-off.

In the Lead Gen and Content Marketing categories however, there was a dramatic uptick in conversions and the results were statistically significant. Progress!

Observations

  • Not all content is created equal; some resources are more desirable to our audience.

Experiment #5: Clickthrough vs. Lead Gen Overlays

Although progress was made in our previous test, we still hadn’t solved the problem from our second experiment.

While having the four fields made each conversion more valuable to us, it still cut our conversion rate roughly in half (from 2.65% to 1.25% back in experiment #2).

We’d now worked our way up to a baseline of 1.75%, but still needed a strategy for reducing form friction.

The answer lay in a new tactic for using overlays that we dubbed traffic shaping.

Traffic Shaping: Using clickthrough overlays to incentivize visitors to move from low-converting to high-converting pages.

Here’s a quick illustration:

traffic-shaping-diagram

Converting to this format would require us to:

  1. Redesign our exit overlays
  2. Build a dedicated landing page for each overlay
  3. Collect leads via the landing pages

Basically, we’d be using the overlays as a bridge to move readers from “ungated” content (a blog post) to “gated” content (a free video that required a form submission to view). Kinda like playing ‘form field hot potato’ in a modern day version of Pipe Dream.

HYPOTHESIS
Because “form friction” reduces conversions, we expect that removing form fields from our overlays will increase engagement (enough to offset the drop off we expect from adding an extra step). To do this, we will redesign our overlays to clickthrough (no fields), create a dedicated landing page for each overlay and add the four-field form to the landing page. We’ll measure results in Unbounce.

By this point, we were using Unbounce to build the entire campaign. The overlays were built in Convertables, and the landing pages were created with the Unbounce landing page builder.

We decided to test this out in our A/B Testing and CRO as well as Lead Gen and Content Marketing categories.

clickthrough-overlays

After filling out the form, visitors would either be given a secure link for download (PDF) or taken to a resource page where their video would play.

Again, for this to be successful the conversion rate on the overlays would need to increase enough to offset the drop off we expected by adding the extra landing page step.

These were our results after 21 days.

clickthrough-overlays-results

Not surprisingly, engagement with the overlays increased significantly. I stress the word “engagement” and not “conversion,” because our goal had changed from a form submission to a clickthrough.

In order to see a conversion increase, we needed to factor in the percentage of visitors who would drop off once they reached the landing page.

A quick check in Unbounce showed us landing page drop-off rates of 57.7% (A/B Testing/CRO) and 25.33% (Lead Gen/Content Marketing). Time for some grade 6 math…

clickthrough-overlays-results-2

Even with significant drop-off in the landing page step, overall net leads still increased.
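The “grade 6 math” is just a two-step funnel: net lead rate = overlay clickthrough rate × landing page completion rate. The drop-off rates below are the ones reported above; the 6% overlay clickthrough rate is a hypothetical stand-in, since the actual engagement figures appear only in the results screenshot.

```python
def net_lead_rate(overlay_ctr, landing_completion):
    """Leads per overlay view = clickthrough rate x landing completion rate."""
    return overlay_ctr * landing_completion

# Reported drop-off rates imply these landing-page completion rates:
ab_cro_completion = 1 - 0.577      # A/B Testing / CRO: 42.3% complete the form
leadgen_completion = 1 - 0.2533    # Lead Gen / Content Marketing: 74.67%

# With a hypothetical 6% overlay clickthrough rate:
print(round(net_lead_rate(0.06, ab_cro_completion), 4))   # 0.0254
print(round(net_lead_rate(0.06, leadgen_completion), 4))  # 0.0448
```

As long as that product exceeds the old single-step form conversion rate, the extra landing-page hop is a net win, which is what the results above showed.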

Our next step would be applying the same format to all blog categories, and then measuring overall results.

Onward!

All observations

  • Offering specific, tangible resources (vs. non-specific promises) can positively affect conversion rates.
  • Increasing the number of form fields in overlays can cause friction that reduces conversion rates.
  • Overlay sizes exceeding 800×500 can be too large for some browsers and reduce load:view ratio (and overall impressions).
  • Just because an offer is relevant doesn’t mean it’s good.
  • Conversion rates vary considerably between blog categories.
  • Not all content is created equal; some resources are more desirable to our audience.
  • “Form friction” can vary significantly depending on where your form fields appear.

Stay tuned…

We’re continuing to test new triggers and targeting options for overlays, and we want to tell you all about it.

So what’s in store for next time?

  1. The Trigger Test — What happens when we test our “on exit” trigger against a 15-second time delay?
  2. The Referral Test — What happens when we show different overlays to users from different traffic sources (e.g., social vs. organic)?
  3. New vs. Returning Visitors — Do returning blog visitors convert better than first-time visitors?




Beyond A vs. B: How to get better results with better experiment design

Reading Time: 7 minutes

You’ve been pushing to do more testing at your organization.

You’ve heard that your competitors at ______ are A/B testing, and that their customer experience is (dare I say it?) better than yours.

You believe in marketing backed by science and data, and you have worked to get the executive team at your company on board with a tested strategy.

You’re excited to begin! To learn more about your customers and grow your business.

You run one A/B test. And then another. And then another. But you aren’t seeing that conversion rate lift you promised. You start to hear murmurs and doubts. You start to panic a little.

You could start testing as fast as you can, trying to get that first win. (But you shouldn’t).

Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,

Alhan Keser

If your results are disappointing, it may not only be what you are testing – it is definitely how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).

This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.

For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.

Discover the best experiment structure for you!

Compare the pros and cons of different Design of Experiments tactics with this simple download. The method you choose is up to you!



By entering your email, you’ll receive bi-weekly WiderFunnel Blog updates and other resources to help you become an optimization champion.


Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.

The basics: Defining A/B, MVT, and factorial

Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are multiple different ways to structure your experiments. A/B testing is just one of them.

Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.

A/B test

In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a multitude of changes (i.e., a “cluster”), or a single isolated change.

ab test widerfunnel
When you change multiple elements in a single variation, you might see lift, but what about insights?

In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.

Multivariate test (MVT)

With multivariate testing, you are testing each individual change in isolation against the others, by mixing and matching every possible combination available.

Imagine you want to test a homepage re-design with four changes in a single variation:

  • Change A: New hero banner
  • Change B: New call-to-action (CTA) copy
  • Change C: New CTA color
  • Change D: New value proposition statement

Hypothetically, let’s assume that each change has the following impact on your conversion rate:

  • Change A = +10%
  • Change B = +5%
  • Change C = -25%
  • Change D = +5%

If you were to run a classic A/B test―your current control page (A) versus a combination of all four changes at once (B)―you would see a hypothetical decrease of 5% overall (10% + 5% − 25% + 5%). You would assume that your re-design did not work and most likely discard the ideas.
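Assuming, as the example does, that the individual effects combine additively, the arithmetic looks like this; a single losing change can mask three winners.

```python
# Hypothetical per-change conversion impacts from the example above
effects = {"A": 0.10, "B": 0.05, "C": -0.25, "D": 0.05}

combined = sum(effects.values())
print(f"{combined:+.0%}")  # -5%: the whole redesign looks like a loser

# Dropping the one losing change flips the outcome entirely:
best_subset = {k: v for k, v in effects.items() if v > 0}
print(f"{sum(best_subset.values()):+.0%}")  # +20% from A + B + D alone
```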

With a multivariate test, however, each of the following would be a variation:

mvt widerfunnel

Multivariate testing is great because it shows you the positive or negative impact of every single change, and every single combination of every change, resulting in the most ideal combination (in this theoretical example: A + B + D).

However, this strategy is kind of impossible in the real world. Even if you have a ton of traffic, it would still take more time than most marketers have for a test with 15 variations to reach any kind of statistical significance.

The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
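The variation count grows as 2^n with the number of isolated changes, which is why full MVT is so traffic-hungry. Enumerating the combinations for the four changes above:

```python
from itertools import combinations

changes = ["A", "B", "C", "D"]

# Every non-empty subset of changes is its own variation in a full MVT
variations = [
    combo
    for r in range(1, len(changes) + 1)
    for combo in combinations(changes, r)
]
print(len(variations))  # 15 variations, plus the unchanged control
```

Fifteen variations means each one receives only a fifteenth of the test traffic, so reaching significance takes roughly fifteen times as long as a simple A/B split.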

Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.

Factorial design: The middle ground

Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.

The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.

With basic factorial experiment design, you could set up the following variations in our hypothetical example:

VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%

Factorial design widerfunnel
In this basic example, variation A features a single change; VarB is built on VarA, and VarC is built on VarB.

NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.

Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.

When you estimate the value of change A, you are using your control as a baseline.

factorial testing widerfunnel
Variation A versus the control.

Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.

The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%

But, when estimating the value of change B, variation A must become your new baseline.

factorial testing widerfunnel
Variation B versus variation A.

The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%

As you can see, the “value” of change B is slightly different from the 5% difference shown above.

When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
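Working backwards from chained variations can be expressed directly: each change’s estimated lift is its variation’s conversion rate relative to the variation it was built on. Using the rates from the example (control 10%, variation A 11%, variation B 11.5%):

```python
def chained_lifts(rates):
    """Estimate each change's lift from a chain of nested variations.

    rates[0] is the control; rates[i] is built on rates[i - 1].
    """
    return [rates[i] / rates[i - 1] - 1 for i in range(1, len(rates))]

lifts = chained_lifts([0.10, 0.11, 0.115])
print([round(l, 3) for l in lifts])  # [0.1, 0.045]
```

This reproduces the figures above: change A is worth a 10% lift over the control, and change B is worth about 4.5% over variation A, not the naive 5%.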

Mike St Laurent

We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.

– Michael St Laurent, Optimization Strategist, WiderFunnel

Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.

Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.

But, wait…

It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?

The Strategist’s Exploration

The answer lies in the Explore (or research gathering) phase of your testing process.

At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.

Experience is the other side to this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which changes should be isolated due to risk or potential insights to be gained.

At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.

Factorial design in action: A case study

Once upon a time, we were testing with Annie Selke, a retailer of luxury home-ware goods. This story follows two experiments we ran on Annie Selke’s product category page.

(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)

Experiment 4.7

In the first experiment, we tested three variations against the control. As the experiment number suggests, this was not the first test we ran with Annie Selke, in general. But it is the ‘first’ test in this story.

Experiment 4.7 control product category page.

Variation A featured an isolated change to the “Sort By” filters below the image, turning them into a drop-down menu.

Replaced original ‘Sort By’ categories with a more traditional drop-down menu.

Evidence?

This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.

Variation B was also built on the control, and featured another isolated change to reduce the left navigation.

Reduced left-hand navigation.

Evidence?

Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to removing distractions. Plus, the persuasion principle known as the Paradox of Choice holds that more choice = more anxiety for visitors.

Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.

Collapsed left-hand filter (built on VarA).

Evidence?

This variation was informed by the same evidence as variation B.

Results

Variation A (built on the control) saw a decrease in transactions of -23.2%.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a decrease in transactions of -1.9%.

But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed filter) was 19.1%.

The next step was to validate our estimated lift of 19.1% in a follow up experiment.
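The arithmetic behind that estimate can be sketched as follows. This sketch assumes lifts combine additively, which is why it lands near, rather than exactly on, the 19.1% reported above (presumably computed from the raw transaction counts); the function name is my own:

```python
def isolated_effect(combined_lift: float, base_lift: float) -> float:
    """Estimate the effect of the single change added on top of a base
    variation, assuming lifts combine additively."""
    return combined_lift - base_lift

# Variation C (built on variation A) moved transactions -1.9%, while
# variation A's change alone moved them -23.2%. The collapsed filter
# (change C) therefore contributed roughly +21 points on this model.
estimate = round(isolated_effect(-1.9, -23.2), 1)
print(estimate)  # 21.3
```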

Experiment 4.8

The follow-up test also featured three variations versus the original control, because you should never waste the opportunity to gather more insights!

Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.

Collapsed filter & original ‘Sort By’ functionality.

Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.

Updated “palette” to “color”, and added color imagery. (A variation featuring two clustered changes).

Evidence?

Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.

Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.

Made the collapsed filter persistent.

Evidence?

Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.

Results

Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!

Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.

Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for a decrease in transactions of -11.2%.

This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.

The best testing framework for you

What are your testing goals?

If potential revenue gains outweigh the insights to be gained, or your test has little long-term value, you may want to go with a standard A/B cluster test.

If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.

If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.

A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want. Which will make every subsequent hypothesis smarter, and every test more impactful.

One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.
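To put a number on that compounding effect (figures hypothetical):

```python
# Five sequential 10% wins, each building on the last, compound
# multiplicatively rather than adding up to a flat 50%.
lift_per_win = 0.10
wins = 5

compound = (1 + lift_per_win) ** wins - 1
print(f"{compound:.1%}")  # 61.1%
```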

– Michael St Laurent

The post Beyond A vs. B: How to get better results with better experiment design appeared first on WiderFunnel Conversion Optimization.


Using Personalization To Increase AOV And Conversion Rates

Dear |FNAME|,

As a valued customer, we’d like to…

For many eCommerce companies, the first personalization project begins with FNAME. We have become really good at personalizing emails because we know that it works. Emails personalized with recipients’ first names increase open rates by 2.6 percent.

Shoppers are more attracted to marketing that targets their interests and purchase patterns. This doesn’t only apply to emails: using personalization in your eCommerce branded store is the best way to build a relationship and keep customers converting.

The more often customers return, the better you become at delivering relevant suggestions and content for them. According to an Adobe study, 40% of online revenue comes from returning customers…who only represent 8% of site traffic. Using personalized recommendations, enterprises can build a stronger, more profitable relationship with their users.

Now is the time to optimize revenue opportunities and become better at selling to the right customers at the right time. Read on to learn how to use personalization to drive up average order value, or AOV.

Importance of Good Data

Personalization doesn’t work if you don’t know anything about your customers. The more relevant and accurate data you gather, the more refined and detailed picture you can draw. Customers are happy to help you get to know them too. 75% of shoppers like it when brands personalize products and offers, while 74% of online customers get frustrated with a website when content that appears has nothing to do with their interests.

When customers sign up on your site or check out for the first time, use this opportunity to collect information. This will help you with informed promotion and planning recommendations in the future.

As your relationship grows, you can continue to learn more about your customers.

  • How often are they buying?
  • What is their AOV?
  • What campaigns have converted for them?

Finally, customers have the most information about themselves. Allowing them to personalize their own experience by sharing their gender or interest information is a simple way to ensure that you aren’t showing them irrelevant information or products.

Customer data can come from anywhere, and it’s necessary when personalizing experiences. In summary, look for the following data points:

  • Location/IP address
  • Channel of entry (social/email/Amazon)
  • New or Returning customer
  • Previous searches
  • Shopping history
  • Shopping patterns (based on parameters such as the AOV)
  • Customer segments (people who are like them)
  • Customer-provided information (gender, interests)

Enabling social logins like Connect with Facebook will also help you get demographic information about your customers, without them having to provide it themselves.
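Pulled together, the data points above form a single profile per visitor; a minimal sketch (field names are my own, not from any particular platform):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CustomerProfile:
    """One record per visitor, built up as the relationship grows.
    Mirrors the data points listed above; fields are optional because
    they arrive at different stages of the relationship."""
    ip_location: Optional[str] = None        # location / IP address
    entry_channel: Optional[str] = None      # social / email / Amazon
    is_returning: bool = False               # new vs. returning customer
    previous_searches: list = field(default_factory=list)
    shopping_history: list = field(default_factory=list)
    average_order_value: Optional[float] = None
    segment: Optional[str] = None            # "people who are like them"
    gender: Optional[str] = None             # customer-provided
    interests: list = field(default_factory=list)

profile = CustomerProfile(ip_location="AT", entry_channel="email", is_returning=True)
print(profile.is_returning)  # True
```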

Now that we’ve got a good picture of our customers, we can start personalizing their experience. There are three main ways to do this: by segment, by history, or by trend analysis.

Personalization by Segmenting Customers

There are several ways you can personalize a customer’s experience even without asking for any information. When customers land on your site, you already know more about them than you might think.

Practical Tips

Use geotargeting to show the correct language and currency.

Right now, I’m in Austria, so Wool and the Gang defaults to Austrian shipping rates and shows me prices in euros. This reduces concerns international customers might have about shipping abroad or currency exchange. Fewer concerns mean an easier checkout experience, which means better conversions.

personalization example wool and the gang

Using cookies to know if a customer is new or returning.

If they are new customers, prompt them with a pop-up modal to sign up and get a discount on their first purchase. Welcome them to your site, explain who you are, and save their email addresses for future selling opportunities.
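Server-side, the new-vs-returning check can be as simple as testing for a first-party cookie set on the first visit; a sketch with a hypothetical cookie name:

```python
VISITOR_COOKIE = "seen_before"  # hypothetical first-party cookie name

def should_show_welcome_popup(cookies: dict) -> bool:
    """Show the signup/discount pop-up only to visitors without our cookie."""
    return VISITOR_COOKIE not in cookies

def mark_as_returning(cookies: dict) -> dict:
    """After the first page view, set the cookie so the pop-up isn't repeated."""
    updated = dict(cookies)
    updated[VISITOR_COOKIE] = "1"
    return updated

first_visit = {}  # no cookies yet -> new visitor
print(should_show_welcome_popup(first_visit))                     # True
print(should_show_welcome_popup(mark_as_returning(first_visit)))  # False
```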

Spearmint LOVE offers 10% off for first-time visitors if they sign up for the newsletter. It’s a little bonus that later helps convert visitors at a higher value.

Personalization example Spearmint

Segment on the basis of individual shoppers vs. wholesalers

“Wholesalers” is another segment of customers who have different needs. Individual shoppers want quick, one-off purchases and may not be as likely to sign in or create accounts on branded sites.

But catering to wholesale clients by allowing them to sign in to receive special discounts and review orders without calling an account management team makes the experience much better for them. Clarion Safety sells industrial-grade safety labels. The company has created a special experience for wholesale customers, allowing them to use different checkout options, such as “charge to account.”

Personalization example Clarion

Identify and segment by channel as a source of entry

Different paths signal different intents.

If they found your products through Pinterest, they are looking to browse and are more visual. If they clicked an email coupon, they could be price conscious and should be shown more sale items. Get inside your customers’ brains and show them what they want to see—this will provide you the highest chance of conversion.

Personalization by Previous Activity

After a relationship has been established between you and your customers—whether that’s just through visiting or years of purchasing history—you have information about them from their previous activity. Use this information to customize their experience, and upsell and cross-sell products that are relevant to them.

Practical Tip

Before purchasing, visitors often go back and forth on an item when they’re not sure. They might visit the same site multiple times in a week. A surefire way to get them to convert is to show them their recently viewed items whenever they visit your website. If you’re able to offer a discount on products they’ve viewed multiple times, it might help you seal the deal.

EpicTV combines this strategy with a minimum purchase amount for free shipping. This means that visitors will often add something from their recently viewed list just to earn that perk.

Personalization example Epic TV
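EpicTV’s pattern, nudging a recently viewed item into the cart to reach the free-shipping threshold, can be sketched like this (the function name, threshold, and catalog are illustrative):

```python
def free_shipping_upsell(cart_total, threshold, recently_viewed):
    """Suggest the cheapest recently viewed item that lifts the cart
    over the free-shipping threshold, if one exists."""
    gap = threshold - cart_total
    if gap <= 0:
        return None  # cart already qualifies for free shipping
    candidates = [item for item in recently_viewed if item["price"] >= gap]
    return min(candidates, key=lambda i: i["price"]) if candidates else None

viewed = [{"sku": "chalk-bag", "price": 19.0}, {"sku": "rope-bag", "price": 45.0}]
print(free_shipping_upsell(cart_total=85.0, threshold=100.0, recently_viewed=viewed))
# {'sku': 'chalk-bag', 'price': 19.0}
```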

When customers are viewing their carts, you can use previous searches or purchases to suggest complementary items. Red’s Baby uses this method to suggest accessories for the main purchase and incrementally increase AOV. I added a stroller to my shopping cart, and the site suggested matching accessories, all under $50. At this point, suggesting other types of strollers wouldn’t be effective.

Personalization example Red's Baby

Think about what it’s like meeting customers in the real world. The more you see them, the more history you have with them. You might know that they have kids or that they like to play squash on weekends.

This context makes personalized recommendations and upsells easier. Try and replicate this online. Shopping at an eCommerce retailer doesn’t need to be impersonal, and it shouldn’t be.

Personalization by Building Patterns

Taking the time to build a better recommendation engine makes sense and helps generate additional revenue. According to Barilliance data based on 1.5 billion online shopping sessions, personalized on-site product recommendations account for 11.5% of eCommerce revenue. That’s a big chunk of revenue to miss out on!

Practical Tip

To optimize across all customer visits, dive into analytics and look for purchasing patterns. Do shoppers tend to return often if they buy a specific item? Do many shoppers buy a combination of items at the same time? Finding and taking advantage of these opportunities can help drive up AOV.

For example, recommending products that other customers bought helps crowdsource the best options. Check out these suggestions by Blue Tomato when viewing an item.

Personalization example Blue Tomato
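A crowdsourced “customers also bought” list can start from simple co-occurrence counts over past orders; a minimal sketch with made-up SKUs:

```python
from collections import Counter

# Hypothetical order history: each order is the set of SKUs bought together.
orders = [
    {"board", "bindings"},
    {"board", "bindings", "wax"},
    {"board", "goggles"},
    {"bindings", "wax"},
]

def also_bought(sku: str, orders, top_n: int = 3):
    """Rank items most often bought in the same order as `sku`."""
    co_counts = Counter()
    for order in orders:
        if sku in order:
            co_counts.update(order - {sku})
    return [item for item, _ in co_counts.most_common(top_n)]

print(also_bought("board", orders)[0])  # bindings
```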

Flash Tattoos speaks its customers’ language and makes its Recommendations section fun. “You’d also look good in” is a flattering way to suggest similar products across different styles.

Personalization example Flash Tattoos

If customers have viewed the shipping policy and not purchased, they might be hesitant about shipping costs. Try offering free shipping at a certain cart value to convert potentially cost-sensitive customers. Finding these patterns that expose reasons for cart abandonment helps create a better experience for your customers. They’ll feel like you are addressing their concerns before they even ask!

Final Tips

Now that you’re ready to start personalizing the shopping experience, we’ve got a few final tips for you:

When you’re suggesting or upselling, use your screen space wisely:

Remember the purpose of each screen, and don’t distract customers from completing their purchase. On the checkout screen, the single Call-to-Action should be to convert and pay for what they’ve selected. Cluttering the screen with additional products can reduce your overall conversion rate.

Personalization isn’t a set-it-and-forget-it tactic:

You need to constantly reevaluate your metrics, hypotheses, and experiments to keep getting better at selling to your customers. Don’t be afraid to try things out and get personal! Your customers will love it and reward you for it with higher AOVs.

Over to You

Have more ideas on how to increase AOV and conversion rates with personalization? Send us your feedback and views in the comments section below.



The post Using Personalization To Increase AOV And Conversion Rates appeared first on VWO Blog.
