
[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

Reading Time: 2 minutes

What would you do with 21% more sales this month?

I bet you’d walk into your next meeting with your boss with an extra spring in your step, right?

Well, when you implement a strategic marketing optimization program, results like this are not only possible, they are probable.

In this new case study, you’ll discover how e-commerce software supplier, Ecwid, ran one experiment for four weeks, and saw a 21% increase in paid upgrades.

Get the full Ecwid case study now!

Download a PDF version of the Ecwid case study, featuring experiment details, supplementary takeaways and insights, and a testimonial from Ecwid’s Sr. Director, Digital Marketing.




A little bit about Ecwid

Ecwid provides easy-to-use online store setup, management, and payment solutions. The company was founded in 2009, with the goal of enabling business-owners to add online stores to their existing websites, quickly and without hassle.

The company has a freemium business model: Users can sign up for free, and unlock more features as they upgrade to paid packages.

Ecwid’s partnership with WiderFunnel

In November 2016, Ecwid partnered with WiderFunnel with two primary goals:

  1. To increase initial signups for their free plan through marketing optimization, and
  2. To increase the rate of paid upgrades, through platform optimization

This case study focuses on a particular experiment cycle that ran on Ecwid’s step-by-step onboarding wizard.

The methodology

Last winter, the WiderFunnel Strategy team did an initial LIFT Analysis of the onboarding wizard and identified several potential barriers to conversion, both in terms of completing the steps to set up a new store, and in terms of upgrading to a paid plan.

The lead WiderFunnel Strategist for Ecwid, Dennis Pavlina, decided to create an A/B cluster test to 1) address the major barriers simultaneously, and 2) get a major lift for Ecwid, quickly.

The overarching goal was to make the onboarding process smoother. The WiderFunnel and Ecwid optimization teams hoped that enhancing the initial user experience, and exposing users to the wide range of Ecwid’s features, would result in more users upgrading to paid plans.


Ecwid’s two objectives ended up coming together in this test. We thought that if more new users interacted with the wizard and were shown the whole ‘Ecwid world’ with all the integrations and potential it has, they would be more open to upgrading. People needed to be able to see its potential before they would want to pay for it.

Dennis Pavlina, Optimization Strategist, WiderFunnel

The Results

This experiment ran for four weeks, at which point the variation was determined to be the winner with 98% confidence. The variation resulted in a 21.3% increase in successful paid account upgrades for Ecwid.
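Curious what a confidence number like that means mechanically? Here is a minimal sketch of a one-sided two-proportion z-test in plain Python. The visitor and upgrade counts are hypothetical stand-ins (the case study does not publish raw figures); only the shape of the calculation is the point.

```python
from statistics import NormalDist

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: confidence that B's rate exceeds A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return NormalDist().cdf(z)

# Hypothetical counts: 200/5,000 control upgrades vs. 243/5,000 variation
# upgrades (a ~21.5% relative lift) work out to roughly 98% confidence.
print(f"{confidence_b_beats_a(200, 5_000, 243, 5_000):.1%}")  # ~98.2%
```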

Read the full case study for:

  • The details on the initial barriers to conversion
  • How this test was structured
  • Which secondary metrics we tracked, and
  • The supplementary takeaways and customer insights that came from this test


How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar?

  • During your research sessions, your participants don’t understand what they have been asked to do.
  • The phrasing of your questions has given away the answer, or has biased your results.
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided.
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results.

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project.
Just like proper experiment design is a necessity, investing time to critique, test, and iteratively improve your research design, before the research execution phase, can ensure that your user research runs smoothly, and dramatically improves the outputs from your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings
User Research Process at WiderFunnel

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start with asking the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of your research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, it is the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret; thus limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from the initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye-tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined the research tasks and flow, reviewed the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order to not bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that questions complemented each other, and keep the focus on achieving the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. To overcome this, we got creative and used some free tools to test the research design.

We chose to use a Keynote presentation (with timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the pilot sessions. Using these tools, we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately exposed flaws in the research design, including confusion about the test instructions and issues with the timing of each task.

In this case, our initial instructions did not give our participants enough context about what they were looking at, resulting in confusion about what they were actually supposed to do. Additionally, we had assumed that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content-rich, and 5 seconds did not give participants enough time to view everything on the page.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new participants. The adjustments seemed to resolve all of the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a distinct function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.


The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!


Beyond A vs. B: How to get better results with better experiment design

Reading Time: 7 minutes

You’ve been pushing to do more testing at your organization.

You’ve heard that your competitors at ______ are A/B testing, and that their customer experience is (dare I say it?) better than yours.

You believe in marketing backed by science and data, and you have worked to get the executive team at your company on board with a tested strategy.

You’re excited to begin! To learn more about your customers and grow your business.

You run one A/B test. And then another. And then another. But you aren’t seeing that conversion rate lift you promised. You start to hear murmurs and doubts. You start to panic a little.

You could start testing as fast as you can, trying to get that first win. (But you shouldn’t.)

Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,


If your results are disappointing, it may not only be what you are testing – it is definitely how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).

This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.

For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.

Discover the best experiment structure for you!

Compare the pros and cons of different Design of Experiments tactics with this simple download. The method you choose is up to you!





Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.

The basics: Defining A/B, MVT, and factorial

Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are multiple different ways to structure your experiments. A/B testing is just one of them.

Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.

A/B test

In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a multitude of changes (i.e. a ‘cluster’ of changes), or a single isolated change.

When you change multiple elements in a single variation, you might see lift, but what about insights?

A/B/n test

In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.

Multivariate test (MVT)

With multivariate testing, you test each individual change in isolation against every other change, mixing and matching every possible combination available.

Imagine you want to test a homepage re-design with four changes in a single variation:

  • Change A: New hero banner
  • Change B: New call-to-action (CTA) copy
  • Change C: New CTA color
  • Change D: New value proposition statement

Hypothetically, let’s assume that each change has the following impact on your conversion rate:

  • Change A = +10%
  • Change B = +5%
  • Change C = -25%
  • Change D = +5%

If you were to run a classic A/B test―your current control page (A) versus a combination of all four changes at once (B)―you would see a hypothetical decrease of -5% overall (10% + 5% – 25% + 5%). You would assume that your re-design did not work and most likely discard the ideas.

With a multivariate test, however, each of the following would be a variation:


Multivariate testing is great because it shows you the positive or negative impact of every single change, and of every combination of changes, revealing the ideal combination (in this theoretical example: A + B + D).

However, this strategy is nearly impossible in the real world. Even if you have a ton of traffic, a test with 15 variations would still take more time to reach any kind of statistical significance than most marketers have.

The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
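To make that explosion concrete, here is a minimal Python sketch that enumerates every on/off combination of the four hypothetical changes above, naively assuming the lifts combine additively (real changes can interact, which is exactly what MVT measures):

```python
from itertools import product

# Hypothetical isolated lifts from the example above.
changes = {"A": 0.10, "B": 0.05, "C": -0.25, "D": 0.05}

# Every on/off combination of four changes: 2**4 = 16 cells,
# i.e. the control plus 15 test variations.
cells = list(product([0, 1], repeat=len(changes)))
print(len(cells))  # 16

def combined_lift(mask):
    """Naive additive estimate of a cell's total lift."""
    return sum(lift for on, lift in zip(mask, changes.values()) if on)

best = max(cells, key=combined_lift)
winners = [name for on, name in zip(best, changes) if on]
print(winners, f"{combined_lift(best):+.0%}")  # ['A', 'B', 'D'] +20%
```

Sixteen cells splitting your traffic is why the math gets ugly: each added change doubles the number of cells, and each cell still needs enough visitors to reach significance on its own.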

Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.

Factorial design: The middle ground

Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.

The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.

With basic factorial experiment design, you could set up the following variations in our hypothetical example:

VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%

In this basic example, variation A features a single change; VarB is built on VarA, and VarC is built on VarB.

NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.

Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.

When you estimate the value of change A, you are using your control as a baseline.

Variation A versus the control.

Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.

The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%

But, when estimating the value of change B, variation A must become your new baseline.

Variation B versus variation A.

The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%

As you can see, the “value” of change B is slightly different from the 5% difference shown above.

When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
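Here is that back-calculation as a small Python sketch, using the hypothetical rates above (control 10%, variation A 11%, variation B 11.5%). The key detail is that each change is read against its own baseline, not against the control:

```python
# Hypothetical observed conversion rates from the example above.
rates = {"control": 0.100, "var_a": 0.110, "var_b": 0.115}

def isolated_lift(variation_rate, baseline_rate):
    """Relative lift of a variation over its immediate baseline."""
    return variation_rate / baseline_rate - 1

# Change A is read against the control...
print(f"{isolated_lift(rates['var_a'], rates['control']):.1%}")  # 10.0%
# ...but change B is read against variation A, its true baseline.
print(f"{isolated_lift(rates['var_b'], rates['var_a']):.1%}")    # 4.5%
```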


We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.

– Michael St Laurent, Optimization Strategist, WiderFunnel

Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.

Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.

But, wait…

It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?

The Strategist’s Exploration

The answer lies in the Explore (or research gathering) phase of your testing process.

At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.

Experience is the other side to this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which changes should be isolated due to risk or potential insights to be gained.

At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.

Factorial design in action: A case study

Once upon a time, we were testing with Annie Selke, a retailer of luxury home-ware goods. This story follows two experiments we ran on Annie Selke’s product category page.

(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)

Experiment 4.7

In the first experiment, we tested three variations against the control. As the experiment number suggests, this was not the first test we ran with Annie Selke, in general. But it is the ‘first’ test in this story.

Experiment 4.7 control product category page.

Variation A featured an isolated change to the “Sort By” filters below the image, making it a drop down menu.

Replaced original ‘Sort By’ categories with a more traditional drop-down menu.

Evidence?

This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.

Variation B was also built on the control, and featured another isolated change to reduce the left navigation.

Reduced left-hand navigation.

Evidence?

Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to the removal of distractions. Plus, the persuasion principle known as the Paradox of Choice theorizes that more choice = more anxiety for visitors.

Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.

Collapsed left-hand filter (built on VarA).

Evidence?

This variation was informed by the same evidence as variation B.

Results

Variation A (built on the control) saw a 23.2% decrease in transactions.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a 1.9% decrease in transactions.

But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed filter) was 19.1%.

The next step was to validate our estimated lift of 19.1% in a follow up experiment.

Experiment 4.8

The follow-up test also featured three variations versus the original control. Because you should never waste an opportunity to gather more insights!

Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.

Collapsed filter & original ‘Sort By’ functionality.

Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.

Updated “palette” to “color”, and added color imagery. (A variation featuring two clustered changes.)

Evidence?

Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.

Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.

Made the collapsed filter persistent.

Evidence?

Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.

Results

Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!

Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.

Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for an 11.2% decrease in transactions.

This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.

The best testing framework for you

What are your testing goals?

If you are in a situation where potential revenue gains outweigh the potential insights to be gained, or your test has little long-term value, you may want to go with a standard A/B cluster test.

If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.

If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.

A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want, which will make every subsequent hypothesis smarter, and every test more impactful.

One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.

– Michael St Laurent


Build the most effective personalization strategy: A 4-step roadmap

Reading Time: 11 minutes

Whaddya mean, ‘personalization strategy’?

It’s Groundhog Day again.

Do you remember the Groundhog Day movie? You know… the one where Bill Murray’s character repeats the same day over and over again, every day. He had to break the pattern by convincing someone to fall in love with him, or something like that.

What an odd storyline.

Yet today, it’s reminding me of a pattern in marketing. Marketing topics seem to be pulled by an unstoppable force through fad cycles of hype, over-promise, disappointment, and decline – usually driven by some new technology.

I’ve watched so many fad buzzwords come and go, it’s dizzying. Remember Customer Relationship Marketing? Integrated Marketing? Mobile First? Omnichannel?

A few short years ago, everyone was talking about social media as the only topic that mattered. Multivariate testing was sexy for about five minutes.

Invariably, similar patterns of mistakes appear within each cycle.

Tool vendors proliferate on trade show floors, riding the wave and selling a tool that checks the box of the current fad. Marketers invest time, energy, and budget hoping for a magic bullet without a strategy.

But, without a strategy, even the best tools can fail to deliver the promised results.

(Side note: That’s why I’ve been advocating for years for marketers to start their conversion optimization programs with a strategy in addition to the best tools.)

Now, everyone is swooning for Personalization. And, so they should! It can deliver powerful results.

PDF Bonus: Personalization Roadmap

This post is over 3,000 words of personalization goodness. Get the full PDF now so that you can reference it later, and share it with your co-workers.






From simple message segmentation to programmatic ad buying and individual-level website customization, the combination of big data and technology is transforming the possibilities of personalization.

But the rise in popularity of personalization tools has meant the rise of marketers doing personalization the wrong way. I’ve lost track of the number of times we’ve seen:

  • Ad hoc implementation of off-the-shelf features without understanding what need they are solving.
  • Poor personalization insights with little data analysis and framework thinking driving the implementation.
  • Lack of rigorous process to hypothesize, test, and validate personalization ideas.
  • Lack of resources to sustain the many additional marketing messages that must be created to support multiple, personalized target segments.

That’s why, in collaboration with our partners at Optimizely, we have created a roadmap for creating the most effective personalization strategy:


  • Step 1: Defining personalization
  • Step 2: Is a personalization strategy right for you?
  • Step 3: Personalization ideation
  • Step 4: Personalization prioritization

Step 1: Defining personalization

Personalization and segmentation are often used interchangeably, and are arguably similar. Both use information gathered about the marketing prospect to customize their experience.

While segmentation attempts to bucket prospects into similar aggregate groups, personalization represents the ultimate goal of customizing the person’s experience to their individual needs and desires based on in-depth information and insights about them.

You can think of them as points along a spectrum of customized messaging.

The marketing customization spectrum.

You’ve got the old mass marketing approach on one end, and the hyper-personalized, 1:1, marketer-to-customer nirvana on the other end. Segmentation lies somewhere in the middle. We’ve been doing it for decades, but now we have the technology to go deeper, to be more granular.

Every marketer wants to provide the perfect message for each customer — that’s the ultimate goal of personalization.

The problem personalization solves

Personalization solves the problem of Relevance (one of 6 conversion factors in the LIFT Model®). If you can increase the Relevance of your value proposition to your visitor, by speaking their language, matching their expectations, and addressing their unique fears, needs and desires, you will see an increase in conversions.

Let me show you an example.

Secret Escapes is a flash-sale luxury travel company. The company had high click-through rates on their search ads and directed all of this traffic to a single landing page.

Secret Escapes “spa” PPC ad in Google.

The ad copy read:

“Spa Vacations
Save up to 70% on Spa Breaks. Register for free with your email.”

But, the landing page didn’t reflect the ad copy. When visitors landed on the page, they saw this:

Original landing page for Secret Escapes.

Not super relevant to visitors’ search intent, right? There’s no mention of the keyword “spa” or imagery of a spa experience. Fun fact: When we are searching for something, our brains rely less on detailed understanding of the content, and more on pattern matching, or a scent trail.

(Note: some of the foundational research for this originated with Peter Pirolli at PARC as early as the ’90s. See this article, for example.)

In an attempt to convert more paid traffic, Secret Escapes tested two variations, meant to match visitor intent with expectations.

Variation 1 used spa imagery and brought the keyword “spa” into the sub-head.
Variation 2 used the same imagery, but mirrored the ad copy with the headline copy.

By simply maintaining the scent trail, and including language around “spa breaks” in the signup form, Secret Escapes was able to increase sign-ups by 32%. They were able to make the landing page experience sticky for this target audience segment, by improving Relevance.

Step 2: Is a personalization strategy right for you?

Pause. Before you dig any deeper into personalization, you should determine whether or not it is the right strategy for your company, right now.

Here are 3 questions that will help you determine your personalization maturity and eligibility.

Do I have enough data about my customers?


Personalization is not a business practice for companies with no idea of how they want to segment, but for businesses that are ready to capitalize on their segments.

Hudson Arnold, Strategy Consultant, Optimizely

For companies getting started with personalization, we recommend that you at least have fundamental audience segments in place. These might be larger cohorts at first, focused on visitor location, visitor device use, single visitor behaviors, or visitors coming from an ad campaign.

Where is your user located? Did they arrive on your page via Facebook ad? Are they browsing on a tablet?

If you haven’t categorized your most important visitor segments, you should focus your energies on segmentation first, before moving into personalization.

Do I have the resources to do personalization?

  • Do you have a team in place that can manage a personalization strategy?
  • Do you have a personalization tool that supports your strategy?
  • Do you have an A/B testing team that can validate your personalization approach?
  • Do you have resources to maintain updates to the segments that will multiply as you increase your message granularity?

Personalization requires dedicated resources and effort to sustain all of your segments and personalized variations. To create a truly effective personalization strategy, you will need to proceduralize personalization as its own workstream and implement an ongoing process.

Which leads us to question three…

Do I have a process for validating my personalization ideas?

Personalization is a hypothesis until it is tested. Your assumptions about your best audience segments, and the best messaging for those segments, are assumptions until they have been validated.


Personalization requires the same inputs and workflow as testing: sound technical implementation, research-driven ideation, a clear methodology for translating concepts into test hypotheses, and tight technical execution. In this sense, personalization is really just an extension of A/B testing and normal optimization activities. What’s more, successful personalization campaigns are the result of testing and iteration.

– Hudson Arnold

Great personalization strategy is about having a rigorous process that allows for 1) gathering insights about your customers, and then 2) validating those insights. You need a structured process to understand which insights are valid for your target audience and create growth for your business.

WiderFunnel’s Infinity Optimization Process™ represents these two mindsets. It is a proven process that has been refined over many years and thousands of tests. As you build your personalization strategy, you can adopt parts or all of this process.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

There are two critical phases to an effective personalization strategy: Explore and Validate. Explore uses an expansive mindset to consider all of your data, and all of your potential personalization ideas. Validate is a structured process of A/B testing that uses a reductive mindset to refine and select only those ideas that produce value.

Without a process in place to prove your personalization hypotheses, you will end up wasting time and resources sending the wrong messages to the wrong audience segments.

Personalization without validation is simply guesswork.

Step 3: Personalization ideation

If you have answered “Yes” to those three questions, you are ready to do personalization: You are confident in your audience segments, you have dedicated resources, perhaps you’re already doing basic personalization. Now, it’s time to build your personalization strategy by gathering insights from your data.


One of the questions we hear most often when it comes to personalization is, “How do I get ideas for customized messaging that will work?” This is the biggest area of ongoing work and your biggest opportunity for business improvement from personalization.

The quality of your insights about your customers directly impacts the quality of your personalization results.

Here are the 3 types of personalization insights to explore:

  • Deductive research
  • Inductive research
  • Customer self-selected

You can mix and match these types within your program. We have plenty of examples of how. Let’s look at a few now.

1) Deductive research and personalization insights

Are there general theories that apply to your particular business situation?

Psychological principles? UX principles? General patterns in your data? ‘Best’ practices?

Deductive personalization starts with your assumptions about how your customers will respond to certain messaging based on existing theories…but it doesn’t end there. With deductive research, you should always feed your ideas into experiments that either validate or disprove your personalization approach.

Let’s look at an example:

Heifer International is a charity organization that we have been working with to increase their donations and their average donation value per visitor.

In one experiment, we decided to test a psychological principle called the “rule of consistency”. This principle states that people want to be consistent in all areas of life; once someone takes an action, no matter how small, they strive to make future behavior match that past behavior.

We asked visitors to the Heifer website to identify themselves as a donor type when they land on the site, to trigger this need to remain consistent.

What kind of donor are you?

Notice there’s no option to select “I’m not a donor.” We were testing what would happen when people self-identified as donors.

The results were fascinating. This segmenting pop-up increased donations by nearly 2%, increased the average donation value per visitor by 3%, and increased the revenue per visitor by more than 5%.

There’s more. Looking at the data, we saw that just 14% of visitors selected one of the donor identifications. But that 14% actually accounted for 68% of Heifer’s donors: the visitors who responded represent a huge percentage of Heifer’s most valuable audience.

Visitors who self-identify as ‘Donors’ are a valuable segment.

Now, Heifer can change the experience for visitors who identify as a type of donor and use that as one piece of data to personalize their experience. Currently, we’re testing which messages will maximize donations even further within each segment.

2) Inductive research and personalization insights

Are there segments within your data and test results that you can analyze to gather personalization insights?

If you are already optimizing your site, you may have seen segments naturally emerge through A/B testing. A focused intention to find these insights is called inductive research.

Inductive personalization is driven by insights from your existing A/B test data. As you test, you discover insights that point you toward generalizable personalization hypotheses.

Here’s an example from one of WiderFunnel’s e-commerce clients that manufactures and sells weather technology products. This company’s original product page was very cluttered, and we decided to test it against a variation that emphasized visual clarity.

We tested the original page (left) against a variation emphasizing clarity (right).

Surprisingly, the clear variation lost to the original, decreasing order completions by 6.8%. WiderFunnel Strategists were initially perplexed by the result, but they didn’t rest until they had uncovered a potential insight in the data.

They found that visitors to the original page saw more pages per session, while visitors to the variation spent a 7.4% higher average time on page. This could imply that shoppers on the original page were browsing more, while shoppers on our variation spent more time on fewer pages.

Research published by the NN Group describes teen-targeted websites, suggesting that younger users enjoy searching and are impatient, while older users enjoy searching but are also much more patient when browsing.

With this research in mind, the Strategists dug in further and found that the clear variation actually won with older users on this client’s site, increasing transactions by 24%. But it lost among younger users, decreasing transactions by 38%.

So, what’s the takeaway? For this client, there are potentially new ways of customizing the shopping experience for different age segments, such as:

  1. Reducing distractions and adding clarity for older visitors
  2. Providing multiple products in multiple tabs for younger visitors

This client can use these insights to inform their age-group segmentation efforts across their site.

(Also, this is a great example of why one of WiderFunnel’s five core values says “Grit – We don’t quit until we find an answer.”)

3) Customer self-selected personalization

Ask your prospects to tell you about themselves. Then, test the best marketing approach for each segment.

Customer self-selected personalization is potentially the easiest strategy to conceptualize and implement. You are asking your users to self-identify, and segment themselves. This triggers specific messaging based on how they self-identified. And then you can test the best approach for each of those segments.

Here’s an example to help you visualize what I mean.

One of our clients is a Fortune 500 healthcare company — they use self-selected personalization to drive more relevant content and offers, in order to grow their community of subscribers.

This client had created segments focused on a particular health situation, which people could click on:

  • “Click on this button to get more information,”
  • “I have early stage disease,”
  • “I have late stage disease,”
  • “I manage the disease while I’m working,”
  • “I’m a physician treating the disease,” and,
  • “I work at a hospital treating the disease.”

These segments came from personas that this client had developed about their subscriber base.

The choices in the header triggered the messaging in the sidebar.

Once a user self-identified, the offers and messaging that were featured on the page were adjusted accordingly. But, we wouldn’t want to assume the personalized messages were the best for each segment. You should test that!

In self-selected personalization, there are two major areas you should test. You want to find out:

  1. What are the best segments?
  2. What is the best messaging for each segment?

For this healthcare company, we didn’t simply assume that those 5 segments were the best segments, or that the messages and offers triggered were the best messages and offers. Instead, we tested both.

A series of A/B tests within their segmentation and personalization efforts resulted in a doubling of this company’s conversion rate.

Developing an audience strategy

Developing a personalization strategy requires an audience-centric approach. The companies that are succeeding at personalization are not picking segments ad hoc from Google Analytics or any given study, but are looking to their business fundamentals.

Once you believe you have identified the most important segments for your business, then you can begin to layer on more tactical segments. These might be qualified ‘personas’ that inform your content strategy, UX design, or analytical segments.

Step 4: Personalization prioritization

If this whole thing is starting to feel a little complex, don’t worry. It is complex, but that’s why we prioritize. Even with a high-functioning team and an advanced tool, it is impossible to personalize for all of your audience segments simultaneously. So, where do you start?

Optimizely uses a simple axis to conceptualize how to prioritize personalization hypotheses. You can use it to determine the quantity and the quality of the audiences you would like to target.

Optimizely’s personalization prioritization matrix.

The x-axis refers to the size of your audience segment, while the y-axis runs from an obvious need for personalization at one end to a need for more creative personalization at the other.

For instance, the blue bubble in the upper left quadrant of the chart represents a company’s past purchasers. Many clients want to start personalizing here, saying, “We want to talk to people who have spent $500 on leather jackets in the last three months. We know exactly what we wanna show to them.”

But, while you might have a solid merchandising strategy or offer for that specific group, it represents a really, really, really small audience.

That is not to say you shouldn’t target this group, because there is an obvious need. But that need must be weighed against how large the group is. Because you should be treating personalization like an experiment, you need to be sensitive to statistical significance.

The net impact of any personalization effort will only be as significant as the size of the segment, right? If you improve the conversion rate by 1000% for 10 people, that is going to have a relatively small impact on your business.
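A back-of-the-envelope sketch makes the trade-off plain. The segment sizes, baseline conversion rate, and lifts below are invented for illustration:

```python
def extra_conversions(segment_size, baseline_cr, relative_lift):
    """Expected additional conversions from personalizing one segment."""
    return segment_size * baseline_cr * relative_lift

# A 1,000% lift for a 10-person segment...
print(f"{extra_conversions(10, 0.05, 10.0):.0f}")       # 5 extra conversions
# ...versus a modest 5% lift for a 200,000-visitor segment.
print(f"{extra_conversions(200_000, 0.05, 0.05):.0f}")  # 500 extra conversions
```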


Now, move right on the x-axis; here, you are working with larger segments. Even if the personalized messaging is less obvious (and might require more experimentation), your efforts may be more impactful.

Food for thought: Most companies we speak to don’t have a coherent geographical personalization strategy, but it’s a large way of grouping people and, therefore, may be worth exploring!

You may be more familiar with WiderFunnel’s PIE framework, which we use to prioritize our ideas.

How does Optimizely’s axis relate? It is a simplified way to think about personalization ideas, to help you ideate quickly. Its two inputs, “Obvious Need” and “Audience Size”, are essentially two of the inputs we would use to calculate a thorough PIE ranking of ideas.

The “Obvious Need” axis would influence the “Potential” ranking, and “Audience Size” would influence “Importance”. It may be helpful to consider the third PIE factor, “Ease”, if some segmentation data is more difficult to track or otherwise acquire, or if the maintenance cost of ongoing messaging is high.
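To see how those inputs could roll up into a PIE-style ranking, here is a minimal sketch: each idea gets a 1-10 score on Potential, Importance, and Ease, and the three are averaged into a priority. The ideas and scores are invented for illustration:

```python
# Hypothetical personalization ideas scored 1-10 on each PIE factor.
ideas = {
    "Past purchasers: loyalty offer":  {"potential": 9, "importance": 2, "ease": 8},
    "Geo-targeted homepage messaging": {"potential": 6, "importance": 8, "ease": 7},
    "Returning-visitor cart reminder": {"potential": 7, "importance": 6, "ease": 7},
}

def pie_score(scores):
    """PIE priority: the average of Potential, Importance, and Ease."""
    return (scores["potential"] + scores["importance"] + scores["ease"]) / 3

for name, scores in sorted(ideas.items(), key=lambda kv: pie_score(kv[1]), reverse=True):
    print(f"{pie_score(scores):.1f}  {name}")
```

Note how the small-but-obvious “past purchasers” idea sinks to the bottom once its low Importance (audience size) is factored in, echoing the matrix above.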

To create the most effective personalization strategy for your business, you must remember what you already know. For some reason, when companies start personalization, the lessons they have learned about testing all of their assumptions are sometimes forgotten.

You probably have some great personalization ideas, but it is going to take iteration and experimentation to get them right.

A final note on personalization: Always think of it in the context of the bigger picture of marketing optimization.

Insights gained from A/B testing inform future audience segments and personalized messaging, while insights derived from personalization experiments inform future A/B testing hypotheses. And on and on.

Don’t assume that insights gained during personalization testing are only valid for those segments. These wins may be overall wins.

The best practice when it comes to personalization is to take the insights you validate within your tests and use them to inform your hypotheses in your general optimization strategy.

** Note: This post was originally published on May 3, 2016 as “How to succeed at segmentation and personalization” but has been wholly updated to reflect new personalization frameworks, case studies, and insights from Optimizely. **

Still have questions about personalization? Ask ’em in the comments, or contact us to find out how WiderFunnel can help you create a personalization strategy that will work for your company.


Disrupting the norm: 4 ways to tap into your team’s creativity

Reading Time: 6 minutes

It’s easy to get stuck in a work routine.

To go to the office every Monday to Friday, use a particular set of skills, sit at the same desk, talk to the same team members, eat at the same lunch spot…

While routine can be a stabilizing force, it can also lead to stagnation and a lack of inspiration (a worrisome situation for any marketer).

Companies take great care to put structures in place to improve productivity and efficiency, but too often de-prioritize creativity. And yet, creativity is essential to driving innovation and competition—two vital components of business growth.

At WiderFunnel, we believe in the Zen Marketing mindset. This mindset recognizes that there is an intuitive, inspired, exploratory side to marketing that imagines potential insights, as well as a quantitative, logical, data-driven side that proves whether the insights really work.

In order to come up with the very best ideas to test, you must have room to get creative.

So, how can you make creativity a priority at your company?

Last month, the WiderFunnel team set out to answer that question for ourselves. We went on a retreat to one of British Columbia’s most beautiful islands, with the goal of learning how to better tap into and harness our creativity, as individuals and as a team.

It’s hard to not be creative with a view like this.

We spent three days trying to unleash our creative sides, and the tactics we brought back to the office have had exciting effects! In this post, I’m going to share four strategies that we have put into practice at WiderFunnel to help our team get creative, and that you can replicate in your company today.

As Jack London said,

You can’t wait for inspiration. You have to go after it with a club.

An introduction to creativity

There are many ways to think about creativity, but for our purposes, let’s consider the two types of creativity: technical creativity and artistic creativity. The former refers to the creation of new theories, new technologies, and new ideas. The latter revolves around skills, technique, and self-expression.

As a company, we were focused on tapping into technical creativity on our retreat. One of the main elements of technical creativity is lateral thinking.

Your brain recognizes patterns: faces, language, handwriting. This is beneficial in that you can recognize an object or a situation very quickly (you see a can of Coke and you know exactly what it is without having to analyze it).

But, we can get stuck in our patterns. We think within patterns. We problem-solve within patterns. Often, the solutions we come up with are based on solutions we’ve already come up with to similar problems. And we do this without really knowing that our solutions belong to other patterns.

Lateral thinking techniques can help you bust out of this…well…pattern.

While structured, disciplined thinking is vital to making your products and services better, lateral thinking can help you come up with completely new concepts and unexpected solutions.

The following 4 tactics will help you think laterally at work, to find truly original solutions to problems.

1. Put on a different (thinking) hat

One of our first activities on the island was to break into groups and tackle an internal company challenge with the six thinking hats. Developed by Edward de Bono, the “six thinking hats” is a tool for group discussion and individual thinking.

The idea behind the six hats is that our brains think in distinct ways that we can deliberately challenge. Each hat represents a direction in which the brain can be challenged. When you ‘put on a different hat’, your brain will identify and bring into conscious thought certain aspects of the problem you’re trying to solve, according to your hat.

The Six Thinking Hats.

None of these hats represents a completely natural way of thinking on its own; rather, each reflects how some of us already express the results of our thinking.

In our exercise, we began a discussion each wearing one of the six hats. As the conversation progressed, we were forced to switch hats and continue our discussion from entirely different perspectives. It was uncomfortable and challenging, but the different hats forced each of us to explore the problem in a way that was totally alien.

Before we could have our discussion, we had to make our own thinking hats.
Our thinking cards.

The outcome was exciting: people who are normally quiet were forced to manage a discussion, people who are normally incredulous were forced to be optimistic, people who are normally dreamers were forced to ask for facts…it opened up totally new doors within the discussion.

In WiderFunnel’s main meeting room, there are six cards that represent each of the six hats. Whenever I find myself stuck, dealing with a challenge I can’t seem to solve, I wander into that meeting room and try to tackle the problem ‘wearing each hat’. Disrupting my normal thinking patterns often leads to ‘A-ha!’ moments.

To encourage lateral thinking, you could: create something physical and tangible (cards, hats, etc.) that your team can utilize when they are stuck to challenge the ‘normal’ ways in which they think.

2. Solve puzzles (literally)

A man jumps out of a window of a 30-story building. He falls all the way to the ground and lands on solid concrete with nothing to cushion his fall, yet he is completely uninjured. How is this possible?

There are 20 birds on a fence. A woman shoots one of the birds. How many birds are left?

There is an egg carton holding a dozen eggs on a table. Twelve people take one egg each, but there is still one egg left in the carton. How?

During our retreat, we spent some time solving word problems just like these, in order to disrupt our day-to-day thinking patterns.

A recently completed WiderFunnel puzzle!

Riddles like these challenge our brains because they are difficult to think through using straightforward logic. Instead, you have to think outside of the content within the puzzle and use your knowledge of language and experience to solve it.

Puzzles require you to use reasoning that is not immediately obvious, and involve ideas that you may not arrive at using traditional step-by-step logic.

When you are faced with a puzzle like one of the riddles above, your mind is forced to think critically about something you might otherwise dismiss or fail to understand completely.

The thinking involved in solving puzzles can be characterized as a blend of imaginative association and memory. It is this blend…that leads us to literally see the pattern or twist that a puzzle conceals. It is a kind of “clairvoyance” that typically provokes an Aha! effect.

– Marcel Danesi, Ph.D., in “Puzzles and the Brain”

To encourage creative, critical thinking, you could: incorporate puzzles into your day-to-day. Email your team a word problem every morning, or set up a physical puzzle somewhere in your office, so that people can take puzzle breaks!

3. Unpack your assumptions

Often, when we are faced with a question or problem, we have already classified that question or problem by its perceived limitations or rules. For example, you have assumptions about your users (most likely backed by data!) about what they want and need, what their pain points are, etc.

But, these assumptions, even if they are correct, can sometimes blind you to other possibilities. Unpacking your assumptions involves examining all of your assumptions, and then flipping them upside down. This can be tough because our assumptions are often deeply ingrained.

On the island, WiderFunnel-ers listed out all of our assumptions about what our clients want. At the top of that list was an assumption about what every marketer wants: to increase ROI. When we flipped that assumption, however, we were left with a hypothetical situation in which our clients don’t care at all about ROI.

Various WiderFunnel-ers unpacking their assumptions.

All of a sudden, we were asking questions about what we might be able to offer our clients that has nothing to do with increasing ROI. While this hypothetical is an extreme, it forced us to examine all of the other areas where we might be able to help our clients.

To encourage creative problem-solving, you could: advise your team to list out all of their assumptions about a problem, flip ‘em, and then look for the middle ground.

4. Think of the dumbest idea you possibly can

The worst enemy to creativity is self-doubt.

– Sylvia Plath

To wrap up day 1 of our retreat, we did an activity called Dumbest Idea First. We walked around in a circle in the sunshine, shouting out the dumbest ideas we could think of about how to encourage more creativity at WiderFunnel.

The circle was quiet at first. Because being dumb, sounding dumb, looking dumb is scary. But, after a few people yelled out some really, really dumb ideas, everyone got into it. We were all moving, and making ridiculous suggestions, and in the midst of it all, one person would shout out a gem of an idea.

For instance, someone suggested a ‘brainstorm bubble’: a safe space within the office where you can go when you’re stuck, and your co-workers can see you and join you in the bubble to help you brainstorm.

(We have since started doing this at the office and it has been awesome!)

I don’t know about you, but I sometimes limit myself during a brainstorm—I find myself trying to be creative while still being pragmatic.

But, when you give yourself permission to be dumb, all of a sudden the possibilities are endless. And I guarantee you will surprise yourself with the great ideas you stumble upon.

Encourage creativity by allowing yourself and your team time and space to be unapologetically dumb.

What are some of the strategies you use to keep things creative at your company? Have you tried or improved upon any of the aforementioned strategies? Let us know in the comments!
