All posts by Natasha Wahid

How to do server-side testing for single page app optimization

Reading Time: 5 minutes

Gettin’ technical.

We talk a lot about marketing strategy on this blog. But today, we are getting technical.

In this post, I team up with WiderFunnel front-end developer, Thomas Davis, to cover the basics of server-side testing from a web development perspective.

The alternative to server-side testing is client-side testing, which has arguably been the dominant testing method for many marketing teams, due to ease and speed.

But modern web applications are becoming more dynamic and technically complex. And testing within these applications is becoming more complex, too.

Server-side testing is a solution to this increased complexity. It also allows you to test much deeper. Rather than being limited to testing images or buttons on your website, you can test algorithms, architectures, and re-brands.

Simply put: If you want to test on an application, you should consider server-side testing.

Let’s dig in!

Note: Server-side testing is a tactic that is linked to single page applications (SPAs). Throughout this post, I will refer to web pages and web content within the context of a SPA. Applications such as Facebook, Airbnb, Slack, BBC, Codecademy, eBay, and Instagram are SPAs.


Defining server-side and client-side rendering

In web development terms, “server-side” refers to “occurring on the server side of a client-server system.”

The client refers to the browser, and client-side rendering occurs when:

  1. A user requests a web page,
  2. The server finds the page and sends it to the user’s browser,
  3. The page is rendered on the user’s browser, and any scripts run during or after the page is displayed.
Static app server
A basic representation of server-client communication.

The server is where the web page and other content live. With server-side rendering, the requested web page is sent to the user’s browser in final form:

  1. A user requests a web page,
  2. The server interprets the script in the page, and creates or changes the page content to suit the situation,
  3. The page is sent to the user in final form; once delivered, the server cannot change it without a new request.

To talk about server-side rendering, we also have to talk a little bit about JavaScript. JavaScript is a scripting language that adds functionality to web pages, such as a drop-down menu or an image carousel.

Traditionally, JavaScript has been executed on the client side, within the user’s browser. But with the emergence of Node.js*, JavaScript can also run on the server side, and in practice, almost all server-side JavaScript runs through Node.js.

*Node.js is an open-source, cross-platform JavaScript runtime environment, used to execute JavaScript code server-side. It uses the Chrome V8 JavaScript engine.

In layman’s (ish) terms:

When you visit a SPA web application, the content you are seeing is either being rendered in your browser (client-side), or on the server (server-side).

If the content is rendered client-side, JavaScript builds the application HTML content within the browser, and requests any missing data from the server to fill in the blanks.

Basically, the page is incomplete upon arrival, and is completed within the browser.

If the content is being rendered server-side, your browser receives the application HTML, pre-built by the server. It doesn’t have to fill in any blanks.
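To make this concrete, here is a minimal sketch of server-side rendering with Node.js and Express (the route and product data are hypothetical):

    // A minimal server-side rendering sketch using Express. The HTML is
    // assembled on the server, so the browser receives the page in final
    // form, with no blanks to fill in.
    var express = require('express');
    var app = express();

    app.get('/product/:id', function (req, res) {
      // In a real application, this data would come from a database or API.
      var product = { id: req.params.id, name: 'Example product' };

      res.send(
        '<!DOCTYPE html><html><body>' +
        '<h1>' + product.name + '</h1>' +
        '</body></html>'
      );
    });

    app.listen(3000);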

Why do SPAs use server-side rendering?

There are benefits to both client-side rendering and server-side rendering, but render performance and page load time are two huge pros for the server side.

(A 1 second delay in page load time can result in a 7% reduction in conversions, according to Kissmetrics.)

Server-side rendering also enables search engine crawlers to find web content, improving SEO; and because social crawlers (like the crawlers used by Facebook) do not evaluate JavaScript, server-side rendering is beneficial for social sharing, too.

With client-side rendering, the user’s browser must download all of the application JavaScript, and wait for a response from the server with all of the application data. Then, it has to build the application, and finally, show the complete HTML content to the user.

All of which is to say: with a complex application, client-side rendering can lead to sloooow initial load times. And, because client-side rendering relies on each individual user’s browser, the developer only has so much control over load time.

Which explains why some developers are choosing to render their SPAs on the server side.

But, server-side rendering can disrupt your testing efforts, if you are using a framework like Angular or React.js. (And the majority of SPAs use these frameworks).

The disruption occurs because the version of your application that exists on the server becomes out of sync with the changes being made by your test scripts on the browser.

NOTE: If your web application uses Angular, React, or a similar framework, you may have already run into client-side testing obstacles. For more on how to overcome these obstacles, and successfully test on AngularJS apps, read this blog post.


Testing on the server side vs. the client side

Client-side testing involves making changes (the variation) within the browser by injecting JavaScript after the original page has already loaded.

The original page loads, the content is hidden, the necessary elements are changed in the background, and the ‘new’ version is shown to the user post-change. (Because the page is hidden while these changes are being made, the user is none the wiser.)

As I mentioned earlier, the advantages of client-side testing are ease and speed. With a client-side testing tool like VWO, a marketer can set up and execute a simple test using a WYSIWYG editor without involving a developer.

But for complex applications, client-side testing may not be the best option: Layering more JavaScript on top of an already-bulky application means even slower load time, and an even more cumbersome user experience.

A Quick Hack

There is a workaround if you are determined to do client-side testing on a SPA. Web developers can take advantage of features like Optimizely’s conditional activation mode to make sure that testing scripts are only executed when the application reaches a desired state.

However, this can be difficult, as developers will have to take many variables into account, like location changes performed by the $routeProvider, or triggering interaction-based goals.

To avoid flicker, you may need to hide content until the front-end application has initialized in the browser, voiding the performance benefits of using server-side rendering in the first place.

WiderFunnel - client side testing activation mode
Activation Mode waits until the framework has loaded before executing your test.
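For AngularJS apps using ngRoute, the activation call might look roughly like this (a sketch assuming Optimizely’s classic manual-activation API; the experiment ID is made up):

    // Rough sketch: manually activating a client-side experiment once an
    // AngularJS route change completes. Assumes the experiment's activation
    // mode is set to "manual" in Optimizely.
    var EXPERIMENT_ID = 1234567890; // hypothetical
    angular.module('app').run(function ($rootScope) {
      $rootScope.$on('$routeChangeSuccess', function () {
        window.optimizely = window.optimizely || [];
        window.optimizely.push(['activate', EXPERIMENT_ID]);
      });
    });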



When you do server-side testing, there are no modifications being made at the browser level. Rather, the parameters of the experiment variation (‘User 1 sees Variation A’) are determined at the server route level, and hooked straight into the JavaScript application through a service provider.

Here is an example where we are testing a pricing change:
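A minimal sketch of the idea, assuming an Express app and an initialized Optimizely SDK client (the fuller setup is shown later in this post); the experiment key, variation name, and prices are hypothetical:

    // Bucket the user at the route level, before any HTML is rendered.
    app.get('/pricing', function (req, res) {
      // req.user is assumed to be set by your authentication middleware.
      var variation = optimizelyClient.activate('pricing_experiment', req.user.id);
      var price = (variation === 'variation_1') ? 49 : 59;

      // The server-rendered page is built with the variation's price.
      res.render('pricing', { price: price });
    });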

“Ok, so, if I want to do server-side testing, do I have to involve my web development team?”

Yep.

But, this means that testing gets folded into your development team’s workflow. And, it means that it will be easier to integrate winning variations into your codebase in the end.

If yours is a SPA, server-side testing may be the better choice, despite the work involved. Not only does server-side testing embed testing into your development workflow, it also broadens the scope of what you can actually test.

Rather than being limited to testing page elements, you can begin testing core components of your application’s usability like search algorithms and pricing changes.

A server-side test example!

For web developers who want to do server-side testing on a SPA, Tom has put together a basic example using the Optimizely SDK. This example is an illustration, and is not functional.

In it, we are running a simple experiment that changes the color of a button. The example is built using Angular Universal and Express.js. A global service provider is used to fetch the user’s variation from the Optimizely SDK.

Here, we have simply hard-coded the user ID. However, Optimizely requires that each user have a unique ID. Therefore, you may want to use the user ID that already exists in your database, or store one in a cookie through Express’s cookie middleware.
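Since Tom’s original snippet is illustrative rather than functional, here is a comparable sketch; the experiment key, variation name, and datafile path are assumptions:

    // Illustrative Express server for an Angular Universal app, asking the
    // Optimizely Node SDK which button color a user should see.
    var express = require('express');
    var cookieParser = require('cookie-parser');
    var optimizelySDK = require('@optimizely/optimizely-sdk');

    var app = express();
    app.use(cookieParser()); // could store a generated user ID in a cookie

    var optimizelyClient = optimizelySDK.createInstance({
      // The datafile describes your experiments; it is downloaded from
      // Optimizely's CDN (a local copy is assumed here).
      datafile: require('./optimizely-datafile.json')
    });

    app.get('*', function (req, res) {
      // Hard-coded for illustration only; use a real, unique user ID
      // (from your database, or a cookie) in production.
      var userId = 'user123';

      var variation = optimizelyClient.activate('button_color_experiment', userId);
      var buttonColor = (variation === 'variation_blue') ? 'blue' : 'green';

      // Hand the assignment to the server-rendered Angular app, e.g. through
      // a global service provider or a template variable.
      res.render('index', { buttonColor: buttonColor });
    });

    app.listen(3000);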

Are you currently doing server-side testing?

Or, are you client-side testing on a SPA? What challenges (if any) have you faced? How have you handled them? Do you have any specific questions? Let us know in the comments!


How to get evergreen results from your landing page optimization

Reading Time: 7 minutes

Landing page optimization is old news.

Seriously. A quick Google search will show you that Unbounce, QuickSprout, Moz, Qualaroo, Hubspot, Wordstream, Optimizely, CrazyEgg, VWO (and countless others) have been writing tips and guides on how to optimize your landing pages for years.

Not to mention the several posts we have already published on the WiderFunnel blog since 2008.

And yet. This conversation is so not over.

Warning: If your landing page optimization goals are short-term, or completely focused on conversion rate lift, this post may be a waste of your time. If your goal is to continuously have the best-performing landing pages on the internet, keep reading.



Marketers are funnelling more and more money into paid advertising, especially as Google allocates more and more SERP space to ads.

In fact, as an industry, we are spending upwards of $92 billion annually on paid search advertising alone.

landing-page-optimization-SERP-space
The prime real estate on a Google search results page often goes to paid.

And it’s not just search advertising that is seeing an uptick in spend, but social media advertising too.

It makes sense that marketers are still obsessing over their landing page conversion rates: this traffic is costly and curated. These are visitors that you have sought out, that share characteristics with your target market. It is extremely important that these visitors convert!

But, there comes a time in every optimizer’s life when they face the cruel reality of diminishing returns. You’ve tested your landing page hero image. You’ve tested your value proposition. You’ve tested your form placement. And now, you’ve hit a plateau.

So, what next? What’s beyond the tips and guides? What is beyond the optimization basics?

1) Put on your customer’s shoes.

First things first: Let’s do a quick sanity check.

When you test your hero image, or your form placement, are you testing based on tips and recommended best practices? Or, are you testing based on a specific theory you have about your page visitors?

landing-page-optimization-customer-shoes
Put on your customer’s shoes.

Tips and best practices are a fine place to start, but the insight behind why those tactics work (or don’t work) for your visitors is where you find longevity.

The best way to improve experiences for your visitors is to think from their perspective. And the best way to do that is to use frameworks, and framework thinking, to get robust insights about your customers.

– Chris Goward, Founder & CEO, WiderFunnel

Laying the foundation

It’s very difficult to think from a different perspective. This is true in marketing as much as it is in life. And it’s why conversion optimization and A/B testing have become so vital: We no longer have to guess at what our visitors want, but can test instead!

That said, a test requires a hypothesis. And a legitimate hypothesis requires a legitimate attempt to understand your visitor’s unique perspective.

To respond to this need for understanding, WiderFunnel developed the LIFT Model® in 2008: our foundational framework for identifying potential barriers to conversion on a page from the perspective of the page visitor.



The LIFT Model attempts to capture the idea of competing forces in communication, narrowing them down to the most salient aspects of communication that marketers should consider.

I wanted to apply the principles of Relevance, Clarity, Distraction, Urgency and Anxiety to what we were delivering to the industry and not just to our clients. And the LIFT Model is a part of that: making something as simple as possible but no simpler.

– Chris Goward

When you look at your page through a lens like the LIFT Model, you are forced to question your assumptions about what your visitors want when they land on your page.

landing-page-optimization-LIFT-Model
View your landing pages through a framework lens.

You may love an interactive element, but is it distracting your visitors? You may think that your copy creates urgency, but is it really creating anxiety?

If you are an experienced optimizer, you may have already incorporated a framework like the LIFT Model into your optimization program. But, after you have analyzed the same page multiple times, how do you continue to come up with new ideas?

Here are a few tips from the WiderFunnel Strategy team:

  1. Bring in fresh eyes from another team to look at and use your page
  2. User test, to watch and record how actual users are using your page
  3. Sneak a peek at your competitors’ landing pages: Is there something they’re doing that might be worth testing on your site?
  4. Do your page analyses as a team: many heads are better than one
  5. Brainstorm totally new, outside-the-box ideas…and test one!

You should always err on the side of “This customer experience could be better.” After all, it’s a customer-centric world, and we’re just marketing in it.

2) Look past the conversion rate.

“Landing page optimization”, like “conversion rate optimization”, is a limiting term. Yes, on-page optimization is key, but mature organizations view “landing page optimization” as the optimization of the entire experience, from first to last customer touchpoint.

Landing pages are only one element of a stellar, high-converting marketing campaign. And focusing all of your attention on optimizing only one element is just foolish.

From testing your featured ads, to tracking click-through rates of Thank You emails, to tracking returns and refunds, to tracking leads through the rest of the funnel, a better-performing landing page is about much more than on-page conversion rate lift.

landing-page-optimization-big-picture
On-page optimization is just one part of the whole picture.

An example is worth 1,000 words

One of our clients is a company that provides an online consumer information service—visitors type in a question and get an Expert answer. One of the first zones (areas on their website) that we focused on was a particular landing page funnel.

Visitors come from an ad, and land on a page where they can ask their question. They then enter a 4-step funnel: Step 1: Ask the question > Step 2: Add more information > Step 3: Pick an Expert > Step 4: Get an answer (aka the checkout page).

Our primary goal was to increase transactions, meaning we had to move visitors all the way through the funnel. But we were also tracking refunds and chargebacks, as well as revenue per visitor.

More than pushing a visitor to ‘convert’, we wanted to make sure those visitors went on to be happy, satisfied customers.

In this experiment, we focused on the value proposition statements. The control landing page exclaimed, “A new question is answered every 9 seconds!“. Our Strategy team had determined (through user testing) that “speed of answers” was the 8th most valuable element of the service for customers, and that “peace of mind / reassurance” was the most important.

So, they tested two variations, featuring two different value proposition statements meant to create more peace of mind for visitors:

  • “Join 6,152,585 satisfied customers who got professional answers…”
  • “Connect One on One with an Expert who will answer your question”

Both of these variations ultimately increased transactions, by 6% and 9.4% respectively. But! We also saw large decreases in refunds and chargebacks with both variations, and large increases in net revenue per visitor with both variations.

By following visitors past the actual conversion, we were able to confirm that these initial statements set an impactful tone: visitors were more satisfied with their purchases, and comfortable investing more in their expert responses.

3) Consider the big picture.

As you think of landing page optimization as the optimization of a complete digital experience, you should also think of landing page optimization as part of your overall digital optimization strategy.

When you discover an insight about visitors to your product page, feed it into a test on your landing page. When you discover an insight about visitor behavior on your landing page, feed it into a test on your website.

It’s true that your landing pages most likely cater to specific visitor segments, who may behave totally differently than your organic visitors. But, it is also true that landing page wins may be overall wins.

Plus, landing page insights can be very valuable, because they are often new visitor insights. And now, a little more advice from Chris Goward, optimization guru:

“Your best opportunities for testing your value proposition are with first impression visitors. These are usually new visitors to your high traffic landing pages or your home page […]

By split testing your alternative value propositions with new visitors, you’ll reduce your exposure to existing customers or prospects who are already in the consideration phase. New prospects have a blank canvas for you to present your message variations and see what sticks.

Then, from the learning gained on landing pages, you can validate insights with other target audience groups and with your customers to leverage the learning company-wide.

Landing page testing can do more than just improve conversion rates on landing pages. When done strategically, it can deliver powerful, high-leverage marketing insights.”



Just because your landing pages are separate from your website, does not mean that your landing page optimization should be separate from your other optimization efforts. A landing page is just another zone, and you are free to (and should) use insights from one zone when testing on another zone.

4) Go deeper, explore further.

A lot of marketers talk about landing page design: how to build the right landing page, where to position each element, what color scheme and imagery to use, etc.

But when you dig into the why behind your test results, it’s like breaking into a piñata of possibilities, or opening a box of idea confetti.

landing-page-optimization-ideas
Discovering the reason behind the result is like opening a box of idea confetti!

Why do your 16-25 year old, mobile users respond so favorably to a one-minute video testimonial from a past-purchaser? Do they respond better to this indicator of social proof than another?

Why do your visitors prefer one landing page under normal circumstances, and a different version when external factors change (like a holiday, or a crisis)? Can you leverage this insight throughout your website?

Why does one type of urgency phrasing work, while slightly different wording decreases conversions on your page? Are your visitors sensitive to overly salesy copy? Why or why not?

Not only are there hundreds of psychological principles to explore within your landing page testing, but landing page optimization is also intertwined with your personalization strategy.

For many marketers, personalized landing pages are becoming more normal. And personalization opens the door to even more potential customer insights. Assuming you already have visitor segments, you should test the personalized experiences on your landing pages.

For example, imagine you have started using your visitors’ first names in the hero banner of your landing page. Have you validated that this personalized experience is more effective than another, like moving a social proof indicator above the fold? Both can be deemed personalization, but they tap into very different motivations.

From psychological principles, to validating your personalized experiences, the possibilities for testing on your landing pages are endless.

Just keep testing, Dory-style

Your landing page(s) will never be “optimized”. That is the beauty and cruelty of optimization: we are always chasing unattainable perfection.

But your landing pages can definitely be better than they are now. Even if you have a high-converting page, even if your page is listed by Hubspot as one of the 16 best designed landing pages, even if you’ve followed all of the rules…your landing page can be better.

Because I’m not just talking about conversions, I’m talking about your entire customer experience. If you give them the opportunity, your new users will tell you what’s wrong with your page.

They’ll tell you where it is unclear and where it is distracting.

They’ll tell you what motivates them.

They’ll tell you how personal you should get.

They’ll tell you how to set expectations so that they can become satisfied customers or clients.

A well-designed landing page is just the beginning of landing page optimization.


Beyond A vs. B: How to get better results with better experiment design

Reading Time: 7 minutes

You’ve been pushing to do more testing at your organization.

You’ve heard that your competitors at ______ are A/B testing, and that their customer experience is (dare I say it?) better than yours.

You believe in marketing backed by science and data, and you have worked to get the executive team at your company on board with a tested strategy.

You’re excited to begin! To learn more about your customers and grow your business.

You run one A/B test. And then another. And then another. But you aren’t seeing that conversion rate lift you promised. You start to hear murmurs and doubts. You start to panic a little.

You could start testing as fast as you can, trying to get that first win. (But you shouldn’t).

Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,

If your results are disappointing, it may not only be what you are testing – it is definitely how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).

This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.

For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.



Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.

The basics: Defining A/B, MVT, and factorial

Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are multiple different ways to structure your experiments. A/B testing is just one of them.

Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.

A/B test

In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a multitude of changes (i.e. a ‘cluster’ of changes), or a single isolated change.

ab test widerfunnel
When you change multiple elements in a single variation, you might see lift, but what about insights?

In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.

Multivariate test (MVT)

With multivariate testing, you are testing each individual change, isolated one against another, by mixing and matching every possible combination available.

Imagine you want to test a homepage re-design with four changes in a single variation:

  • Change A: New hero banner
  • Change B: New call-to-action (CTA) copy
  • Change C: New CTA color
  • Change D: New value proposition statement

Hypothetically, let’s assume that each change has the following impact on your conversion rate:

  • Change A = +10%
  • Change B = +5%
  • Change C = -25%
  • Change D = +5%

If you were to run a classic A/B test―your current control page (A) versus a combination of all four changes at once (B)―you would get a hypothetical decrease of -5% overall (10% + 5% – 25% + 5%). You would assume that your re-design did not work and most likely discard the ideas.

With a multivariate test, however, each of the following would be a variation:

[Image: the 15 possible combinations of changes A, B, C, and D]

Multivariate testing is great because it shows you the positive or negative impact of every single change, and every single combination of every change, resulting in the most ideal combination (in this theoretical example: A + B + D).

However, this strategy is kind of impossible in the real world. Even if you have a ton of traffic, it would still take more time than most marketers have for a test with 15 variations to reach any kind of statistical significance.

The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
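If you are wondering where the 15 comes from: four on/off changes produce 2^4 = 16 combinations, one of which (nothing changed) is the control. A quick sketch:

    // Enumerate every combination of four changes: 2^4 - 1 = 15 variations.
    var changes = ['A', 'B', 'C', 'D'];
    var combos = [];
    for (var mask = 1; mask < (1 << changes.length); mask++) {
      combos.push(changes.filter(function (change, i) {
        return mask & (1 << i);
      }).join('+'));
    }
    console.log(combos.length); // 15
    console.log(combos);        // ['A', 'B', 'A+B', 'C', 'A+C', ...]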

Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.

Factorial design: The middle ground

Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.

The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.

With basic factorial experiment design, you could set up the following variations in our hypothetical example:

VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%

Factorial design widerfunnel
In this basic example, variation A features a single change; VarB is built on VarA, and VarC is built on VarB.

NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.

Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.

When you estimate the value of change A, you are using your control as a baseline.

factorial testing widerfunnel
Variation A versus the control.

Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.

The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%

But, when estimating the value of change B, variation A must become your new baseline.

factorial testing widerfunnel
Variation B versus variation A.

The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%

As you can see, the “value” of change B is slightly different from the 5% difference shown above.

When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
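Here is a quick sketch of that back-calculation, using the numbers from the example above:

    // Each variation builds on the previous one, so the incremental lift of
    // each change is its variation's conversion rate relative to the one
    // before it.
    function incrementalLifts(rates) {
      var lifts = [];
      for (var i = 1; i < rates.length; i++) {
        lifts.push(rates[i] / rates[i - 1] - 1);
      }
      return lifts;
    }

    // Control 10%, variation A 11%, variation B 11.5%:
    incrementalLifts([0.10, 0.11, 0.115]);
    // => [0.10, 0.045...], i.e. change A ≈ +10%, change B ≈ +4.5%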

We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.

– Michael St Laurent, Optimization Strategist, WiderFunnel

Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.

Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.

But, wait…

It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?

The Strategist’s Exploration

The answer lies in the Explore (or research gathering) phase of your testing process.

At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.

Experience is the other side to this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which changes should be isolated due to risk or potential insights to be gained.

At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.

Factorial design in action: A case study

Once upon a time, we were testing with Annie Selke, a retailer of luxury home-ware goods. This story follows two experiments we ran on Annie Selke’s product category page.

(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)

Experiment 4.7

In the first experiment, we tested three variations against the control. As the experiment number suggests, this was not the first test we ran with Annie Selke. But it is the ‘first’ test in this story.

ab testing marketing control
Experiment 4.7 control product category page.

Variation A featured an isolated change to the “Sort By” filters below the image, making it a drop down menu.

ab testing marketing example
Replaced original ‘Sort By’ categories with a more traditional drop-down menu.

Evidence?

This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.

Variation B was also built on the control, and featured another isolated change to reduce the left navigation.

ab testing marketing example
Reduced left-hand navigation.

Evidence?

Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to removing distractions. Plus, the persuasion principle, known as the Paradox of Choice, theorizes that more choice = more anxiety for visitors.

Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.

Collapsed left-hand filter (built on VarA).

Evidence?

This variation was informed by the same evidence as variation B.

Results

Variation A (built on the control) saw a decrease in transactions of -23.2%.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a decrease in transactions of -1.9%.

But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed filter) was 19.1%.

The next step was to validate our estimated lift of 19.1% in a follow up experiment.

Experiment 4.8

The follow-up test also featured three variations versus the original control. Because you should never waste the opportunity to gather more insights!

Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.

ab testing marketing example
Collapsed filter & original ‘Sort By’ functionality.

Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.

ab testing marketing example
Updated “palette” to “color”, and added color imagery. (A variation featuring two clustered changes).

Evidence?

Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.

Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.

ab testing marketing example
Made the collapsed filter persistent.

Evidence?

Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.

Results

Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!

Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.

Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for a decrease in transactions of -11.2%.

This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.

The best testing framework for you

What are your testing goals?

If you are in a situation where potential revenue gains outweigh the potential insights to be gained or your test has little long-term value, you may want to go with a standard A/B cluster test.

If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.

If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.

A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want. Which will make every subsequent hypothesis smarter, and every test more impactful.

One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.

– Michael St Laurent


“The more tests, the better!” and other A/B testing myths, debunked

Reading Time: 8 minutes

Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom-line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.




But never fear! With the help of WiderFunnel Optimization Strategist, Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.

The key to sustainable A/B testing output is to find a balance between short-term (maximum testing speed), and long-term (testing for data-collection and insights).

– Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.

An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment, when building out the strategy for another experiment. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once without taking into account these past insights can result in your testing program being more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it within a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

You should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.

– Claire Vignon Keser

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning there was now only an 88% likelihood that the difference in conversion rates was not due to chance. In other words, the result was no longer statistically significant.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.
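As a rough planning aid (a back-of-envelope estimate, not a substitute for your testing tool’s own calculator), you can approximate the sample size you need per variation like this:

    // Per-variation sample size for a two-sided test at 95% confidence
    // (z ≈ 1.96) and 80% power (z ≈ 0.84). `baseline` is the control
    // conversion rate; `mde` is the minimum relative lift to detect.
    function sampleSizePerVariation(baseline, mde) {
      var zAlpha = 1.96;
      var zBeta = 0.84;
      var delta = baseline * mde; // absolute difference to detect
      var variance = 2 * baseline * (1 - baseline);
      return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / (delta * delta));
    }

    // Detecting a 10% relative lift on a 5% baseline conversion rate:
    sampleSizePerVariation(0.05, 0.10); // ≈ 30,000 visitors per variation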

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test, completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator.
ab testing results stability
Check your graphs! Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys?
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation based on the observed trends, abandon the variation based on observed trends, and/or create a follow-up test!

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.

This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by -1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!)

ab testing funnel analysis
You can see how each variation fared against the Control in this funnel analysis.

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the right visitors to filter out (i.e. the wrong visitors for the business) were the ones who were dropping off.

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

ab testing infinity optimization process
A/B testing is just a part of the marketing optimization machine.

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

ab testing planning
The WiderFunnel strategy team gathers to share and discuss A/B testing insights.

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering informs hypotheses that are prioritized and created into A/B tests (that are structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and these insights are fed back into the data gathering. It’s a cycle of continuous improvement.

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!

How to build a high-performance marketing team

Reading Time: 9 minutes

Build a marketing team that gets results

Marketers always want to hear about results: 100% conversion rate lift, doubled revenue year over year, 89% increase in qualified leads, etc.

It makes sense: Results are promising, they’re easy to sell, they encourage you to imagine yourself in that person’s shoes, and to imagine those results at your company.

At WiderFunnel, we obsess about results. That’s why our clients continue to be our clients, because we consistently deliver profitable ‘A-ha!’ moments in the form of insights and revenue lift. In the end, results are what matter, right?

The effort it takes to get great results is less sexy. But it’s what separates the good from the great.

Humans appreciate ease. People love the promise of the silver bullet. We are prone to the cognitive shortcut called Satisficing, which gives us sub-optimal results. It’s difficult for people to push through to the best result.

This is why best practices, tool-centric strategies, ‘expert’ opinions, and 10-steps-to-guaranteed-success blog posts will always be popular.

Satisficing is a cognitive heuristic that encourages a person to stop considering alternatives once they've found one that meets the lowest acceptable criteria. It's why people buy a product when they don't feel the additional effort of searching for a better alternative is worth exerting. It can actually be an effective method for optimizing all costs, if it's done consciously.

The reality is that you reap what you sow: The best results come from a solid foundation. You’ve heard me talk about process and framework thinking as being crucial to getting great marketing results…

…and today, I’m going to talk about another pillar for success: building a high performance marketing team.

Want to join a high-performance marketing team?

Join our team >>

The people who you hire are at the core of what you can achieve. If you want to achieve growth, you have to build a high-performance team. I have spent the last 10 years building the WiderFunnel team; they are a group of experts who deliver consistently amazing results for our clients.

Victoria Petriw

If you have no team, you have no business. People often overlook that simple fact. We want to say it’s the ideas, marketing, sales, etc. that are the number one priority. But in order to achieve any results in any of these areas, you need a solid team.

Victoria Petriw, Manager of Operations, WiderFunnel

In this post, I am going to walk through how to build and maintain your high-performance, results-driving, ‘A-ha!’-creating marketing team.

Let’s start at the beginning.

Lay the foundation

You can’t build anything without a solid foundation, so first things first: Who are you as a company? What is your company identity, the glue that holds everything together?

If you can’t answer that question immediately, you may find it very difficult to hire the right people.

Agnes Tseng

You can have a candidate that is extremely skilled, but if they are not on the same page as you, it won’t be a winning relationship for either of you.

Agnes Tseng, Human Resources, WiderFunnel

Most companies today have created some form of a mission or vision statement, and company values. But I’d argue that most don’t use them to really define what their company does.

Without a shared belief in the types of decisions and behaviors you won’t accept, you’ll accept anything. If you don’t stand for something, you’ll fall for anything.

If you have a clear purpose that is written, repeated, and used for decision-making, you’ll be more likely to attract and retain people that resonate together. When people resonate, the added energy from the coherence multiplies their effect.

Strong core values are a proven way of finding people that resonate with each other. At WiderFunnel, five values sit at the core of our company identity. This is who we are.

We created these values as a team to reflect how we work.

marketing team - WiderFunnel values
Our values are at the core of every decision we make.

These values are embedded into everything we do. They are integral to our hiring decisions, reviewed during onboarding, called out in our weekly team shoutouts, and used to decide on client fit.

Often, companies will grow to a certain number of employees, realize that their company culture is waning, and then scramble to define their identity. But, by then it may be too late.

If you don’t intentionally build the culture you want, the culture you don’t want will create itself.

So, start by identifying your purpose and the values you’ll live by. And, build all of your decisions on that foundation.

Then…

Build the structure

So, you are happy with your team ‘why’, and have begun the hiring process. How do you maintain a satisfied, productive, and high-performing marketing team?

marketing team - build the structure
It takes a solid foundation, and intentional frameworks to build a high-performance marketing team.

I’ve always recognized the value of framework thinking for conversion optimization. And when developing our human resources process, I’ve sought out the best frameworks for that area of the business too.

The best frameworks simplify difficult decisions, focus attention on the right pieces of data, and align team members on the salient criteria.

How to get the right butts in the right seats

For the first couple years at WiderFunnel, I struggled with our hiring failure rate. It was painful to hire and train promising people only to see them flame out in disappointment.

I knew there had to be a better process for improving our success rate. When I found the Topgrading book back in 2009, it gave me the tools I needed to separate the gold from the quartz in those mountains.

The Topgrading process incorporates very specific questions that are meant to reveal whether someone is an A-player, a B-player, a C-player, etc. The secret is in the exact wording of the questions and steps in the process to reveal insights about the candidate.

There’s also a newer and more approachable (i.e. shorter) book that describes the process, called Who.

We have tweaked the framework slightly to fit our needs, but the premise is to filter out the B-players and C-players, and to only engage with A-players. The process looks a little something like this:

  1. Screening call (Conducted by HR)
  2. In-person in-depth “Topgrading” interview (HR)
  3. In-person culture interview (Team Lead)
  4. Team interview (Team)

Only the most promising candidates make it through to meet with a team leader.

On top of the interview process, we use a lightweight work-style behavior and motivation profiling tool called Predictive Index.

This allows us to create behavior profiles for each position, to identify what behaviors define success in any position. I call it “lightweight” because it only takes a few minutes for a candidate to fill out, but the insights it reveals are stunning.

Once a candidate has passed their Topgrading interview, they fill out a quick Predictive Index quiz, which shows us their natural behavioral patterns and motivations.

marketing team - employee trust
What is the candidate motivated by? What does she like? How does she work, naturally?

This tells us whether the person will naturally be a great fit for that position. If a candidate doesn’t ‘fit’ the profile, we don’t necessarily remove them from consideration. But, we know which questions to ask to ensure we are creating a position that person will be happy with.

Because WiderFunnel is a data-focused marketing agency (as I hope yours is a data-focused marketing team), we also require most candidates to complete various technical tests.

Yes, it takes effort to hire the right people

If this sounds intense, that’s because it is. But it’s worth it!

There is a lot at stake when you are talking about a person’s job and livelihood (not to mention the well-being of your business), and these upfront processes will help you get the right personalities on your team from the outset.

Not only does hiring the right people save you a lot of money on mis-hires, but a team of A-players wants you to hire more A-players. Someone who can't match the pace of the team's thinking and work is frustrating to everyone else. A team of stallions doesn't invite ponies to their party.

Our team members are proud of the day they pass their 90-day probationary period and receive their full-fledged WiderFunnel team jacket. They know they have joined an elite team.

marketing team - WiderFunnel jackets
Thumbs up! It was a good day when James and Agnes got their WiderFunnel jackets.

Keep people at the center

All this talk about A-players, stringent hiring processes, and the cost of mis-hires may sound like people are just cost items. But, that is the opposite of how I see our people. And that wouldn’t be the best way to create any high functioning team.

Your team members don’t leave their personal lives at the door when they enter your workplace. They are whole people and all areas of their lives affect how they show up in their day.

I’m a long-time member of Entrepreneur’s Organization (EO) and other similar mastermind groups. At EO, I belong to a small forum group of entrepreneurs who meet monthly to discuss the best and worst things that are happening in our businesses and personal lives. I have learned how important it is to have people I trust that can relate to my experiences. In that forum, I have also learned how tightly business and personal life are intertwined.

A few years ago, I brought some of EO’s perspectives into WiderFunnel’s team. It began as part of our Friday afternoon happy hour, where everyone shares their weekly “Awesomes” with the rest of the team.

At 4:00pm every Friday, we stop working, pour a few beers, and every team member shares a professional awesome and a personal awesome from that week. It's contagious: If you've had a rough week, hearing 25 “Awesomes” is a pretty cool pick-me-up.

Building on my insight from EO, I also encouraged people to share a weekly “Awful” if they had one, and the result was powerful. The laughter and tears shared within this forum of support encourage our team to be Real with each other.

Victoria Petriw

I am a firm believer that all people want is to be heard, and to be loved. Companies often act like this doesn’t translate into the professional realm, that it only lives in the personal realm. And that is, I think, the number one mistake a lot of companies make.

– Victoria Petriw

It’s important to create structures that help meet your team’s needs.

Individual needs

How often do each of your team members get a check-in with their boss? As you might have guessed by now, I’m going to recommend a structured process for regular check-ins.

marketing team - check in
Real, 1-on-1 conversation can solve emerging issues before they become real problems.

A few years ago, we implemented the Entrepreneurial Operating System (EOS), based on Gino Wickman’s book Traction, which shows a structure for communicating throughout the company. Part of that system defines a regular check-in rhythm.

Some companies take an ad hoc or “as needed” approach to meetings, but I’ve found that team members often feel neglected if they aren’t regularly scheduled.

If you are not checking in with the people on your team, regularly, you should rethink your management strategy. We ensure that each WiderFunnel team member has, at the very least, a monthly check-in with their team lead.

These check-ins are a space for personal and professional review, for project updates, and value-based feedback. Are your team members being heard? Do they feel appreciated and successful?

In tough times, real, 1-on-1 conversation can solve emerging issues before they become real problems.

Team needs

To make sure your team as a whole is jiving, you need to facilitate the right meetings at the right times.

marketing team - alignment
Keeping your team aligned requires intentional meetings.

Within the Traction system, we’ve set up daily huddles, weekly working meetings, quarterly priority-setting meetings, and annual planning meetings within each team. This creates a consistent rhythm and flow of information for the entire company.

This system helps us make sure that the projects each individual is working on come to fruition.

Agnes Tseng

Many of our meetings are recurring, but they all have a specific ‘why’. No one here has time to waste, and each meeting has a purpose, agenda, and priorities.

– Agnes Tseng

I encourage you to look at your meeting schedule and ask yourself whether each meeting is intentional. Does it have a clear purpose? If not, it may be worth your while to test a system like Traction.

A culture of personal ownership

The Dilbert era is over, for big and small companies alike. People want to love where they work. So, how do you make your team attractive to A-players? And how do you retain your A-players?

Do you need more perks? Beer on Fridays? Exotic company retreats? Company bowling night? It can feel overwhelming to keep up with the perks some companies offer. And last week’s perks are today’s entitlement.

Some time ago, we decided to change how WiderFunnel-ers view company culture. In the past, the task of planning fun, culture-stimulating social activities fell to the Operations team.

But it began to feel like team members were sitting back, waiting for Operations to deliver happiness. And if they didn’t like what was happening, morale waned.

Victoria Petriw

It felt like everyone was sitting around the dinner table waiting to be served, expecting ‘culture’ to be provided on a silver platter. But culture is like happiness: You can’t inject it into a company. It has to live in each individual.

– Victoria Petriw

So, we decided to shift the perspective. We encouraged team members to contribute to company culture and activities. Now, we have a WiderFun initiative where we have team-planned monthly fun-tivities, and the change is palpable.

From events like WiderFunnel-themed jeopardy, to WiderFun-lympics, to spontaneous game nights and jam sessions, I have seen the team commit to creating the culture they want to work in. And, they love it even more because they've had a part in creating it. (Which, by the way, is a great example of the IKEA Effect cognitive bias.)

The IKEA Effect says that people are more likely to love something if they've had a part in creating it. I'm no longer surprised when my daughters' most-raved-about meals are the ones that they've helped cook.

Rather than taking a top-down approach to culture, challenge your team to own it!

What does this mean for your bottom-line?

A happy, smart, engaged team wants to deliver great work. Structures and frameworks like the ones I’ve shared are a starting point. You may find others that work for you, but the principle is the same.

When you have put rigorous thought into building a well-oiled machine, when individuals are in the right jobs, in the right culture, you will see the effects in your bottom-line.

So, now it’s your turn.

What do you do to build and maintain a high-performing marketing team? How does your company create, maintain and enhance culture? Add your comments below.

And, if you know someone you think would be a great fit for our team, please send them our way. WiderFunnel is hiring!

Build the most effective personalization strategy: A 4-step roadmap

Reading Time: 11 minutes

Whaddya mean, ‘personalization strategy’?

It’s Groundhog Day again.

Do you remember the Groundhog Day movie? You know… the one where Bill Murray’s character repeats the same day over and over again, every day. He had to break the pattern by convincing someone to fall in love with him, or something like that.

What an odd storyline.

Yet today, it’s reminding me of a pattern in marketing. Marketing topics seem to be pulled by an unstoppable force through fad cycles of hype, over-promise, disappointment, and decline – usually driven by some new technology.

I’ve watched so many fad buzzwords come and go, it’s dizzying. Remember Customer Relationship Marketing? Integrated Marketing? Mobile First? Omnichannel?

A few short years ago, everyone was talking about social media as the only topic that mattered. Multivariate testing was sexy for about five minutes.

Invariably, similar patterns of mistakes appear within each cycle.

Tool vendors proliferate on trade show floors, riding the wave and selling a tool that checks the box of the current fad. Marketers invest time, energy, and budget hoping for a magic bullet without a strategy.

But, without a strategy, even the best tools can fail to deliver the promised results.

(Side note: That’s why I’ve been advocating for years for marketers to start their conversion optimization programs with a strategy in addition to the best tools.)

Now, everyone is swooning for Personalization. And, so they should! It can deliver powerful results.

From simple message segmentation to programmatic ad buying and individual-level website customization, the combination of big data and technology is transforming the possibilities of personalization.

But the rise of personalization tools and their popularity has meant the rise of marketers doing personalization the wrong way. I've lost track of the number of times we've seen:

  • Ad hoc implementation of off-the-shelf features without understanding what need they are solving.
  • Poor personalization insights with little data analysis and framework thinking driving the implementation.
  • Lack of rigorous process to hypothesize, test, and validate personalization ideas.
  • Lack of resources to sustain the many additional marketing messages that must be created to support multiple, personalized target segments.

That’s why, in collaboration with our partners at Optimizely, we have created a roadmap for creating the most effective personalization strategy:

Featured_Roadmap

  • Step 1: Defining personalization
  • Step 2: Is a personalization strategy right for you?
  • Step 3: Personalization ideation
  • Step 4: Personalization prioritization

Step 1: Defining personalization

Personalization and segmentation are often used interchangeably, and are arguably similar. Both use information gathered about the marketing prospect to customize their experience.

While segmentation attempts to bucket prospects into similar aggregate groups, personalization represents the ultimate goal of customizing the person’s experience to their individual needs and desires based on in-depth information and insights about them.

You can think of them as points along a spectrum of customized messaging.

Personalization spectrum
The marketing customization spectrum.

You’ve got the old mass marketing approach on one end, and the hyper-personalized, 1:1, marketer-to-customer nirvana on the other end. Segmentation lies somewhere in the middle. We’ve been doing it for decades, but now we have the technology to go deeper, to be more granular.

Every marketer wants to provide the perfect message for each customer — that’s the ultimate goal of personalization.

The problem personalization solves

Personalization solves the problem of Relevance (one of 6 conversion factors in the LIFT Model®). If you can increase the Relevance of your value proposition to your visitor, by speaking their language, matching their expectations, and addressing their unique fears, needs and desires, you will see an increase in conversions.

Let me show you an example.

Secret Escapes is a flash-sale luxury travel company. The company had high click-through rates on their search ads and directed all of this traffic to a single landing page.

Personalization strategy ad
Secret Escapes “spa” PPC ad in Google.

The ad copy read:

“Spa Vacations
Save up to 70% on Spa Breaks. Register for free with your email.”

But, the landing page didn’t reflect the ad copy. When visitors landed on the page, they saw this:

personalization strategy secret escapes
Original landing page for Secret Escapes.

Not super relevant to visitors’ search intent, right? There’s no mention of the keyword “spa” or imagery of a spa experience. Fun fact: When we are searching for something, our brains rely less on detailed understanding of the content, and more on pattern matching, or a scent trail.

(Note: some of the foundational research for this originated with Peter Pirolli at PARC as early as the 1990s. See this article, for example.)

In an attempt to convert more paid traffic, Secret Escapes tested two variations, meant to match visitor intent with expectations.

personalization strategy secret escapes 1
Variation 1 used spa imagery and brought the keyword “spa” into the sub-head.
personalization strategy secret escapes 2
Variation 2 used the same imagery, but mirrored the ad copy with the headline copy.

By simply maintaining the scent trail, and including language around “spa breaks” in the signup form, Secret Escapes was able to increase sign-ups by 32%. They were able to make the landing page experience sticky for this target audience segment, by improving Relevance.

Step 2: Is a personalization strategy right for you?

Pause. Before you dig any deeper into personalization, you should determine whether or not it is the right strategy for your company, right now.

Here are 3 questions that will help you determine your personalization maturity and eligibility.

Do I have enough data about my customers?

Hudson Arnold Personalization

Personalization is not a business practice for companies with no idea of how they want to segment, but for businesses that are ready to capitalize on their segments.

Hudson Arnold, Strategy Consultant, Optimizely

For companies getting started with personalization, we recommend that you at least have fundamental audience segments in place. These might be larger cohorts at first, focused on visitor location, visitor device use, single visitor behaviors, or visitors coming from an ad campaign.

Personalization Strategy Segments
Where is your user located? Did they arrive on your page via Facebook ad? Are they browsing on a tablet?

If you haven’t categorized your most important visitor segments, you should focus your energies on segmentation first, before moving into personalization.
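For illustration, here is a rough sketch of what a coarse, first-pass segmentation might look like on a Node.js server. The header checks below are deliberately naive assumptions; production geo and device detection usually relies on dedicated libraries or services.

```javascript
// A naive first pass at bucketing visitors into coarse segments on the
// server. The request fields used here are illustrative only.
function segmentVisitor(req) {
  const userAgent = req.headers['user-agent'] || '';
  const referrer = req.headers['referer'] || '';

  return {
    device: /Mobi|Android/i.test(userAgent) ? 'mobile' : 'desktop',
    fromFacebook: referrer.includes('facebook.com'),
    fromAdCampaign: (req.url || '').includes('utm_campaign='),
  };
}
```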

Do I have the resources to do personalization?

  • Do you have a team in place that can manage a personalization strategy?
  • Do you have a personalization tool that supports your strategy?
  • Do you have an A/B testing team that can validate your personalization approach?
  • Do you have resources to maintain updates to the segments that will multiply as you increase your message granularity?

Personalization requires dedicated resources and effort to sustain all of your segments and personalized variations. To create a truly effective personalization strategy, you will need to proceduralize personalization as its own workstream and implement an ongoing process.

Which leads us to question three…

Do I have a process for validating my personalization ideas?

Personalization is a hypothesis until it is tested. Your assumptions about your best audience segments, and the best messaging for those segments, are assumptions until they have been validated.

Hudson Arnold Personalization

Personalization requires the same inputs and workflow as testing: sound technical implementation, research-driven ideation, a clear methodology for translating concepts into test hypotheses, and tight technical execution. In this sense, personalization is really just an extension of A/B testing and normal optimization activities. What's more, successful personalization campaigns are the result of testing and iteration.

– Hudson Arnold

Great personalization strategy is about having a rigorous process that allows for 1) gathering insights about your customers, and then 2) validating those insights. You need a structured process to understand which insights are valid for your target audience and create growth for your business.

WiderFunnel’s Infinity Optimization Process™ represents these two mindsets. It is a proven process that has been refined over many years and thousands of tests. As you build your personalization strategy, you can adopt parts or all of this process.

infinity optimization process
The Infinity Optimization Process is iterative and leads to continuous growth and insights.

There are two critical phases to an effective personalization strategy: Explore and Validate. Explore uses an expansive mindset to consider all of your data, and all of your potential personalization ideas. Validate is a structured process of A/B testing that uses a reductive mindset to refine and select only those ideas that produce value.

Without a process in place to prove your personalization hypotheses, you will end up wasting time and resources sending the wrong messages to the wrong audience segments.

Personalization without validation is simply guesswork.

Step 3: Personalization ideation

If you have answered “Yes” to those three questions, you are ready to do personalization: You are confident in your audience segments, you have dedicated resources, perhaps you’re already doing basic personalization. Now, it’s time to build your personalization strategy by gathering insights from your data.

personalization strategy curiosity
“How do I get ideas for customized messaging that will work?”

One of the questions we hear most often when it comes to personalization is, “How do I get ideas for customized messaging that will work?” This is the biggest area of ongoing work and your biggest opportunity for business improvement from personalization.

The quality of your insights about your customers directly impacts the quality of your personalization results.

Here are the 3 types of personalization insights to explore:

  • Deductive research
  • Inductive research
  • Customer self-selected

You can mix and match these types within your program. We have plenty of examples of how. Let’s look at a few now.

1) Deductive research and personalization insights

Are there general theories that apply to your particular business situation?

Psychological principles? UX principles? General patterns in your data? ‘Best’ practices?

Deductive personalization starts with your assumptions about how your customers will respond to certain messaging based on existing theories…but it doesn’t end there. With deductive research, you should always feed your ideas into experiments that either validate or disprove your personalization approach.

Let’s look at an example:

Heifer International is a charity organization that we have been working with to increase their donations and their average donation value per visitor.

In one experiment, we decided to test a psychological principle called the “rule of consistency”. This principle states that people want to be consistent in all areas of life; once someone takes an action, no matter how small, they strive to make future behavior match that past behavior.

We asked visitors to the Heifer website to identify themselves as a donor type when they land on the site, to trigger this need to remain consistent.

client spotlight psychological persuasion
What kind of donor are you?

Notice there’s no option to select “I’m not a donor.” We were testing what would happen when people self-identified as donors.

The results were fascinating. This segmenting pop-up increased donations by nearly 2%, increased the average donation value per visitor by 3%, and increased the revenue per visitor by more than 5%.

There's more. In looking at the data, we saw that just 14% of visitors selected one of the donor identifications. But that 14% accounted for 68% of Heifer's donors: the small group who responded represents a huge share of Heifer's most valuable audience.

personalization strategy heifer donors
Visitors who self-identify as ‘Donors’ are a valuable segment.

Now, Heifer can change the experience for visitors who identify as a type of donor and use that as one piece of data to personalize their experience. Currently, we’re testing which messages will maximize donations even further within each segment.

2) Inductive research and personalization insights

Are there segments within your data and test results that you can analyze to gather personalization insights?

If you are already optimizing your site, you may have seen segments naturally emerge through A/B testing. A focused intention to find these insights is called inductive research.

Inductive personalization is driven by insights from your existing A/B test data. As you test, you discover insights that point you toward generalizable personalization hypotheses.

Here’s an example from one of WiderFunnel’s e-commerce clients that manufactures and sells weather technology products. This company’s original product page was very cluttered, and we decided to test it against a variation that emphasized visual clarity.

personalization strategy variations
We tested the original page (left) against a variation emphasizing clarity (right).

Surprisingly, the clear variation lost to the original, decreasing order completions by 6.8%. WiderFunnel Strategists were initially perplexed by the result, but they didn't rest until they had uncovered a potential insight in the data.

They found that visitors to the original page saw more pages per session, while visitors to the variation spent a 7.4% higher average time on page. This could imply that shoppers on the original page were browsing more, while shoppers on our variation spent more time on fewer pages.

Research published by the NN Group describes teen-targeted websites, suggesting that younger users enjoy searching and are impatient, while older users enjoy searching but are also much more patient when browsing.

With this research in mind, the Strategists dug in further and found that the clear variation actually won for older users of this client's site, increasing transactions by 24%. But it lost among younger users, decreasing transactions by 38%.

So, what’s the takeaway? For this client, there are potentially new ways of customizing the shopping experience for different age segments, such as:

  1. Reducing distractions and adding clarity for older visitors
  2. Providing multiple products in multiple tabs for younger visitors

This client can use these insights to inform their age-group segmentation efforts across their site.

(Also, this is a great example of why one of WiderFunnel’s five core values says “Grit – We don’t quit until we find an answer.”)

3) Customer self-selected personalization

Ask your prospects to tell you about themselves. Then, test the best marketing approach for each segment.

Customer self-selected personalization is potentially the easiest strategy to conceptualize and implement. You are asking your users to self-identify, and segment themselves. This triggers specific messaging based on how they self-identified. And then you can test the best approach for each of those segments.

Here’s an example to help you visualize what I mean.

One of our clients is a Fortune 500 healthcare company — they use self-selected personalization to drive more relevant content and offers, in order to grow their community of subscribers.

This client had created segments that were focused on a particular health situation, that people could click on:

  • “Click on this button to get more information,”
  • “I have early stage disease,”
  • “I have late stage disease,”
  • “I manage the disease while I’m working,”
  • “I’m a physician treating the disease,” and,
  • “I work at a hospital treating the disease.”

These segments came from personas that this client had developed about their subscriber base.

personalization strategy messaging
The choices in the header triggered the messaging in the side bar.

Once a user self-identified, the offers and messaging that were featured on the page were adjusted accordingly. But, we wouldn’t want to assume the personalized messages were the best for each segment. You should test that!
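Under the hood, this kind of triggering can be as simple as a lookup from the self-selected segment to its messaging. Here is a minimal sketch; the segment keys and copy below are hypothetical placeholders, not this client's actual segments or offers.

```javascript
// A simplified lookup from a self-selected segment to its messaging.
// Segment keys and copy are hypothetical placeholders.
const offersBySegment = {
  'early-stage': { headline: 'Resources for the newly diagnosed' },
  'late-stage': { headline: 'Support for managing advanced disease' },
  'physician': { headline: 'Clinical tools and treatment updates' },
};

function getOffer(selectedSegment) {
  // Fall back to generic messaging if the visitor hasn't self-identified.
  return offersBySegment[selectedSegment] || { headline: 'Learn more about your options' };
}

console.log(getOffer('physician').headline);
```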

In self-selected personalization, there are two major areas you should test. You want to find out:

  1. What are the best segments?
  2. What is the best messaging for each segment?

For this healthcare company, we didn’t simply assume that those 5 segments were the best segments, or that the messages and offers triggered were the best messages and offers. Instead, we tested both.

A series of A/B tests within their segmentation and personalization efforts resulted in a doubling of this company’s conversion rate.

Developing an audience strategy

Developing a personalization strategy requires an audience-centric approach. The companies that are succeeding at personalization are not picking segments ad hoc from Google Analytics or any given study, but are looking to their business fundamentals.

Once you believe you have identified the most important segments for your business, then you can begin to layer on more tactical segments. These might be qualified ‘personas’ that inform your content strategy, UX design, or analytical segments.

Step 4: Personalization prioritization

If this whole thing is starting to feel a little complex, don’t worry. It is complex, but that’s why we prioritize. Even with a high-functioning team and an advanced tool, it is impossible to personalize for all of your audience segments simultaneously. So, where do you start?

Optimizely uses a simple axis to conceptualize how to prioritize personalization hypotheses. You can use it to determine the quantity and the quality of the audiences you would like to target.

Personalization strategy matrix

The x-axis refers to the size of your audience segment, while the y-axis refers to an obvious need to personalize to a group vs. the need for creative personalization.

For instance, the blue bubble in the upper left quadrant of the chart represents a company’s past purchasers. Many clients want to start personalizing here, saying, “We want to talk to people who have spent $500 on leather jackets in the last three months. We know exactly what we wanna show to them.”

But, while you might have a solid merchandising strategy or offer for that specific group, it represents a really, really, really small audience.

That is not to say you shouldn’t target this group, because there is an obvious need. But it needs to be weighed against how large that group is. Because you should be treating personalization like an experiment, you need to be sensitive to statistical significance.

The net impact of any personalization effort you use will only be as significant as the size of the segment, right? If you improve the conversion rate 1000% for 10 people, that is going to have a relatively small impact on your business.
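A quick back-of-the-envelope calculation makes the point. Every number below is invented for illustration:

```javascript
// Back-of-the-envelope: absolute impact scales with segment size.
// All numbers here are made up for illustration.
function extraConversions(segmentVisitors, baselineRate, relativeLift) {
  return segmentVisitors * baselineRate * relativeLift;
}

// A 1000% lift on 10 visitors yields almost nothing...
console.log(extraConversions(10, 0.02, 10.0));      // 2 extra conversions
// ...while a modest 5% lift on 100,000 visitors moves the needle.
console.log(extraConversions(100000, 0.02, 0.05));  // 100 extra conversions
```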

personalization strategy matrix 2

Now, move right on the x-axis; here, you are working with larger segments. Even if the personalized messaging is less obvious (and might require more experimentation), your efforts may be more impactful.

Food for thought: Most companies we speak to don’t have a coherent geographical personalization strategy, but it’s a large way of grouping people and, therefore, may be worth exploring!

You may be more familiar with WiderFunnel’s PIE framework, which we use to prioritize our ideas.

How does Optimizely's axis relate? It is a simplified way to think about personalization ideas, to help you ideate quickly. Its two inputs, “Obvious Need” and “Audience Size”, are essentially two of the inputs we would use to calculate a thorough PIE ranking of ideas.

The “Obvious Need” axis would influence the “Potential” ranking, and “Audience Size” would influence “Importance”. It may also be helpful to consider the third PIE factor, “Ease”, if some segmentation data is more difficult to track or otherwise acquire, or if the maintenance cost of ongoing messaging is high.
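To make that mapping concrete, here is a minimal sketch of PIE-style prioritization, assuming each factor is scored on a 1-10 scale and the scores are averaged. The example ideas and scores are hypothetical.

```javascript
// A minimal sketch of PIE-style prioritization: score each idea on
// Potential, Importance, and Ease, then rank by the average score.
const ideas = [
  { name: 'Geo-targeted homepage', potential: 8, importance: 9, ease: 5 },
  { name: 'Past-purchaser offer', potential: 9, importance: 3, ease: 7 },
];

const ranked = ideas
  .map((idea) => ({
    ...idea,
    pie: (idea.potential + idea.importance + idea.ease) / 3,
  }))
  .sort((a, b) => b.pie - a.pie);

console.log(ranked); // Highest PIE score first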

To create the most effective personalization strategy for your business, you must remember what you already know. For some reason, when companies start personalization, the lessons they have learned about testing all of their assumptions are sometimes forgotten.

You probably have some great personalization ideas, but it is going to take iteration and experimentation to get them right.

A final note on personalization: Always think of it in the context of the bigger picture of marketing optimization.

Insights gained from A/B testing inform future audience segments and personalized messaging, while insights derived from personalization experimentation inform future A/B testing hypotheses. And on and on.

Don’t assume that insights gained during personalization testing are only valid for those segments. These wins may be overall wins.

The best practice when it comes to personalization is to take the insights you validate within your tests and use them to inform your hypotheses in your general optimization strategy.

** Note: This post was originally published on May 3, 2016 as “How to succeed at segmentation and personalization” but has been wholly updated to reflect new personalization frameworks, case studies, and insights from Optimizely. **

Still have questions about personalization? Ask ’em in the comments, or contact us to find out how WiderFunnel can help you create a personalization strategy that will work for your company.

5 test results that made us say ‘A-ha!’ in 2016

Reading Time: 10 minutes

‘A-ha!’ moment (n.): An insight that leads to more substantial revenue lift and profitable growth for your company (e.g. the moment all Optimizers live for).

At WiderFunnel, our mission is to create profitable ‘A-ha!’ moments for our clients every day.

Last year, I created a five-part ‘A-ha!’ moments series: Five mini blog posts focused on five of our favorite insights from 2015. Well, turns out 2016 was also full of ‘A-ha!’ moments that were too good to keep to ourselves.

This post explores five of WiderFunnel’s favorite ‘A-ha!’s from the past year. I hope that they inspire you as you begin planning your 2017 experiments!

‘A-ha!’ #1: Using color psychology to increase conversions

If you follow WiderFunnel, you probably know that we are not big fans of conversion optimization ‘best practices’ like “all calls-to-action should be orange”.

Because, frankly, best practices may not be the best thing for your business. They must be proven in your business context, for your users.

That said, this first ‘A-ha!’ moment comes from a color isolation test. But, the ‘A-ha’ isn’t the result, it’s the why behind the hypothesis.

The strategy

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. Once a user asks their question, they have entered a four-step funnel:

  • Step 1: Ask the question
  • Step 2: Add more information
  • Step 3: Pick an Expert
  • Step 4: Get an answer (aka the checkout page)

We have been testing on each step of this funnel, but this particular experiment was on the all-important checkout page, the final conversion.

What can the right color do?

For each WiderFunnel client, we create a customized growth program; however, each program is built with our proven Infinity Optimization Process™. The process cycles between two phases: Explore (information-gathering) and Validate (testing and proving).

Research on consumer behavior, psychological principles, and persuasion techniques is a huge part of the Explore phase. Our Strategists use this research, along with several other information touchpoints, when developing hypotheses.

This past year, one of WiderFunnel’s favorite bloggers and researchers, Nick Kolenda, published a giant piece on color psychology. Kolenda looked at 50 academic studies on color, and compiled his findings. According to him, certain colors can inspire certain actions.

Aha! #1 color spectrum
Can certain colors influence your users’ behavior?

In the case of this client, Optimization Strategist, Nick So, wanted to see if adding a subtle, subconscious visual cue to the checkout page would be more motivational for users. He was looking, specifically, at warm colors.

Persuasion principle: Warm colors (with high saturation and low brightness) increase arousal because they trigger impulsivity, and tend to increase behavioral responses.

The test: Isolation I and isolation II

In the first isolation, Nick decided to put warm colors to the test.

Hypothesis: Increasing prominence of the checkout area by using a color linked to increasing action and responses will improve visual clarity of the page and increase conversions.

Aha! #1 Control
The client’s original checkout page.
Aha! 1 VarA
Our variation, which emphasized the payment section with a warm color background.

In the variation, Nick removed all other background colors and added a warm orange background to the payment section. And it worked! This variation saw a statistically significant 2.82% increase in conversions.

We wanted to validate this insight across audiences, so Nick created a second isolation for this client’s mobile users.

Aha! #1 mobile
From right to left: the Control, VarA, and the winning VarB.

He tested the Control against two variations: Variation B (the warm color isolation) was built on variation A, so Nick was able to track the isolation properly. In this experiment, the color change was responsible for a 2.7% lift in conversions, almost the exact same increase as in the desktop test.

A-ha!

Nick So WiderFunnel

It’s always amazing how such seemingly subtle psychological cues and persuasion elements can have a big potential impact on user behavior. We are fortunate to be able to have a client that has the traffic, trusts us, and understands testing enough to allow us to run an isolation on such an interesting concept.

– Nick So

‘A-ha!’ #2: Sometimes, all your users need is a clear next step

You may have heard the phrase “if content is king, revenue is queen”…

WiderFunnel Founder & CEO, Chris Goward, wrote, “Content is important for getting people to your site, from search algorithms to social share to links to your site, but content alone doesn’t make you revenue. Content without conversions is just free publishing.”

Our second ‘A-ha!’ moment comes from testing we have been doing with one WiderFunnel client: A content site that provides information for the individual investor. This client offers a ton of free resources on its website to help users stay on top of their finances.

Of course, they also offer subscription services, such as their newsletter and professional advisor service, which provides premium stock-picking advice to users. Our goal is to help this client increase profitable conversions.

The strategy

When we began testing with this client, there were many different paths that users could take after landing on an investing article. And there was almost no indication that there were professional services available (which is how this client makes money!).

The WiderFunnel Strategy team did an initial LIFT analysis of the site-wide navigation, which revealed several problems, like:

  • There was not a clear, primary call-to-action in the nav (Clarity)
  • There was a general lack of urgency (Urgency)
  • The “Stock Picks” menu item had a single, ambiguous drop-down option (Anxiety)
  • If someone is ready to spend money, it is not clear how to do so (Clarity)

Aha! #2 Control
The original navigation didn’t have a clear call-to-action.

We wanted to test giving users a clear action to take in the site-wide navigation. This way, a user who wanted more would know which path to take.

We tested adding a “Latest Stock Picks” call-to-action in the nav (replacing the “Stock Picks” dropdown); the assumption was that users of this client’s site are looking for stock-picking advice, specifically.

Hypothesis: Creating a clear “Latest Stock Picks” CTA in the site-wide navigation will cause more users to enter a revenue-driving funnel from all parts of the site.

The variations

We tested two variations, each of which featured the “Latest Stock Picks” call-to-action. But, in each variation this CTA took the user to a different page. Our ultimate goal was to find out:

  1. If users were even aware that there are premium paid services offered, and
  2. Which funnel is best to help users make a decision and, ultimately, a purchase?

With variation A, we added the “Latest Stock Picks” CTA in the nav. This call-to-action sent users to the homepage and anchored them in the premium services section. (This is how the functionality of the original dropdown worked.)

This section provides a lot of detail about this client’s different offerings, along with a “Sign Up Today” call-to-action.

Aha! #2 VarA
The winning variation featured a very clear call-to-action, while maintaining the same functionality as the Control.

With variation B, we wanted to test limiting choice. Rather than showing users a bunch of product options, the “Latest Stock Picks” CTA sent them directly to the professional advisor sign up page (this client’s most popular product).

Aha! #2 VarB
In this variation, the CTA sent users to a product page.

A-ha!

Both variations beat the control, with variation A resulting in an 11.17% lift in transactions with 99% confidence and variation B resulting in a 7.9% increase in transactions with 97% confidence.

Interestingly, because variation B was built on variation A, we were able to see that the isolated change (sending users directly to the product page) actually decreased transactions by 3.3%.

So, what does this mean? Here are a few takeaways we plan to explore further in 2017:

  • Users may have been unsure of how to sign up (or that they could sign up) due to lack of CTA prominence on the original site-wide navigation
    • It is also possible that Urgency was a motivator for this client’s users: Changing the “Stock Picks” drop down to a “Latest Stock Picks” CTA increased urgency and led to more conversions. This wasn’t a clear isolation but it’s good evidence to follow-up with!
  • Users prefer some degree of choice over being sent to one product (as seen with the decrease in transactions caused by variation B)

But the main moral of this ‘A-ha!’? Make sure your users know exactly where to find what you’re selling. ‘Cause content without conversions is just free publishing.

‘A-ha!’ #3: The power of proper Design of Experiments

Earlier this year, I published a case study on WiderFunnel client, weBoost. WeBoost is an e-commerce retailer and manufacturer of cellular signal boosters.

This case study explored several tests that we had run on multiple areas of the weBoost site, including a series of design tests we ran on their product category page. Our third A-ha! moment takes up where the case study left off in this series…

A quick refresher

Originally, the weBoost product category pages featured a non-traditional design layout. A large image in the top left corner, very tall product modules, and right-hand filters made these pages unique among e-commerce catalog pages.

Aha! #3 Original
The original product category page layout.

We decided to test displaying products in landscape versus the long, portrait-style modules. According to a Baymard study of e-commerce sites, technical products are easier to compare in a horizontal layout because there is more space to include specs. This was variation A.

Aha! #3 Horizontal
Variation A featured a simple change: vertical modules to horizontal.

In variation B, we wanted to explore the idea that users didn’t need to see a product details page at all. Maybe the information on the category page was all users needed to make a confident purchase.

Variation B was built on variation A, with one isolated change: We changed the primary visual call-to-action from “View Details” to “Add To Cart”.

Aha! #3 Add To Cart
Note the primary CTA in this variation: “Add To Cart”

In a backward ‘A-ha!’ moment, variation A (based on the Baymard study) decreased transactions by -9.6%. Despite our intentions, the horizontal layout might have made it more difficult for users to compare products.

But! Variation B, with the add-to-cart focus, saw a 16.4% increase in transactions against the control page. It turns out that many users are actually comfortable adding products to their cart right from the category page.

Variation B moved more users further through the funnel and ultimately resulted in a large uptick in transactions, despite the negative impact of the horizontal layout.

After comparing variation A to variation B, WiderFunnel Optimization Strategist, Michael St Laurent, estimated that the “Add To Cart” call-to-action was actually worth a lift of 28.7% in transactions.
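For the curious: one way to arrive at an estimate like this is to assume the two isolated effects combine multiplicatively, and divide the layout effect out of the combined result. A quick sketch (the multiplicative assumption is ours, for illustration):

```javascript
// Recovering the estimate, assuming the layout and CTA effects combine
// multiplicatively (an illustrative assumption, not a guarantee).
const layoutEffect = -0.096;   // Variation A vs. Control (horizontal layout)
const combinedEffect = 0.164;  // Variation B vs. Control (layout + "Add To Cart")

const ctaEffect = (1 + combinedEffect) / (1 + layoutEffect) - 1;
console.log(`Estimated CTA lift: ${(ctaEffect * 100).toFixed(1)}%`); // ≈ 28.8%
```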

The follow-up (and subsequent ‘A-ha!’)

We knew that the horizontal layout led to a decrease in transactions and we knew that the horizontal layout plus the isolated CTA change led to a sizable increase in transactions.

So, we ran the obvious follow-up experiment: We tested a variation featuring the vertical module design with the add-to-cart focused call-to-action. We expected to see at least a 29% increase in transactions. We used variation B from the previous test as the Control, following proper Design of Experiments.

Aha! #3 Final
This variation reverted to the vertical modules from the original page, and featured the “Add To Cart” CTA.

As predicted, when we tested the “Add To Cart” call-to-action on the vertical modules, we saw a whopping 38.1% increase in transactions (more than double the 16.4% increase we observed with the horizontal layout, and 9 percentage points more than the estimate).

A-ha!

It never gets old to see isolations at work. The ‘A-ha!’ moment here is that no test ever has to be a ‘loser’. If you structure your tests using isolations, you will be able to track the potential impact of each change.

Michael St Laurent WiderFunnel

This entire time, we were assuming that users needed more information to make a technical product selection. We were focused on making the specs easier to compare, when there was an entire segment of the audience that was ready to put the product in their cart without more investigation. Sometimes you have to challenge your assumptions. In this case it paid off!

– Michael St Laurent, Optimization Strategist, WiderFunnel

‘A-ha!’ #4: De-emphasizing price reduces user anxiety

One of our clients is Vital Choice, a trusted source for fast home delivery of the world’s finest wild seafood and organic fare, harvested from healthy, well-managed wild fisheries and farms.

Our fourth ‘A-ha!’ moment from 2016 came out of the testing we did with Vital Choice on their product detail pages and revolves around de-emphasizing price, in favor of value proposition points.

While the results may not be surprising, the WiderFunnel Strategy team would not have prioritized this particular test if they hadn’t done extensive user research beforehand. Because we took the pulse of Vital Choice users, we were able to reduce anxiety and provide more motivation to purchase.

The strategy

Let’s say you wanted to order a few organic, grass-fed American Wagyu beef patties from the Vital Choice website. You would have eventually landed on a detail page that looked like this (the Control in this experiment):

Aha! #4 Control
Note the prominence of price on the original detail page.

As you can see, price is displayed prominently near the ‘Add To Cart’ call-to-action. But, during the Explore (information gathering) phase, WiderFunnel Optimization Strategist, Dennis Pavlina, identified several common themes of barriers to conversion in user survey responses, including:

  1. Price: Users love Vital Choice and the excellent quality of their products, but they often mention the premium they are paying. For many users, it is a ‘treat’ and a ‘luxury’ to buy from Vital Choice. Price-related themes, such as discount codes or coupons, also came up often in surveys.
  2. Shipping: Users often express concern about how frozen perishable items are shipped, particularly in warmer climates in the U.S.

If we could reduce user anxiety in these two areas, we believed Vital Choice would see a surge in conversions.

The test

Hypothesis: Adding relevant value proposition points that justify the price and quality of the product, and adding copy to reduce anxiety around shipping in close proximity to the order area on the product page, will increase conversions.

With our variation, we wanted to address the following barriers to conversion on the original page:

  • It was unclear what users would receive in their shipment, i.e. how it would be shipped to them, how long it would take, etc. (Anxiety)
  • There were no prominently displayed value proposition points to justify the price of the product. (Value Proposition)
  • There was a lot of emphasis on the price of the product. (Anxiety)

Aha! #4 VarA
This variation addressed user anxieties by de-emphasizing price, and reassuring users of shipping guarantees.

A-ha!

This variation led to a 3.3% increase in conversions and a 2.7% increase in average order value, resulting in almost $250,000 in estimated additional annual revenue.

Conversions were up for almost every goal we tracked: Visits to checkout (step 2), visits to checkout (step 3), visits to checkout (step 4), total visits to cart, and average order value. But unique visits to cart were down.

Dennis Pavlina WiderFunnel

The most interesting part of analyzing results was noticing that, although unique visits to cart were slightly down, there was a large increase in total visits to cart. It’s a surprising pattern. We hypothesize that users may have been more confident and willing to purchase more items at once, when anxiety was reduced.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

The fact that de-emphasizing price worked for Vital Choice users isn't what made us say, ‘A-ha!’. But the proven power of listening to, and addressing, their users' stated concerns did. When in doubt, ask your users.

A-ha! #5: Quick view, long delay

A-ha! number 5 comes from testing we did with another one of our clients, a large retailer of sports goods, footwear, and apparel. We have been working with this company for more than a year to optimize their e­-commerce experiences, with the goal of increasing transactions.

Like on many e-commerce sites, users on this client’s site could view product details directly on the category page, using a Quick View functionality. When a user hovered over a product, they would see the product details in a Quick View window.

In our final ‘a-ha!’, we explore what (so often) happens when you test a common practice.

The strategy

Distraction is a very common barrier to conversion; often, there are elements on a client’s page that are diverting visitors away from the ultimate goal.

For Michael St Laurent, the Quick View option on this client’s category page was a potential distraction.


The more visual cues and action options your visitor has to process, the less likely they are to make a conversion decision. At WiderFunnel, we have found that minimizing distractions such as unnecessary product options, links, and extraneous information will increase your conversion rate.

– Michael St Laurent

So, he decided to put his theory that the Quick View is an unnecessary distraction to the test.

The test

Hypothesis: Disabling the Quick View functionality will result in reduced distraction and ultimately, more conversions.

The Control in this test was the client’s original category page, featuring the Quick View functionality.

Aha! #5 Control
The original Quick View functionality.

In the Quick View, users could quickly move from product to product on the category page without going to a product page itself.

We tested this control against a variation that removed the Quick View functionality completely.

Aha! #5 No Quick View
In our variation, we eliminated the Quick View functionality entirely.

A-ha!

It turns out the Quick View functionality was, indeed, distracting. Disabling it resulted in more product exploration as well as more transactions; transactions increased by 4% (a big lift for a high-traffic company like this one!).

If your site has a feature like Quick View or a rotating banner, you should probably test it! While ‘flashy’ functionalities are…well…flashy, they are rarely what your users want, and they may be preventing your users from actually purchasing.

At the end of every month, the WiderFunnel Strategy team shares their favorite ‘A-ha!’ moments from the past four weeks. Sometimes, the ‘A-ha!’ is an exciting result and big lift for a client; sometimes it’s an insight with a twist; sometimes it’s a ‘losing’ test that inspired a winning test.

As Chris Goward explains,

There’s no downside to communicating what you’ve learned from every test. If you view your optimization program as a strategic method for learning about your customers and prospects – for truly understanding their mindset – rather than a tactical tweaking program, you can take a broader perspective and find the gains in every test.

I hope that these ‘A-ha!’ moments inspire you to do the work, structure your tests properly, and learn constantly in 2017. And I encourage you to share your favorite ‘A-ha!’ moments in the comments section below.


Building a next-level optimization program with Heifer International

Reading Time: 5 minutes

A little over a year ago, Harper Grubbs, Director of Digital Marketing at Heifer International, began his search for a conversion optimization partner.

As luck would have it, one of Heifer’s partners pointed Harper in the direction of WiderFunnel. Harper did some research and decided to reach out…and so began a very exciting partnership.

This is the tale of how an international charity organization and a growth agency joined forces to the benefit of all: client, agency, donors, and so many in need.

Heifer’s Director of Digital Marketing, Harper Grubbs, tells his side of the story.

Who is Heifer International?

Founded in 1944, Heifer International is a charity organization working to end hunger and poverty around the world by providing livestock and training to struggling communities.

Based on the “teach a man to fish” philosophy, Heifer International provides livestock and environmentally sound agricultural training to help farmers around the world who are struggling for reliable sources of food and income.

Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.

Their holistic approach is designed to help communities become drivers of their own change by cultivating income and nutrition, improving the environment, empowering women around the world, helping communities create social capital, and encouraging families that receive support from Heifer to “Pass on the Gift®” to others in their communities.

Of course, to do what they do, Heifer relies heavily on donations made on their website.


We receive a significant percentage of our donations as an organization through the website. So, it’s very important that our website be effective, that it work as well as possible and provide our visitors with an optimal experience. And that’s where WiderFunnel has come in for us.

– Harper Grubbs

Searching for the right partner

Harper and his team had been doing conversion optimization in-house and with the help of external digital marketing consultants for a number of years, but they just weren’t seeing the desired effect.

They found that they were running a lot of experiments, but were not getting significant, projectable results that would actually impact Heifer’s bottom line.

Harper wanted to take Heifer’s conversion optimization efforts to the next level. So, he began the search for an external partner: a new team and new resources that he could rely on to supplement his existing team.


We realized we needed a more structured approach to our optimization efforts, and we needed more resources to do it. We were really taxing the resources we had and weren’t devoting as much time to conversion optimization as it needed.

– Harper Grubbs

When Harper heard about WiderFunnel, he did his due diligence and discovered our LIFT Model framework and structured approach to optimization: “We were really impressed with the strategic approach [WiderFunnel] brings to optimization,” says Harper.

WiderFunnel’s LIFT Model details the 6 conversion factors.

Harper reached out, and before long, we were LIFT-ing Heifer’s website, together.

Getting in the donor’s head

With Heifer, the WiderFunnel strategy team really wanted to dig into donor motivations and develop experiments that would help answer questions like:

  1. What fundamentally motivates someone to feel charitable?
  2. Are donors sensitive to how other site visitors are donating? Do they respond to social proof?
  3. How many product choices is too many?
  4. Does positive versus negative messaging/imagery have an effect on donations?

Some of the experiments we have run tap into psychological principles, such as Robert Cialdini’s Rule of Consistency, which states that people want to be consistent in all areas of life; once someone takes an action, no matter how small, they strive to make future behavior match that past behavior.

We put this principle to the test in one experiment, asking Heifer users to self-identify as a donor type when they land on the site.

What kind of donor are you?

We found that once someone starts thinking of themselves as a “donor”, their need to remain consistent kicks in, resulting in more donors for Heifer! This is just one of the many psychology-inspired tests we have run and are running with Heifer.

“WiderFunnel has incorporated some really interesting psychological and behavioral principles into the strategy of our optimization efforts. This has given us the ability to learn things we never would have understood on our own, it has given us better insights into how people are thinking and what they’re doing and why they do what they do on our website.

This has given us the ability to apply these aspects to other parts of our work so that when we are developing new features and functionality, we can consider ‘How do these principles apply to what we are doing currently?‘” explains Harper.

Harper emphasizes that the psychological testing we have done has yielded some of the clearest test results he’s ever seen, which has led to better overall website results.

The science of partnership

Several months into the engagement, the partnership had really begun to gel.

WiderFunnel Optimization Strategist, Michael St Laurent, and Optimization Coordinator, Ervin Cho, work closely with Harper and his team to continue to transform Heifer’s conversion optimization program and overall user experience.

“We aren’t just optimizing their digital experiences, we are optimizing the relationship,” explains Ervin. The two teams need to be properly integrated to ensure that all of the valuable ideas from each side are being heard and tested.

We credit several of WiderFunnel’s consistent process requirements for the partnership’s success.

A good partnership has many touchpoints.

Weekly meetings

Each week, Mike and Ervin meet with Harper and his team. Together, they review the experiment pipeline: What’s in the pipe right now? What are the new ideas each of us have? After some discussion, they prioritize what to tackle next and funnel each test idea into their shared pipeline.

Mike also recently went on-site with Heifer and some of their partners to brainstorm ideas and determine the best ways to share WiderFunnel’s learnings among partners (and the best ways to learn from what Heifer’s partners are doing).

One key decision-maker

Rather than going through multiple contacts for experiment approval, Mike and Harper are able to determine next steps together.

Open communication

Transparency is a necessity. Both the WiderFunnel strategy team and Heifer’s team are open and communicative about what is working and what isn’t working. We don’t hide anything from them and they don’t hide anything from us.

Collaboration and fresh ideas


The team that we’ve worked with has been really impressive in terms of their level of knowledge and their sophistication with how they approach testing. They have brought a lot of insights to the team that we would not have had ourselves through their experience with other companies they have worked with.

They have just really exceeded our expectations in terms of delivering good strategy and delivering a sound project management approach, and having a good structure for how our website is optimized.

– Harper Grubbs

Harper and his team know their mission and their donors, but, at WiderFunnel, we have run thousands of tests. We have seen what works over and over again, and we have seen what fails over and over again. When Heifer approaches us with upcoming changes, we are able to give their team expert advice on how to improve the experience.

“Behavioral research drives a lot of what we do with Heifer,” explains Mike. “It’s not necessarily about knowing the specific industry, it’s about understanding what fundamentally motivates people to feel charitable, and looking at ways we can leverage that across the site.”

Harper explained that he is really excited about the level of partnership his team is achieving with the WiderFunnel team. “We’re really working to integrate the WiderFunnel team with our own team and are working more collaboratively, to integrate the strategy, approach, and ideas that they have.”

Harper’s goal for the future of Heifer’s partnership with WiderFunnel: “Every piece of work that we do informs the next, and we have one, single, combined approach to our website strategy.”


Get your website testing-ready with the Technical Optimizer’s Checklist

Reading Time: 9 minutes

If you were planning to race your car, you would want to make sure it could handle the road, right?

Imagine racing a car that is not ready for the surprises of the road. A road that is going to require you to twist and turn constantly, and react quickly to the elements.

You would find yourself on the side of the road in no time.

A well-outfitted car, on the other hand, is able to handle the onslaught of the road and, when the dust settles, reach the finish line.

Well, think of your website like the car and conversion optimization like the race. Too many companies jump into conversion optimization without preparing their website for the demands that come with testing.

Get the Technical Optimizer’s Checklist

Download and print off this checklist for your technical team. Check off each item and get prepared for smooth A/B testing ahead!




But proper technical preparation can mean a world of difference when you are trying to develop tests quickly, and with as few QA issues as possible. In the long-run, this leads to a better testing rhythm that yields results and insights.

With 2017 just around the corner, now is a good time to look ‘under the hood’ of your website and make sure it is testing-ready for the New Year; to make sure you have built your website to stand the tests to come (pun intended).

In order to test properly, and validate the great hypotheses you have, your site must be flexible and able to withstand changes on the fly.

With the help of the WiderFunnel web development team, I have put together a shortlist to help you get your website testing-ready. Follow these foundational steps and you’ll soon be racing through your testing roadmap with ease.

To make these digestible for your website’s mechanics, I have broken them down to three categories: back-end, front-end, and testing best practices.

Back-end setup a.k.a. ‘Under the hood’

Many websites were not built with conversion optimization in mind. So, it makes sense for you to revisit the building blocks of your website and make some key changes on the back-end that will make it much easier for you to test.

1) URL Structure

Just as having a fine-tuned transmission for your vehicle is important, so is having a well-written URL structure for your website. Good URL structure equals easier URL targeting. (‘Targeting’ is the feature you use to tell your testing tool where your tests will run on your website.) This makes targeting your tests much simpler and reduces the possibility of including the wrong pages in a test.

Let’s look at an example of two different URL targeting options that you might use. One is a RegEx, which in JavaScript is used for text-based pattern matching. The other is a Substring match, which in this case is the category name with a slash on either side.

RegEx Example

Products to include:

  • www.test.com/ab82
  • www.test.com/F42
  • www.test.com/september/sale98

Products to exclude:

  • www.test.com/F4255

Targeting:

  • RegEx: (ab82|F42|sale98)


Substring Example

Products to include:

  • www.test.com/products/engines/brandengine/
  • www.test.com/products/engines/v6turbo
  • www.test.com/products/sale/september/engines/v8

Products to exclude:

  • www.test.com/products/sale/september/wheel/alloy

Targeting:

  • Substring: /engines/

In the first example, the company assigned URLs for their product pages based on their in-house product numbers. While writing a targeting rule based on RegEx is not difficult (if you know JavaScript), it is still time-consuming. In fact, the targeting in the first example is wrong. Tell us why in the comments!

On the other hand, the second example shows a company that structured all of their product URLs and categories. Targeting in this case uses a match for the substring “/engines/” and allows you to exclude other categories, such as ‘wheels’. Proper URL structure means smoother and faster testing.
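
Before configuring either rule in your testing tool, it is worth sanity-checking it against a handful of URLs. Below is a minimal sketch you could paste into your browser console; the URLs and both rules are the hypothetical examples from above, not real targeting conditions.

  // Check hypothetical product URLs against both targeting rules from above.
  var regexRule = /(ab82|F42|sale98)/;   // RegEx targeting (first example)
  var substringRule = '/engines/';       // Substring targeting (second example)

  var urls = [
    'www.test.com/F42',                                   // should be included
    'www.test.com/F4255',                                 // should be excluded... is it?
    'www.test.com/products/engines/v6turbo',              // should be included
    'www.test.com/products/sale/september/wheel/alloy'    // should be excluded
  ];

  urls.forEach(function (url) {
    console.log(url,
      '| RegEx match:', regexRule.test(url),
      '| Substring match:', url.indexOf(substringRule) !== -1);
  });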

2) Website load time or ‘Time to first paint’

‘Time to first paint’ refers to the initial load of your page, or the moment your user sees that something is happening. Today, people have very short attention spans and get frustrated with slow load times. And when you are testing, ‘time to first paint’ can become even more of a concern, with risks like FOOC (the ‘flash of original content’, where visitors briefly see the original page before the variation loads) and even slower load times.

So, how do you reduce your website’s time to first paint? Glad you asked:

  • Within the HTML of your page:
    • Move any JavaScript that influences content below the fold to the bottom of the body, and make these sections load asynchronously (so they do not block the content above them from rendering first). This includes any external functionality that your development team brings in beyond the basic HTML and CSS, such as interactive calendars, sliders, etc.
    • Within the head tag, move the code snippet of your testing tool as high as you can―the higher the better.
  • Minify* your JS and CSS files so that they load into your visitor’s browser faster. Then, bring all JS and CSS into a single file for each type. This allows your user’s browser to pull content from two files, instead of referring to many separate files for the instructions it needs; the difference between reading from 15 different documents and two condensed ones.
  • Use sprites for all your images. Loading a sprite means loading multiple images into the DOM* at once, as opposed to loading each image individually, with each separate load slowing the page down.
Load all of your images in sprites.

While these strategies are not exhaustive, if you do all of the above, you’ll be well on your way to reducing your site load time.
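
To make the ordering concrete, here is a minimal, hypothetical page skeleton that follows the advice above; the file names and testing-tool URL are placeholders, not real libraries.

  <!DOCTYPE html>
  <html>
  <head>
    <!-- Testing tool snippet as high in the head as possible -->
    <script src="//cdn.your-testing-tool.example/snippet.js"></script>
    <!-- One minified CSS file instead of many -->
    <link rel="stylesheet" href="/css/site.min.css">
  </head>
  <body>
    <!-- Above-the-fold content goes here, so it paints first -->

    <!-- Below-the-fold JavaScript at the bottom of the body, loaded asynchronously -->
    <script async src="/js/calendar-widget.min.js"></script>
    <script async src="/js/image-slider.min.js"></script>
  </body>
  </html>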

3) Make it easy to differentiate between logged-in and logged-out users

Many websites have logged-in and logged-out states. However, few websites make it easy to differentiate between these states in the browser. This can be problematic when you are testing, if you want to customize experiences for both sets of users.

The WiderFunnel development team recommends using a cookie or JavaScript method that returns True or False: when a user is logged in, it returns ‘True’; when a user is logged out, ‘False’.

This will make it easier for you to customize experiences and implement variations for both sets of users. Not doing so will make the process more difficult for your testing tool and your developers. This strategy is particularly useful if you have an e-commerce website, which may have different views and sections for logged-in versus logged-out users.
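
As a minimal sketch of this recommendation, assuming a hypothetical ‘loggedIn’ cookie that your server sets at login, the method could be as simple as:

  // Returns true when the (hypothetical) "loggedIn" cookie is present.
  function isLoggedIn() {
    return document.cookie.indexOf('loggedIn=true') !== -1;
  }

  // A testing tool's audience condition can then target each state:
  // isLoggedIn() === true  -> logged-in experience
  // isLoggedIn() === false -> logged-out experience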

4) Reduce clunkiness a.k.a. avoid complex elements

Here, I am referring to reducing the number of special elements and functionalities that you add to your website. Examples might include date-picking calendars, images brought in from social media, or an interactive slider.

This calendar widget might look nice, but is it valuable enough to merit inclusion?

While elements like these can be cool, they are difficult to work with when developing tests. For example, let’s say you want to test a modal on one of your pages, and have decided to use an external library which contains the code for the modal (among other things). By using an external library, you are adding extra code that makes your website more clunky. The better bet would be to create the modal yourself.

Front-end setup

The front-end of your website is not just the visuals that you see, but the code that executes behind the scenes in your user’s browser. The changes below are web development best practices that will help you increase the speed of developing tests, and reduce stress on you and your team.

1) Breakpoints – Keep ’em simple speed racer!

Assuming your website is responsive, it will respond to changes in screen sizes. Each point at which the layout of the page changes visually is known as a breakpoint. The most common breakpoints are:

  • Mobile – 320px and 420px
  • Tablet and Desktop – 768px, 992px, 1024px, and 1200px
Each point at which the layout of your page changes visually is known as a ‘breakpoint’.

Making your website accessible to as many devices as possible is important. However, too many breakpoints can make it difficult to support your site going forward.

When you are testing, more breakpoints means you will need to spend more time QA-ing each major change to make sure it is compatible in each of the various breakpoints. The same applies to non-testing changes or additions you make to your website in the future.

Spending a few minutes looking under the hood at your analytics will give you an idea of the top devices, and therefore the breakpoints, that matter most for your users.

Source: Google Analytics demo account.

Above, you can see an example taken from the Google Analytics demo account: Only 2% of sessions are Tablet, so planning for a 9.5 inch screen may be a waste of this team’s time.

Use a standard, minimal number of breakpoints instead of many. You don’t need eight wheels when four will easily get the job done. Follow the rule of “designing for probabilities, not possibilities”.
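
In CSS, a minimal mobile-first setup might look like the sketch below: base styles for small screens, plus two media queries instead of eight. The class name and widths are illustrative assumptions, not a prescription.

  /* Base (mobile-first) styles apply from the smallest screens up */
  .product-grid { display: block; }

  /* Tablet and small desktop */
  @media (min-width: 768px) {
    .product-grid { width: 750px; margin: 0 auto; }
  }

  /* Large desktop */
  @media (min-width: 1200px) {
    .product-grid { width: 1170px; }
  }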

2) Stop using images in place of text in your UI

Let’s say your website works in the many breakpoints and browsers you wish to target. However, you’re using images for your footer and main calls-to-action.

  • Problem 1: Your site may respond to each breakpoint, but the images you are using may blur.
  • Problem 2: If you need to add a link to your footer or change the text of your call-to-action, you have to create an entirely new image.
Avoid blurry calls-to-action: Use buttons, not images.

Use buttons instead of images for your calls-to-action, use SVGs instead of raster icons, and use code to create UI elements instead of images. Only use images for content or UI that would be too technically difficult, or impossible, to write in code.
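
For instance, a call-to-action written in plain CSS (hypothetical class name and colors below) stays crisp at every breakpoint, and changing its text is a one-line edit rather than a new image export:

  .wf-cta-button {
    display: inline-block;
    padding: 12px 32px;
    background-color: #e85d00;  /* illustrative brand color */
    color: #fff;
    border-radius: 4px;
    font-size: 16px;
    text-decoration: none;
  }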

3) Keep your HTML and CSS simple

Keep it simple: Stop putting CSS within your HTML. Use div tags sparingly. Pledge to not put everything in tables. Simplicity will save you in the long run!

No extra div tags! Source: 12 Principles for Keeping your Code Clean

Putting CSS in a separate file keeps your HTML clean, and you will know exactly where to look when you need to make CSS changes. Reducing the number of div tags, which are used to create sections in code, also cleans up your HTML.

These are general coding best practices, but they will also ensure you are able to create test variations faster by decreasing the time needed to read the code.
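
Here is a minimal before-and-after sketch of the idea (hypothetical markup and class name):

  <!-- Before: inline CSS and an unnecessary wrapper div -->
  <div><div style="color: #333; font-size: 14px;">Free shipping on orders over $99</div></div>

  <!-- After: one semantic tag, styled from the external stylesheet -->
  <p class="wf-shipping-note">Free shipping on orders over $99</p>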

Tables, on the other hand, are just bad news when you are testing. They may make it easy to organize elements, but they increase the chance of something breaking when you are replacing elements using your testing tool. Use a table when you genuinely want to display tabular data; avoid using tables to lay out page elements with the borders hidden.

Bonus tip: Avoid using iFrames* unless absolutely necessary. Putting a page within a page is difficult: don’t do it.

4) Have a standard for naming classes and IDs

Classes and IDs are the attributes you add to HTML tags to organize them. Once you have added them in your HTML, you can use them in your CSS as selectors, to make changes to whole groups of tags that share a Class or ID.

You should implement a company-wide standard for your HTML tags and their attributes. Add in standardized attribute names for Classes and IDs, even for list tags. Most importantly, do not use the same class names for elements that are unrelated!

Example:

[Image: sample HTML in which every apple-related element shares the standardized class “wf-apples”]

Looking at the above example, let’s say I am having a sale on apples and want to make all apple-related text red to bring attention to apples. I can do that, by targeting the “wf-apples” class!

Not only is this a great decision for your website, it also makes targeting easier during tests. It’s like directions when you’re driving: you want to be able to tell the difference between the second and third right instead of just saying “Turn right”.
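
Based on the example above, that sale-day change is a single rule in your stylesheet (or in your testing tool’s variation code):

  /* Turn every apple-related element red for the sale */
  .wf-apples {
    color: red;
  }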

Technical testing ‘best practices’ for when you hit the road

We have written several articles on testing best practices, including one on the technical barriers to A/B testing. Below are a couple of extra tips that will improve your current testing flow without requiring you to make changes to your website.

1) If you can edit in CSS, then do it

See the Pen wf-css-not-js by Ash (@ashwf) on CodePen.

Above is an animation that WiderFunnel developer Thomas Davis created. One tab shows the code written as a stylesheet in CSS; the other tab shows the same animation written in JavaScript.

JavaScript is 4-wheel drive. Don’t turn it on unless you absolutely need to, ‘cause you’re going to get a lot more power than you need. CSS effects are smoother, easier to work with, and execute faster when you launch a test variation.
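
As a minimal illustration (hypothetical class name), a fade that would take a timer loop or animation library in JavaScript is just two rules in CSS:

  .wf-promo-banner {
    opacity: 1;
    transition: opacity 0.3s ease-in-out;
  }

  .wf-promo-banner:hover {
    opacity: 0.6;
  }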

2) Don’t pull content from other pages while testing

When you are creating a variation, you want to avoid bringing in unnecessary elements from external pages. This approach requires more time in development and may not be worth it. You have already spent time reducing the clunkiness of your code, and bringing in external content will reverse that.

The important question when you are running a test is the ‘why’ behind it, and the ‘what’ you want to get out of it. Sometimes, it is ok to test advanced elements to get an idea of whether your customers respond to them. My colleague Natasha expanded on this tactic in her article “Your growth strategy and the true potential of A/B testing”.

3) Finally, a short list of do’s and don’ts for your technical team

  • Don’t just override CSS inline or pile extra styles onto an element; put your changes in the variation’s CSS file, and don’t use !important (see the sketch after this list)
  • Don’t just write code that acts as a ‘band-aid’ over the current code. Solve the problem, so there aren’t bugs that come up for unforeseen situations.
  • Do keep refactoring
  • Do use naming conventions
  • Don’t use animations: You don’t know how they will render in other browsers
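
To illustrate the first point, here is a minimal sketch (hypothetical class names) of the difference between a ‘band-aid’ override and a clean variation rule:

  /* Avoid: forcing the change through with !important */
  .cta-button { background-color: #0a7d00 !important; }

  /* Prefer: a rule in the variation's own CSS file, scoped with a
     variation class so it applies only while the variation runs */
  .wf-variation-b .cta-button { background-color: #0a7d00; }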

Glossary

DOM: The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents.

iFrame: The iframe tag specifies an inline frame. An inline frame is used to embed another document within the current HTML document.

Minification: Making files smaller in size, which reduces the amount of time needed to download them.


What types of problems does your development team tackle when testing? Are there any strategies that make testing easier from a technical standpoint that are missing from this article? Let us know in the comments!
