Tag Archives: test

How to Micro Test New Product/Service Ideas Using AdWords

Launching a new business idea or deciding to develop a new product for your company is not without risk. Many of the best business ideas have come from inspiration, intuition, or in-depth insight into an industry. While some of these ideas have risen to dominate the modern world, such as search engines, barcodes, and credit card readers, many fine ideas still end in bankruptcy due to insufficient demand or a failure to properly research customer desire. If you build it, will they come? Even smart entrepreneurs can make big mistakes. With new product, service or business…

The post How to Micro Test New Product/Service Ideas Using AdWords appeared first on The Daily Egg.

See the original article here: 

How to Micro Test New Product/Service Ideas Using AdWords

How to do server-side testing for single page app optimization

Reading Time: 5 minutes

Gettin’ technical.

We talk a lot about marketing strategy on this blog. But today, we are getting technical.

In this post, I team up with WiderFunnel front-end developer, Thomas Davis, to cover the basics of server-side testing from a web development perspective.

The alternative to server-side testing is client-side testing, which has arguably been the dominant testing method for many marketing teams, due to ease and speed.

But modern web applications are becoming more dynamic and technically complex. And testing within these applications is becoming more technically complex.

Server-side testing is a solution to this increased complexity. It also allows you to test much deeper. Rather than being limited to testing images or buttons on your website, you can test algorithms, architectures, and re-brands.

Simply put: If you want to test on an application, you should consider server-side testing.

Let’s dig in!

Note: Server-side testing is a tactic that is linked to single page applications (SPAs). Throughout this post, I will refer to web pages and web content within the context of a SPA. Applications such as Facebook, Airbnb, Slack, BBC, Codecademy, eBay, and Instagram are SPAs.


Defining server-side and client-side rendering

In web development terms, “server-side” refers to “occurring on the server side of a client-server system.”

The client refers to the browser, and client-side rendering occurs when:

  1. A user requests a web page,
  2. The server finds the page and sends it to the user’s browser,
  3. The page is rendered on the user’s browser, and any scripts run during or after the page is displayed.
[Diagram: A basic representation of server-client communication with a static app server.]

The server is where the web page and other content live. With server-side rendering, the requested web page is sent to the user’s browser in final form:

  1. A user requests a web page,
  2. The server interprets the script in the page, and creates or changes the page content to suit the situation
  3. The page is sent to the user in final form and then cannot be changed using server-side scripting.

To talk about server-side rendering, we also have to talk a little bit about JavaScript. JavaScript is a scripting language that adds functionality to web pages, such as a drop-down menu or an image carousel.

Traditionally, JavaScript has been executed on the client side, within the user’s browser. However, with the emergence of Node.js*, JavaScript can also be run on the server side; in that setup, the JavaScript executing on the server runs through Node.js.

*Node.js is an open-source, cross-platform JavaScript runtime environment, used to execute JavaScript code server-side. It uses the Chrome V8 JavaScript engine.

In layman’s (ish) terms:

When you visit a SPA web application, the content you are seeing is either being rendered in your browser (client-side), or on the server (server-side).

If the content is rendered client-side, JavaScript builds the application HTML content within the browser, and requests any missing data from the server to fill in the blanks.

Basically, the page is incomplete upon arrival, and is completed within the browser.

If the content is being rendered server-side, your browser receives the application HTML, pre-built by the server. It doesn’t have to fill in any blanks.
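To make the distinction concrete, here is a minimal sketch of server-side rendering with Node.js and Express. The route, page data, and markup are hypothetical placeholders, not taken from this article:

```typescript
import express from "express";

const app = express();

// Stand-in for whatever data the page needs; in a real SPA this might
// come from a database or an internal API call.
const page = { title: "Summer sale", body: "Everything 20% off this week." };

app.get("/", (_req, res) => {
  // Server-side rendering: the server builds the final HTML string before
  // sending it, so the browser receives a complete page with no blanks to fill.
  const html = `<html><body><h1>${page.title}</h1><p>${page.body}</p></body></html>`;
  res.send(html);
});

app.listen(3000, () => console.log("SSR demo listening on port 3000"));
```

With client-side rendering, that same route would instead return a near-empty HTML shell plus a JavaScript bundle, and the browser would build the content itself.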

Why do SPAs use server-side rendering?

There are benefits to both client-side rendering and server-side rendering, but render performance and page load time are two huge pros for the server side.

(A 1 second delay in page load time can result in a 7% reduction in conversions, according to Kissmetrics.)

Server-side rendering also enables search engine crawlers to find web content, improving SEO; and social crawlers (like the crawlers used by Facebook) do not evaluate JavaScript, making server-side rendering beneficial for social searching.

With client-side rendering, the user’s browser must download all of the application JavaScript, and wait for a response from the server with all of the application data. Then, it has to build the application, and finally, show the complete HTML content to the user.

All of which is to say: with a complex application, client-side rendering can lead to sloooow initial load times. And, because client-side rendering relies on each individual user’s browser, the developer only has so much control over load time.

Which explains why some developers are choosing to render their SPAs on the server side.

But, server-side rendering can disrupt your testing efforts, if you are using a framework like Angular or React.js. (And the majority of SPAs use these frameworks).

The disruption occurs because the version of your application that exists on the server becomes out of sync with the changes being made by your test scripts on the browser.

NOTE: If your web application uses Angular, React, or a similar framework, you may have already run into client-side testing obstacles. For more on how to overcome these obstacles, and successfully test on AngularJS apps, read this blog post.


Testing on the server side vs. the client side

Client-side testing involves making changes (the variation) within the browser by injecting JavaScript after the original page has already loaded.

The original page loads, the content is hidden, the necessary elements are changed in the background, and the ‘new’ version is shown to the user post-change. (Because the page is hidden while these changes are being made, the user is none the wiser.)
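As a rough illustration of that hide-modify-show pattern, here is a minimal client-side sketch using plain DOM code (not any particular testing tool’s API; the element selector and copy are hypothetical):

```typescript
// Hide the page as early as possible so the visitor never sees the original.
const hide = document.createElement("style");
hide.textContent = "body { opacity: 0 !important; }";
document.head.appendChild(hide);

document.addEventListener("DOMContentLoaded", () => {
  // Apply the variation changes in the background.
  const cta = document.querySelector<HTMLAnchorElement>("#main-cta");
  if (cta) {
    cta.textContent = "Start your free trial"; // hypothetical variation copy
  }
  // Reveal the modified page: the user only ever sees the 'new' version.
  hide.remove();
});
```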

As I mentioned earlier, the advantages of client-side testing are ease and speed. With a client-side testing tool like VWO, a marketer can set up and execute a simple test using a WYSIWYG editor without involving a developer.

But for complex applications, client-side testing may not be the best option: Layering more JavaScript on top of an already-bulky application means even slower load time, and an even more cumbersome user experience.

A Quick Hack

There is a workaround if you are determined to do client-side testing on a SPA. Web developers can take advantage of features like Optimizely’s conditional activation mode to make sure that testing scripts are only executed when the application reaches a desired state.

However, this can be difficult, as developers will have to take many variables into account, like location changes performed by the $routeProvider, or triggering interaction-based goals.

To avoid flicker, you may need to hide content until the front-end application has initialized in the browser, negating the performance benefits of using server-side rendering in the first place.

[Image: Activation Mode waits until the framework has loaded before executing your test.]



When you do server-side testing, there are no modifications being made at the browser level. Rather, the parameters of the experiment variation (‘User 1 sees Variation A’) are determined at the server route level, and hooked straight into the JavaScript application through a service provider.

Here is an example where we are testing a pricing change:
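The original embedded snippet is not reproduced here, but a minimal sketch of the idea might look like the following, assuming an Express route and the Optimizely full-stack Node SDK (createInstance/activate). The experiment key, prices, datafile path, and user ID source are all hypothetical:

```typescript
import express from "express";
import * as optimizelySdk from "@optimizely/optimizely-sdk";
import datafile from "./optimizely-datafile.json"; // hypothetical local copy of the project datafile

const optimizelyClient = optimizelySdk.createInstance({ datafile });
const app = express();

app.get("/pricing", (req, res) => {
  // Bucket the visitor on the server, before any HTML is rendered.
  const userId = String(req.query.userId ?? "anonymous"); // hypothetical ID source
  const variation = optimizelyClient?.activate("pricing_experiment", userId);

  // The assigned variation decides which price the rendered page shows.
  const monthlyPrice = variation === "lower_price" ? 29 : 39;

  res.send(`<h1>Pro plan: $${monthlyPrice}/month</h1>`);
});

app.listen(3000);
```

Because the decision happens at the route level, the browser never has to hide, swap, or re-render anything: it simply receives the variation it was assigned.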

“Ok, so, if I want to do server-side testing, do I have to involve my web development team?”

Yep.

But, this means that testing gets folded into your development team’s work flow. And, it means that it will be easier to integrate winning variations into your code base in the end.

If yours is a SPA, server-side testing may be the better choice, despite the work involved. Not only does server-side testing embed testing into your development workflow, it also broadens the scope of what you can actually test.

Rather than being limited to testing page elements, you can begin testing core components of your application’s usability like search algorithms and pricing changes.

A server-side test example!

For web developers who want to do server-side testing on a SPA, Tom has put together a basic example using the Optimizely SDK. This example is an illustration, and is not functional.

In it, we are running a simple experiment that changes the color of a button. The example is built using Angular Universal and Express.js. A global service provider is used to fetch the user’s variation from the Optimizely SDK.

Here, we have simply hard-coded the user ID. However, Optimizely requires that each user have a unique ID. Therefore, you may want to use the user ID that already exists in your database, or store one in a cookie through Express’s cookie middleware.
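Tom’s original code is not embedded here, but a stripped-down sketch of such a global service provider might look like the following. The Angular injectable, experiment key, variation names, datafile path, and hard-coded user ID are illustrative placeholders, not the article’s actual code:

```typescript
import { Injectable } from "@angular/core";
import * as optimizelySdk from "@optimizely/optimizely-sdk";
import datafile from "./optimizely-datafile.json"; // hypothetical datafile

@Injectable({ providedIn: "root" })
export class ExperimentService {
  private client = optimizelySdk.createInstance({ datafile });

  // Hard-coded purely for illustration; in practice use an ID from your
  // database, or one stored in a cookie via Express middleware.
  private userId = "user-1";

  buttonColor(): string {
    const variation = this.client?.activate("button_color_experiment", this.userId);
    return variation === "variation_green" ? "#2ecc71" : "#3498db";
  }
}
```

A component template could then bind its button’s background color to buttonColor(), so whichever variation the SDK assigns is what gets rendered, whether on the server (Angular Universal) or in the browser.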

Are you currently doing server-side testing?

Or, are you client-side testing on a SPA? What challenges (if any) have you faced? How have you handled them? Do you have any specific questions? Let us know in the comments!

The post How to do server-side testing for single page app optimization appeared first on WiderFunnel Conversion Optimization.

Continue reading – 

How to do server-side testing for single page app optimization

Can Your Audience and Google Love the Same Page Title?


“What Do Department Store Santas and Prostitutes Have in Common?” “Why Do Drug Dealers Still Live at Home with Their Mothers?” These are two chapter titles from the Steven Levitt and Stephen Dubner book Freakonomics, a work that has captured the interest of hundreds of thousands of readers. One of the big draws of this book is the catchy and intriguing title for each chapter. You just want to read on. But how would Google rate those titles in terms of SEO? Where are the keywords/keyword phrases that are popular and commonly used by generic searches? These titles would be…

The post Can Your Audience and Google Love the Same Page Title? appeared first on The Daily Egg.

Read more: 

Can Your Audience and Google Love the Same Page Title?

Optimizing Mobile Home Page Increases Conversions for Wedding Shoes Website

Elegant Steps offers a large selection of wedding shoes in the UK, both online and in store. More than 50% of its users are new, female users discovering the website organically through mobile. The bulk of them are brides-to-be who are looking for wedding shoes.

Problem

After looking at Elegant Steps’ Google Analytics (GA) data, it was found that while its desktop website was converting at 2%, the mobile version was converting at a much lower 0.6%.

Observations

Hit Search, a digital marketing agency, used VWO to help Elegant Steps dig deep into the problem. They used GA, heuristic analysis, and VWO’s scrollmaps and heatmaps capabilities to find that:

  • Hardly any visitors were scrolling enough to reach the Shop by Brand section on the home page.
  • Elegant Steps’ 3 main USPs, including free shipping, weren’t appearing above the fold on mobile.
  • The text on the home page image was hard to read because it was the same color as the background.

This is how the home page looked on mobile:

[Screenshot: The Elegant Steps mobile home page (control)]

Hypothesis

Armed with these observations, Niall Brooke from Hit Search set about optimizing the mobile home page to fix the problems. It was decided to:

  • Introduce the Shop by Brand section higher up on the page, as the presence of established names is known to help instill trust and assuage fears.
  • Display “Free Shipping” above the fold, since many studies have found that unexpected shipping cost is the biggest reason for cart abandonment. It was hypothesized that this would help reduce bounce and encourage users to continue down the conversion funnel.
  • Change the CTA copy from the generic “Shop Wedding Shoes” to the possessive “Find my new wedding shoes.”
  • Change the text color on the image so that the text is readable.

This is how the variation looked:

[Screenshot: The Elegant Steps mobile home page (variation)]

Test

Hit Search ran the new version of the home page against the original only for mobile visitors, using VWO’s targeting capability. Niall set VWO’s Bayesian-powered statistics engine to “High-Certainty” mode, and the results kicked in within a month.

Results

“The results were positive with almost a threefold increase in conversions and almost a 50% drop in bounce rate,” said Niall.

In his closing thoughts, Niall added, “VWO is a brilliant all-round conversion optimization platform which we use on a daily basis to perform user analysis, A/B and split tests.”

Mobile an afterthought?

According to a 2015 report, the average conversion rate for mobile websites in the US was 1.32%, significantly lower than its desktop counterpart (3.82%). Though studies have suggested that visitors mostly use mobile for research purposes and make the actual purchase through the desktop website, there’s no denying that online retailers are still leaving money on the table. We would love to hear your thoughts about optimizing mobile websites. When does it become important for you to start looking at mobile optimization? Just hit us up in the comments section below.


The post Optimizing Mobile Home Page Increases Conversions for Wedding Shoes Website appeared first on VWO Blog.

Link: 

Optimizing Mobile Home Page Increases Conversions for Wedding Shoes Website

How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users

The following is a case study about how Tough Mudder achieved a 9% session uplift by optimizing for mobile. With the help of altima° and VWO, they identified and rectified pain points for their mobile users, to provide seamless event identification and sign-ups. 


About the Company

Tough Mudder offers a series of mud and obstacle courses designed to test physical strength, stamina, and mental grit. Events aren’t timed races, but team activities that promote camaraderie and accomplishment as a community.

Objective

Tough Mudder wanted to ensure that enrolment on their mobile website was smooth and easy for their users. They partnered with altima°, a digital agency specializing in eCommerce, and VWO to ensure seamless event identification and sign-ups.

Research on Mobile Users

The agency first analyzed Tough Mudder’s Google Analytics data to identify any pain points across participants’ paths to enrollment. They analyzed existing conversion rates from the Event List, which suggested that interested shoppers were not able to identify the events appropriate for them. The agency began to suspect that customers on mobile might not be discovering events easily enough.

Test

On the mobile version of the original page, the most relevant pieces of information, like the event location and date, were being pushed too far below the fold. In addition, less relevant page elements were possibly distracting users from the task at hand. This is how it looked:

[Screenshot: Event location and date well below the fold on the original page]

The agency altima° decided to make the following changes in the variation:

  1. Simplified header: Limiting the header copy to focus on the listed events. [Image: Simplified header copy]
  2. List redesign: Redesigning the filter and event list to prominently feature the events themselves. [Image: List redesign to optimize event location and date]
  3. Additionally, an urgency message was added to encourage interested users to enroll in events nearing their deadline. [Image: Urgency message to push quicker enrollments]

For these three changes, seven different combinations were created, and a multivariate test was run using VWO. The test recorded over 2,000 event sign-ups across 4 weeks. [Image: The combinations of variations tested]

Test Results

After 4 weeks, Variation 2, which included the redesigned event list, proved to be the winning variation. This is not to say that the other test variations were not successful; Variation 2 was just the most successful.

The winning variation produced a session value uplift of 9%! Combined with the next 2 rounds of optimization testing, altima° helped Tough Mudder earn a session value uplift of over 33%!

Why Did Variation 2 Win?

altima° prefers to let the numbers speak for themselves and not dwell on subjective observations. After all, who needs opinions when you’ve got data-backed results? altima°, however, draws the following conclusions on why Variation 2 won:

Simplified header:

Social proof has demonstrated itself to be a worthy component of conversion optimization initiatives. These often include customer reviews and/or indications of popularity across social networks.

In fact, Tough Mudder experienced a significant lift in session value due to a test involving the addition of Facebook icons. It’s likely that the phrase “Our Events Have Had Over 2 Million Participants Across 3 Continents” warranted its own kind of social proof.

List redesign:

The most ambitious testing element to design and develop was also the most successful.

It appeared that an unnecessary amount of real estate was being afforded to the location filter. This was resolved by decreasing margins above and below the filter, along with removing the stylized blue graphic.

The events themselves now carried a more prominent position relative to the fold on mobile devices. Additionally, the list itself was made to be more easily read, with a light background and nondistracting text.

Urgency message:

The underperformance of the urgency message came as a surprise. It was believed that this element would prove to be valuable, further demonstrating the importance of testing with VWO.

Something to consider is that not every event included an urgency message. After all, not every enrolment period was soon to close. Therefore, it could be the case that some customers were less encouraged to click through and enroll in an individually relevant event if they felt that they had more time to do so later.

They might have understood that their event of interest wasn’t promoting urgency and was, therefore, not a priority. It also might have been the case that an urgency message was introduced too early in the steps to event enrolment.

Let’s Talk

How did you find this case study? There are more testing theories to discuss! Please reach out to altima° and VWO, or drop a line in the Comments section below.



The post How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users appeared first on VWO Blog.

See the article here: 

How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test

Note: This marketing infographic is part of KlientBoost’s 25-part series. You can subscribe here to access the entire series of gifographics.


If you’ve ever tested your website, you’ve probably been in the unfortunate situation of running out of ideas on what to test.

But don’t worry – it happens to everybody.

That is, of course, unless you have a website testing plan.

That’s why KlientBoost has teamed up with VWO to bring to you a gifographic that provides a simple guide on knowing the what, how, and why when it comes to testing your website.

[Gifographic: A simple guide to knowing what to test]

Setting Your Testing Goals

Like a New Year’s resolution about getting fitter, if you don’t have any goals tied to your website testing plan, you may be doing plenty of work with little to show for it.

With your goals in place, you can focus on the website tests that will help you achieve those goals the fastest.

Testing a button color on your home page when you should be testing your checkout process is a sure sign that you are heading toward testing fatigue, or the disappointment of never wanting to run a test again.

But let’s take it one step further.

While it’s easy to improve click-through rates, or CTRs, and conversion rates, the true measure of a great website testing plan comes from its ability to increase revenue.

No optimization efforts matter if they don’t connect to increased revenue in some shape or form.

Whether you improve the site user experience, your website’s onboarding process, or get more conversions from your upsell thank you page, all those improvements compound into incremental revenue gains.

Lesson to be learned?

Don’t pop the cork on the champagne until you know that an improvement in the CTRs or conversion rates would also lead to increased revenue.

Start closest to the money when it comes to your A/B tests.

Knowing What to Test

When you know your goals, the next step is to figure out what to test.

You have two options here:

  1. Look at quantitative data like Google Analytics that show where your conversion bottlenecks may be.
  2. Or gather qualitative data with visitor behavior analysis, where your visitors can tell you the reasons why they’re not converting.

Both types of data should fall under your conversion research umbrella. In addition to this gifographic, we created another one, all around the topic of CRO research.

When you’ve done your research, you may find certain aspects of a page that you’d like to test. For inspiration, VWO has created The Complete Guide To A/B Testing – and in it, you’ll find some ideas to test once you’ve identified which page to test:

  • Headlines
  • Subheads
  • Paragraph Text
  • Testimonials
  • Call-to-Action text
  • Call-to-Action button
  • Links
  • Images
  • Content near the fold
  • Social proof
  • Media mentions
  • Awards and badges

As you can see, there are tons of opportunities and endless ideas to test when you decide what to test and in what order.

[Image: A quick visual of what’s possible]

So now that you know your testing goals and what to test, the last step is forming a hypothesis.

With your hypothesis, you can figure out which changes you think will have the biggest performance lift, while keeping effort in mind as well (it’s easier to get quick wins that don’t need heaps of development help).

Running an A/B Test

Alright, so you have your goals, a list of things to test, and hypotheses to back them up. The next task is to start testing.

With A/B testing, you’ll always have at least one variant running against your control.

In this case, your control is your actual website as it is now and your variant is the thing you’re testing.

With proper analytics and conversion tracking along with the goal in place, you can start seeing how each of these two variants (hence the name A/B) is doing.

[Image: A mock-up of your conversion rate variations]

When A/B testing, there are two things you may want to consider before you call winners or losers of a test.

One is statistical significance. Statistical significance gives you a thumbs up or thumbs down on whether your test results can be attributed to random chance. If a test is statistically significant, the possibility that the results are due to random chance is (largely) ruled out.

And VWO has created its own calculator so that you can see how your test is doing.

The second one is confidence level. It helps you decide whether you can replicate the results of your test again and again.

A confidence level of 95% tells you that your test will achieve the same results 95% of the time if you run it repeatedly. So, as you can tell, the higher your confidence level, the surer you can be that your test truly won or lost.
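For the curious, here is a rough sketch of the kind of arithmetic a significance calculator performs, using a standard two-proportion z-test. This is a generic textbook calculation, not VWO’s own (Bayesian) engine, and the conversion numbers are made up:

```typescript
// Two-proportion z-test: are the control and variation conversion rates
// different by more than random chance would explain?
function zTest(convA: number, visitorsA: number, convB: number, visitorsB: number) {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / standardError;
  // |z| >= 1.96 corresponds to significance at the 95% confidence level.
  return { z, significantAt95: Math.abs(z) >= 1.96 };
}

// Example: 200/10,000 conversions on the control vs. 245/10,000 on the variation.
console.log(zTest(200, 10_000, 245, 10_000)); // { z: ≈2.16, significantAt95: true }
```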

You can see the A/B test that increased revenue for Server Density by 114%.

Multivariate Testing for Combination of Variations

Let’s say you have multiple ideas to test, and your testing list is looking way too long.

Wouldn’t it be cool if you could test multiple aspects of your page at once to get faster results?

That’s exactly what multivariate testing is.

Multivariate testing allows you to test how different page elements interact with each other, and which combination performs best when it comes to CTRs, conversion rates, or revenue gains.
Look at the multivariate pizza example below:

[Image: A multivariate pizza example with different headlines, CTAs, and colors]

The recipe for multivariate testing is simple and delicious.

[Image: Different elements increase the combination size]

And the best part is that VWO can automatically run through all the different combinations you set so that your multivariate test can be done without the heavy lifting.

If you’re curious about whether you should A/B test or run multivariate tests, then look at this chart that VWO created:

[Chart: Which one makes the most sense for you?]

Split URL Testing for Heavier Variations

If your A/B or multivariate tests lead you to the conclusion that bigger initiatives, such as backend development or major design changes, are needed, then you’re going to love split URL testing.

As VWO states:

“If your variation is on a different address or has major design changes compared to control, we’d recommend that you create a Split URL Test.”

[Image: What split URL testing is, explained by VWO]

Split URL testing allows you to host different variations of your website test on separate URLs, without changing your existing website.

As the visual above shows, the two variations are set up so that each lives on a different URL.

Split URL testing is great when you want to test major redesigns, such as an entire website rebuilt from scratch.

By not changing your current website code, you can host the redesign on a different URL and have VWO split the traffic between the control and the variant, giving you clear insight into whether your redesign will perform better.

Over to You

Now that you have a clear understanding of the different types of website tests to run, the only thing left is to, well, run some tests.

Armed with quantitative and qualitative knowledge of your visitors, focus on the areas that will have the biggest and quickest impact on your business.

And I promise, when you finish your first successful website test, you’ll get hooked.

I know I was.


The post [Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test appeared first on VWO Blog.

Continue reading: 

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test

A 4-Fold Approach to Increasing Conversion Rate on your Website

The problem with a traffic graph that keeps going up is that it says nothing about the number of customers. You can keep investing in traffic acquisition strategies until the cows come home, but that won’t yield any tangible results if you don’t optimize your website for conversions.

But how do you go about adopting conversion optimization and increasing conversion on your website?

A formalized conversion optimization program works like this:

  1. Researching into the existing data and finding gaps in the conversion funnel
  2. Planning and developing testable hypotheses
  3. Creating test variations and executing those tests
  4. Analyzing the tests and using the analysis in subsequent tests

In this post, we are going to run you through the ways to increase conversion rate through this scientific process:

Fold 1 – Digging Deep into Research

Research is needed to figure out your current situation and which of your existing processes need to be changed, or removed entirely. Here are some steps that you can start with.

  • Finding the Current Conversion Funnel and Leaks
  • Performing Qualitative and Quantitative Data Analysis
  • Setting Goals that Prioritize ROI

Find the Current Conversion Funnel and Leaks

First and foremost, it is imperative to take stock of your current performance and workflows. You can apply an as-is analysis to gather insights on current conversion rates, users’ journeys, and the leaks in the conversion funnel.

Begin by mapping your company’s conversion funnel, visualizing the specific sequences in which users become paying customers. This process will help you create a blueprint of how “strangers” can be turned into “promoters.”

[Image: Building a customer journey to increase conversion rate]

Peep Laja, conversion optimization expert and founder of ConversionXL, has put together a step-by-step guide to creating user flows that are truly consumer-oriented.

In addition to identifying user flows, it is also important to study whether these are working. Are you experiencing churn in an area where you don’t expect to see it? Are you noticing less churn than you originally expected? Is your conversion funnel measuring the full customer journey or is it potentially missing a step?

Eric Fettman, developer of GoogleAnalyticsTest.com, a free resource for Google Analytics training and GA Individual Qualification preparation, makes some interesting observations on conversion funnels and customer journey:

  • Funnels help you visualize the process by providing a step-by-step breakdown of the conversion data and churn.
  • User flow analysis helps your company understand points of customer confusion, and refine web copy and product positioning that affect your customer behavior. This analysis also highlights any “bugs” in the sequence that you may not have previously caught.

Perform Qualitative and Quantitative Data Analysis

After finding the workflow and gaps, the next step is to dive deeper into their causes. You can do this by researching the What, How, and Why, or what is often called Simon Sinek’s golden circle:

[Image: Research to increase conversion rate (the golden circle)]
  • WHAT are users doing on your website?
    This includes quantitative analysis of the amount of traffic landing on, dropping off, or converting from different pages of your website. You can use tools like Google Analytics (GA) for this purpose.
  • HOW are they behaving?
    Now that you know a certain number of people are landing on your website, it’s useful to know what they are doing there: for instance, whether they’re clicking a link or CTA, scrolling down, filling out a form, or the like. Visitor behavior analysis tools like heatmaps, visitor recordings, and form analysis can help with this.
  • WHY are they behaving that way?
    You can find out why your users are behaving the way they are through qualitative on-page surveys and heuristic analysis.

Set Goals that Prioritize ROI

After identifying the gaps in your conversion strategy, you should set clear goals for optimization.

It is important to arrive at a quantified expected conversion rate because that gives your testing efforts a direction. Otherwise, you might end up improving the conversion rate on a page by 1% and sit cozy without realizing its actual potential.

You can use benchmarking studies to decide the improvement you can expect through the proposed change. MarketingSherpa defines the following benchmarks for conversion rate optimization:

[Chart: MarketingSherpa conversion rate optimization benchmarks]

You should find the main goals of your business, based on the current strategy. What are you focused on now? Is it the total users acquired, is it the number of photos uploaded, or is it the revenue generated?

Whatever it is, you want to focus on something that’s neither too soft (“increase brand recognition”) nor too tactical (“increase page views per session”).

Fold 2 – Planning your A/B Tests

Based on this research, you should next plan your A/B tests to increase your conversion rate.

By now, you should have received enough insights to make an educated guess about what changes to your pages or funnel can bring about a desired change.

Construct a Strong Hypothesis

A structured hypothesis paves the direction for your optimization efforts. Even if the hypothesis fails, you can retrace your steps and correct it wherever it went wrong. Without this structured process, optimization efforts may go astray and lose their purpose.

At its core, a hypothesis is a statement that consists of three parts:

I believe that if we [make a change], we will see [a desirable result] because of [corresponding research].

Here’s an example of a good hypothesis.

I believe moving trust signals closer to the billing form will result in 5% more checkouts because the 56% bounce rate from that page could be due to lack of confidence.

For more information, read this post on building strong testing hypotheses.

Prioritize Your Hypotheses

After you have a list of testing hypotheses, the next step is to zero in on the hypothesis to test first. There are several prioritization frameworks that can help with this.

For detailed knowledge, you can read the post on prioritizing A/B testing hypotheses.

Fold 3 – Executing A/B Tests to Increase Conversion Rate

After the planning, it’s time for application. The plan that you’ve charted to optimize your business process needs to be deployed.

Which Type of Test to Run

A/B, split URL, and multivariate tests are not interchangeable ways to do the same task. They are methods for different tasks, so choosing among them should depend entirely on the task at hand.

Split testing (or split URL testing) is used when:

  • The design needs changes to the original page so major that creating a separate page (housed on a different URL) is easier.
  • Back-end changes are necessary.
  • Pages to be tested already exist on different URLs.

Multivariate testing is used when multiple changes are proposed for a single page and you want to test each combination of these changes.

You should opt for an A/B test when the variations are few and the changes are relatively minor.

How Long Should You Run the Test

You also need to decide the test duration before you start running the test.

The test duration depends on the amount of traffic your website receives, your baseline conversion rate, and the size of the improvement you want to detect. You can use this free test duration calculator to find how long you should run your tests for.
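If you are curious what such a calculator does under the hood, here is a rough sketch based on a common sample-size approximation for comparing two proportions at 95% confidence and 80% power. The traffic and conversion figures are made up, and real calculators may use different assumptions:

```typescript
// Rough estimate of how long an A/B test needs to run.
function estimateTestDurationDays(
  baselineRate: number,          // e.g. 0.03 for a 3% conversion rate
  minRelativeLift: number,       // e.g. 0.10 to detect a 10% relative improvement
  dailyVisitorsPerVariation: number
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minRelativeLift);
  const zAlpha = 1.96; // 95% confidence (two-sided)
  const zBeta = 0.84;  // 80% power
  // Standard approximation for the required sample size per variation.
  const samplePerVariation =
    ((zAlpha + zBeta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2;
  return Math.ceil(samplePerVariation / dailyVisitorsPerVariation);
}

// Example: 3% baseline, detect a 10% relative lift, 1,000 visitors/day per variation.
console.log(estimateTestDurationDays(0.03, 0.10, 1_000)); // ≈ 54 days
```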

After you’re clear on these, you can begin creating variations and start running your tests.

Fold 4 – Analyzing Test Results

Finally, you should also check and analyze your test results. This will arm you with information that you can not only apply to the current pages but also use as learning for future tests.

No matter what the result—positive, negative, or inconclusive—it is imperative to delve deeper and gather insights.

When you are analyzing A/B test results, check whether you are looking at the correct metric. If multiple metrics (secondary metrics along with the primary) are involved, you need to analyze each of them individually.

You should also create different segments from your A/B tests and analyze them separately to get a clear picture. The results you derive from generic, non-segmented testing may lead to skewed actions.

Look at how experts derive insights from A/B Test results in this post.

Your Thoughts

How do you increase conversion rate on your website? Write to us in the comments below.



The post A 4-Fold Approach to Increasing Conversion Rate on your Website appeared first on VWO Blog.

Originally from – 

A 4-Fold Approach to Increasing Conversion Rate on your Website

Running an A/A Test Before A/B Testing – Wise or Waste?

To A/A test or not is a question that invites conflicting opinions. Enterprises faced with the decision of implementing an A/B testing tool often do not have enough context on whether they should A/A test. Knowing the benefits and pitfalls of A/A testing can help organizations make better decisions.

In this blog post we explore why some organizations practice A/A testing and the things they need to keep in mind while A/A testing. We also discuss other methods that can help enterprises decide whether or not to invest in a certain A/B testing tool.

Why Some Organizations Practice A/A Testing

A/A testing is typically done when organizations are implementing a new A/B testing tool. Running an A/A test at that time can help them with:

  • Checking the accuracy of an A/B Testing tool
  • Setting a baseline conversion rate for future A/B tests
  • Deciding a minimum sample size

Checking the Accuracy of an A/B Testing Tool

Organizations that are about to purchase an A/B testing tool, or want to switch to a new testing software, may run an A/A test to ensure that the new software works correctly and that it has been set up properly.

Tomasz Mazur, an eCommerce Conversion Rate Optimization expert, explains further: “A/A testing is a good way to run a sanity check before you run an A/B test. This should be done whenever you start using a new tool or go for new implementation. A/A testing in these cases will help check if there is any discrepancy in data, let’s say, between the number of visitors you see in your testing tool and the web analytics tool. Further, this helps ensure that your hypotheses are verified.”

In an A/A test, a web page is A/B tested against an identical variation. When there is absolutely no difference between the control and the variation, it is expected that the result will be inconclusive. However, in cases where an A/A test provides a winner between two identical variations, there is a problem. The reasons could be the following:

  • The tool has not been set up properly.
  • The test hasn’t been conducted correctly.
  • The testing tool is inefficient.

Here’s what Corte Swearingen, Director, A/B Testing and Optimization at American Eagle, has to say about A/A testing: “I typically will run an A/A test when a client seems uncertain about their testing platform, or needs/wants additional proof that the platform is operating correctly. There really is no better way to do this than to take the exact same page and test it against itself with no changes whatsoever. We’re essentially tricking the platform and seeing if it catches us! The bottom line is that while I don’t run A/A tests very often, I will occasionally use it as a proof of concept for a client, and to help give them confidence that the split testing platform they are using is working as it should.”

Determining the Baseline Conversion Rate

Before running any A/B test, you need to know the conversion rate that you will be benchmarking the performance results against. This benchmark is your baseline conversion rate.

An A/A test can help you set the baseline conversion rate for your website. Let’s explain this with the help of an example. Suppose you are running an A/A test where the control gives 303 conversions out of 10,000 visitors and the identical variation B gives 307 conversions out of 10,000 visitors. The conversion rate for A is 3.03% and that for B is 3.07%, even though there is no difference between the two variations. Therefore, the benchmark conversion rate range for future A/B tests can be set at 3.03–3.07%. If you run an A/B test later and get an uplift within this range, this might mean that the result is not significant.

Deciding a Minimum Sample Size

A/A testing can also help you get an idea about the minimum sample size from your website traffic. A small sample size would not include sufficient traffic from multiple segments. You might miss out on a few segments which can potentially impact your test results. With a larger sample size, you have a greater chance of taking into account all segments that impact the test.

Corte says, “A/A testing can be used to make a client understand the importance of getting enough people through a test before assuming that a variation is outperforming the original.” He explains this with an A/A testing case study that was done for Sales Training Program landing pages for one of his clients, Dale Carnegie. The A/A test that was run on two identical landing pages got test results indicating that a variation was producing an 11.1% improvement over the control. The reason behind this was that the sample size being tested was too small.

[Image: A/A test initial results]

After the A/A test had run for 19 days and over 22,000 visitors, the conversion rates of the two identical versions were the same.

[Image: A/A test results with more data]

Michal Parizek, Senior eCommerce & Optimization Specialist at Avast, shares similar thoughts. He says, “At Avast, we did a comprehensive A/A test last year. And it gave us some valuable insights and was worth doing it!” According to him, “It is always good to check the statistics before final evaluation.”

At Avast, they ran an A/A test on two main segments: customers using the free version of the product and customers using the paid version. They did so to compare the two.

The A/A test had been live for 12 days, and they managed to get quite a lot of data. Altogether, the test involved more than 10 million users and more than 6,500 transactions.

In the “free” segment, they saw a 3% difference in the conversion rate and 4% difference in Average Order Value (AOV). In the “paid” segment, they saw a 2% difference in conversion and 1% difference in AOV.

“However, all uplifts were NOT statistically significant,” says Michal. He adds, “Particularly in the ‘free’ segment, the 7% difference in sales per user (combining the differences in the conversion rate and AOV) might look trustworthy enough to a lot of people. And that would be misleading. Given these results from the A/A test, we have decided to implement internal A/B testing guidelines/lift thresholds. For example, if the difference in the conversion rate or AOV is lower than 5%, be very suspicious that the potential lift is not driven by the difference in the design but by chance.”

Michal sums up his opinion by saying, “A/A testing helps discover how A/B testing could be misleading if they are not taken seriously. And it is also a great way to spot any bugs in the tracking and setup.”

Problems with A/A Testing

In a nutshell, the two main problems inherent in A/A testing are:

  • The ever-present element of randomness in any experimental setup
  • Requirement of a large sample size

We will consider these one by one:

Element of Randomness

As pointed out earlier in the post, checking the accuracy of a testing tool is the main reason for running an A/A test. However, what if you find a difference in conversions between the control and an identical variation? Do you always chalk it up to a bug in the A/B testing tool?

The problem (for lack of a better word) with A/A testing is that there is always an element of randomness involved. In some cases, the experiment reaches statistical significance purely by chance, which means that the difference in conversion rate between A and its identical version is probabilistic and does not denote absolute certainty.

Tomasz Mazur explains randomness with a real-world example: “Suppose you set up two absolutely identical stores in the same vicinity. It is likely, purely by chance or randomness, that there is a difference in results reported by the two. And it doesn’t always mean that the A/B testing platform is inefficient.”

Requirement of a Large Sample Size

Following the example/case study provided by Corte above, one problem with A/A testing is that it can be time-consuming. When testing identical versions, you need a very large sample size to confirm that there is no meaningful difference between them, which in turn takes a lot of time.

As explained in one of ConversionXL’s posts, “The amount of sample and data you need to prove that there is no significant bias is huge by comparison with an A/B test. How many people would you need in a blind taste testing of Coca-Cola (against Coca-Cola) to conclude that people liked both equally? 500 people, 5000 people?” Experts at ConversionXL explain that the entire purpose of an optimization program is to reduce wastage of time, resources, and money. They believe that even though running an A/A test is not wrong, there are better ways to use your time when testing. In the post, they mention, “The volume of tests you start is important but even more so is how many you *finish* every month and from how many of those you *learn* something useful from. Running A/A tests can eat into the “real” testing time.”

VWO’s Bayesian Approach and A/A Testing

VWO uses a Bayesian-based statistical engine for A/B testing. This allows VWO to deliver smart decisions: it tells you which variation will minimize potential loss.

Chris Stucchio, Director of Data Science at VWO, shares his viewpoint on how A/A testing is different in VWO than typical frequentist A/B testing tools.

Most A/B testing tools are seeking truth. When running an A/A test in a frequentist tool, an erroneous “winner” should only be reported 5% of the time. In contrast, VWO’s SmartStats is attempting to make a smart business decision. We report a smart decision when we are confident that a particular variation is not worse than all the other variations, that is, we are saying “you’ll leave very little money on the table if you choose this variation now.” In an A/A test, this condition is always satisfied—you’ve got nothing to lose by stopping the test now.

The correct way to evaluate a Bayesian test is to check whether the credible interval for lift contains 0% (the true value).

He also says that the simplest possible reason for an A/A test to produce a winner is random chance: “With a frequentist tool, 5% of A/A tests will return a winner due to bad luck. Similarly, 5% of A/A tests in a Bayesian tool will report erroneous lifts. Another possible reason is a configuration error; perhaps the JavaScript or HTML is incorrectly configured.”
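As a back-of-the-envelope illustration of the credible-interval check Chris describes, here is a rough Monte Carlo sketch. It uses normal approximations of the Beta posteriors for simplicity and is not VWO’s SmartStats implementation; the conversion numbers reuse the earlier 303-vs-307 example:

```typescript
// Approximate a credible interval for relative lift between two variations
// by sampling from (normal approximations of) their Beta posteriors.
function liftCredibleInterval(
  convA: number, visitorsA: number,
  convB: number, visitorsB: number,
  samples = 100_000
): [number, number] {
  // Mean and standard deviation of a Beta(1 + conversions, 1 + non-conversions) posterior.
  const posterior = (c: number, n: number) => {
    const a = c + 1, b = n - c + 1;
    const mean = a / (a + b);
    const sd = Math.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1)));
    return { mean, sd };
  };
  const pa = posterior(convA, visitorsA);
  const pb = posterior(convB, visitorsB);

  // Box-Muller standard normal sampler.
  const randn = () =>
    Math.sqrt(-2 * Math.log(Math.random())) * Math.cos(2 * Math.PI * Math.random());

  const lifts: number[] = [];
  for (let i = 0; i < samples; i++) {
    const rateA = pa.mean + pa.sd * randn();
    const rateB = pb.mean + pb.sd * randn();
    lifts.push(rateB / rateA - 1);
  }
  lifts.sort((x, y) => x - y);
  // 95% credible interval for relative lift.
  return [lifts[Math.floor(samples * 0.025)], lifts[Math.floor(samples * 0.975)]];
}

// In an A/A test, the interval for lift should (almost always) contain 0%.
console.log(liftCredibleInterval(303, 10_000, 307, 10_000));
```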

Other Methods and Alternatives to A/A Testing

A few experts believe that A/A testing is inefficient, as it consumes a lot of time that could otherwise be used to run actual A/B tests. However, others say that it is essential to run a health check on your A/B testing tool. That said, A/A testing alone is not sufficient to establish whether one testing tool should be preferred over another. When making a critical business decision such as buying a new tool/software application for A/B testing, there are a number of other things that should be considered.

Corte points out that though there is no replacement or alternative to A/A testing, there are other things that must be taken into account when a new tool is being implemented. These are listed as follows:

  1. Will the testing platform integrate with my web analytics program so that I can further slice and dice the test data for additional insight?
  2. Will the tool let me isolate specific audience segments that are important to my business and test just those segments?
  3. Will the tool allow me to immediately allocate 100% of my traffic to a winning variation? This feature can be an important one for more complicated radical-redesign tests where standardizing on the variation may take some time. If your testing tool allows immediate 100% allocation to the winning variation, you can reap the benefits of the improvement while the page is built permanently in your CMS.
  4. Does the testing platform provide ways to collect both quantitative and qualitative information about site visitors that can be used for formulating additional test ideas? These would be tools like heatmaps, scrollmaps, visitor recordings, exit surveys, page-level surveys, and visual form funnels. If the testing platform does not have these integrated, does it allow integration with third-party tools for these services?
  5. Does the tool allow for personalization? If test results are segmented and it is discovered that one type of content works best for one segment and another type works better for a second segment, does the tool allow you to permanently serve these different experiences to the different audience segments?

That said, there is still a set of experts who would opt for alternatives such as triangulating data instead of running an A/A test. Using this procedure means you have two sets of performance data to cross-check against each other. Use one analytics platform as the base to compare all other outcomes against, to check if there is something wrong or something that needs fixing.

And then there is the argument: why just A/A test when you can get more meaningful insights by running an A/A/B test? Doing this, you can still compare two identical versions while also testing some changes in the B variant.

Conclusion

When businesses face the decision of implementing a new testing software application, they need to run a thorough check on the tool. A/A testing is one method that some organizations use to check the efficiency of the tool. Along with evaluating personalization and segmentation capabilities and the other pointers mentioned in this post, this technique can help check whether the software application is a good fit for implementation.

Did you find the post insightful? Drop us a line in the comments section with your feedback.


The post Running an A/A Test Before A/B Testing – Wise or Waste? appeared first on VWO Blog.

Read original article:  

Running an A/A Test Before A/B Testing – Wise or Waste?

How “Your Tea” Boosted Revenue by 28% Through Structured Conversion Optimization

An increasing number of companies and agencies are following a structured approach to Conversion Rate Optimization (CRO). In this post, we will be looking at how a tea eCommerce website increased revenue using conversion optimization.

About the Company

Your Tea is an online tea eCommerce site serving health- and lifestyle-focused consumers. Tiny Tea Teatox is one of the largest sellers in its diversifying range of everyday tea products.

Your Tea signed on We Are Visionists (WAV), a digital agency that partners with eCommerce agencies and startups, to help solve their clients’ digital problems ranging from paid advertising to conversion rate optimization.

We got in touch with Joel Hauer, founder at WAV, to know all about their successful optimization exercise that resulted in a 28% improvement in revenue.

Onboarding Your Tea

WAV pitched CRO as part of a raft of complementary services, including SEO and PPC, to improve Your Tea’s online presence.

Joel says, “It made business sense and so it was a straightforward decision for Your Tea. If you can create an uplift in your revenue by improving your product page, why wouldn’t you? We were able to make projections based on anticipated improvements to the site, and those projections were what got us over the line. We are lucky to have such a pragmatic client!”

Process of Optimization

What WAV wanted to do was to insulate Your Tea’s revenue stream against any potential declines in traffic and maximize revenues in the periods of high traffic.

While doing so, they decided to follow a formalized approach to CRO, that is, researching the website data and visitors’ behavior intently to create hypotheses, and running the A/B tests that would impact revenue the most.

The Research Phase

To begin with, they analyzed their website data using Google Analytics (GA) to understand the journey of the visitors. They detected a large number of drop-offs on the product pages of the website, that is, a lot of people were landing on the product pages but not adding anything to the cart. They discovered that the Tiny Tea Teatox product page in particular was attracting the largest amount of traffic, and decided to optimize it first.

Researching that page further, they found that more than 50% of visitors were browsing through mobile. This compelled WAV to closely analyze the mobile version of the Tiny Tea Teatox page, where they found multiple optimization opportunities: for instance, the CTA was not prominent, and there was no detailed description of the product.

Here’s how the original page looked:

[Screenshot: The original product page (control)]

Hypothesis Creation

Since a majority of traffic was coming from mobile in particular, WAV decided to optimize both the desktop and mobile versions of the Your Tea website. They hypothesized that adding a more prominent CTA, along with a detailed product description and user reviews, would increase add-to-cart actions from the product page.

Using Visitor Behavior Analysis, they were able to develop their hypotheses further. For instance, by looking into heatmap analysis, they realized that visitors mostly browsed the product description and its benefits.

A large number of visitors also visited the reviews section, thereby making it clear that they were looking for trust elements. WAV decided to add more product information and benefits, along with credible “before and after” images and testimonials to the page. WAV also conducted website surveys and user testing sessions, which confirmed their hypothesis of adding more “credibility proofs” to the page.

The Test

WAV concluded that a full redesign of the product pages could yield better results than a series of incremental improvements from smaller tests. Such a massive redesign required heavy technical work, and WAV used VWO’s Ideact service to create a variation. Below is the screenshot of the control and variation:

[Screenshot: Control and variation of the Your Tea product page]

Here’s how the Before and After section in the variation looked:

Here’s the Why Buy From Us section in the variation that aimed to improve the website’s credibility:

[Image: Credibility proof in the variation]

Results

With the test, they tracked two goals: the add-to-cart conversion rate and revenue.

The improvement in add-to-cart actions led to an impressive 28% increase in revenue. In terms of add-to-cart conversions, the control yielded a conversion rate of 11.3%, while the variation emerged as the winner with a conversion rate of 14.5%.

Road Ahead

To capitalize on these higher conversions, an optimized checkout experience is required.

The agency identified that the checkout pages were receiving multiple views from the same visitors; users were getting stuck in loops around the checkout page. Once they knew what to look for, the analytics data supported it. Currently, they are testing to optimize the mobile experience on parameters such as anxiety and trust signals.

When asked about his biggest learning of the test, Joel responded: “One thing that came out of this test was learning more about the checkout experience—particularly on mobile.”

Experience Using VWO

Joel remarks, “The work of VWO’s Ideact team in setting up the tests on the technical front to help us record users through the checkout experience was invaluable.”

“We loved working with Rauhan and Harinder from VWO. The willingness to go the extra mile and help us get the maximum insight from our tests was fantastic. Having spoken about the features in the pipeline, we’re excited to see what’s to come.”

What Do You Think?

Do you have any similar experiments to share? Tell us in the comments below.


The post How “Your Tea” Boosted Revenue by 28% Through Structured Conversion Optimization appeared first on VWO Blog.

Read original article:

How “Your Tea” Boosted Revenue by 28% Through Structured Conversion Optimization

How to A/B test for long-term success (don’t underestimate insights!)

Reading Time: 6 minutes

Imagine you’re a factory manager.

You’re under pressure from your new boss to produce big results this quarter. (Results were underwhelming last quarter). You have a good team with high-end equipment, and can meet her demands if you ramp up your production speed over the coming months.


You’re eager to impress her and you know if you reduce the time you spend on machine maintenance you can make up for the lacklustre results from last quarter.

Flash forward: the end of Q3 rolls around, and you’ve met your output goals! You were able to meet your production levels by continuing to run the equipment during scheduled down-time periods. You’ve achieved numbers that impress your boss…

…but in order to maintain this level of output you will have to continue to sacrifice maintenance.

In Q4, disaster strikes! One of your three machines breaks down, leaving you with zero output and no way to move the needle for your department. Your boss gets on your back for your lack of foresight, and eventually your job is given to the young hot-shot on your team while you are left searching for a new gig.

A sad turn of events, right? Many people would label this a familiar tale of poor management (and correctly so!). Yet, when it comes to conversion optimization, there are many companies making the same mistake.

Optimizers are so often under pressure to satisfy the speed side of the equation that they sacrifice its equally important counterpart…

Insights.

Consider the following graphic.

The spectrum ranges from straightforward growth-driving A/B tests to multivariate insight-driving tests.

If you’ve got Amazon-level traffic and proper Design of Experiments (DOE), you may not have to choose between growth and insights. But in smaller organizations this can be a zero-sum equation. If you want fast wins, you sacrifice insights, and if you want insights, you may have to sacrifice a win or two.

Sustainable, optimal progress for any organization will fall somewhere in the middle. Companies often put so much emphasis on reaching certain testing velocities that they shoot themselves in the foot for long-term success.

Maximum velocity does not equal maximum impact

Sacrificing insights in the short-term may lead to higher testing output this quarter, but it will leave you at a roadblock later. (Sound familiar?) One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.
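
To put a number on that compounding claim, here is a back-of-the-envelope calculation. The five 10% wins are the hypothetical figures from the paragraph above, not real results:

# Illustrative only: how five compounding 10% wins stack up against a single win.
lift_per_win = 0.10
wins = 5

cumulative_lift = (1 + lift_per_win) ** wins - 1
print(f"Cumulative lift after {wins} compounding 10% wins: {cumulative_lift:.0%}")  # ~61%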

As with factory production, the key to sustainable output is to find a balance between short-term (maximum testing speed) and long-term (data collection/insights).

Growth vs. Insights

Christopher Columbus had an exploration mindset.

He set sail looking to find a better trade route to India. He had no expectation of what that was going to look like, but he was open to anything he discovered, and his sense of adventure rewarded him with what is likely the largest geographical discovery in history.

Have a Christopher Columbus mindset: test in pursuit of unforeseeable insights.

Exploration often leads to the biggest discoveries. Yet this is not what most companies are doing when it comes to conversion optimization. Why not?

Organizations tend to view testing solely as a growth-driving process: a way of settling long-term discussions between two firmly held opinions. No doubt growth is an important part of testing, but you can’t overlook exploration.

This is the testing that will propel your business forward and lead to the kind of conversion rate lift you keep reading about in case studies. Those companies aren’t achieving that level of lift on their first try; it’s typically the result of a series of insight-driving experiments that help the tester land on the big insight.

At WiderFunnel we classify A/B tests into two buckets: growth-driving and insight-driving…and we consider them equally important!

Growth-driving experiments (Case study here)

During our partnership with Annie Selke, a retailer of home-ware goods, we ran a test featuring a round of insight-driving variations. We were testing different sections on the product category page for sensitivity: Were users sensitive to changes to the left-hand filter? How might users respond to new ‘Sort By’ functionality?

Round I of testing for Annie Selke: Note the left-hand filter and ‘Sort By’ functionality.

Neither of our variations led to a conversion rate lift. In fact, both lost to the Control page. But the results of this first round of testing revealed key, actionable insights: namely, that the changes we had made to the left-hand filter might actually have been worth a significant lift, had they not been negatively impacted by other changes.

We took these insights and, combined with supplementary heatmap data, we designed a follow-up experiment. We knew exactly what to test and we knew what the projected lift would be. And we were right. In the end, we turned insights into results, getting a 23.6% lift in conversion rate for Annie Selke.

In Round II of testing, we reverted to the original ‘Sort By’ functionality.

For more on the testing we did with Annie Selke, you should read this post >> “A-ha! Isolations turn a losing experiment into a winner”

This follow-up test is what we call a growth-driving experiment. We were armed with compelling evidence and we had a strong hypothesis which proved to be true.

But as any optimizer knows, it can be tough to gather compelling evidence to inform every hypothesis. And this is where a tester must be brave and turn their attention to exploration. Be like Christopher.

Insight-driving experiments

The initial round of testing we did for Annie Selke, where we were looking for sensitivities, is a perfect example of an insight-driving experiment. In insight-driving experiments, the primary purpose of your test is to answer a question, and lifting conversion rates is a secondary goal.

This doesn’t mean that the two cannot go hand-in-hand. They can. But when you’re conducting insight-driving experiments, you should be asking “Did we learn what we wanted to?” before asking “What was the lift?”. This is your factory down-time, the time during which you restock the cupboard with ideas, and put those ideas into your testing piggy-bank.

We’ve seen entire organizations get totally caught up on the question “How is this test going to move the needle?”

But here’s the kicker: Often the right answer is “It’s not.”

At least not right away. This type of testing has a different purpose. With insight-driving experiments, you’re setting out on a quest for your unicorn insight.

What’s your unicorn insight?

These are the ideas that aren’t applicable to any other business. You can’t borrow them from industry-leading websites, and they’re not ideas a competitor can steal.

Your unicorn insight is unique to your business. It could be finding that magic word that helps users convert all over your site, or discovering that key value proposition that keeps customers coming back. Every business has a unicorn insight, but you are not going to find it by testing in your regular wheelhouse. It’s important to think differently, and approach problem solving in new ways.

We sometimes run a test for our clients where we take the homepage and run isolations, removing every section of that page individually. Are we expecting this test to deliver a big lift? Nope, but we are expecting this test to teach us something.

We know that this is the fastest possible way to answer the question “What do users care about most on this page?” After this type of experiment, we suddenly have a lot of answers to our questions.
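
As a rough illustration of how such a section-removal test could be structured (the section names, variation labels, and even traffic split below are our assumptions, not WiderFunnel’s actual setup), the plan might look something like this:

# Hypothetical sketch of a section-removal (existence) test on a homepage.
homepage_sections = ["hero_banner", "value_proposition", "testimonials",
                     "featured_products", "trust_badges"]

variations = {"control": []}  # the control keeps every section visible
for section in homepage_sections:
    variations["remove_" + section] = [section]  # each variation hides exactly one section

# Split traffic evenly across the control and all isolation variations.
traffic_split = {name: 1 / len(variations) for name in variations}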

That’s right: no lift, but we have insights and clear next steps. We can then rank the importance of every element on the page and start leveraging the elements that matter most to users on the homepage in other areas of the site. Does this sound like a losing test to you?

Rather than guessing at what we think users are going to respond to best, we run an insight-driving test and let the users give us the insights that can then be applied all over a site.

The key is to manage your expectations during a test like this. This variation won’t be your homepage for eternity. Rather, it should be considered a temporary experiment to generate learning for your business. By definition it is an experiment.

Optimization is an infinite process, and what your page looks like today is not what it will look like in a few months.

Proper Design of Experiments (DOE)

It’s important to note that these experimental categories do have grey areas. With proper DOE and high enough traffic levels, both growth-driving and insight-driving strategies can be executed simultaneously. This is what we call “Factorial Design”.

Factorial design allows you to test with both growth and insights in mind.

Factorial design allows you to test more than one element change within the same experiment, without forcing you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with multivariate testing), you can design a test to focus on specific isolations that you hypothesize will have the biggest impact or drive insights.

How to get started with Factorial Design

Start by making a cluster of changes in one variation (producing variations that are significantly different from the control), and then isolate those changes within subsequent variations (to identify the elements that are having the greatest impact). This hybrid test, combining “variable cluster” and “isolation” variations, gives you the best of both worlds: radical change options and the ability to gain insights from the results.
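
Here is a minimal sketch of what that hybrid plan can look like in practice. The change names, conversion rates, and traffic structure are all hypothetical, purely to show how lift would be attributed back to the control for both the cluster and each isolation:

# Hypothetical hybrid test: one "variable cluster" variation plus one isolation per change.
changes = ["new_headline", "larger_cta_button", "shorter_form"]

variations = {"control": [], "cluster": changes}  # the radical, all-changes-at-once variation
for change in changes:
    variations["iso_" + change] = [change]        # each change tested on its own

# Made-up results, used only to show how each variation is compared back to the control.
conversion_rates = {"control": 0.040, "cluster": 0.048, "iso_new_headline": 0.045,
                    "iso_larger_cta_button": 0.041, "iso_shorter_form": 0.039}

for name, rate in conversion_rates.items():
    if name != "control":
        lift = (rate - conversion_rates["control"]) / conversion_rates["control"]
        print(f"{name}: {lift:+.1%} vs. control")

In practice, each of those comparisons would also need to reach statistical significance before you act on it; the sketch only shows the structure of the comparison.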

For more on proper Design of Experiments, you should read this post >> “Design your A/B tests to get consistently better results”

We see Optimization Managers make the same mistakes over and over again, discounting the future for results today. If you overlook testing “down-time” (those insight-driving experiments), you’ll prevent your testing program from reaching its full potential.

You wouldn’t run a factory without down-time, and you wouldn’t collect a paycheck without saving for the future, so why would you run a testing program without investing in insight exploration?

Rather, find the balance between speed and insights with proper factorial design that promises growth now as well as in the future.

How do you ensure your optimization program is testing for both growth and insights? Let us know in the comments!

The post How to A/B test for long-term success (don’t underestimate insights!) appeared first on WiderFunnel Conversion Optimization.

Continue reading here – 

How to A/B test for long-term success (don’t underestimate insights!)