Tag Archives: test

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

Reading Time: 2 minutes

What would you do with 21% more sales this month?

I bet you’d walk into your next meeting with your boss with an extra spring in your step, right?

Well, when you implement a strategic marketing optimization program, results like this are not only possible, they are probable.

In this new case study, you’ll discover how e-commerce software supplier Ecwid ran one experiment for four weeks and saw a 21% increase in paid upgrades.

Get the full Ecwid case study now!

Download a PDF version of the Ecwid case study, featuring experiment details, supplementary takeaways and insights, and a testimonial from Ecwid’s Sr. Director, Digital Marketing.




A little bit about Ecwid

Ecwid provides easy-to-use online store setup, management, and payment solutions. The company was founded in 2009, with the goal of enabling business-owners to add online stores to their existing websites, quickly and without hassle.

The company has a freemium business model: Users can sign up for free, and unlock more features as they upgrade to paid packages.

Ecwid’s partnership with WiderFunnel

In November 2016, Ecwid partnered with WiderFunnel with two primary goals:

  1. To increase initial signups for their free plan through marketing optimization, and
  2. To increase the rate of paid upgrades through platform optimization

This case study focuses on a particular experiment cycle that ran on Ecwid’s step-by-step onboarding wizard.

The methodology

Last winter, the WiderFunnel Strategy team did an initial LIFT Analysis of the onboarding wizard and identified several potential barriers to conversion, both in terms of completing the steps to set up a new store and in terms of upgrading to a paid plan.

The lead WiderFunnel Strategist for Ecwid, Dennis Pavlina, decided to create an A/B cluster test to 1) address the major barriers simultaneously, and 2) get a major lift for Ecwid, quickly.

The overarching goal was to make the onboarding process smoother. The WiderFunnel and Ecwid optimization teams hoped that enhancing the initial user experience, and exposing users to the wide range of Ecwid’s features, would result in more users upgrading to paid plans.

Dennis Pavlina

Ecwid’s two objectives ended up coming together in this test. We thought that if more new users interacted with the wizard and were shown the whole ‘Ecwid world’ with all the integrations and potential it has, they would be more open to upgrading. People needed to be able to see its potential before they would want to pay for it.

Dennis Pavlina, Optimization Strategist, WiderFunnel

The Results

This experiment ran for four weeks, at which point the variation was determined to be the winner with 98% confidence. The variation resulted in a 21.3% increase in successful paid account upgrades for Ecwid.

Read the full case study for:

  • The details on the initial barriers to conversion
  • How this test was structured
  • Which secondary metrics we tracked, and
  • The supplementary takeaways and customer insights that came from this test

The post [Case Study] Ecwid sees 21% lift in paid plan upgrades in one month appeared first on WiderFunnel Conversion Optimization.

See original article:

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

How to Create, Track and Rank CRO Hypotheses So You Know What to Test


CRO makes big promises. But the way people get to those 300% lifts in conversions is by being organized. Otherwise, you find yourself in the position that a lot of marketers do: you do a test, build on the result, wait a while, do another test, wait a while… meanwhile, the big jumps in conversions, leads and revenue never really seem to manifest. That’s because only a structured approach can get you in position to make the best use of your testing time and budget. This isn’t something you want to be doing by the seat of your pants. In…

The post How to Create, Track and Rank CRO Hypotheses So You Know What to Test appeared first on The Daily Egg.

Follow this link:

How to Create, Track and Rank CRO Hypotheses So You Know What to Test

How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar?

  • During your research sessions, your participants don’t understand what they have been asked to do?
  • The phrasing of your questions has given away the answer or has caused bias in your results?
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided?
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results?

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project. Just like proper experiment design, investing time to critique, test, and iteratively improve your research design before the execution phase ensures that your user research runs smoothly and dramatically improves the outputs from your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings

User Research Process at WiderFunnel

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start with asking the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, it is the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret, limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from our initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined the research tasks and flow and the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order to not bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed lists to improve the wording, ensure that questions complemented each other, and keep the focus on the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. To overcome this we got creative and used some free tools to test the research design.

We chose to use a Keynote presentation (with timed transitions) and the Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the participant pilot testing. Using these tools, we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately pointed out flaws in the research design, which included confusion regarding the test instructions and issues with the timing for each task.

In this case, our initial instructions did not provide our participants with enough information on the context of what they were looking for, resulting in confusion about what they were actually supposed to do. Additionally, we made an initial assumption that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content rich, and 5 seconds did not give participants enough time to view all the content on the page.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a set function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.

Nick So

The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!

The post How pilot testing can dramatically improve your user research appeared first on WiderFunnel Conversion Optimization.

Source – 

How pilot testing can dramatically improve your user research

How to Micro Test New Product/Service Ideas Using AdWords

Launching a new business idea or deciding to develop a new product for your company is not without risk. Many of the best business ideas have come from inspiration, intuition or in-depth insight into an industry. While some of these ideas have risen to dominate the modern world, such as search engines, barcodes and credit card readers, many fine ideas still result in bankruptcy for their company, due to insufficient demand or failure to properly research customer desire. If you build it, will they come? Often smart business entrepreneurs can still make big mistakes. With new product, service or business…

The post How to Micro Test New Product/Service Ideas Using AdWords appeared first on The Daily Egg.

See the original article here: 

How to Micro Test New Product/Service Ideas Using AdWords

How to do server-side testing for single page app optimization

Reading Time: 5 minutes

Gettin’ technical.

We talk a lot about marketing strategy on this blog. But today, we are getting technical.

In this post, I team up with WiderFunnel front-end developer, Thomas Davis, to cover the basics of server-side testing from a web development perspective.

The alternative to server-side testing is client-side testing, which has arguably been the dominant testing method for many marketing teams, due to ease and speed.

But modern web applications are becoming more dynamic and technically complex. And testing within these applications is becoming more complex, too.

Server-side testing is a solution to this increased complexity. It also allows you to test much deeper. Rather than being limited to testing images or buttons on your website, you can test algorithms, architectures, and re-brands.

Simply put: If you want to test on an application, you should consider server-side testing.

Let’s dig in!

Note: Server-side testing is a tactic that is linked to single page applications (SPAs). Throughout this post, I will refer to web pages and web content within the context of a SPA. Applications such as Facebook, Airbnb, Slack, BBC, CodeAcademy, eBay, and Instagram are SPAs.


Defining server-side and client-side rendering

In web development terms, “server-side” refers to “occurring on the server side of a client-server system.”

The client refers to the browser, and client-side rendering occurs when:

  1. A user requests a web page,
  2. The server finds the page and sends it to the user’s browser,
  3. The page is rendered on the user’s browser, and any scripts run during or after the page is displayed.
A basic representation of server-client communication.

The server is where the web page and other content live. With server-side rendering, the requested web page is sent to the user’s browser in final form:

  1. A user requests a web page,
  2. The server interprets the script in the page, and creates or changes the page content to suit the situation
  3. The page is sent to the user in final form and then cannot be changed using server-side scripting.

To talk about server-side rendering, we also have to talk a little bit about JavaScript. JavaScript is a scripting language that adds functionality to web pages, such as a drop-down menu or an image carousel.

Traditionally, JavaScript has been executed on the client side, within the user’s browser. However, with the emergence of Node.js, JavaScript can be run on the server side. All JavaScript executing on the server is running through Node.js.

*Node.js is an open-source, cross-platform JavaScript runtime environment, used to execute JavaScript code server-side. It uses the Chrome V8 JavaScript engine.

In layman’s (ish) terms:

When you visit a SPA web application, the content you are seeing is either being rendered in your browser (client-side), or on the server (server-side).

If the content is rendered client-side, JavaScript builds the application HTML content within the browser, and requests any missing data from the server to fill in the blanks.

Basically, the page is incomplete upon arrival, and is completed within the browser.

If the content is being rendered server-side, your browser receives the application HTML, pre-built by the server. It doesn’t have to fill in any blanks.
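
To make the difference concrete, here is a minimal, hypothetical sketch of both approaches using an Express server. The routes, data, and markup are illustrative assumptions only, not WiderFunnel’s or any client’s code:

```typescript
import express from 'express';

const app = express();

// Server-side rendered: the browser receives finished HTML, with no blanks to fill in.
app.get('/ssr/profile', (_req, res) => {
  const userName = 'Jane'; // looked up on the server, e.g. from a database
  res.send(`<html><body><h1>Welcome back, ${userName}</h1></body></html>`);
});

// Client-side rendered: the browser receives an empty shell plus JavaScript,
// which then requests the missing data and builds the HTML itself.
app.get('/csr/profile', (_req, res) => {
  res.send('<html><body><div id="app"></div><script src="/bundle.js"></script></body></html>');
});

app.listen(3000);
```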

Why do SPAs use server-side rendering?

There are benefits to both client-side rendering and server-side rendering, but render performance and page load time are two huge pros for the server side.

(A 1 second delay in page load time can result in a 7% reduction in conversions, according to Kissmetrics.)

Server-side rendering also enables search engine crawlers to find web content, improving SEO; and social crawlers (like the crawlers used by Facebook) do not evaluate JavaScript, making server-side rendering beneficial for social searching.

With client-side rendering, the user’s browser must download all of the application JavaScript, and wait for a response from the server with all of the application data. Then, it has to build the application, and finally, show the complete HTML content to the user.

All of which to say, with a complex application, client-side rendering can lead to sloooow initial load times. And, because client-side rendering relies on each individual user’s browser, the developer only has so much control over load time.

Which explains why some developers are choosing to render their SPAs on the server side.

But, server-side rendering can disrupt your testing efforts, if you are using a framework like Angular or React.js. (And the majority of SPAs use these frameworks).

The disruption occurs because the version of your application that exists on the server becomes out of sync with the changes being made by your test scripts on the browser.

NOTE: If your web application uses Angular, React, or a similar framework, you may have already run into client-side testing obstacles. For more on how to overcome these obstacles, and successfully test on AngularJS apps, read this blog post.


Testing on the server side vs. the client side

Client-side testing involves making changes (the variation) within the browser by injecting JavaScript after the original page has already loaded.

The original page loads, the content is hidden, the necessary elements are changed in the background, and the ‘new’ version is shown to the user post-change. (Because the page is hidden while these changes are being made, the user is none the wiser.)
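
In simplified form, the injected script does something like the sketch below. The selector and copy are hypothetical, and in practice your testing tool generates and manages this code for you:

```typescript
// Hide the page while the variation is applied, so the visitor never sees the swap.
const hideStyle = document.createElement('style');
hideStyle.textContent = 'body { opacity: 0 !important; }';
document.head.appendChild(hideStyle);

document.addEventListener('DOMContentLoaded', () => {
  // Apply the variation change: here, new call-to-action copy on a hypothetical button.
  const cta = document.querySelector('#signup-button');
  if (cta) {
    cta.textContent = 'Start my free trial';
  }
  // Reveal the modified page.
  hideStyle.remove();
});
```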

As I mentioned earlier, the advantages of client-side testing are ease and speed. With a client-side testing tool like VWO, a marketer can set up and execute a simple test using a WYSIWYG editor without involving a developer.

But for complex applications, client-side testing may not be the best option: Layering more JavaScript on top of an already-bulky application means even slower load time, and an even more cumbersome user experience.

A Quick Hack

There is a workaround if you are determined to do client-side testing on a SPA. Web developers can take advantage of features like Optimizely’s conditional activation mode to make sure that testing scripts are only executed when the application reaches a desired state.

However, this can be difficult, as developers will have to take many variables into account, like location changes performed by the $routeProvider, or triggering interaction-based goals.

To avoid flicker, you may need to hide content until the front-end application has initialized in the browser, voiding the performance benefits of using server-side rendering in the first place.

Activation Mode waits until the framework has loaded before executing your test.
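
The general pattern looks something like the following sketch. Here, `activateExperiment` is a stand-in for whatever activation hook your testing tool exposes (for example, a conditional activation callback), and `appIsReady` is a hypothetical flag your application would set once the framework has finished rendering the target view; neither is a real API name:

```typescript
// Stand-in for the testing tool's activation hook; not a real API name.
declare function activateExperiment(): void;

// Poll until the single page application signals that it has finished rendering,
// then hand control back to the testing tool.
function whenAppReady(callback: () => void, intervalMs = 50): void {
  const timer = window.setInterval(() => {
    if ((window as any).appIsReady === true) {
      window.clearInterval(timer);
      callback();
    }
  }, intervalMs);
}

whenAppReady(() => activateExperiment());
```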



When you do server-side testing, there are no modifications being made at the browser level. Rather, the parameters of the experiment variation (‘User 1 sees Variation A’) are determined at the server route level, and hooked straight into the JavaScript application through a service provider.

Here is an example where we are testing a pricing change:
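
The embedded code sample isn’t reproduced here, but the route-level idea looks roughly like this sketch. The route, plan prices, and bucketing logic are made-up illustrations, not actual client code:

```typescript
import express from 'express';

const app = express();

// Deterministic bucketing so a given user always sees the same price.
function getPriceVariation(userId: string): 'control' | 'variation' {
  const hash = [...userId].reduce((sum, ch) => sum + ch.charCodeAt(0), 0);
  return hash % 2 === 0 ? 'control' : 'variation';
}

app.get('/pricing', (req, res) => {
  const userId = String(req.query.userId ?? 'anonymous');
  const variation = getPriceVariation(userId);
  const monthlyPrice = variation === 'control' ? 49 : 39; // illustrative price points

  // The price is baked into the server-rendered page: no client-side swap, no flicker.
  res.send(`<html><body><h1>Pro plan: $${monthlyPrice}/month</h1></body></html>`);
});

app.listen(3000);
```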

“Ok, so, if I want to do server-side testing, do I have to involve my web development team?”

Yep.

But, this means that testing gets folded into your development team’s work flow. And, it means that it will be easier to integrate winning variations into your code base in the end.

If yours is a SPA, server-side testing may be the better choice, despite the work involved. Not only does server-side testing embed testing into your development workflow, it also broadens the scope of what you can actually test.

Rather than being limited to testing page elements, you can begin testing core components of your application’s usability like search algorithms and pricing changes.

A server-side test example!

For web developers who want to do server-side testing on a SPA, Tom has put together a basic example using the Optimizely SDK. This example is an illustration, and is not functional.

In it, we are running a simple experiment that changes the color of a button. The example is built using Angular Universal and Express.js. A global service provider is being used to fetch the user variation from the Optimizely SDK.

Here, we have simply hard-coded the user ID. However, Optimizely requires that each user have a unique ID. Therefore, you may want to use the user ID that already exists in your database, or store one in a cookie through Express’s cookie middleware.
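
Since the original embedded example isn’t reproduced here, the following is a minimal sketch of the same idea, assuming the classic `activate()` API of the Optimizely Full Stack JavaScript SDK (newer SDK versions use `createUserContext()`/`decide()`). The experiment key, datafile path, and hard-coded user ID are illustrative assumptions:

```typescript
import express from 'express';
import * as optimizelySdk from '@optimizely/optimizely-sdk';

// The datafile is the JSON experiment configuration downloaded from Optimizely
// (the path here is illustrative).
import datafile from './optimizely-datafile.json';

const optimizelyClient = optimizelySdk.createInstance({ datafile });
const app = express();

app.get('*', (_req, res) => {
  // Hard-coded for illustration only: each real user needs a stable, unique ID,
  // e.g. from your user database or a cookie.
  const userId = 'user-123';

  // 'button_color_experiment' and 'green_button' are hypothetical keys.
  const variation = optimizelyClient?.activate('button_color_experiment', userId);
  const buttonColor = variation === 'green_button' ? 'green' : 'blue';

  // In the real example this value is handed to the Angular Universal render
  // through a service provider; a plain response keeps the sketch self-contained.
  res.send(`<html><body><button style="background:${buttonColor}">Sign up</button></body></html>`);
});

app.listen(4000);
```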

Are you currently doing server-side testing?

Or, are you client-side testing on a SPA application? What challenges (if any) have you faced? How have you handled them? Do you have any specific questions? Let us know in the comments!

The post How to do server-side testing for single page app optimization appeared first on WiderFunnel Conversion Optimization.

Continue reading – 

How to do server-side testing for single page app optimization

Can Your Audience and Google Love the Same Page Title?


“What Do Department Store Santas and Prostitutes Have in Common?” “Why Do Drug Dealers Still Live at Home with Their Mothers?” These are two chapter titles from the Steven Levitt and Stephen Dubner book Freakonomics, a work that has captured the interest of hundreds of thousands of readers. One of the big draws of this book is the catchy and intriguing title for each chapter. You just want to read on. But how would Google rate those titles in terms of SEO? Where are the keywords/keyword phrases that are popular and commonly used by generic searches? These titles would be…

The post Can Your Audience and Google Love the Same Page Title? appeared first on The Daily Egg.

Read more: 

Can Your Audience and Google Love the Same Page Title?

Optimizing Mobile Home Page Increases Conversions for Wedding Shoes Website

Elegant Steps offers a large selection of wedding shoes in the UK, both online and in store. More than 50% of its users are new, female users discovering the website organically through mobile. The bulk of them are brides-to-be who are looking for wedding shoes.

Problem

After looking at Elegant Steps’ Google Analytics (GA) data, it was found that while its desktop website was converting at 2%, the mobile version was converting at a much lower 0.6%.

Observations

Hit Search, a digital marketing agency, used VWO to help Elegant Steps dig deep into the problem. They used GA, heuristic analysis, and VWO’s scrollmaps and heatmaps capabilities to find that:

  • Hardly any visitors were scrolling enough to reach the Shop by Brand section on the home page.
  • Elegant Steps’ 3 main USPs, including free shipping, weren’t appearing above the fold on mobile.
  • The text on the home page image was hard to read because it was the same color as the background.

This is how the home page looked on mobile:


Hypothesis

Armed with these observations, Niall Brooke from Hit Search set about optimizing the mobile home page to fix the problems. It was decided to:

  • Introduce the Shop by Brand section higher up on the page, as the presence of an established name is known to help instill trust and assuage fears.
  • Display “Free Shipping” above the fold; many studies have found that unexpected shipping cost is the biggest reason for cart abandonment, so it was hypothesized that surfacing this would help reduce bounce and encourage users to continue down the conversion funnel.
  • Change the CTA copy from the generic “Shop Wedding Shoes” to the possessive, “Find my new wedding shoes.”
  • Change the text color on the image so that the text is readable.

This is how the variation looked:


Test

Hit Search ran the new version of the home page against the original only for mobile visitors, using VWO’s targeting capability. Niall set VWO’s Bayesian-powered statistics engine to “High-Certainty” mode, and the results kicked in within a month.

Results

“The results were positive with almost a threefold increase in conversions and almost a 50% drop in bounce rate,” said Niall.

In his closing thoughts, Niall added, “VWO is a brilliant all-round conversion optimization platform which we use on a daily basis to perform user analysis, A/B and split tests.”

Mobile an afterthought?

According to a 2015 report, the average conversion rate for mobile websites in the US was 1.32%, significantly lower than its desktop counterpart (3.82%). Though studies have suggested that visitors mostly use mobile for research purposes and make the actual purchase through the desktop website, there’s no denying that online retailers are still leaving money on the table. We would love to hear your thoughts about optimizing mobile websites. When does it become important for you to start looking at mobile optimization? Just hit us up in the comments section below.


The post Optimizing Mobile Home Page Increases Conversions for Wedding Shoes Website appeared first on VWO Blog.

Link: 

Optimizing Mobile Home Page Increases Conversions for Wedding Shoes Website

How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users

The following is a case study about how Tough Mudder achieved a 9% session uplift by optimizing for mobile. With the help of altima° and VWO, they identified and rectified pain points for their mobile users, to provide seamless event identification and sign-ups. 


About the Company

Tough Mudder offers a series of mud and obstacle courses designed to test physical strength, stamina, and mental grit. Events aren’t timed races, but team activities that promote camaraderie and accomplishment as a community.

Objective

Tough Mudder wanted to ensure that enrollment on their mobile website was smooth and easy for their users. They partnered with altima°, a digital agency specializing in eCommerce, and VWO to ensure seamless event identification and sign-ups.

Research on Mobile Users

The agency first analyzed Tough Mudder’s Google Analytics data to identify any pain points across participants’ paths to enrollment. They analyzed existing rates from the Event List, which demonstrated that interested shoppers were not able to identify the events appropriate for them. The agency began to suspect that customers on mobile might not be discovering events easily enough.

Test

On the mobile version of the original page, the most relevant pieces of information, like the event location and date, were pushed too far below the fold. In addition, less relevant page elements were possibly distracting users from the mission at hand. This is how it looked:

Event location and date way below the fold on ‘original’

The agency altima° decided to make the following changes in the variation:

  1. Simplified header: Limiting the header copy to focus on the listed events.
  2. List redesign: Redesigning the filter and event list to prominently feature the events themselves.
  3. Urgency message: Adding an urgency message to encourage interested users to enroll in events nearing their deadline.

For these three changes, seven different combinations were created and a multivariate test was run using VWO. The test saw over 2,000 event sign-ups across 4 weeks.
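
(For context: three independent on/off changes in a full factorial design give 2 × 2 × 2 = 8 possible combinations, i.e., the unchanged control plus seven variations, which is presumably where the seven tested combinations come from.)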

Test Results

After 4 weeks, Variation 2, which included the redesigned event list, proved to be the winning variation. This is not to say that other test variations were not successful. Variation 2 was just the MOST successful:

The winning variation produced a session value uplift of 9%! Combined with the next 2 rounds of optimization testing, altima° helped Tough Mudder earn a session value uplift of over 33%!

Why Did Variation 2 Win?

altima° prefers to let the numbers speak for themselves and not dwell on subjective observations. After all, who needs opinions when you’ve got data-backed results? altima°, however, draws the following conclusions on why Variation 2 won:

Simplified header:

Social proof has demonstrated itself to be a worthy component of conversion optimization initiatives. These often include customer reviews and/or indications of popularity across social networks.

In fact, Tough Mudder experienced a significant lift in session value in a test involving the addition of Facebook icons. It’s likely that the phrase “Our Events Have Had Over 2 Million Participants Across 3 Continents” warranted its own kind of social proof.

List redesign:

The most ambitious testing element to design and develop was also the most successful.

It appeared that an unnecessary amount of real estate was being afforded to the location filter. This was resolved by decreasing margins above and below the filter, along with removing the stylized blue graphic.

The events themselves now carried a more prominent position relative to the fold on mobile devices. Additionally, the list itself was made to be more easily read, with a light background and nondistracting text.

Urgency message:

The underperformance of the urgency message came as a surprise. It was believed that this element would prove to be valuable, further demonstrating the importance of testing with VWO.

Something to consider is that not every event included an urgency message. After all, not every enrollment period was soon to close. Therefore, it could be the case that some customers were less encouraged to click through and enroll in an individually relevant event if they felt that they had more time to do so later.

They might have understood that their event of interest wasn’t promoting urgency and was, therefore, not a priority. It also might have been the case that an urgency message was introduced too early in the steps to event enrollment.

Let’s Talk

How did you find this case study? There are more testing theories to discuss! Please reach out to altima° and VWO to discuss. You could also drop in a line in the Comments section below.


The post How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users appeared first on VWO Blog.

See the article here: 

How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test

Note: This marketing infographic is part of KlientBoost’s 25-part series. You can subscribe here to access the entire series of gifographics.


If you’ve ever tested your website, you’ve probably been in the unfortunate situation of running out of ideas on what to test.

But don’t worry – it happens to everybody.

That’s of course, unless you have a website testing plan.

That’s why KlientBoost has teamed up with VWO to bring to you a gifographic that provides a simple guide on knowing the what, how, and why when it comes to testing your website.


Setting Your Testing Goals

Like a New Year’s resolution around getting fitter, if you don’t have any goals tied to your website testing plan, then you may be doing plenty of work with little to show for it.

With your goals in place, you can focus on the website tests that will help you achieve those goals the fastest.

Testing a button color on your home page when you should be testing your checkout process is a sure sign that you are heading toward testing fatigue, or the disappointment of never wanting to run a test again.

But let’s take it one step further.

While it’s easy to improve click-through rates, or CTRs, and conversion rates, the true measure of a great website testing plan comes from its ability to increase revenue.

No optimization efforts matter if they don’t connect to increased revenue in some shape or form.

Whether you improve the site user experience, your website’s onboarding process, or get more conversions from your upsell thank you page, all those improvements compound into incremental revenue gains.

Lesson to be learned?

Don’t pop the cork on the champagne until you know that an improvement in the CTRs or conversion rates would also lead to increased revenue.

Start closest to the money when it comes to your A/B tests.

Knowing What to Test

When you know your goals, the next step is to figure out what to test.

You have two options here:

  1. Look at quantitative data like Google Analytics that show where your conversion bottlenecks may be.
  2. Or gather qualitative data with visitor behavior analysis where your visitors can tell you the reasons for why they’re not converting.

Both types of data should fall under your conversion research umbrella. In addition to this gifographic, we created another one, all around the topic of CRO research.

When you’ve done your research, you may find certain aspects of a page that you’d like to test. For inspiration, VWO has created The Complete Guide To A/B Testing – and in it, you’ll find some ideas to test once you’ve identified which page to test:

  • Headlines
  • Subheads
  • Paragraph Text
  • Testimonials
  • Call-to-Action text
  • Call-to-Action button
  • Links
  • Images
  • Content near the fold
  • Social proof
  • Media mentions
  • Awards and badges

As you can see, there are tons of opportunities and endless ideas to test when you decide what to test and in what order.

A quick visual for what’s possible

So now that you know your testing goals and what to test, the last step is forming a hypothesis.

With your hypothesis, you’re able to figure out what you think will have the biggest performance lift, while keeping effort in mind as well (it’s easier to get quick wins that don’t need heaps of development help).

Running an A/B Test

Alright, so you have your goals, a list of things to test, and hypotheses to back them up. The next task is to start testing.

With A/B testing, you’ll always have at least one variant running against your control.

In this case, your control is your actual website as it is now and your variant is the thing you’re testing.

With proper analytics and conversion tracking along with the goal in place, you can start seeing how each of these two variants (hence the name A/B) is doing.

Consider this a mock-up of your conversion rate variations

When A/B testing, there are two things you may want to consider before you call winners or losers of a test.

One is statistical significance. Statistical significance gives you a thumbs up or thumbs down on whether your test results can be attributed to random chance. If a test is statistically significant, then chance alone is unlikely to explain the results.

And VWO has created its own calculator so that you can see how your test is doing.

The second one is confidence level. It helps you decide whether you can replicate the results of your test again and again.

A confidence level of 95% tells you that your test will achieve the same results 95% of the time if you run it repeatedly. So, as you can tell, the higher your confidence level, the surer you can be that your test truly won or lost.
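
If you want to sanity-check a result yourself rather than rely on a calculator, most significance calculators boil down to a two-proportion z-test, roughly like this sketch (the visitor and conversion counts are made-up numbers):

```typescript
// Two-proportion z-test: is the variation's conversion rate different from the control's?
function zScore(controlConversions: number, controlVisitors: number,
                variationConversions: number, variationVisitors: number): number {
  const p1 = controlConversions / controlVisitors;
  const p2 = variationConversions / variationVisitors;
  const pooled = (controlConversions + variationConversions) / (controlVisitors + variationVisitors);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / controlVisitors + 1 / variationVisitors));
  return (p2 - p1) / standardError;
}

// Example: 5,000 visitors per arm, 200 vs. 245 conversions (4.0% vs. 4.9%).
const z = zScore(200, 5000, 245, 5000);
// |z| greater than about 1.96 corresponds to 95% confidence (two-sided).
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant at 95%' : 'not significant');
```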

You can see the A/B test that increased revenue for Server Density by 114%.

Multivariate Testing for Combination of Variations

Let’s say you have multiple ideas to test, and your testing list is looking way too long.

Wouldn’t it be cool if you could test multiple aspects of your page at once to get faster results?

That’s exactly what multivariate testing is.

Multivariate testing allows you to test which combinations of different page elements affect each other when it comes to CTRs, conversion rates, or revenue gains.
Look at the multivariate pizza example below:

Different headlines, CTAs, and colors are used

The recipe for multivariate testing is simple and delicious.

Different elements increase the combination size
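
For example, testing 3 headlines, 2 CTA copies, and 2 button colors gives 3 × 2 × 2 = 12 combinations to split your traffic across; every element you add multiplies the number of variations (and the traffic you need to reach significance).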

And the best part is that VWO can automatically run through all the different combinations you set so that your multivariate test can be done without the heavy lifting.

If you’re curious about whether you should A/B test or run multivariate tests, then look at this chart that VWO created:

Which one makes the most sense for you?

Split URL Testing for Heavier Variations

If your A/B or multivariate tests lead you to the conclusion that bigger initiatives are needed, such as backend development work or major design changes, then you’re going to love split URL testing.

As VWO states:

“If your variation is on a different address or has major design changes compared to control, we’d recommend that you create a Split URL Test.”


Split URL testing allows you to host different variations of your website test on separate URLs, without touching your existing site.

The two variations are set up so that each one lives at its own URL.

URL testing is great when you want to test some major redesigns such as your entire website built from scratch.

By not changing your current website code, you can host the redesign on a different URL and have VWO split the traffic between the control and the variant, giving you clear insight into whether your redesign will perform better.
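
Conceptually, the traffic split behind a split URL test works roughly like the sketch below. In practice VWO handles the bucketing and redirect for you; the URLs, cookie name, and 50/50 split here are illustrative assumptions:

```typescript
import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());

app.get('/pricing', (req, res) => {
  // Keep returning visitors in the bucket they were first assigned to.
  const bucket: string =
    req.cookies.split_bucket ?? (Math.random() < 0.5 ? 'control' : 'redesign');
  res.cookie('split_bucket', bucket);

  if (bucket === 'redesign') {
    // The redesigned page lives at its own URL.
    res.redirect(302, '/pricing-redesign');
  } else {
    res.send('<html><body><h1>Current pricing page</h1></body></html>');
  }
});

app.listen(3000);
```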

Over to You

Now that you have a clear understanding of the different types of website tests you can run, the only thing left is to, well, run some tests.

Armed with quantitative and qualitative knowledge of your visitors, focus on the areas that will have the biggest and quickest impact on your business.

And I promise, when you finish your first successful website test, you’ll get hooked.

I know I was.


The post [Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test appeared first on VWO Blog.

Continue reading: 

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test