Tag Archives: a/b split testing


How De Nieuwe Zaak Improved Productivity Using The VWO API

About De Nieuwe Zaak

De Nieuwe Zaak is a leading full-service digital agency based in Zwolle, Netherlands. With a team of over 90 experts, they provide innovative, high-quality digital commerce solutions for retailers, wholesalers, and brands alike.

They have been using VWO since 2012 to conduct A/B tests and optimize websites for many of their clients. As such an extensive user of the platform, they constantly look for ways to make their processes more efficient and produce results faster.

De Nieuwe Zaak recently started using the VWO Application Programming Interface (API), which has drastically improved their development teams' productivity when building A/B test campaigns in VWO. They recently published a blog post sharing their experience with VWO and the API; you can read it here.

Challenges Before Using VWO API

De Nieuwe Zaak has more than 12 years of experience in implementing and creating web applications. In these years, they have standardized their development process.

For them, setting up A/B tests is a collaboration between CRO & UX consultants and developers. The CRO & UX consultant analyzes the user research data and comes up with a hypothesis for an A/B test, and developers write the code for it.

Front-end developers work in their own Integrated Development Environment (IDE), such as Visual Studio, Sublime, or WebStorm, as these editors provide excellent support for writing HTML, SCSS, and JavaScript. After a piece of code is complete, it is stored in a version control system such as Git (hosted on Bitbucket) so that it is never lost.

Before the front-end developers at De Nieuwe Zaak started using the VWO API, they used to write the code for the test variations on the VWO code editor. However, they wanted to be able to write code in the IDE familiar to them for improved efficiency.

How VWO API Helped Improve Productivity

Developers at De Nieuwe Zaak used the VWO API to visualize tests in dashboards, analyze test results, and implement code changes in their campaigns. Here is how the process worked:

Any API integration requires two applications. With VWO on one side, developers at De Nieuwe Zaak used VWO's extensive documentation to write a small Node.js application that now runs on their own machines.

The Node.js application communicates with VWO using GruntJS, an automated task runner, and asynchronous HTTP requests initiated from the browser, also known as Ajax calls.
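
As an illustration of how such a setup can work, here is a rough Gruntfile sketch: it compiles SCSS and pushes the resulting variation code to an API endpoint. The host, path, campaign ID, and header names below are placeholders rather than the actual VWO API surface, which is described in VWO's own documentation.

  // Gruntfile.js: illustrative sketch only; the VWO host, path, and headers
  // below are placeholders, not the documented API.
  module.exports = function (grunt) {
    grunt.loadNpmTasks('grunt-contrib-sass'); // assumes the SCSS plugin is installed

    grunt.initConfig({
      sass: { dist: { files: { 'build/variation.css': 'src/variation.scss' } } }
    });

    grunt.registerTask('push-to-vwo', 'Upload variation code', function () {
      const done = this.async();
      const https = require('https');

      const payload = JSON.stringify({
        js: grunt.file.read('src/variation.js'),
        css: grunt.file.read('build/variation.css')
      });

      const req = https.request({
        hostname: 'app.vwo.com',                // placeholder host
        path: '/api/v2/campaigns/12345',        // placeholder campaign endpoint
        method: 'PATCH',
        headers: { token: process.env.VWO_API_TOKEN, 'Content-Type': 'application/json' }
      }, res => {
        res.resume(); // discard the response body
        grunt.log.writeln('VWO responded: ' + res.statusCode);
        done();
      });

      req.on('error', err => { grunt.log.error(err.message); done(false); });
      req.write(payload);
      req.end();
    });

    grunt.registerTask('default', ['sass', 'push-to-vwo']);
  };

With a setup along these lines, a developer keeps writing SCSS and JavaScript in their own IDE and lets the task runner handle the upload.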

With the first version of the VWO API, front-end developers at De Nieuwe Zaak were able to retrieve the JavaScript and CSS code from their version control system and then push the changes to VWO. Further, they could write SCSS instead of plain CSS, which is easier to manage and write.

Summary of Benefits

De Nieuwe Zaak is one of the first VWO customers worldwide to start using the VWO API. In addition to improving their efficiency and reducing the end-to-end time spent implementing a test, the development team at De Nieuwe Zaak has been able to:

  • Improve code quality by using SCSS, instead of plain CSS.
  • Write code in an environment familiar to them.
  • Ensure the safety of their code by using version management.
  • Extend the API integration to accommodate further use cases.
  • Follow their existing processes and frameworks to develop websites.

The VWO API is very extensive and very well documented. At De Nieuwe Zaak, we use the API for visualizing reports in dashboards and for implementing tests. In particular, the process of implementing tests with the API made the implementation more sustainable.

– Pascal Alferink, Developer at De Nieuwe Zaak


Results From Our Latest A/B Test: Here’s The New VWO Logo!

Over the past 8 years, we’ve made some key (and some minor) changes to the look and feel of our brand. Around this time last year, we revamped our website for the launch of VWO Conversion Optimization Platform™.

As an organization that thrives on a culture of experimentation, we are always looking into data to discover insights for optimization. By turning our opinions into hypotheses, we test changes for almost everything which could have a significant impact on the business, and then derive the next logical step. Based on this simple framework, we recently made a minor change to the VWO logo. Before we delve further into the hypothesis behind this change, look at the logo in its full glory:

The Hypothesis: Making The Letters V, W, and O Prominent Will Improve Readability

In the beginning, our product was called Visual Website Optimizer. However, over the years, people (including us) fondly started abbreviating it to VWO. This is what the VWO logo looked like during this gradual change:

More recently, we dropped the accompanying text “Visual Website Optimizer” completely, and also started referring to our product as just “VWO.”

With this change, we realized that it would be hard for someone unfamiliar with our brand to read or understand our logo. We hypothesized that if the letters “V,” “W,” and “O” were made distinguishable, the brand name VWO would stand out more clearly.

The Test: Conducting an A/B/C Test to Choose a Winner

After the hypothesis was finalized, our design team created a new variation of the logo, per the new specification. Next, we decided to test the hypothesis by conducting extensive user testing through 5-second tests on UsabilityHub.

Five-second tests are a method of usability testing, where the participants are shown a visual for only 5 seconds, and then asked questions corresponding to it.

For our tests, we selected a sample of participants from across the globe, with varying demographics, locations, and other attributes. They were shown 3 variations of the logo—the existing one, the proposed one, and one with VWO written as well-spaced plain text. Next, we asked the participants the question "What do you read?" to which they had to type in a response.

For the proposed logo, 90% of participants answered "VWO," as opposed to only 66% for the existing one. For the variation with VWO written as well-spaced text, the figure was around 96%.

The Result: Reinforced Belief in the Potential of Testing

As an obvious next step, we made this minor update to our logo, which is now live across all our digital properties. We're proud that the basic tenets of experimentation continue to give direction to our efforts.

If it wasn’t for validating our initial, seemingly insignificant hypothesis, VWO wouldn’t have got a brand new identity. We strive to uphold this culture in our organization for the years to come.

What do you think of our new logo? Feel free to share your thoughts in the comments below.


6 Easy Ways To Learn A/B Testing (Number 6 Is Our Favorite)

Have you always wanted to introduce A/B testing into your marketing skill set but are unsure of where to begin?

Do you think A/B testing is for more technical marketers?

If so, you might be worried about nothing. A/B testing, also known as split-testing, is a common feature of almost every marketing tool these days.

Thankfully, many software products with built-in A/B testing functionality have made implementing A/B tests so easy that laypeople can learn to improve their marketing skills by using A/B tests.

To help you get up and running with your first A/B test campaign, here are 6 tools with built-in A/B testing features that are easy to implement for the average nontechnical person.

Let’s take a quick look at what A/B testing is.

What Is A/B Testing?

In this guide to A/B testing from VWO, it is defined as “comparing two versions of a webpage to see which one performs better.”

The reason why you would want to run an A/B test on your website is to improve conversions. For example, you can A/B test the product photos on your e-commerce website to see if models with beards increase conversions compared to models without beards.

As you can see, with A/B testing, you can follow a process to steadily increase the number of website visitors who convert into customers. If done properly, you can be confident that the improvements you measure are real rather than the result of chance.
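
To make the mechanics concrete, here is a minimal, hypothetical sketch of what every split test does under the hood: visitors are randomly assigned to the control or the variation, and conversions are counted per group. This is an illustration, not any particular tool's code.

  // Minimal illustration of the A/B split mechanic.
  const counts = { A: { visitors: 0, conversions: 0 }, B: { visitors: 0, conversions: 0 } };

  function assignVariation() {
    // 50/50 random split between control (A) and variation (B)
    const group = Math.random() < 0.5 ? 'A' : 'B';
    counts[group].visitors++;
    return group;
  }

  function recordConversion(group) {
    counts[group].conversions++;
  }

  function conversionRate(group) {
    const { visitors, conversions } = counts[group];
    return visitors === 0 ? 0 : conversions / visitors;
  }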

So now that you understand what A/B testing is and the potential benefits of doing A/B tests in your marketing, let’s look at 6 tools that make it easy to run your first A/B test.

  1. Google AdWords

Google AdWords may have been the first tool with built-in A/B testing, so it’s likely where most marketers launched their first A/B testing campaign.

As Google gets paid each time someone clicks one of its ads, it’s in Google’s best interests to help improve the quality of its ads. And to help you figure out which ads are the best, you can A/B test your ads by rotating them evenly to see which has a higher click-through rate (CTR).

To get started on A/B testing in AdWords, go to your campaign settings, click to expand the Ad rotation settings, and then select Do not optimize: Rotate ads indefinitely.

If you want Google to pick the winning ad, select the Optimize: Prefer best performing ads radio button. It’s a good idea to have Google rotate the ads indefinitely and then you can manually pick a winner. This would help you make observations about why some ads perform better than others.


Next, make sure that you have at least 2 ads in each ad group, and then start collecting data.


Unfortunately, AdWords won’t tell you if your data is statistically significant, so you’ll need to enter the impressions and clicks each ad received into a tool like VWO’s A/B split test significance calculator to figure out which ad won.

2. Sumo

If you’re not yet collecting email addresses on your website, you should be.

Adding a pop-up to your website is a great way to grow your email list. One of the easiest ways to install a pop-up is with Sumo.com’s suite of free tools.

Its “List Builder” tool makes it easy to strategically add pop-ups to your website to collect email addresses. But what if your pop-ups aren’t converting well?

Fortunately, you can easily A/B test your pop-ups.

To gradually increase the number of email addresses, you can create variations with different text, colors, or calls to action.

Within Sumo, under List Builder, click the Tests tab, and then create a new form. Select the form for which you need to create a variation:

After creating the variation, Sumo rotates both versions of the pop-up and collects conversion data, which will be displayed in your dashboard:


Give your A/B test enough time to collect statistically significant data. After getting a clear winner, you can delete the losing pop-up and create a new pop-up to compete against the winner.

3. Drip

Drip.com is marketing automation software that helps you send personalized emails at exactly the right time.

For example, if you want to send an abandoned cart email 30 minutes after your website visitor added a product to the cart but didn’t complete the purchase, you can create an Abandoned Cart campaign within Drip to send the email automatically.

But what happens if your recipient doesn’t open the email? That’s another missed opportunity.

So, to recover such customers, you want to make sure your abandoned cart email stands out in their inbox and gets opened. Fortunately, you can increase the likelihood of that with Drip’s built-in split test feature.

Within Drip, you have the ability to easily split test the subject line, “From” name, and/or delivery time of the emails in your campaign.

In the example below, you can see how easy it is to set up an A/B test of a subject line:


Next, enter an alternate subject line, and then Drip automatically rotates the subject lines in your abandoned cart email campaign:


Drip also tracks how many times the emails associated with each subject line were opened. After gathering a statistically significant amount of data, your dashboard shows the confidence level, i.e., how confident you can be that you would get the same result if you kept the A/B test running.


After you’ve reached a 95% confidence level or higher, you can stop the losing variation and continue with the winning variation, or create a new A/B test to try and beat the winner.

4. Intercom

Next, we’ll look at the ways you can A/B test chat messages. Fortunately, Intercom makes it easy for you to do this.

Chat messages are a great way to engage your website visitors to increase your conversion rate or just get their email address so that you can market to them in the future.

You can think of a chat message the same way as greeting people when they walk into your brick-and-mortar store. It's their first impression of you and your brand, so the quality of your greeting can determine whether they make a purchase or not.

With most chat tools, you can send “proactive messages” to engage your website visitors. Examples of proactive messages are:

  • “Hello, I’m here to answer any questions you may have.”
  • “Can I help you find a product?”
  • “Do you have any questions about shipping?”

If your proactive message isn’t warm or engaging enough, the visitor may not reply and you may lose a chance to convert them into a customer.

With Intercom, you can A/B test your proactive messages to see which ones have a high open rate. Just create your greeting:

Then use the built-in A/B test feature to create a different greeting for your proactive message:

Intercom will then show each greeting 50% of the time and display the results of the A/B test in your message dashboard so that you can see which greeting has the best open rate:


5. Title Experiments

Did you know that 80% of people who read a headline won't read the rest of the blog post? This is why it's so important to write great blog post titles.

But how do you know what’s considered a good title? Well, you can split-test your blog post titles to find out.

With a WordPress plug-in called Title Experiments, it’s easy to create 2 versions of titles for each of your blog posts.

Every time you publish a new blog post, just click Add New Title, and then you can write a second variation of your blog post title:


Title Experiments automatically A/B tests both variations, and then you can see how well each one is performing until you eventually pick a winner:


6. VWO

So far, I’ve shown you how to run A/B tests within third-party tools, but what about doing actual A/B tests on your website itself?

Increasing conversions by changing your website's copy, colors, and layout is where the fun begins when it comes to A/B testing.

With VWO, you can create a hypothesis about how to improve website conversions, and then easily create a variant of your webpage by using its WYSIWYG editor to test against your current page (also known as the control).

The great thing about A/B testing with VWO is that you don't have to be technical; you can do it yourself without needing to hire a developer.

Get started by clicking Create on the A/B Tests page:


Edit the page you want to A/B test by using its WYSIWYG editor to create a variation to test against the control page:

From your VWO dashboard, you can view the results of the A/B test. You can see which variation resulted in more conversions and whether the data is statistically significant, so that you can be confident of the results.


Just like the other tools mentioned above, VWO tells you when you’ve collected enough data to make a statistically significant decision about the results.

Conclusion

A/B testing isn’t as hard as it seems. It’s pretty easy to give A/B testing a try, thanks to the built-in features found in marketing software these days.

So if you’re ready to take the leap and want to run your first split test campaign, give one of the above-mentioned tools a try. I think you’ll find that it’s easier than you expected!

Over to You

Have you run A/B tests by using the tools I just shared? Are there other tools with built-in A/B testing features that you think we should talk about?

It would be awesome to hear from you in the comments!


How Tough Mudder Gained a 9% Session Uplift by Optimizing for Mobile Users

The following is a case study about how Tough Mudder achieved a 9% session uplift by optimizing for mobile. With the help of altima° and VWO, they identified and rectified pain points for their mobile users, to provide seamless event identification and sign-ups. 


About the Company

Tough Mudder offers a series of mud and obstacle courses designed to test physical strength, stamina, and mental grit. Events aren’t timed races, but team activities that promote camaraderie and accomplishment as a community.

Objective

Tough Mudder wanted to ensure that enrolment on their mobile website was smooth and easy for their users. They partnered with altima°, a digital agency specializing in eCommerce, and VWO to ensure seamless event identification and sign-ups.

Research on Mobile Users

The agency first analyzed Tough Mudder's Google Analytics data to identify any pain points along participants' paths to enrollment. They analyzed existing conversion rates from the Event List, which suggested that interested shoppers were not able to identify the events appropriate for them. The agency began to suspect that customers on mobile might not be discovering events easily enough.

Test

On the mobile version of the original page, the most relevant pieces of information, such as the event location and date, were pushed far below the fold. In addition, less relevant page elements were possibly distracting users from the task at hand. This is how it looked:

Event location and date way below the fold on ‘original’

The agency altima° decided to make the following changes in the variation:

  1. Simplified header: Limiting the header copy to focus on the listed events.
  2. List redesign: Redesigning the filter and event list to prominently feature the events themselves, optimizing for event location and date.
  3. Urgency message: Adding an urgency message to encourage interested users to enroll in events nearing their deadline.

For these three changes, seven different combinations were created, and a multivariate test was run using VWO. The test recorded over 2,000 event sign-ups across 4 weeks.
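
The combination count follows directly from the three on/off changes: 2 × 2 × 2 = 8 total combinations, one of which is the unchanged control, leaving seven test combinations. A small sketch of the enumeration (illustrative only, with assumed names for the changes):

  // Enumerate the combinations of three independent on/off changes.
  const changes = ['simplifiedHeader', 'listRedesign', 'urgencyMessage'];

  const combinations = [];
  for (let mask = 0; mask < (1 << changes.length); mask++) {
    combinations.push(changes.filter((_, i) => mask & (1 << i)));
  }

  console.log(combinations.length);   // 8 (including the empty control)
  console.log(combinations.slice(1)); // the 7 tested combinations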

Test Results

After 4 weeks, Variation 2, which included the redesigned event list, proved to be the winning variation. This is not to say that other test variations were not successful. Variation 2 was just the MOST successful:

The winning variation produced a session value uplift of 9%! Combined with the next 2 rounds of optimization testing, altima° helped Tough Mudder earn a session value uplift of over 33%!

Why Did Variation 2 Win?

altima° prefers to let the numbers speak for themselves and not dwell on subjective observations. After all, who needs opinions when you’ve got data-backed results? altima°, however, draws the following conclusions on why Variation 2 won:

Simplified header:

Social proof has demonstrated itself to be a worthy component of conversion optimization initiatives. These often include customer reviews and/or indications of popularity across social networks.

In fact, Tough Mudder experienced a significant lift in session value from a test involving the addition of Facebook icons. It's likely that the phrase "Our Events Have Had Over 2 Million Participants Across 3 Continents" provided its own kind of social proof.

List redesign:

The most ambitious testing element to design and develop was also the most successful.

It appeared that an unnecessary amount of real estate was being afforded to the location filter. This was resolved by decreasing margins above and below the filter, along with removing the stylized blue graphic.

The events themselves now carried a more prominent position relative to the fold on mobile devices. Additionally, the list itself was made to be more easily read, with a light background and nondistracting text.

Urgency message:

The underperformance of the urgency message came as a surprise. It was believed that this element would prove to be valuable, further demonstrating the importance of testing with VWO.

Something to consider is that not every event included an urgency message. After all, not every enrolment period was soon to close. Therefore, it could be the case that some customers were less encouraged to click through and enroll in an individually relevant event if they felt that they had more time to do so later.

They might have understood that their event of interest wasn’t promoting urgency and was, therefore, not a priority. It also might have been the case that an urgency message was introduced too early in the steps to event enrolment.

Let’s Talk

How did you find this case study? There are more testing theories to discuss; please reach out to altima° and VWO. You could also drop a line in the comments section below.



Learn How Experts Derive Insights from A/B Test Results

You conducted an A/B test—great! But what next?

How would you derive valuable insights from the A/B test results? And more importantly, how would you incorporate those insights into subsequent tests?

As Deloitte University Press Research on Industrialized Analytics suggests, acquiring information is just the first step of any robust data analysis program. Transforming that information into insights and eventually, the insights into actions is what yields successful results.


This post talks about why and how you should derive insights from your A/B test results and eventually apply them to your conversion rate optimization (CRO) plan.

Analyzing Your A/B Test Results

No matter how the overall result of your A/B test turned out—positive, negative, or inconclusive—it is imperative to delve deeper and gather insights. Not only can this help you aptly measure the success (or failure) of your A/B test, it can also provide you with validations specific to your users.

As Bryan Clayton, CEO of GreenPal puts it, “It amazes me how many organizations conflate the value of A/B testing. They often fail to understand that the value of testing is to get not just a lift but more of learning.

Sure 5% and 10% lifts in conversion are great; however, what you are trying to find out is the learning about what makes your customers say ‘yes’ to your offer.
Only with A/B testing can you close the gap between customer logic and company logic and, gradually, over time, match the internal thought sequence that is going on in your customers’ heads when they are considering your offer on your landing page or within your app.”

Here is what you need to keep in mind while analyzing your A/B test results:

Tracking the Right Metric(s)

When you are analyzing A/B test results, check whether you are looking at the correct metric. If multiple metrics (secondary metrics along with the primary) are involved, you need to analyze each of them individually.

Ideally, you should track both micro and macro conversions.

Brandon Seymour, founder of Beymour Consulting, rightly points out: "It's important to never rely on just one metric or data source. When we focus on only one metric at a time, we miss out on the bigger picture. Most A/B tests are designed to improve conversions. But what about other business impacts such as SEO?

It’s important to make an inventory of all metrics that matter to your business, before and after every test that you run. In the case of SEO, it may require you to wait for several months before the impacts surface. The same goes for data sources. Reporting and analytics platforms aren’t accurate 100 percent of the time, so it helps to use different tools to measure performance and engagement. It’s easier to isolate reporting inaccuracies and anomalies when you can compare results across different platforms.”

Most A/B testing platforms have built-in analytics sections to track all the relevant metrics. Moreover, you can also integrate these testing platforms with the most popular website analytics tools such as Google Analytics. To track A/B test results via Google Analytics, you can also refer to this article by ConversionXL.

Conducting Post-Test Segmentation

You should also create different segments from your A/B tests and analyze them separately to get a clearer picture of what may be happening. Generic, nonsegmented results can be illusory and lead to skewed actions.

There are broad types of segmentation that you can use to divide your audience. Here is a set of segmentation approaches from Chadwick Martin Bailey:

  • Demographic
  • Attitudinal
  • Geographical
  • Preferential
  • Behavioral
  • Motivational

Post-test segmentation allows you to deploy a variation for a specific user segment. For instance, if you notice that a particular test affected new and returning users differently (and notably), you may want to apply your variation only to that particular user segment.

However, searching through lots of different segments after a test makes it likely that you will see some positive results purely by random chance. To avoid that, make sure you have your goal defined clearly.
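
As a rough illustration (the field names are made up), post-test segmentation amounts to grouping the raw visitor-level results by an attribute and recomputing the conversion rate per group, while keeping an eye on the sample size of each group:

  // Illustrative sketch: group visitor-level test results by a segment attribute.
  function segmentResults(visitors, segmentKey) {
    const groups = {};
    for (const v of visitors) {
      const key = v[segmentKey] + ' / ' + v.variation;
      groups[key] = groups[key] || { visitors: 0, conversions: 0 };
      groups[key].visitors++;
      if (v.converted) groups[key].conversions++;
    }
    for (const [key, g] of Object.entries(groups)) {
      const rate = (100 * g.conversions / g.visitors).toFixed(1);
      console.log(key + ': ' + rate + '% (n=' + g.visitors + ')');
    }
  }

  // e.g. segmentResults(testVisitors, 'deviceType') or segmentResults(testVisitors, 'visitorType')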

Delving Deeper into Visitor Behavior Analysis

You should also use visitor behavior analysis tools such as Heatmaps, Scrollmaps, and Visitor Recordings to gather further insights into A/B test results. For example, consider the navigation bar on an eCommerce website. An A/B test on the navigation bar works only if users actually use it. Visitor recordings can reveal whether users find the navigation bar friendly and engaging. If the bar itself is hard to understand, all variations of it can fail to influence users.

Apart from giving insights on specific pages, visitor recordings can also help you understand user behavior across your entire website (or conversion funnel). You can learn how critical the page you are testing on is to your conversion funnel.

Maintaining a Knowledge Repository

After analyzing your A/B tests, it is imperative to document the observations from the tests. This helps you not only in transferring knowledge within the organization but also in using the observations as references later.

For instance, suppose you are developing a hypothesis for your product page and want to test the product image size. Using a structured repository, you can easily find similar past tests, which could help you spot patterns for that part of the page.

To maintain a good knowledge base of your past tests, you need to structure it appropriately. You can organize past tests and the associated learning in a matrix, differentiated by their "funnel stage" (ToFu, MoFu, or BoFu) and "the elements that were tested." You can add other customized factors as well to enhance the repository.
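
A single entry in such a repository might look something like the sketch below; the field names and values are illustrative, not a prescribed format.

  // Illustrative structure for one entry in an A/B test knowledge repository.
  const testRecord = {
    name: 'Product page: larger image test',
    funnelStage: 'MoFu',                       // ToFu, MoFu, or BoFu
    elementTested: 'product image size',
    hypothesis: 'Larger images reduce uncertainty and increase add-to-cart clicks',
    startDate: '2017-03-01',
    endDate: '2017-03-28',
    successMetric: 'add-to-cart rate',
    result: 'inconclusive',
    keyTakeaways: 'No effect on desktop; slight lift for mobile visitors'
  };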

Look at how Sarah Hodges, co-founder of Intelligent.ly, keeps track of A/B test results: "At a previous company, I tracked tests in a spreadsheet on a shared drive that anyone across the organization could access. The document included fields for:

  • Start and end dates
  • Hypotheses
  • Success metrics
  • Confidence level
  • Key takeaways

Each campaign row also linked to a PDF with a full summary of the test hypotheses, campaign creative, and results. This included a high-level overview, as well as detailed charts, graphs, and findings.

At the time of deployment, I sent out a launch email to key stakeholders with a summary of the campaign hypothesis and test details, and attached the PDF. I followed up with a results summary email at the conclusion of each campaign.

Per my experience, concise email summaries were well-received; few users ever took a deep dive into the more comprehensive document.
Earlier, I created PowerPoint decks for each campaign I deployed, but ultimately found that this was time-consuming and impeded the agility of our testing program.”

Applying the Learning to Your Next A/B Test

After you have analyzed the tests and documented them according to a predefined theme, make sure that you visit the knowledge repository before conducting any new test.

The results from past tests shed light on user behavior on a website. With a better understanding of user behavior, your CRO team can form stronger hypotheses. This can also help the team create on-page surveys that are contextual to a particular set of site visitors.

Moreover, results from past tests can help your team come up with new hypotheses quickly. The team can identify the areas where the win from a past A/B test can be duplicated. Also, the team can look at failed tests, know the reason for their failure and steer clear of repeating the same mistakes.

Your Thoughts

How do you analyze your A/B test results? Do you base your new test hypothesis on past learning? Write to us in the comments below.


Creating Better A/B Tests for eCommerce | Visitor Behavior Analysis Use Cases


According to a recent report, the total eCommerce sales in the U.S. soared from $60.6 billion to $69.7 billion — by almost 15%.

As an eCommerce business owner, you would agree that these numbers are promising. However, while there is tremendous growth in sales on one hand, there is an increasing number of challenges on the other.

With a growing number of market players competing for the largest chunk of eCommerce sales, the fundamental challenge is to reduce Customer Acquisition Cost (CAC) and increase Customer Lifetime Value (LTV).

According to the 2015 Benchmark report series on eCommerce buyer behavior, the cost of acquiring new visitors through channels such as PPC, social media, and search engines is rapidly rising. The best way for eCommerce marketers to deal with this is to convert more of their existing visitors into customers. They should create a seamless, friction-free experience that not only converts more visitors into customers but also retains existing customers.

Creating such an experience requires eCommerce businesses to understand how their visitors behave — what exactly are users looking for, what are their pain points, and what makes them buy.

Understanding User Behavior with Visitor Behavior Analysis Tools

While website analytics tools such as Google Analytics tell you who your users are and what they are doing on your website, visitor behavior analysis tools such as Heatmaps, Visitor Recordings, and Form Analysis help analyze all the interactions users have with specific pages.

For example, Google Analytics might tell you that a user bounced off from the product page, but Visitor Recordings will show you the exact drop-off point on the page, e.g., the product menu, search bar, etc. In the process of uncovering 'how users behave,' these tools dig out the pain points that users face while browsing, searching, or paying.

Understanding user pain points can help eCommerce businesses provide a user-friendly experience, which can ultimately lead to more conversions.

More often than not, eCommerce marketers try to address the website usability issue by running random A/B tests. Only a few smart ones have realized the power of visitor behavior analysis.

This post walks you through multiple use cases of visitor behavior analysis tools, drawn from eCommerce A/B testing case studies. It will help you understand how these tools help in quickly creating smarter, data-backed hypotheses for A/B tests.

Use Cases for Heatmaps

Use Case 1

Company: nameOn (Scandinavia’s leading supplier of personalized gifts with embroidery)

Objective: Increase conversions from the “add-to-cart page” to “checkout page.”

Hypothesis for A/B test: On observing a 31.7% drop-off rate between the add-to-cart page and the checkout page, nameOn thought that reducing the number of CTAs on the page would reduce distractions for visitors and allow them to focus on the "continue to checkout" button.

The Test: On reviewing the add-to-cart page, nameOn noticed that it contained too many CTAs. Because nameOn wanted visitors to focus only on “continue to checkout,” the decision was made to retain only two CTAs — “continue to checkout” and “welcome bonus.” The test was run for 44 days for 621 visitors. You can read the entire case study here. Images of control and variation are given below:

Control and variation: minimizing the number of CTAs to reduce distractions

Running an Analysis with Heatmaps

With the help of Heatmaps, nameOn could have studied the number of clicks that each CTA on the page receives. With actual user insights derived from Heatmaps and Clickmaps (a type of heatmap), nameOn could have done the following:

  1. Study which areas on the page get the most attention.
  2. Discover which CTAs/hyperlinks on the page get the most/least clicks.

For example, nameOn could have studied clicks on all CTAs and only removed the ones that were considerably cannibalizing the primary CTAs.

Use Case 2

Company: Veggietales.com (eCommerce site for VeggieTales, a TV series for children)

Objective: To make people aware of the immense size of their social media following.

Hypothesis: The free shipping CTA overshadowed the social icons on the homepage. Increasing the size of the social icons and featuring the large follower count would act as social proof and increase visitors' trust in the brand.

The Test: The control version of the homepage displayed a huge CTA for free shipping with social sharing buttons right below. Based on the hypothesis, VeggieTales replaced the free-shipping CTA with social sharing buttons along with the number of followers that VeggieTales had on Twitter and Facebook. You can read the entire case study here. Images of control and variation are given below:

Control and variation: removing the free shipping banner CTA

Running an Analysis with Heatmaps

Using a Clickmap analysis, VeggieTales could have studied the number of clicks on the free-shipping CTA in the header as well as on the free-shipping CTA on the control page.

Next, they could have decided whether doing away entirely with the free shipping CTA made sense, or whether it would be wiser to simply move the text lower on the homepage. This way, they could have worked out another version that got them more social engagement while retaining the conversions that might have been lost by completely removing the free shipping text on the right.

Use Case 3

Company: Buyakilt.com (an online kilt and Scottish highland dress retailer)

Objective: To improve click-throughs and engagement on the product category page.

Hypothesis: Adding a product filter on the page would increase conversions.

The Test: Buyakilt.com has a number of category pages, each of which has four sub-category pages. They added a product filter that gave visitors the option to shop by different kilt types, kilt patterns, etc. You can read the case study here. Images of control and variation are given below:

Control and variation: removing sub-categories from the product page and adding filter criteria

Running an Analysis with Heatmaps

In this case, apart from adding a product filter for simplifying the search process, Buyakilt could have used element list analysis (explained below) to study the number of clicks on each element under the categories.

An element list analysis is a type of Clickmap analysis that gives the actual number of clicks on both visible and hidden elements in a list (e.g., a drop-down list). This data can be further used to optimize the category menu. For example, elements that receive the most clicks can be placed at the top of the list, and those with fewer clicks can be placed lower on the list.
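
Turning such click data into a menu order is straightforward; here is a toy sketch with made-up items and numbers:

  // Toy example: reorder menu items by the click counts from an element list analysis.
  const menuClicks = [
    { item: 'Kilt patterns',  clicks: 412 },
    { item: 'Kilt types',     clicks: 655 },
    { item: 'Accessories',    clicks: 98 },
    { item: 'Highland dress', clicks: 230 }
  ];

  const reordered = [...menuClicks]
    .sort((a, b) => b.clicks - a.clicks)
    .map(entry => entry.item);

  console.log(reordered); // most-clicked items first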

Use Cases for Visitor Recordings

Use Case 1

Company: Drukwerkdeal (an online printing shop based out of the Netherlands)

Objective: To push more checkout actions from the product pages.

Hypothesis: Drukwerkdeal hypothesized that the cross-sell messages were doing more harm than good. Upon reviewing a number of its product pages, Drukwerkdeal realized that their cross-selling messages were not very convincing.

The Test: Drukwerkdeal decided to remove these messages from some of the page categories and test whether this would have any impact on sales. 14,000 visitors participated in the test, which ran for two weeks. Read the entire case study here. Images of control and variation are given below:

Control and variation: cross-sell message removed from the product page

Running a Visitor Recording Analysis

Although upselling and cross-selling help boost eCommerce conversions, at times these strategies might backfire. Using Visitor Recordings, businesses can find out whether their upsell or cross-sell campaigns are getting them the desired results or proving to be a distraction. These recordings give accurate data about users' entire journey on a website — where exactly they dropped off, where they spent the most time, from which point on a page they bounced off, etc. Based on this data, website owners can tailor an improved experience for users.


In Drukwerkdeal's case, before removing the cross-sell messages from the product page, they could first have analyzed how people interact with the product pages. Using Visitor Recordings, they could have played back the entire sessions of visitors who interacted with the product page. They could then have validated whether users who clicked on a link in the cross-sell message returned to the original product page or not. It could then be established whether the cross-sell messages were actually proving to be a distraction on the product page.

Use Case 2

Company: Yuppiechef (a leading online store selling premium kitchen tools)

Objective/Conversion Goal: Increase the rate of wedding registry signups on one of their landing pages.

Hypothesis: Removing navigation menu would increase the rate of wedding registry signups.

The Test: Yuppiechef created a variation of their landing page without a navigation bar and tested it against the control. You can read the entire case study here. Here's how the control and variation for Yuppiechef looked:

Control and variation: removing the navigation bar from the eCommerce landing page

Running a Visitor Recording Analysis

By playing back the Visitor Recordings of users who visited the landing page, Yuppiechef could have isolated the sessions of visitors who browsed and clicked on the navigation menu but did not register.

They could also have played back recordings of those who clicked on an item in the navigation bar but returned to register. By studying these different kinds of interactions, Yuppiechef could have used the research to validate that the navigation bar was serving as a distraction on the page.

Use Case 3

Company: UKToolCenter (an online store that offers tools & products for trade professionals and DIY experts)

Objective: To increase user engagement on a specific category of products on the website.

Hypothesis: Removing the filtering option on product search would increase conversions.

The Test: UKToolCenter hypothesized that the filter menu was unnecessarily adding extra options for the users, and removing it could help increase engagement for a specific product category page (Cuprinol, a brand of woodcare products). They created a variation in which the product filter for Cuprinol products was removed. You can read the entire case study here. Take a look at their control and variation page:

Control and variation: removing the product filter from the Cuprinol category page

Running a Visitor Recording Analysis

While removing the filter from that specific product category increased user engagement on that product page by 27%, we have another case study where adding a product filter led to an increase in conversions.

Using Visitor Recordings, UKToolCenter could have studied user engagement with the product filter for Cuprinol. By playing back Visitor Recordings, they could have identified visitors dropping off from the Cuprinol category page. By playing individual recordings, they could have validated whether the filter on the page was causing visitors to drop off without browsing Cuprinol products at all.

Use Case for Form Analysis

Use Case 1

Company: Blivakker.no (Norway’s leading online beauty shop)

Objective: To increase form conversions on the registration form.

Hypothesis: Reducing number of form fields will increase conversions.

The Test: The original registration form on Blivakker.no consisted of 17 form fields. The company wanted to test whether making a small change to the form would increase conversions. Three form fields — phone number, account number, and phone number (evening) — were removed from the web form. Another variation that was tested was a completely stripped-down form with fewer fields and fewer navigational elements. You can read the entire case study here.

Removing three form fields from the registration form

Running a Form Analysis

While the test showed that removing unnecessary fields from the form increased the number of registrations, Form Analysis could have shown the actual fields at which users dropped off or hesitated to fill in their information.

Rather than relying on expert opinions, which can only be contextually right, Form Analysis could have helped Blivakker.no remove the fields that created the most confusion and retain all the others, fetching more conversions and valuable user information at the same time.

To Wrap Up

eCommerce businesses cannot rely on A/B testing random ideas for improving conversions. The need of the hour is to understand what their users are really looking for and what pain-points they’re facing.

Any eCommerce business that can identify what appeals to or repels its users has an edge over its competitors. Using actual user behavior data, businesses can create a smoother, friction-free experience for their users, and ultimately push overall conversions higher.

Let us know what you think about visitor behavior analysis tools. Post your comments below.


A Definitive Guide to Converting Failed A/B Tests Into Wins


As a marketer, there aren’t many things sweeter than running successful A/B tests and improving your website conversion rate.

It’s sweet because getting a winning A/B test is hard work.

To carry out a successful A/B test, marketers need to follow a robust process. They need to develop data-driven hypotheses, create appropriate website variations, and test on a targeted audience. And even by following such a structured process, marketers tend to win just one out of three A/B tests.

[What’s more worrying is that the percentage of winning A/B tests overall is only 14% (one out of seven). That’s largely because most of the marketers still don’t follow a documented process for A/B testing (and CRO as a whole). For instance, only 13% of eCommerce businesses base their testing on extensive historical data.]

But here is some good news: your failed A/B tests can still be of value.

By analyzing the A/B tests that didn’t win, you can highlight flaws in your approach, improve the tests, and even identify hidden winners.

This post talks about the key things you can do after encountering an unsuccessful test.

For convenience's sake, we've split unsuccessful tests into two categories: inconclusive tests and tests with negative results.

When A/B Tests Give Inconclusive Results

An inconclusive result is when an A/B test is unable to declare a winner between variations. Here’s what you need to do with such a test:

Finding Hidden Winners

Even when your A/B test hasn’t found a winner among different variations, there are chances that you can still uncover wins by slicing and dicing your test audience.

What if the A/B test produced results for specific segments of your traffic (segmented on the basis of traffic source, device type, etc.)?

This scenario is similar to Simpson's Paradox. Let's understand it with a simple example.

A gender bias study among the UC Berkeley admissions in 1973 showed that men had a higher chance of being admitted as compared to women.


However, the department-specific data showed that women had a higher admission rate in most departments. Actually, a large number of women had applied to departments with low admission rates (in contrast to a small number of men).


We can see how multiple micro-trends skewed the overall study result.
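
A small numeric sketch (with made-up figures, not the actual Berkeley data) shows how the reversal can happen: each department admits women at a higher rate, yet the aggregate rate favors men because of how applications are distributed.

  // Made-up numbers illustrating Simpson's paradox.
  const departments = [
    { name: 'A', men: { applied: 800, admitted: 500 }, women: { applied: 100, admitted: 70 } },
    { name: 'B', men: { applied: 200, admitted: 30 },  women: { applied: 900, admitted: 180 } }
  ];

  const rate = g => g.admitted / g.applied;

  for (const d of departments) {
    // Within each department, women's admission rate is higher than men's.
    console.log(d.name, 'men:', rate(d.men).toFixed(2), 'women:', rate(d.women).toFixed(2));
  }

  const total = sex => departments.reduce((acc, d) => ({
    applied: acc.applied + d[sex].applied,
    admitted: acc.admitted + d[sex].admitted
  }), { applied: 0, admitted: 0 });

  // Aggregated: men 0.53 vs. women 0.25, because most women applied to the
  // department with the lower admission rate.
  console.log('overall men:', rate(total('men')).toFixed(2), 'women:', rate(total('women')).toFixed(2));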

Likewise, an A/B test might work for some traffic segments and not for others, leading to an inconclusive overall result.

You can reveal hidden winners (traffic segments where an A/B test delivered results) with post result segmentation.

For instance, you can find if your website conversion rate improved specifically for new visitors or old ones; for paid traffic or organic traffic; or for desktop traffic or mobile traffic.

The analysis can help you identify segments that have the most potential. For example, your inconclusive A/B test might have increased conversions for “returning visitors.” You can run a new (or the same old) test targeting only the returning visitors.


Nonetheless, it’s essential to observe the number of visitors for each segment. The conversion rate and other data points for different segments can only be trusted if the individual segment traffic is large enough.

Tracking the Right Metric(s)

The effectiveness of an A/B test’s result depends largely on the metric you’re tracking.

A lot of times, A/B tests aim at improving only micro-conversions for a website. Mostly, that’s either because the test is carried out at a beginning stage of a conversion funnel or on less-critical web pages. Such tests do not track changes in a website’s macro conversions, and fail to notice any rise in the bottom-line (sales/revenue).

When your A/B test is inconclusive, you need to check if you’re optimizing for the correct metric. If multiple metrics are involved, you need to analyze all of them individually.

Let's suppose you run an eCommerce store. You create a variation of your product description page that mentions "free shipping," with the objective of increasing add-to-cart actions (a micro conversion). You A/B test the variation against the control page, which gives no information on shipping. To your surprise, the test couldn't come up with a clear winner. Now, you need to see whether the variation boosted your revenue (macro conversion) or not. If it did, the reason can be simple: the "free shipping" variation might have led only the users with high purchase intent to the checkout page, thus increasing the number of conversions.

If you realize that you weren’t tracking the most relevant metric with your A/B test, you need to edit the test with new goals. With new metrics in place, you can run the test for a while longer, and find improvements.

It’s advisable to keep your eyes on both micro and macro conversions.


Analyzing Visitors’ Behavior

Using on-site analysis tools, you can uncover a lot of insights which plain data just can’t offer. With the help of heatmaps/scrollmaps and visitor recordings, you can observe the behavior of your users (A/B test participants) and find probable causes that led to an inconclusive test.

Heatmaps can tell you if the element you're testing is going unnoticed by most users. For instance, if you're testing a variation of a CTA button that lies far below the fold, heatmaps/scrollmaps can show how many users are actually reaching the CTA button. An A/B test might be inconclusive if only a handful of users reach it.

Here’s how a scroll map looks:

Scroll Map - VWO Pricing Page

In the same case, visitor recordings can show you how users are interacting with the content and elements above the CTA. With high engagement above the CTA, users might have already made up their mind about their next action (a conversion or an exit). Hence, any changes in the CTA would not affect users and would result in an unsuccessful A/B test.

Apart from giving insights on specific pages, visitor recordings can help you understand user behavior across your entire website (or, conversion funnel). You can learn how critical the page on which you’re testing is in your conversion funnel. Consider a travel website where users can find holiday destinations using a search box and a drop-down navigation bar. An A/B test on the navigation bar will only work if users are actually engaging with it. Visitor recordings can reveal if users are finding the bar friendly and engaging. If the bar itself is too complex, all variations of it can fail to influence users.

Double Checking Your Hypothesis

Whenever an A/B test fails to provide a result, fingers invariably point to the hypothesis associated with it.

With an inconclusive A/B test, the first thing to check is the credibility of the test hypothesis.

Start with reviewing the basis of your hypothesis. Ideally, every test hypothesis should be backed by either website data analysis or user feedback. If that's not the case, you need to backtrack and validate your hypothesis with either of the two methods.

When your hypothesis is, in fact, supported by website data or feedback, you need to assess whether your variation closely reflects it. You can also take the help of on-site analysis tools and find ways to improve your variations.

Sample website data that can be used to create a hypothesis (Source)

Here's an example: Let's suppose you have a form on your website, and data analysis tells you that a majority of users drop off on the form. You hypothesize that reducing friction on the form will increase submissions. For that, you cut down the number of form fields and run an A/B test. Now, if the test remains inconclusive, you need to see whether you've actually removed the friction-inducing form fields. Form analysis can help you find exactly those form fields that lead to the majority of drop-offs.

Reviewing the Variations

One of the biggest reasons A/B tests remain inconclusive is that the difference between test variations is minuscule.

Now, I know, there are numerous case studies boasting double/triple-digit improvement in conversion rate by just “changing button color.” But what we don’t see are all the tests that fail to achieve the same feat. There probably are tens/hundreds of such failed tests for every single winning test.

For instance, Groove (helpdesk software) ran six different A/B tests with trivial changes. All of them proved to be inconclusive. Have a look:

Among them: a CTA button color change and a CTA text change.

Keeping this in mind, you need to go through your test variations and see if they really have noticeable changes.

If you’re testing for minor elements, you need to start being more radical. Radical or bold A/B tests are usually accompanied by strong hypotheses, tending to deliver results more often.

(Interestingly, testing radical changes is also advisable when you have a low traffic website.)

Deriving Further Learnings from the Tests

So you’ve finished a thorough analysis of your inconclusive A/B test using the above-mentioned points. You now know what went wrong and where you need to improve. But, there’s more.

You also get to know about the elements that (possibly) don’t influence users for conversions.

If your inconclusive test had no hidden winners, you tracked the correct metrics, your hypothesis was spot-on, and your variations were disparate enough, you can safely assume that the element you tested just didn't matter to your users. You can recognize that the element is not high on your criticality list.

This will help you create a priority list of elements for your future A/B testing.

When A/B Tests Give Negative Results

A negative result for an A/B test means that the control beat the variation. Even with a failed test, you can gain insights and conduct future tests effectively.

Finding What Went Wrong

There could be many reasons why your A/B test returned a negative result; a wrong hypothesis or a poorly executed variation are among them.

A negative result will make you question the test hypothesis. Did you follow a data-driven approach to come up with the hypothesis? Did you blindly follow a “best practice?”

Unbounce highlights a few cases where A/B tests performed against “common expectations.”

Example: "Privacy assurance with form" best practice failed

These tests again emphasize the importance of a data-driven process behind A/B testing and CRO. A negative A/B test result can prove to be a wake-up call for practicing the same.

Knowing Your Users’ Preference

Negative A/B test results let you understand your users’ preferences better. Specifically, you get to know your users’ dislikes (in the form of the changes you made to the losing variation).

Since you know what your users don’t like with your website, you can build on hypotheses about what they might like. In other words, you can use your negative test results to create better tests in the future.

Let’s talk about the Unbounce example used in the point above. The A/B test was performed on a form, where the variation flaunted privacy assurance, saying “100% privacy – we will never spam you.” The variation couldn’t beat the control — it reduced conversions by 17.80%. Upon analyzing the result, it was deduced that users didn’t like the mention of the word “spam.” Knowing what the users hated, the next test was run with a different variation. The form still had privacy assurance but this time it read “We guarantee 100% privacy. Your information will not be shared.” (No mention of the dreaded “spam” word.) This time the result changed — the variation ended up increasing signups by 19.47%.

Learning from a failed A/B test used for a win

What’s Your Take?

How often do you encounter failed A/B tests? We’d love to know your thoughts on how to tackle them. Post them in the comments section below.


The post A Definitive Guide to Converting Failed A/B Tests Into Wins appeared first on VWO Blog.


Why Banner Blindness Shouldn’t Scare You

Let me present to you one of the most painful facts from the online advertising industry.

Web users almost never look at anything that looks like an advertisement!

In fact, you are more likely to survive a plane crash than click on a banner ad.

As the Internet has evolved over time, web users have become increasingly indifferent to online ads.

And the reason behind this behavior? Banner Blindness.

This post introduces you to Banner Blindness and its roots, along with ways to defeat it.

What is Banner Blindness?

It is a phenomenon where website visitors consciously or subconsciously ignore banner ads or any other banner-like elements on a website.

Benway and Lane coined the term “Banner Blindness” in 1998 after they conducted a study on website usability. They found that any information provided through external ad banners and internal navigational banners on a webpage was being overlooked by users. Moreover, users ignored the banners irrespective of the banners’ placement on the webpage. The study concluded that the traditional practice of making large, colorful and flashy banners had little effect in capturing web users’ attention.

It’s noteworthy that the issue of banner blindness has escalated greatly with time.

While the first ever banner ad on the Internet had a click-through rate of 44% (wow!), the current banner ads have a dismal click-through rate of 0.1%.

Interestingly, Banner Blindness is not just limited to the online world. We can find instances of its real-life occurrences, too. For example, during the 2006 elections in Florida, 13% of voters couldn’t cast their votes for their preferred candidate because of a poorly-designed ballot!

Banner Blindness Statistics

Here are some statistics that illustrate the gravity of Banner Blindness as an issue:

Why Does Banner Blindness Exist?

When web users scan or read through a web page, they only look at information which is relevant to them. They tune out everything else that doesn’t provide them with what they need.

This tendency has developed over time, as the volume of (irrelevant) ads has grown manifold.

Today, websites are bombarding their visitors with a ridiculous number of ads. In the U.S. alone, over 5.3 trillion display ads were served to users in 2012. This means that a typical web user is served more than 1,700 online advertisements every month!
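(As a rough sanity check, and assuming roughly 250 million U.S. web users in 2012, which is an assumption rather than a figure from the study: 5.3 trillion ads divided by 250 million users and 12 months works out to about 1,770 ads per user per month, consistent with the figure above.)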

With so many ads constantly encroaching their web space, users have learned to focus just on the information pertinent to them.

Users have mastered the art of finding value amid clutter.

Research by Lapa, too, suggested that web users learn the structure of a web page very quickly, allowing them to locate useful content faster and avoid ad banners.

Banner Blindness and the Signal Detection Theory

Brandt Dainow, from ThinkMetrics, expertly links Banner Blindness to Signal Detection Theory.

Signal Detection Theory describes how humans can distinguish important signals from noise in a cluttered environment. For example, even at a noisy party, you are able to tune out all other voices and sounds so that you can listen to the person speaking directly to you.

In the same manner, people visiting a web page are able to tune out unnecessary content (ads and other elements) so that they can go through just the information they need.

How Users Browse Through a Web Page

With the help of multiple eye-tracking studies and click map reports, we have learned a lot about the way users read a web page.

Users don’t fully read a web page. They simply scan through it.

The F-shaped Reading Pattern:

The eye-tracking study by Nielsen revealed that users mostly navigate on a web page in an F-shaped pattern.

Users first read the start of the content, at the top of a web page, in a horizontal movement. Next, they move down and read in a second horizontal movement (shorter than the first). Finally, they scan the rest of the web page’s left side vertically downwards.

The right side of a web page is largely ignored by users. And the right side is where most display ads are placed!

Here’s a screenshot from Nielsen’s study that highlights the ads on web pages using a green box. The image clearly shows how users didn’t fixate on the ads.

Banner blindness eye-tracking map

The point is further supported by research on text advertisements conducted at Wichita State University. The finding of the research says, “Users tend to miss information in text ads on the right-side of the page more often than in text ads at the top of the page.”

Add-on: Because some websites serve their content alongside a lot of superfluous ads and elements, their overall user experience suffers. Web browsers are treating this as an opportunity to win over users by offering ways to improve the browsing experience. Most browsers have long allowed users to install ad-blocking extensions. Now, they’ve gone a step further by providing a “reader view” mode. Once selected, the option transforms the web page into a plain-text version, letting users view just the information they need from the page.

This is how Mozilla’s Firefox does it.

Reader view mode in browser

Banner Blindness in the Mobile Era

The mobile channel is a huge contributor to a lot of websites’ traffic. It has even become the top source of traffic for some websites.

The mobile application market, too, has reached a considerable number of users.

As a result, mobile has emerged as one of the hottest properties for displaying banner ads.

But wait.

Mobile ads, too, suffer from Banner Blindness!

Here’s how.

First of all, there are not many format choices for banner ads on mobile. The most popular banner dimension, 320×50 pixels, covers 82% of all mobile banner ads. This banner is mostly placed at the top or bottom of the mobile screen. Since it does not intrude on the main content that users read, its presence is easily overlooked.

Secondly, mobile users spend a substantial amount of time browsing websites and applications on the go. During this time, users are even more focused on reading the main content (and ignoring the ads).

Concern Over Banner Blindness Studies

Many of the studies conducted to prove the existence of Banner Blindness do so on the basis of indirect evidence that participants don’t remember ads. A study in Applied Cognitive Psychology, titled “Is Banner Blindness Genuine? Eye Tracking Internet Text Advertising,” raised doubts over this methodology.

The research argued that “one should be careful before concluding that banners have not been looked at on the basis of users’ memory performance.”

Although the research’s eye-tracking results confirmed that 64% of the text ads included in the research were overlooked by participants, 82% of the participants still fixated on at least one of the text ads.

However, participants often couldn’t recall an ad they had fixated on if its content was incongruous with the web page’s main content.

This highlights the importance of having ads that ‘go’ with the web page on which they’re placed.

6 Ways to Beat Banner Blindness

We’ve seen how Banner Blindness negatively affects the performance of our online ads. But still, there are tricks and techniques that can minimize Banner Blindness and make our ads stand out in front of users.

Let’s take a look at them.

#1 Ad Placement

Place your ads above the fold on a web page to gain more attention from users.

The above-the-fold content works better than below-the-fold content in terms of visibility ratio, time spent, and time to notice.

A study by Infolinks found that 156% more users read the above-the-fold content as compared to the below-the-fold content.

However, the study also found that leaderboard ads, placed at the very top of a web page, aren’t always the best performers. An ad placed at the bottom of the screen (but still above the fold) was noticed 225% more quickly by users.

Related Post: Is Above the Fold Really Dead?

#2 Native Ads

First things first. What is Native Advertising?

Native Advertising is the practice of designing and presenting ads in the same form and function as the web page on which they appear.

The ads match the host page’s ‘native’, or original, look and feel.

Native ads provide greater context to users, and generally have higher visibility.

There are various types of native ads, of which search ads, in-feed ads, and sponsored content on websites are the most popular.

In particular, native in-feed ads on social media platforms currently offer a much higher click-through rate than other ad units.

Here are examples from Facebook and Twitter.

FB native ad
A native ad on Facebook

twitter native ad
A native ad on Twitter

#3 Behavioral Ads

The Banner Blindness studies mentioned above provide us with some additional information on users’ perception of ads.

80% of users felt that the last ad they saw was irrelevant to them.

Less than 3% of users believed that the ads they saw gave more context to the brand/product the ads promoted.

Ads can get greater visibility if they relate well with users’ interests.

Behavioral ads offer ad content to users based on their interests and preferences. These ads can be served to users on social media as well as the conventional web.

Within behavioral targeting, there is another advertising practice, “retargeting,” that offers an even higher click-through rate.

In retargeting, users are served with ads based on their history of actions on the Internet.

For instance, when users browse products on amazon.com and leave without converting, a retargeted ad displaying the same products can follow them on other websites, prompting them to return to amazon.com.
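To make the mechanism a little more concrete, here is a deliberately simplified, hypothetical sketch in JavaScript. The function names and storage scheme are made up for illustration; real retargeting relies on ad-network pixels, third-party cookies, and server-side audience lists rather than a few lines of front-end code (and localStorage, used below, only works within a single site).

// Hypothetical sketch of the idea behind retargeting (not a production setup).
// On a product page: remember which product the visitor looked at.
function rememberViewedProduct(productId) {
  const viewed = JSON.parse(localStorage.getItem('viewedProducts') || '[]');
  if (!viewed.includes(productId)) viewed.push(productId);
  localStorage.setItem('viewedProducts', JSON.stringify(viewed));
}

// Later, in an ad slot: pick a creative based on what was viewed.
function pickAdCreative(defaultCreative) {
  const viewed = JSON.parse(localStorage.getItem('viewedProducts') || '[]');
  // Show an ad for the last product the visitor viewed, if any.
  return viewed.length ? 'ad-for-' + viewed[viewed.length - 1] : defaultCreative;
}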

Related Post: Retargeting Tools and Tips to Skyrocket Your Conversion Rate

#4 Ad Design

When your ads are not native, it is important to give them a highlighted presence on the web page. You can do that by tweaking their design.

Contrast

Ads that have sufficient contrast with the rest of the web page have a higher chance of getting noticed by users.

First, you should know the color schemes of the websites that host your ads. Then, choose ad colors that match your brand AND contrast with the host sites’ colors.

When host sites are light-colored, use dark colors for your ads. Similarly, use light colors for your ads when the host sites are dark.
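If you prefer to check contrast numerically rather than by eye, the WCAG contrast-ratio formula is a handy yardstick. Below is a minimal JavaScript sketch; the example colors are made up, and WCAG recommends a ratio of at least 4.5:1 for body text.

// Relative luminance of an sRGB color given as [r, g, b] in the 0-255 range.
function relativeLuminance([r, g, b]) {
  const toLinear = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  const [R, G, B] = [r, g, b].map(toLinear);
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// WCAG contrast ratio between two colors (always >= 1).
function contrastRatio(colorA, colorB) {
  const [hi, lo] = [relativeLuminance(colorA), relativeLuminance(colorB)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Example: a dark ad background against a near-white host page (hypothetical colors).
console.log(contrastRatio([34, 34, 34], [250, 250, 250]).toFixed(1)); // ~15.3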

CTA

Include a prominent Call-to-Action (CTA) button in your ad copy (if the goal of your ad is a conversion).

An attention-grabbing CTA will make users fixate on the button and then on the rest of the ad.

Ideally, your CTA button should have ample blank space surrounding it, so users can identify it easily. The color of the CTA button, too, should contrast strongly with the rest of the ad copy.

Just like how your ad should stand out on a web page, your CTA should stand out within your ad.

Below is an example of a clean ad with a prominent CTA.

CTA example

Source

Directional Cues

Direct your users to your banners using visual cues.

Like this.

directional cue

Source

Another great example of a directional cue is a picture of a human face looking in a specific direction.

It is human nature to follow the gaze of other humans. Be it real humans or pictures of humans, we always try to find out where they are looking.

If you include human faces as directional cues in your ad design, users are more likely to interact with your ad.

Here’s an example.

Directional cue

Related Resource: Dutch Major Uses Directional Cues to Improve CTA’s Clickthrough Rate

#5 Innovative Ad Types

Welcome-page ads

These ads appear before the host website loads for a user.

Some of these ads can be closed by the user to move on to the host site. Others make users wait for a certain amount of time before the host website opens automatically.

Forbes.com uses the latter of the two, making users spend at least three seconds on its ad page.

Welcome page ad

Welcome-page ads just might be our biggest weapon against Banner Blindness.

Website-skin ads

These ads cover the entire background of a website.

Furthermore, as the ads seem like they belong to the host website, users associate the host website’s brand value and trust factor with the advertiser.

Find below an example from IMDB.com.

Website skin ad

Source

#6 A/B Testing

With so many best practices for ad design (some of them mentioned here), it is impossible to make an ad that incorporates them all.

Additionally, it is equally difficult to know beforehand which ‘best practice’ is actually going to help an ad and which one will prove to be a dud.

The best way to go about it is A/B Testing.

You can make multiple versions of an ad and test them against each other to determine which version works best for you.
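For a rough idea of what determining the best version involves statistically, here is a minimal two-proportion z-test sketch in JavaScript on made-up click and impression numbers. Testing tools handle this math for you; the function below is only an illustration, not the exact method any particular tool uses.

// Minimal two-proportion z-test for comparing two ad versions.
// The click and impression numbers below are hypothetical.
function zTest(clicksA, impressionsA, clicksB, impressionsB) {
  const pA = clicksA / impressionsA;
  const pB = clicksB / impressionsB;
  const pPooled = (clicksA + clicksB) / (impressionsA + impressionsB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / impressionsA + 1 / impressionsB));
  return (pB - pA) / se; // |z| above ~1.96 roughly corresponds to 95% confidence
}

console.log(zTest(90, 60000, 130, 60000).toFixed(2)); // ~2.70, so version B likely performs better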

Online Advertisement: A Branding Practice

Sure, Banner Blindness makes users avoid fixating and clicking on online ads, but there are still other ways in which your ads can provide value to you.

Apart from initiating conversions, your ads should also spread awareness about your brand.

A study from the University of Chicago, “Banner Ads Work — Even If You Don’t Notice Them At All,” suggested that even the mere presence of your ads on a web page can have a positive effect on users. The research says, “regardless of measured click-through rates, banner ads may still create a favorable attitude towards the ad due to repeated exposure.”

Let’s continue with the noisy party example from earlier.

You tune out all other noises from the party to only listen to the person speaking directly to you. Yet, when other people from the party mention your name (even in their own conversations), you immediately notice and acknowledge it.

Here again, we find a connection with the Signal Detection Theory.

We still identify relevant information within the noise we have tuned out. In the same manner, web users process, at some level, even those ads they’ve ignored.

Your ads, even when unnoticed, can help effect a latent conversion.

View-through conversion rate is a great metric for measuring the effect an ad has on making users convert in their follow-up encounters with a brand.
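As commonly defined, it is simply the number of conversions from users who saw (but did not click) an ad, divided by the number of ad impressions. A tiny JavaScript example with hypothetical numbers:

// View-through conversion rate: conversions from users who saw (but didn't click)
// the ad, divided by ad impressions. The numbers below are hypothetical.
const viewThroughConversions = 45;
const adImpressions = 100000;
console.log(((viewThroughConversions / adImpressions) * 100).toFixed(3) + '%'); // 0.045%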

Conclusion

Banner Blindness is the reason why online ads get minimal interaction with web users. However, you can beat it by offering relevant ads to users and placing the ads better.

Also, online ads help in building brand recall along with attracting conversions. Therefore, you shouldn’t be judging the effectiveness of ads through their click-through rate alone.

If I missed anything important let me know in the comments section below.

The post Why Banner Blindness Shouldn’t Scare You appeared first on VWO Blog.



Success Kid Ran an A/B Test

We at Wingify spend a lot of our time running A/B tests to increase conversions for ourselves and our customers. But oftentimes, we find ourselves wondering about the deeper questions of life. What would happen if Success Kid ran an A/B test? What if overly attached girlfriend got to know about personalization? What if Bad Luck Brian got into content marketing?

And answering these questions is no easy task. They need deep research. So we spent day after day on Meme Generator and Know Your Meme learning more about our characters. Presented below are the findings from our extensive research in a slightly different kind of report.

Success Kid

Success Kid

First World Problems

First World Problems

Overly Attached Girlfriend

Overly attached girlfriend

Drunk Baby

Drunk Baby

Scumbag Steve

Scumbag Steve

Bad Luck Brian

Bad Luck Brian

The Most Interesting Man in the World

The most interesting man in the world

The meme from Taken — not sure what it’s called

Taken meme

Conspiracy Keanu

Conspiracy Keanu

Good Guy Greg

Good Guy Greg

Mr Bean

Mr Bean

Photogenic Guy

Photogenic Guy

We got a…

We got a badass over here

Kill Yourself

Don't use VWO? Kill yourself.

(Originally written for Medium. Reproduced here with small edits.)

The post Success Kid Ran an A/B Test appeared first on VWO Blog.



Split Testing between Standard Search Box and Drop-Down Search Increased Leads by 57.25%

The Company

Casa Mineira is a real estate company operating in Brazil. They have eight physical offices spread across different locations in Belo Horizonte, one of Brazil’s largest cities. Apart from a strong physical presence in the city, they also have a website to attract customers online.

The original homepage of the website had a neat and somewhat minimalist design above the fold. The headline, pointing toward a search box, asked people to find a property in Belo Horizonte. This is how the search box on their original homepage looked:

Original homepage of Casa Mineira with standard search box

To improve the conversions of their website, they hired Supersonic, a CRO consultancy in Brazil. Supersonic started by running email surveys, on-site feedback polls (using Qualaroo), and exit surveys to really understand the visitors. One thing that stood out prominently was that visitors needed to perform their search quickly and easily.

Rafael at Supersonic decided to test a variation and see the effect it would have on conversions. He replaced the standard search box with two boxes containing drop-down menus. The first box had a drop-down list labelled Type (of apartment), and the second consisted of locations to choose from. Since Casa Mineira operates in one city and all major locations could easily be covered in a drop-down menu, they decided to put this new search bar to the test using Visual Website Optimizer.

This is how the search box on the variation looked:

variation_drop_down_search_box

The Test

A split URL test was set up, and close to 7,500 visitors became part of the test. The hypothesis was that increasing the usability of the search box by giving users clear choices would lead to more people submitting their email IDs to be contacted by one of the brokers from Casa Mineira. The conversion goal tracked in VWO was the number of emails collected from each variation.

The Result

The variation outperformed the original homepage and brought the company 57.25% more leads. Here’s a quick comparison image showing the search box in the original and the variation side by side.

Comparison image - Casa Mineira split test
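For reference, “more leads” here is a relative lift: the variation’s conversion rate minus the control’s, divided by the control’s. The conversion rates in the snippet below are hypothetical and only illustrate how such a figure is computed.

// Relative lift of a variation over the control (conversion rates are hypothetical).
function relativeLift(controlRate, variationRate) {
  return ((variationRate - controlRate) / controlRate) * 100;
}

console.log(relativeLift(0.04, 0.0629).toFixed(2) + '%'); // 57.25%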

Why Did the Search Box with Drop-Down Menus Win?

The drop-down search gave visitors a clear path of action

With a standard search box, visitors had to be sure about what they were looking for. Since the scope of a standard search box is limitless, they could type in anything to describe what they actually wanted. By covering all the locations and property types in the drop-down search box, the website made it easier for people to choose from a menu. Additionally, this design avoided typos, and prevented users from typing in something and getting no results because their keywords didn’t match any listing on the website.

The drop-down made it possible for visitors to search for various combinations of apartment types and locations

In the variation, visitors could select multiple locations and apartment types in one search. In the standard search, where visitors had to type keywords, it was unclear whether users could type in multiple locations at once, or how the search would respond if they wanted to look for more than one type of apartment in different locations. By selecting one or more choices from the drop-down menus, visitors’ expectations were set correctly.

Let’s Talk!

What do you think about the new search box on the Casa Mineira website? And how do you think they can optimize their home page further? Share your views in the comments section and together let’s make the web more optimized!

The post Split Testing between Standard Search Box and Drop-Down Search Increased Leads by 57.25% appeared first on VWO Blog.

