Tag Archives: a/b testing


Expert Email Design Tips From Klaviyo’s Ecommerce Summit, Part Two


Welcome! For those of you just tuning in: Last week I attended Klaviyo: BOS, a two-day summit focused on growth tactics and business strategy for online merchants and ecommerce brands. Session topics ranged from Facebook Messenger bots and segmentation to email design and marketing automation. I took a ton of very squiggly, sometimes illegible notes and thought it would be a shame to keep them on paper, so I sat down and started writing a blog post titled “Expert SEO and CRO Tips From Klaviyo’s Ecommerce Summit.” I was at about 2,000 words when I had to take a break, so here…



Discover How Online Marketers Grade Top Performing A/B Testing Tools

We know that marketers try different A/B testing tools with the intent of discovering the one that works best. This repeated trial-and-error method wastes valuable time and energy, because the search for the perfect A/B testing tool almost never ends.

To help all such marketers, G2 Crowd has come up with an exhaustive report comparing some of the top-performing A/B testing tools out there.


And we are excited to tell you that VWO is ranked as one of the top performers in the G2 Crowd A/B Testing Report.

Isn’t that great?

VWO has been the tool of choice for more than 5,000 businesses across the globe to seamlessly plan, manage, and execute their conversion optimization programs.

This report is based on more than 1,000 independent reviews from users of leading A/B testing tools. The comparison covers product features, quality of support, user satisfaction ratings, and other parameters, and the platforms are ranked solely on user satisfaction ratings (which factor in the number of reviews, market share, vendor size, and social impact).

The time for asking around for recommendations is over. No more need to try one A/B testing tool and compare it with another: the G2 Crowd report has done it all for you, and the results are out!

After you have read this report, you will be in a position to make an informed choice that will enable you to progress in your conversion rate optimization journey.

Come, grab your copy of the G2 Crowd A/B Testing report.


Now you can leave behind the trial-and-error search for the world’s best A/B testing platform and deploy the one that suits you best, based on objective, verified findings.


6 Easy Ways To Learn A/B Testing (Number 6 Is Our Favorite)

Have you always wanted to introduce A/B testing into your marketing skill set but are unsure of where to begin?

Do you think A/B testing is only for more technical marketers?

If so, you might be worried about nothing. A/B testing, also known as split-testing, is a common feature of almost every marketing tool these days.

Thankfully, many software products with built-in A/B testing functionality have made implementing A/B tests so easy that laypeople can learn to improve their marketing skills by using A/B tests.

To help you get up and running with your first A/B test campaign, here are 6 tools with built-in A/B testing features that are easy to implement for the average nontechnical person.

Let’s take a quick look at what A/B testing is.

What Is A/B Testing?

In this guide to A/B testing from VWO, it is defined as “comparing two versions of a webpage to see which one performs better.”

The reason why you would want to run an A/B test on your website is to improve conversions. For example, you can A/B test the product photos on your e-commerce website to see if models with beards increase conversions compared to models without beards.

As you can see, with A/B testing you can follow a process to steadily increase the number of website visitors who convert into customers. If done properly, you can be confident that the improvements you measure are real and repeatable, not just chance.

So now that you understand what A/B testing is and the potential benefits of doing A/B tests in your marketing, let’s look at 6 tools that make it easy to run your first A/B test.

  1. Google AdWords

Google AdWords may have been the first tool with built-in A/B testing, so it’s likely where most marketers launched their first A/B testing campaign.

As Google gets paid each time someone clicks one of its ads, it’s in Google’s best interests to help improve the quality of its ads. And to help you figure out which ads are the best, you can A/B test your ads by rotating them evenly to see which has a higher click-through rate (CTR).

To get started on A/B testing in AdWords, go to your campaign settings, click to expand the Ad rotation settings, and then select Do not optimize: Rotate ads indefinitely.

If you want Google to pick the winning ad, select the Optimize: Prefer best performing ads radio button instead. Still, it’s a good idea to have Google rotate the ads indefinitely so that you can manually pick a winner; this helps you make observations about why some ads perform better than others.


Next, make sure that you have at least 2 ads in each ad group, and then start collecting data.


Unfortunately, AdWords won’t tell you if your data is statistically significant, so you’ll need to enter the impressions and clicks each ad received into a tool like VWO’s A/B split test significance calculator to figure out which ad won.
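If you’d like to sanity-check the numbers yourself, the standard approach behind calculators like that is a two-proportion z-test on the two ads’ click-through rates. Here is a minimal Python sketch of that idea; the impression and click counts are made-up examples, and this is a simplified stand-in, not a replica of VWO’s calculator.

```python
# Minimal sketch: two-proportion z-test on ad click-through rates.
# The impression/click counts passed in below are hypothetical examples.
from math import sqrt
from statistics import NormalDist

def ctr_significance(clicks_a, impressions_a, clicks_b, impressions_b):
    p_a = clicks_a / impressions_a            # CTR of ad A
    p_b = clicks_b / impressions_b            # CTR of ad B
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / se                      # standardized difference
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return p_a, p_b, p_value

p_a, p_b, p_value = ctr_significance(120, 8000, 155, 8100)
print(f"CTR A: {p_a:.2%}  CTR B: {p_b:.2%}  p-value: {p_value:.3f}")
# A p-value below 0.05 is the usual bar for calling the CTR difference significant.
```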

2. Sumo

If you’re not yet collecting email addresses on your website, you should be.

Adding a pop-up to your website is a great way to grow your email list. One of the easiest ways to install a pop-up is with Sumo.com’s suite of free tools.

Its “List Builder” tool makes it easy to strategically add pop-ups to your website to collect email addresses. But what if your pop-ups aren’t converting well?

Fortunately, you can easily A/B test your pop-ups.

To gradually increase the number of email addresses, you can create variations with different text, colors, or calls to action.

Within Sumo, under List Builder,  click the Tests tab, and then create a new form:


Select the form for which you want to create a variation.

After creating the variation, Sumo rotates both versions of the pop-up and collects conversion data, which will be displayed in your dashboard:


Give your A/B test enough time to collect statistically significant data. After getting a clear winner, you can delete the losing pop-up and create a new pop-up to compete against the winner.
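How much time is “enough” depends mostly on your traffic and on how big a lift you are hoping to detect. If you want a rough, back-of-the-envelope estimate before you start, here is a small Python sketch of the standard sample-size calculation for comparing two conversion rates; the 3% baseline and 20% relative lift are assumptions you would swap for your own numbers.

```python
# Rough sample-size estimate per variation for a two-proportion comparison.
# Baseline rate, target lift, significance level, and power are assumptions.
from statistics import NormalDist

def visitors_per_variation(baseline, relative_lift, alpha=0.05, power=0.80):
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return round(n)

# A pop-up converting 3% of visitors, hoping to detect a 20% relative lift:
print(visitors_per_variation(baseline=0.03, relative_lift=0.20))  # roughly 14,000 visitors per variation
```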

3. Drip

Drip.com is marketing automation software that helps you send personalized emails at exactly the right time.

For example, if you want to send an abandoned cart email 30 minutes after your website visitor added a product to the cart but didn’t complete the purchase, you can create an Abandoned Cart campaign within Drip to send the email automatically.

But what happens if your recipient doesn’t open the email? That’s another missed opportunity.

So, to recover such customers, you want to make sure your abandoned cart email stands out in their inbox and gets opened. Fortunately, you can increase the likelihood of that with Drip’s built-in split test feature.

Within Drip, you have the ability to easily split test the subject line, “From” name, and/or delivery time of the emails in your campaign.

In the example below, you can see how easy it is to set up an A/B test of a subject line:


Next, enter an alternate subject line, and then Drip automatically rotates the subject lines in your abandoned cart email campaign:


Drip also tracks how many times the emails with each subject line were opened. After you’ve gathered a statistically significant amount of data, your dashboard shows the confidence level, that is, how likely you would be to get the same result if you kept the A/B test running.


After you’ve reached a 95% confidence level or higher, you can stop the losing variation and continue with the winning variation, or create a new A/B test to try and beat the winner.

4. Intercom

Next, we’ll look at the ways you can A/B test chat messages. Fortunately, Intercom makes it easy for you to do this.

Chat messages are a great way to engage your website visitors to increase your conversion rate or just get their email address so that you can market to them in the future.

You can think of a chat message the same way as greeting people when they walk into your brick-and-mortar store. It’s their first impression of you and your brand, so the quality of your greeting can make the difference between a purchase and a lost sale.

With most chat tools, you can send “proactive messages” to engage your website visitors. Examples of proactive messages are:

  • “Hello, I’m here to answer any questions you may have.”
  • “Can I help you find a product?”
  • “Do you have any questions about shipping?”

If your proactive message isn’t warm or engaging enough, the visitor may not reply and you may lose a chance to convert them into a customer.

With Intercom, you can A/B test your proactive messages to see which ones have a high open rate. Just create your greeting:

Then use the built-in A/B test feature to create a different greeting for your proactive message.

Intercom will then show each greeting 50% of the time and display the results of the A/B test in your message dashboard so that you can see which greeting has the higher open rate.


5. Title Experiments

Did you know that 80% of people who read a headline won’t read the rest of the blog post? This is why it’s so important to write great blog post titles.

But how do you know what’s considered a good title? Well, you can split-test your blog post titles to find out.

With a WordPress plug-in called Title Experiments, it’s easy to create 2 versions of titles for each of your blog posts.

Every time you publish a new blog post, just click Add New Title, and then you can write a second variation of your blog post title:


Title Experiments automatically A/B tests both variations, and then you can see how well each one is performing until you eventually pick a winner:


6. VWO

So far, I’ve shown you how to run A/B tests within third-party tools, but what about doing actual A/B tests on your website itself?

Increasing conversions by changing your website’s copy, colors, and layout is where the fun begins when it comes to A/B testing.

With VWO, you can create a hypothesis about how to improve website conversions, and then easily create a variant of your webpage by using its WYSIWYG editor to test against your current page (also known as the control).

The great thing about A/B testing with VWO is that you don’t have to be technical, so you can do it yourself without the need to hire a developer.

Get started by clicking Create on the A/B Tests page.


Edit the page you want to A/B test by using its WYSIWYG editor to create a variation to test against the control page:

From your VWO dashboard, you can view the results of the A/B test. You can see which variation resulted in more conversions and whether the data is statistically significant so that you can be confident of the results.


Just like the other tools mentioned above, VWO tells you when you’ve collected enough data to make a statistically significant decision about the results.

Conclusion

A/B testing isn’t as hard as it seems. It’s pretty easy to give A/B testing a try, thanks to the built-in features found in marketing software these days.

So if you’re ready to take the leap and want to run your first split test campaign, give one of the above-mentioned tools a try. I think you’ll find that it’s easier than you expected!

Over to You

Have you run A/B tests using the tools I just shared? Are there other tools with built-in A/B testing features that you think we should talk about?

It would be awesome to hear from you in the comments!


Know How Uptowork Brought in Visitor’s Trust and Reduced The Cart Abandonment Rate

Regardless of what product or service you are offering, one thing holds true for all ecommerce players: if visitors don’t trust you, they won’t buy from you. Trust plays a key role in increasing the conversion rate on your checkout page and in getting more revenue and more customers from your existing traffic base, and that happens only when your visitors trust your brand. Trust matters at every step of the user journey. If your target audience doesn’t trust your brand, they might not visit your website at all. And even if they land on your website, they might not purchase from you.

What happens when visitors don’t trust you?

  • Low conversion rate
  • High cart abandonment rate 
  • High bounce rate

“In eCommerce, everything hinges on trust. If they don’t trust you, they won’t buy from you.”

Jeremy Smith

So how do you earn the trust of your visitors and motivate them to buy your product?

Building trust is a long-term process, and it doesn’t happen overnight. However, there are some actionable tips worth giving a shot. Some time ago, we created this exhaustive list of tips for eCommerce brands. Among these, adding a trust seal to the checkout page to convince potential customers that the process is safe and secure can be a great option. A survey conducted by Matthew Niederberger on Actual Insights found that “61% of participants said they have at one time NOT completed a purchase because there were no trust logos present.”

What Is a Trust Seal/Trust Badge?

A trust seal, sometimes called a secure site seal, is a small badge displayed on a website, most often on store or payment pages, that signals to visitors that the site is legitimate and their transaction is secure. You’ve likely seen these badges even if you didn’t know what they were called.

Our client, Uptowork, saw great results by earning visitors’ trust with this approach. Let’s see how they did it.

Background: The Company

Uptowork is a career site and online resume-building platform. The platform is easy to use, fast, and professional. Uptowork targets all types of job seekers, especially those who struggle to build their resume in traditional text editors. You can always refer to their blog for quick resume tips. Most of the traffic coming to the Uptowork website is organic or comes through AdWords.

Investigating and Identifying the Issue

Although the organic channel was paying off well and bringing in substantial traffic, they wanted to increase the percentage of visitors who made a purchase and converted into customers, and to address a surprisingly high cart abandonment rate.


When they analyzed their visitor journey, they noticed that a lot of visitors were checking out the product and adding it to their carts, but not completing the purchase. This resulted in a high cart abandonment rate and a low conversion rate.

Earlier Approach

The Uptowork team tried making a couple of changes on the website and closely analyzed the GA data to see whether they worked.

  • They made some changes, but GA and other tools were not capable enough to give them all the answers.
  • They also did not A/B test them, so there was no direct comparison that could be made.

All this made them doubt the data they had.

Finding the Gap

The Uptowork team realized that there was a huge gap between what the brand wanted to convey and what visitors perceived. They understood that the one thing lacking was visitors’ trust in the website.

Keeping an Objective in Mind

With the objective of filling this trust gap and reducing the cart abandonment rate, the team began its research. During this research, it came across this article on the VWO blog, which includes actionable tips for building trust on an eCommerce website.

Key Idea

The key idea was to completely redesign the cart page and add a McAfee trust badge to it, conveying a sense of security to visitors.

Hypothesis

“We added a McAfee badge to our cart with the assumption that it will reduce the percentage of people leaving the cart. And it did.”

Based on their research, they came up with the hypothesis that adding a McAfee badge would gain visitors’ trust. They hoped that the badge would signal a secure payment gateway to visitors and lift the brand image, and thus reduce the cart abandonment rate and increase the conversion rate.

“While we were hoping for the badge to work, we had our doubts about how such a small change will make any impact.”

Implementing and Testing

An almost month-long test was run on their entire user base with the help of VWO’s A/B testing capability.

Control


Variation


Result

The results of this test aligned perfectly with the hypothesis. Adding the McAfee seal reduced the cart abandonment rate and increased the conversion rate by 1.27%.

Learning

“We were almost sure that such a small badge wouldn’t have any impact on our bottom line. If it wasn’t for the test we would just remove it and wonder what happened to our sales. VWO made it really easy to prepare the test and track the results.”

Rafał Romański

The team believed that visitors recognize this badge from other places, and it builds a sense of security.

“We aren’t a huge brand (yet!) and trust is still something we have to take care about. Using visual cues like that can bring that little extra reassurance we need.”

Rafał Romański

Final Thoughts

“We use VWO to test any visual or content changes that might impact our bottom line. It turns lengthy discussions about what should we do into easy to setup tests that bring results to the table, not opinions. I think this has been the biggest value we got out of using VWO (along with the hundreds of dollars we managed to save on mistakes we would’ve made without it!).”

Rafał Romański

When a small change inspired by a blog post can have such an impact on the conversion rate, you can just imagine the impact of a planned conversion rate optimization program for eCommerce.

“Trust comes from delivering everyday on what you promised as a manager, an employee and a company.”

Robert Hurley

The Wall Street Journal

Do you need some tips to optimize your eCommerce conversion rate? Drop us a line at sales@vwo.com, or get in touch with our services team.


Case Study: Getting consecutive +15% winning tests for software vendor, Frontline Solvers


The Art of Being Stupid – Why Testing Matters More Than Everything Else

Note: This is a guest article written by Tyler Hakes, the strategy director at Optimist, a full-service content marketing agency. He’s spent nearly 10 years helping agencies, startups, and corporate clients achieve sustainable growth through strategic content marketing and SEO. Any and all opinions expressed in the post are Tyler’s.

Almost 10 years ago, I got my first job in marketing.

I was right out of college, and I was eager to prove myself and light the world on fire.

Like most people in their early 20s, I was convinced that I knew everything. I thought I had all of the solutions to every problem. I was a marketing mastermind, of course, because I had managed to get a few hundred people to follow me on Twitter.

It didn’t take me long to learn that I didn’t quite have all of the answers. In fact, I had a lot to learn. And it became more important for me to understand what I don’t know and to learn rather than to feel like I already had the answers.

Since then, I’ve worked for agencies, corporations, and startups. As a freelancer and agency owner, I’ve done marketing for every kind of company imaginable—from custom hats to apartment rentals. I’ve put together dozens of content marketing strategies and written/published thousands of articles, ebooks, and landing pages.

In all that time, I’ve come to realize something really, really important.

I don’t know anything.

Sure, I have accumulated a lot of knowledge and skills in the digital marketing space. I understand, at a high level, how things work. And I know, directionally, what the best practices are for achieving results.

But when it comes to executing any particular tactic, writing a particular type of content, or advertising to a particular market, each scenario is a little different. What I think will work best is usually wrong.

With this realization in mind, I’ve developed a kind of manifesto. It’s a way to remind myself that it’s okay to not have all the answers. It’s okay to be wrong, as long as you commit to finding the right answer eventually. Embrace a testing mentality.

Assume You’re Wrong

The biggest challenge with having a testing mentality is accepting that you are almost always wrong.

Let me say this again: You’re wrong.

It can be difficult to swallow. But don’t take it personally. Don’t link your personal worth to your ability to guess which messaging will get the most clicks or which blog post will drive the most social engagement. That’s just silly.

This isn’t Mad Men. You’re not Don Draper. So, don’t spend a million bucks trying to come up with the best idea. We live in a digital age of data. We’re able to track, measure, and test anything and everything that we do in business. There should be no more guesswork.

And what we generally consider to be “conventional wisdom” about best practices when it comes to optimization is also generally wrong. (That’s why it’s called “conventional wisdom,” after all.)

So, just assume that whatever you think is “best” is probably wrong and that you’ll need to validate any idea you have against cold, hard data.

Rather than fight this, I’ve come to embrace it.

It’s become a driving force for my work and my business. I assume that I know nothing and that everything—anything—is open for testing. Test, fail and learn. In that order.

And instead of taking it personally, I just accept that it’s impossible for someone to know the right answer 100% of the time.

As such, it makes way more sense to defer to the data whenever possible.

Steal Shamelessly

Unfortunately, you can’t possibly test every single variable to determine the single best approach, messaging, targeting, or design.

But you can get a head start.

Begin any testing cycle by looking at companies that test and optimize regularly. Then, steal their findings. Rather than starting from square one, begin your own testing with their current best case—the design, ad, or content that they’ve found to be most successful.

You can do this in a number of ways.

  1. Look at crowd-sourced A/B or multivariate test communities like Behave.org.
  2. Find and read case studies on testing outcomes.
  3. Visit competitors’ websites and emulate what they’ve done.
  4. Use social media to uncover specific messaging/positioning/CTAs used by competitors.

For our work on content marketing, we begin any client engagement with an extensive research and competitive analysis process. It’s the foundation of our content marketing strategy—is what we already know working for competitors and other companies in the space?

We’re able to gain years (or decades) of knowledge in a matter of weeks. We avoid expensive, time-consuming, and frustrating trial and error by simply stealing what works and iterating on it from there.

Prove Yourself Right (Or Wrong)

Once you have learned to not internalize the results and found a base to start with, it’s time to test.

Depending on what it is you’re testing, you’ll want to generate dozens—or hundreds—of variations. Try different colors, placements, layouts, or strategies.

Of course, a tool like VWO will help you execute these tests quickly and measure the results.

Create an experiment sheet that allows you to track each experiment and its outcome. Remember to constantly challenge your own assumptions: assume you’re wrong and that you can come up with a variation that works better.

This kind of data-driven testing mentality applies not only to tactical tweaks or changes. You can assume a similar mentality for your entire strategy.

When we work with a new client on content marketing, we make a whole bunch of new assumptions.

Each piece of content that we create serves a strategic purpose within our larger framework. Because of this, we have a specific goal for that piece—to generate search traffic, to earn links, to generate social shares, and so on. And this is the benchmark that we use to measure our effectiveness.

So, we may begin with an idea about which kinds of content will best accomplish those goals.

But, in most cases, we have never created content in this particular market. We have never tried to build relationships within this particular community. We’re just guessing (per our past experience with other clients and other industries).

This means that what we really want to do is try what we think will work, get the results, and then incorporate that data to help us improve in the future. A lot of times, we’re wrong. If we didn’t adopt a testing mentality, we would just carry on being wrong.

Obviously, this is not ideal. It’s better to be wrong and to learn from that mistake than to be blind to your mistakes. This is why we apply a testing model to everything from our overall strategy to specific, tactical implementation—content flow, calls to action, outreach emails, and so on.

We want to achieve the best results we can, even if it means that we admit we were wrong.

Do It All Over Again

Think you’ve found the right answer? You’re probably wrong—again.

Any test is only as good as the variations that you’re considering. So, while you may have identified a clear winner of those that you’re considering, that doesn’t mean that you’ve objectively identified the best possible solution.

Whatever is working best now might perform only half as well as the true best case. And it’s just a matter of time until you hit on that variation.

It’s the pursuit of continuous improvement. It’s relentless.  

This is the foundational idea behind “growth hacking,” which is really just a data-driven, experimental approach to growth. It takes trial and error—over and over again—ad infinitum.

It’s why many software teams have embraced agile development: it allows for iterative progress and improvement rather than investing all of your time and resources into a single window of opportunity.

Testing isn’t just about making small tweaks. It’s about embracing a culture of continuous learning and improvement. It’s about the pursuit of truth, even when it makes you feel stupid.

And it all starts by admitting that you don’t have all the answers.


Your frequently asked conversion optimization questions, answered!


Got a question about conversion optimization?

Chances are, you’re not alone!

This Summer, WiderFunnel participated in several virtual events. And each one, from full-day summit to hour-long webinar, ended with a TON of great questions from all of you.

So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.

As you’ll notice, many experts and thought-leaders weighed in on your questions.

Now, without further introduction…

Your conversion optimization questions

Optimization Strategy

  1. What do you see as the most common mistake people make that has a negative effect on website conversion?
  2. What are the most important questions to ask in the Explore phase?
  3. Is there such a thing as too much testing and / or optimizing?

Personalization

  1. Do you get better results with personalization or A/B testing or any other methods you have in mind?
  2. Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
  3. With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
  4. How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

SEO versus CRO

  1. How do you avoid harming organic SEO when doing conversion optimization?

Getting Buy-in for Experimentation

  1. When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher ups or smaller wins?
  2. Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

CRO for Low Traffic Sites

  1. Do you have any suggestions for success with lower traffic websites?
  2. What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
  3. How far can I go with funnel optimization and testing when it comes to small local business?

Tips from an In-House Optimization Champion

  1. How do you get buy-in from major stakeholders, like your CEO, to go with a conversion optimization strategy?
  2. What has surprised you or stood out to you while doing CRO?

Optimization Across Industries

  1. Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?
  2. When you have a longer sales process, getting them to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
  3. You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
  4. Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
  5. For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
  6. Do you find that testing strategies differ cross-culturally?

Experiment Design & Setup

  1. How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
  2. I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?
  3. When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
  4. Is there a way to conclusively tell why a test lost or was inconclusive?
  5. How many visits do you need to get to statistically relevant data from any individual test?
  6. We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
  7. When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Testing and Technology

  1. There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

Your questions, answered

Q: What do you see as the most common mistake people make that has a negative effect on website conversion?

Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.

I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.

And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.

It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.


Q: What are the most important questions to ask in the Explore phase?

Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason). In it, we are looking for insights about what questions to ask and validate. We are trying to identify…

  • What are the barriers to conversion?
  • What are the motivational triggers for your audience?
  • Why are people buying from you?

And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.

Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.

Further reading:

  • Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process here.


Q: Is there such a thing as too much testing and / or optimizing?

Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website…they can’t improve forever. The question many marketers have is, how do I know how long to do this? Are there going to be diminishing returns? By putting in the same effort, will I get smaller and smaller results?

But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.

So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.

Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.

Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.


Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?

Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.

I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.

At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.

Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.


Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.

Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.

When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.

We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step by step process that makes sure we are only creating overhead where it’s necessary and will create value.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.


Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?

Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.

But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. And it’s practically impossible for a marketing department to create infinite marketing messages.

Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.

What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.


Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

A PANEL RESPONSE

André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.

Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.

John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common. Because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques will result.

Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.

One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.

The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.

David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most things that can be complex, I think that they are the most valuable, when you have a high ticket price or very high value, where that touch of personalization has a big impact.

With Hotjar, we’re much more volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. And having many common threads is as important as possible, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.


Q: How do you avoid harming organic SEO when doing conversion optimization?

Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they support optimization fully. They do not penalize companies for running A/B tests, provided the tests are set up properly and the company is using a proper tool.

On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.

If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.

Google looks at pogo-sticking, where people land on the SERP, find a result, and then return back to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.

Video Resource: Watch SEO Wizard Rand Fishkin’s talk from CTA Conf 2017, “Why We Can’t Do SEO without CRO”


Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?

Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under-the-radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.

This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.

A sample PIE prioritization analysis.

One of the three factors is Ease (the framework scores Potential, Importance, and Ease). An important aspect within Ease is political ease. So you want to look for areas that have political ease, which means there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, create momentum, and then you can start sharing that throughout the organization to build buy-in.
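To make that prioritization concrete, here is a tiny Python sketch that scores a few hypothetical test ideas on Potential, Importance, and Ease and ranks them. The ideas and the 1–10 scores are invented for illustration, and the simple equal-weight average is one reasonable way to combine the factors, not the only one.

```python
# Hypothetical PIE-style prioritization: score each idea 1-10 on
# Potential, Importance, and Ease, then rank by the average.
test_ideas = [
    ("Checkout trust messaging", 8, 9, 7),
    ("Homepage hero redesign",   9, 8, 3),  # politically sensitive page, so low Ease
    ("Pricing page CTA copy",    6, 7, 9),
]

def pie_score(potential, importance, ease):
    return (potential + importance + ease) / 3

for idea, p, i, e in sorted(test_ideas, key=lambda t: pie_score(*t[1:]), reverse=True):
    print(f"{idea:26s} PIE = {pie_score(p, i, e):.1f}")
```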

Further Reading: Marketers from ASICS’ global e-commerce team weigh in on evangelizing optimization at a global organization in this post, “A day in the life of an optimization champion”


Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

Nick So: Besides the obvious senior leadership and key decision-makers as you mention, we find getting buy-in from related departments like branding, marketing, design, copywriters and content managers, etc., can be very helpful.

Having these teams on board can not only help with the overall approval process, but also helps ensure winning tests and strategies are aligned with your overall business and marketing strategy.

You should also consider involving more tangentially related teams like customer support. Not only does this make them a part of the process and testing culture, but your customer-facing teams can also be a great source of business insights and test ideas!


Q: Do you have any suggestions for success with lower traffic websites?

Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.

We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where we could validate the insights individually through multiple tests.)

Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.

If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or guide the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However with this strategy, stakeholders have to be aware that increases in this secondary goal may not be tied directly to increases of the primary goal at the same rate.


Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?

Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.

But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand: what are the opportunities, the barriers your visitors might be facing, and drilling into more of their perspective. Then you’ll have a more powerful test setup.

You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.

In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?


Q: How far can I go with funnel optimization and testing when it comes to small local business?

A PANEL RESPONSE

David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.

John Ekman:

If you can’t test to significance, you should turn to qualitative research.

That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.

André Morys: Testing with too small a sample size is really dangerous because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming 70% uplift, when the numbers are 64 versus 27 conversions. And this is really dangerous because that result is bull sh*t.
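To see why such numbers are shaky, look at how wide the uncertainty is around conversion rates built on only a few dozen conversions. The sketch below is purely illustrative: the quote gives conversion counts but not traffic, so the 2,000 visitors per variation is an assumed figure, and the interval arithmetic is only a crude way to show the scale of the noise.

```python
# Illustration of small-sample uncertainty. Visitor counts are assumed,
# since the quote above mentions only the conversion counts (64 vs. 27).
from math import sqrt

def rate_ci(conversions, visitors, z=1.96):
    p = conversions / visitors
    margin = z * sqrt(p * (1 - p) / visitors)   # normal-approximation 95% interval
    return p - margin, p + margin

a_lo, a_hi = rate_ci(64, 2000)   # variation
b_lo, b_hi = rate_ci(27, 2000)   # control
print(f"variation: {a_lo:.2%} to {a_hi:.2%}")
print(f"control:   {b_lo:.2%} to {b_hi:.2%}")
# A crude bound on the relative uplift spans roughly a_lo/b_hi - 1 up to a_hi/b_lo - 1,
# i.e. anywhere from about +30% to several hundred percent. Quoting a single
# precise uplift figure from counts this small is meaningless.
print(f"plausible uplift: {a_lo/b_hi - 1:.0%} to {a_hi/b_lo - 1:.0%}")
```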

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.


Q: How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?

Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.

For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.

It really sells itself, though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as helping that overall strategy of making our display media spend, and all of our media spend, more targeted.

Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.


Q: What has surprised you or stood out to you while doing CRO?

Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. Things that we are all convinced we should change on our website, or that we should change in our messaging in general, we’ll test them and actually find out.

We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.

One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.

If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.

It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.

Case Study: Read the details of how Jamie’s company, weBoost, saw a 100% lift in year-over-year conversion rate with an optimization program.


Q: Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?

Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to

  1. Track your leads and opportunities to the variation,
  2. Then, track them through to the sale,
  3. And then look at whether average order value changes between the variations, which implies the quality of the leads.

Because it’s easy to measure lead volume between variations. But if lead quality changes, then that makes a big impact.
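
As a rough illustration of that tracking, here is a minimal sketch (column names and numbers are hypothetical) that rolls a lead export up by test variation so you can compare lead volume, close rate, and average order value side by side:

```python
import pandas as pd

# Hypothetical CRM export: one row per lead, tagged with the variation it saw.
leads = pd.DataFrame({
    "variation":   ["control", "control", "variant", "variant", "variant"],
    "closed_won":  [1, 0, 1, 1, 0],
    "order_value": [4800.0, 0.0, 9200.0, 5100.0, 0.0],
})

summary = leads.groupby("variation").agg(
    lead_volume=("closed_won", "size"),
    close_rate=("closed_won", "mean"),
    avg_order_value=("order_value", lambda v: v[v > 0].mean()),  # AOV of won deals only
)
print(summary)
```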

We actually have a case study about this with Magento. We asked, “Which of these calls-to-action is actually generating the most valuable leads?” and ran an experiment to find out. We tracked the leads all the way through to sale, which helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?

Case Study: Discover how Magento increased lead volume and lead quality in the full case study.

Back to Top

Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?

Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.

However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.

If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.
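
That net-benefit check can be a few lines of arithmetic once you have put a value on each goal. A sketch with assumed goal values and baseline volumes:

```python
# Assumed per-conversion values and baseline monthly volumes (for illustration only).
DEMO_VALUE, EBOOK_VALUE = 500.0, 20.0     # what a demo request / ebook lead is worth to you
base_demos, base_ebooks = 100, 800        # conversions per month before the test

# Scenario from above: demo requests +50%, ebook downloads -75%.
net_change = (base_demos * 0.50 * DEMO_VALUE) - (base_ebooks * 0.75 * EBOOK_VALUE)
print(f"Net change in goal value: {net_change:+,.0f} per month")
# With these assumptions: +25,000 - 12,000 = +13,000, so the tradeoff is net positive.
```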

Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.

Back to Top

Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?

Nick So: When we help a client determine conversion metrics…

…we always suggest following the money.

Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.

For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?

Back to Top

Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?

Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.

We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.

What we’ve found is that people are people: whether they’re shopping for a server, shopping for socks, or donating to third-world countries, they go through the same mental process in each case.

The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.

Most marketers spend most of their time thinking about their near-field competitors rather than about different industries, because it’s overwhelming to look at all of the other opportunities. But we are often able to look at an experience in a completely different way, because we look at it through the lens of a different industry. That is very powerful.

Back to Top

Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?

Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.

In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).

When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact plus the internal company benefits, and negotiate prioritization and scheduling between themselves.

I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.

If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).

Back to Top

 

Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?

Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, UK, Canada, Japan, Korea, Spain, Italy and for the Olympics store, which is itself a global e-commerce experience in one site!

There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.

Back to Top

Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?

Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.

And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. That’s why we’ve developed frameworks like the PIE Framework, which lets you prioritize ideas and test areas based on potential, importance, and ease: the potential for improvement, the importance to the business, and the ease of implementation. Sometimes these ratings are a little subjective, but the more data you have to back them up, the better your focus and effort will be in delivering results.
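
As a rough illustration (the ideas and scores are invented), PIE prioritization can be as simple as averaging the three ratings per idea and sorting:

```python
# Illustrative PIE scoring: each idea rated 1-10 for potential, importance, and ease.
ideas = [
    {"idea": "Rework checkout page copy", "potential": 8, "importance": 9, "ease": 7},
    {"idea": "Homepage hero redesign",    "potential": 9, "importance": 6, "ease": 3},
    {"idea": "Add trust badges to cart",  "potential": 5, "importance": 8, "ease": 9},
]

for idea in ideas:
    # A common way to combine the ratings is a simple average.
    idea["pie_score"] = round((idea["potential"] + idea["importance"] + idea["ease"]) / 3, 1)

for idea in sorted(ideas, key=lambda i: i["pie_score"], reverse=True):
    print(f'{idea["pie_score"]:>4}  {idea["idea"]}')
```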


Back to Top

Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?

Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.

There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. And if the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance, will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.

But you should be looking at one metric to tell you whether or not a test won.

Further Reading: Learn more about proper design of experiments in this blog post.

Back to Top

 

Q: When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.

Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.

So, run your tests a little bit longer and aim for statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. You should aim for that high level of statistical significance, and iron out that seasonality. And sometimes you’ll want to look at monthly seasonality as well, and retest questionable things within high and low urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.

Further Reading: You can’t make business decisions based on misleading A/B test results. Learn how to avoid the top 3 mistakes that make your A/B test results invalid in this post.

Back to Top

Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?

Chris Goward: Developing powerful hypotheses depends on having workable theories. Seeking to determine the “Why” behind the results is one of the most interesting parts of the work.

The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.

As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!

Back to Top

Q: How many visits do you need to get to statistically relevant data from any individual test?

Chris Goward: The number of visits is just one of the variables that determines statistical significance. The conversion rate of the Control and conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e. sample size), enough conversions, and the conversion rate delta is great enough.

Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!)

When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule of thumb, you should plan for smaller isolation tests to run longer, as their impact on conversion rate may be smaller and the test may require more conversions to reach confidence.
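
If you'd rather script the estimate than use a spreadsheet, here is a rough sketch based on the standard two-proportion sample-size formula. The baseline conversion rate, detectable lift, and weekly traffic are all assumed for the example, and the duration is rounded up to whole weeks to avoid day-of-week effects:

```python
import math

def visitors_per_variation(base_cr, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Rough per-variation sample size for a two-sided two-proportion test
    (normal approximation, ~95% significance, ~80% power)."""
    p1 = base_cr
    p2 = base_cr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    top = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(top / (p2 - p1) ** 2)

# Assumed inputs: 2% baseline conversion rate, 15% relative lift you hope to detect,
# 20,000 visitors per week split across two variations.
n = visitors_per_variation(base_cr=0.02, relative_lift=0.15)
weeks = math.ceil(n / (20_000 / 2))   # round up to whole weeks to smooth weekday effects
print(f"~{n:,} visitors per variation, i.e. roughly {weeks} full weeks of traffic")
```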

Larger, more drastic cluster changes would typically run for a shorter period of time, as they have more potential for a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t deter you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.

Often, people that are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and one variation is still not statistically significant over another, then your test may not ever yield a clear winner and you should revise your test plan or reduce the number of variations being tested.

Further Reading: Do you have to wait for each test to reach statistical significance? Learn more in this blog post: “The more tests, the better!” and other A/B testing myths, debunked

Back to Top

Q: We are new to optimization (had a few quick wins with A/B testing and working toward a geo targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?

Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.

Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.

If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.

Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’, stop that test, and move on?

Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:

  • You have other tests ready to run with the same traffic sources
  • The test results are showing high volatility or have stabilized
  • The test insights will be important for the organization

There’s an opportunity cost to every test. You could always be testing something else and need to constantly be asking whether this is the best test to be running now vs. the cost and potential benefit of the next test in your conversion strategy.

Back to Top

 

Q: There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

A PANEL RESPONSE

John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look in the minds of whoever is in front of the screen, I think these methods are quite dangerous.

Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff. A tradeoff between understanding the why and just getting growth. If you want to understand the why infinitely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few have enough traffic to multivariate test everything.

But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.

There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works…if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things to ask yourself: Well, I wonder why that won, and start from there.

The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.

David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap, that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.

Creating a unique customer experience is going to become more and more important.

If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.

Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere, and does everything differently. That isn’t fully ‘optimized’ and is more human.

There is a danger in optimization itself, if it becomes too optimized. If we eliminate the human aspect, we’re kind of screwed.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

What conversion optimization questions do you have?

Add your questions in the comments section below!

The post Your frequently asked conversion optimization questions, answered! appeared first on WiderFunnel Conversion Optimization.

View article:

Your frequently asked conversion optimization questions, answered!


Structured Approach To Testing Increased This Insurance Provider’s Conversions By 30%

CORGI HomePlan provides boiler and home cover insurance in Great Britain. It offers various insurance policies and an annual boiler service. Its main value proposition is that it promises “peace of mind” to customers. It guarantees that if anything goes wrong, it’ll be fixed quickly and won’t cost anything extra over the monthly payments.

Problem

CORGI’s core selling points were not being communicated clearly throughout the website. Insurance is a hyper-competitive industry, and most customers compare other providers before making a decision. After analyzing its data, CORGI saw an opportunity to improve conversions and reduce drop-offs at major points throughout the user journey. To help solve that problem, CORGI hired Worship Digital, a conversion optimization agency.

Observations

Lee Preston, a conversion optimization consultant at Worship Digital, analyzed CORGI’s existing Google Analytics data, conducted user testing and heuristic analysis, and used VWO to run surveys and scrollmaps. After conducting qualitative and quantitative analysis, Lee found that:

  • Users were skeptical of CORGI’s competition, believing they were not transparent enough. Part of CORGI’s value proposition is that it doesn’t have any hidden fees, so conveying this to users could help convince them to buy.
  • On analyzing the scrollmap results, it was found that only around a third of mobile users scrolled down enough to see the value proposition at the bottom of the product pages.
  • They ran surveys for users and asked, “Did you look elsewhere before visiting this site? (If so, where?)” More than 70% of respondents had looked elsewhere.
  • They ran another survey and asked users what they care about most; 18% of users said “fast service” while another 12% said “reliability”.

This is how CORGI’s home page originally looked:

corgi_original

Hypothesis

After compiling all these observations, Lee and his team distilled it down to one hypothesis:

CORGI’s core features were not being communicated properly. Displaying them more clearly on the home page, throughout the comparison journey, and during checkout could encourage more users to sign up rather than opt for a competitor.

Lee adds, “Throughout our user research with CORGI, we found that visitors weren’t fully exposed to the key selling points of the service. This information was available on different pages on the site, but was not present on the pages comprising the main conversion journey.”

Test

Worship Digital first decided to put this hypothesis to test on the home page.

“We hypothesized that adding a USP bar below the header would mean 100% of visitors would be exposed to these anxiety-reducing features, therefore, improving motivation and increasing the user conversion rate,” Lee said.

This is how the variation looked.

corgi_variation

Results

The variation performed better than the control across all devices and the majority of user types, increasing conversions by 30.9%.

“We were very happy that this A/B test validated our research-driven hypothesis. We loved how we didn’t have to buy some other tool for running heatmaps and scrollmaps for our visitor behavior experiment,” Lee added.

Next Steps

Conversion optimization is a continuous process at CORGI. Lee has been constantly running new experiments and gathering deep understanding about the insurance provider’s visitors. For the next phase of testing, he plans to:

  • Improve the usability of the product comparison feature.
  • Identify and fix leaks during the checkout process.
  • Make complex product pages easier to digest.


The post Structured Approach To Testing Increased This Insurance Provider’s Conversions By 30% appeared first on VWO Blog.

Original article – 

Structured Approach To Testing Increased This Insurance Provider’s Conversions By 30%

Data-Driven Optimization: How The Moneyball Method Can Deliver Increased Revenues

Whether your current ROI is something to brag about or something to worry about, the secret to making it shine lies in a 2011 award-winning movie starring Brad Pitt.

Do you remember the plot?

The manager of the downtrodden Oakland A’s meets a baseball-loving Yale economics graduate who maintains certain theories about how to assemble a winning team.

His unorthodox methods run contrary to scouting recommendations and are generated by computer analysis models.

Despite the ridicule from scoffers and naysayers, the geek proves his point. His data-driven successes may even have been the secret sauce, fueling Boston’s World Series title in 2004 (true story, and the movie is Moneyball).


What’s my point?

Being data-driven seemed like a geeks-only game, or a far reach for many, just a few years ago. Today, it’s time to get on the data-driven bandwagon…or get crushed by it.

Let’s briefly look at the situation and the cure.

Being Data-Driven: The Situation

Brand awareness, test-drive, churn, customer satisfaction, and take rate—these are essential nonfinancial metrics, says Mark Jeffery, adjunct professor at the Kellogg School of Management.

Throw in a few more—payback, internal rate of return, transaction conversion rate, and bounce rate—and you’re well on your way to mastering Jeffery’s 15 metric essentials.

Why should you care?

Because Mark echoes the assessment of his peers from other top schools of management:

Organizations that embrace marketing metrics and create a data-driven marketing culture have a competitive advantage that results in significantly better financial performance than that of their competitors. – Mark Jeffery.

You don’t believe in taking marketing and business growth advice from a guy who earned a Ph.D. in theoretical physics? Search “data-driven stats” for a look at the research. Data-centric methods are leading the pack.

Being Data-Driven: The Problem

If learning to leverage data can help the Red Sox win the World Series, why are most companies still struggling to get on board, more than a decade later?

There’s one little glitch in the movement. We’ve quickly moved from “available data” to “abundant data” to “BIG data.”

CMOs are swamped with information and are struggling to make sense of it all. It’s a matter of getting lost in the immensity of the forest and forgetting about the trees.

We want the fruits of a data-driven culture. We just aren’t sure where or how to pick them.

Data-Driven Marketing: The Cure

I’ve discovered that the answer to big data overload is hidden right in the problem, right there at the source.

Data is produced by scientific means. That’s why academics like Mark are the best interpreters of that data. They’re schooled in the scientific method.

That means I must either hire a data scientist or learn to approach the analytical part of business with the demeanor of a math major.

Turns out that it’s not that difficult to get started. This brings us to the most important aspect: the scientific approach to growth.

Scientific Method of Growth

You’re probably already familiar with the components of the scientific method. Here’s one way of describing it:

  1. Identify and observe a problem, then state it as a question.
  2. Research the topic and then develop a hypothesis that would answer the question.
  3. Create and run an experiment to test the hypothesis.
  4. Go over the findings to establish conclusions.
  5. Continue asking and continue testing.

    Scientific Method of Growth and Optimization

By focusing on one part of the puzzle at a time, neither the task nor the data will seem overwhelming. And because you are the one designing the experiment, you can control it.

Here’s an example of how to apply the scientific method to data-driven growth/optimization, as online enterprises would know it.

  1. Question: Say you have a product on your e-commerce site that’s not selling as well as you want. The category manager advises lowering the price. Is that a good idea?
  2. Hypothesis: Research tells you that similar products are selling at an average price that is about the same as yours. You hypothesize that lowering your price will increase sales.
  3. Test: You devise an A/B test that will offer the item at a lower price to half of your e-commerce visitors and at the same price to the other half. You run the test for one week.
  4. Conclusions: Results show that lowering the price did not significantly increase sales.
  5. Action: You create another hypothesis to explain the disappointing sales and test this hypothesis for accuracy.

A/B Testing

You may think that the above example is an oversimplification, but we’ve seen our clients at The Good make impressive gains by arriving at data-driven decisions based on experiments even less complicated.
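
To make the “Test” and “Conclusions” steps concrete, here is a minimal sketch of checking the pricing experiment for statistical significance with a two-proportion z-test; the visitor and conversion counts are made up for the example:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# Made-up week of data: current price (A) vs. lowered price (B), 10,000 visitors each.
z, p = two_proportion_z_test(conv_a=210, n_a=10_000, conv_b=228, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 here, so no significant lift from the lower price
```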

And the scientific methodology applies to companies both large and small, too. We’ve used the same approach with everyone from Xerox to Adobe.

Big data certainly is big, but it doesn’t have to be scary. Step-by-step analysis on fundamental questions followed by a data-driven optimization plan is enough to get you large gains.

The scientific approach to growth can be best implemented with a platform that is connected and comprehensive. Such a platform, which shows business performance on its goals, from one stage of the funnel to another, can help save a lot of time, effort, and money.

Conclusion

Businesses need to be data-driven in order to optimize for growth, and to achieve business success. The scientific method can help utilize data in the best possible ways to attain larger gains. Take A/B testing, for example. Smart A/B testing is more than just about testing random ideas. It is about following a scientific, data-driven approach. Follow the Moneyball method of data-driven testing and optimization, and you’ll be on your way to the World Series of increased revenues in no time.

Do you agree that a data-driven approach is a must for making your ROI shine? Share your thoughts and feedback in the comments section below.


The post Data-Driven Optimization: How The Moneyball Method Can Deliver Increased Revenues appeared first on VWO Blog.

Excerpt from: 

Data-Driven Optimization: How The Moneyball Method Can Deliver Increased Revenues

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test

Note: This marketing infographic is part of KlientBoost’s 25-part series. You can subscribe here to access the entire series of gifographics.


If you’ve ever tested your website, you’ve probably been in the unfortunate situation of running out of ideas on what to test.

But don’t worry – it happens to everybody.

That’s of course, unless you have a website testing plan.

That’s why KlientBoost has teamed up with VWO to bring to you a gifographic that provides a simple guide on knowing the what, how, and why when it comes to testing your website.

21-vwo-website-testing2

Setting Your Testing Goals

Like a New Year’s resolution to get fitter, if you don’t have any goals tied to your website testing plan, you may be doing plenty of work with little to show for it.

With your goals in place, you can focus on the website tests that will help you achieve those goals the fastest.

Testing a button color on your home page when you should be testing your checkout process is a sure sign that you are heading toward testing fatigue, or the disappointment of never wanting to run a test again.

But let’s take it one step further.

While it’s easy to improve click-through rates, or CTRs, and conversion rates, the true measure of a great website testing plan comes from its ability to increase revenue.

No optimization efforts matter if they don’t connect to increased revenue in some shape or form.

Whether you improve the site user experience, your website’s onboarding process, or get more conversions from your upsell thank you page, all those improvements compound into incremental revenue gains.

Lesson to be learned?

Don’t pop the cork on the champagne until you know that an improvement in the CTRs or conversion rates would also lead to increased revenue.

Start closest to the money when it comes to your A/B tests.

Knowing What to Test

When you know your goals, the next step is to figure out what to test.

You have two options here:

  1. Look at quantitative data like Google Analytics that show where your conversion bottlenecks may be.
  2. Or gather qualitative data with visitor behavior analysis, where your visitors can tell you the reasons why they’re not converting.

Both types of data should fall under your conversion research umbrella. In addition to this gifographic, we created another one, all around the topic of CRO research.

When you’ve done your research, you may find certain aspects of a page that you’d like to test. For inspiration, VWO has created The Complete Guide To A/B Testing – and in it, you’ll find some ideas to test once you’ve identified which page to test:

  • Headlines
  • Subheads
  • Paragraph Text
  • Testimonials
  • Call-to-Action text
  • Call-to-Action button
  • Links
  • Images
  • Content near the fold
  • Social proof
  • Media mentions
  • Awards and badges

As you can see, there are tons of opportunities and endless ideas to test when you decide what to test and in what order.

website-testing
A quick visual for what’s possible

So now that you know your testing goals and what to test, the last step is forming a hypothesis.

With your hypothesis, you can figure out what you think will deliver the biggest performance lift, while keeping effort in mind as well (it’s easier to get quick wins that don’t need heaps of development help).

Running an A/B Test

Alright, so you have your goals, a list of things to test, and hypotheses to back them up. The next task is to start testing.

With A/B testing, you’ll always have at least one variant running against your control.

In this case, your control is your actual website as it is now and your variant is the thing you’re testing.

With proper analytics and conversion tracking along with the goal in place, you can start seeing how each of these two variants (hence the name A/B) is doing.

a_b-testing
Consider this a mock-up of your conversion rate variations

When A/B testing, there are two things you may want to consider before you call winners or losers of a test.

One is statistical significance. Statistical significance gives you a thumbs up or thumbs down on whether your test results can be attributed to random chance. If a test is statistically significant, it is unlikely that the results are due to chance alone.

And VWO has created its own calculator so that you can see how your test is doing.

The second one is confidence level. It helps you decide whether you can replicate the results of your test again and again.

A confidence level of 95% tells you that your test will achieve the same results 95% of the time if you run it repeatedly. So, as you can tell, the higher your confidence level, the surer you can be that your test truly won or lost.
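
Testing tools report these numbers for you, but if you want a feel for what “chance of beating the control” means, here is a small, Bayesian-flavored simulation sketch (not VWO’s exact calculation; the counts are made up):

```python
import random

def chance_to_beat_control(conv_a, n_a, conv_b, n_b, draws=20_000):
    """Monte-Carlo estimate of how often the variant's true conversion rate
    beats the control's, given the observed counts (uniform Beta(1, 1) priors)."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = random.betavariate(conv_b + 1, n_b - conv_b + 1)
        wins += rate_b > rate_a
    return wins / draws

# Made-up counts: control converted 180 of 9,000 visitors, variant 226 of 9,000.
print(f"Chance the variant beats control: {chance_to_beat_control(180, 9000, 226, 9000):.1%}")
```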

You can see the A/B test that increased revenue for Server Density by 114%.

Multivariate Testing for Combination of Variations

Let’s say you have multiple ideas to test, and your testing list is looking way too long.

Wouldn’t it be cool if you could test multiple aspects of your page at once to get faster results?

That’s exactly what multivariate testing is.

Multivariate testing allows you to test which combinations of different page elements affect each other when it comes to CTRs, conversion rates, or revenue gains.

Look at the multivariate pizza example below:

multivariate-testing-example
Different headlines, CTAs, and colors are used

The recipe for multivariate testing is simple and delicious.

multivariate-testing-formula
Different elements increase the combination size

And the best part is that VWO can automatically run through all the different combinations you set so that your multivariate test can be done without the heavy lifting.
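
Under the hood, the combination count is just the product of the number of options per element. A tiny sketch, with invented page elements, that enumerates them:

```python
from itertools import product

# Hypothetical elements and options under test on a single page.
headlines  = ["Original headline", "Benefit-led headline", "Question headline"]
cta_texts  = ["Buy now", "Get started"]
cta_colors = ["green", "orange"]

combos = list(product(headlines, cta_texts, cta_colors))
print(len(combos))        # 3 x 2 x 2 = 12 combinations to split traffic across
for combo in combos[:3]:  # peek at the first few variations
    print(combo)
```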

If you’re curious about whether you should A/B test or run multivariate tests, then look at this chart that VWO created:

multivariate-testing-software-visual-website-optimizer
Which one makes the most sense for you?

Split URL Testing for Heavier Variations

If your A/B or multivariate tests lead you to the conclusion that bigger initiatives, like backend development or major design changes, are needed, then you’re going to love split URL testing.

As VWO states:

“If your variation is on a different address or has major design changes compared to control, we’d recommend that you create a Split URL Test.”

what-is-split-testing-explained-by-vwo

Split URL testing allows you to host different variations of your website test on separate URLs, without touching your existing site’s code.

As the visual above shows, the two variations are set up so that the URL is different as well.

Split URL testing is great when you want to test major redesigns, such as an entire website rebuilt from scratch.

By not changing your current website code, you can host the redesign on a different URL and have VWO split the traffic between the control and the variant, giving you clear insight into whether your redesign will perform better.
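
A testing tool like VWO handles the actual traffic split and reporting for you, but conceptually it is just a consistent assignment of each visitor to one of the two URLs. A bare-bones sketch with placeholder URLs:

```python
import hashlib

CONTROL_URL = "https://www.example.com/"   # your current site (placeholder URL)
VARIANT_URL = "https://beta.example.com/"  # the redesign, hosted elsewhere (placeholder URL)

def split_url_for(visitor_id: str, variant_share: float = 0.5) -> str:
    """Deterministically assign a visitor to the control or variant URL.
    Hashing the visitor id keeps each visitor on the same version across visits."""
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 10_000
    return VARIANT_URL if bucket < variant_share * 10_000 else CONTROL_URL

print(split_url_for("visitor-42"))
```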

Over to You

Now that you have a clear understanding of the different types of website tests you can run, the only thing left is to, well, run some tests.

Armed with quantitative and qualitative knowledge of your visitors, focus on the areas that have the biggest and quickest impact to strengthen your business.

And I promise, when you finish your first successful website test, you’ll get hooked.

I know I was.


The post [Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test appeared first on VWO Blog.

Continue reading: 

[Gifographic] Better Website Testing – A Simple Guide to Knowing What to Test