Tag Archives: data

How To Avoid Web Analytics ‘Analysis Paralysis’ & Spend More Time Making Optimization Wins

Visualizations are the best place to start It’s much easier to start your website optimization journey from a visual perspective than a strictly numerical one. When you can immediately see where visitors and users are clicking and where they’re not, you’re instantly clued into obvious bottlenecks, blockers, and regions that are completely ignored. Take this Google Analytics data for example… When you start digging through your typical analytics packages, you’ll end up several pages deep, looking at listed data like what is shown above. Not always helpful, right? What happens when I look at visual website analytics? This is a…

The post How To Avoid Web Analytics ‘Analysis Paralysis’ & Spend More Time Making Optimization Wins appeared first on The Daily Egg.


The Six Most Misunderstood Metrics in Google Analytics

Google Analytics (GA) is capable of generating incredibly detailed and comprehensive data. It provides the insights needed to fine-tune your site, reduce UX friction and ultimately maximize conversions. But there’s a catch. It’s only effective if you actually know how to interpret the data.   Unfortunately, not all users fully understand the core metrics, and there’s uncertainty as to how to decipher them. Here, we’ll take a look at six of the most misunderstood metrics in GA to find out what the data means and how to apply it in order to optimize your site. 1. Direct Traffic At first…

The post The Six Most Misunderstood Metrics in Google Analytics appeared first on The Daily Egg.


Infographic: Seven Salient (and Strange) Email Marketing Insights


When it comes to ecommerce, email marketing is one of the most powerful tools to increase conversions, meaning more sales and more subscriptions. Today we’ll go over an insightful infographic found here that has a few important points which may be hard to understand at first glance, and a couple of points I take issue with. Be sure to read the breakdown below the infographic! The infographic was originally posted on soundest.com. Let’s break it down. Insight #1: Bigger businesses generate more orders (but have lower open rates?) Smaller businesses (5,000 member lists) enjoy an average open rate of 21.38%…

The post Infographic: Seven Salient (and Strange) Email Marketing Insights appeared first on The Daily Egg.


Infographic: The Data Behind What Makes An Effective Sales Process

This is one of my all-time favorite infographics. I reference it in other articles quite regularly. It really gets to the point of how important it is to respond to your inbound leads ASAP. And I’m not talking about newsletter signups or people who have downloaded a white paper. I’m talking about hot leads: People who are calling in, asking for demos, and asking specific questions. I’ve worked for many B2B companies in the past where this was always something that could have been improved. The first problem is: 9-5. You’re losing a ton of money by not responding to…

The post Infographic: The Data Behind What Makes An Effective Sales Process appeared first on The Daily Egg.


Want a Better Way to Engage Your Audience? Try Data-Driven Micro-Content


Content marketing is in a state of surplus: there is an oversupply of branded content and diminishing returns on audience engagement. A report by Beckon analyzed over $16 million in marketing spend and concluded: “Brands might be shocked to hear that while branded content creation is up 300 percent year over year, consumer engagement with that content is totally flat. They’re investing a lot in content creation, and it’s not driving more consumer engagement.” –Jennifer Zeszut, CEO at Beckon. The painful truth is: the vast majority of content marketing ended up going down the rabbit holes of the internet…

The post Want a Better Way to Engage Your Audience? Try Data-Driven Micro-Content appeared first on The Daily Egg.


How to Create, Track and Rank CRO Hypotheses So You Know What to Test


CRO makes big promises. But the way people get to those 300% lifts in conversions is by being organized. Otherwise, you find yourself in the position that a lot of marketers do: you do a test, build on the result, wait a while, do another test, wait a while… meanwhile, the big jumps in conversions, leads and revenue never really seem to manifest. That’s because only a structured approach can get you in position to make the best use of your testing time and budget. This isn’t something you want to be doing by the seat of your pants. In…
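As a rough illustration of the ranking idea in the headline, here is a minimal Python sketch assuming a simple averaged-criteria scoring model (the excerpt names no specific framework); the hypotheses and scores are invented.

```python
hypotheses = [
    # (hypothesis, potential, importance, ease), each scored 1-10
    ("Shorten the checkout form", 8, 9, 6),
    ("Add testimonials to the pricing page", 6, 7, 9),
    ("Rewrite the homepage headline", 7, 8, 8),
]

# Rank by average score so the most promising, most feasible tests run first
ranked = sorted(hypotheses, key=lambda h: sum(h[1:]) / 3, reverse=True)
for name, potential, importance, ease in ranked:
    print(f"{(potential + importance + ease) / 3:.1f}  {name}")
```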

The post How to Create, Track and Rank CRO Hypotheses So You Know What to Test appeared first on The Daily Egg.


“The more tests, the better!” and other A/B testing myths, debunked

Reading Time: 8 minutes

Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.

But never fear! With the help of WiderFunnel Optimization Strategist, Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.


The key to sustainable A/B testing output is to find a balance between the short term (maximum testing speed) and the long term (testing for data collection and insights).

Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.


An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment, when building out the strategy for another experiment. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once without taking into account these past insights can result in your testing program being more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.
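For illustration, here is a minimal Python sketch, not WiderFunnel’s actual tooling, of how a two-proportion z-test can turn raw conversion counts into a confidence level to compare against that 95% threshold; the function name and example numbers are made up.

```python
# A minimal sketch (not WiderFunnel's tooling) of estimating confidence that a
# variation differs from control, using a two-proportion z-test on raw counts.
from math import sqrt, erf

def confidence_level(control_conv, control_visitors, variant_conv, variant_visitors):
    """Two-sided confidence (0-1) that the observed difference is not due to chance."""
    p1 = control_conv / control_visitors
    p2 = variant_conv / variant_visitors
    # Pooled conversion rate under the null hypothesis of "no real difference"
    pooled = (control_conv + variant_conv) / (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    if se == 0:
        return 0.0
    z = (p2 - p1) / se
    normal_cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    p_value = 2 * (1 - normal_cdf)                  # two-sided p-value
    return 1 - p_value

# Illustrative numbers: 400/10,000 control conversions vs. 460/10,000 for the variation
print(f"{confidence_level(400, 10_000, 460, 10_000):.1%}")  # compare against 95%
```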

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it in a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

You should not wait for a test to be significant (because it may never happen), nor stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to better understand when to stop a test.

– Claire Vignon Keser

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning there was now only an 88% likelihood that the difference in conversion rates was not due to chance, below our 95% threshold for statistical significance.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.
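To make that concrete, here is a minimal Python sketch, assuming a standard two-proportion power calculation at 95% confidence and 80% power, of the calculated sample size and a minimum duration rounded up to full business cycles; the baseline rate, expected lift, and traffic numbers are invented, and a real test duration calculator may differ.

```python
from math import ceil

Z_ALPHA = 1.96  # two-sided 95% confidence
Z_BETA = 0.84   # 80% power

def sample_size_per_variation(baseline_rate, minimum_lift):
    """Visitors needed in each variation to detect a relative lift of minimum_lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + minimum_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p2 - p1) ** 2)

def test_duration_days(visitors_per_variation, variations, daily_traffic, cycle_days=7):
    """Minimum run time, rounded up to full business cycles (default one week)."""
    days = visitors_per_variation * variations / daily_traffic
    return ceil(days / cycle_days) * cycle_days

n = sample_size_per_variation(baseline_rate=0.04, minimum_lift=0.10)  # detect a 10% relative lift
days = test_duration_days(n, variations=2, daily_traffic=5_000)
print(n, "visitors per variation;", days, "days minimum")
```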

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test, completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator. Check your graphs: Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys? (See the sketch below this list.)
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation based on the observed trends, abandon the variation based on observed trends, and/or create a follow-up test!
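Here is a minimal sketch of the results-stability check mentioned above, with invented daily numbers: it computes the day-by-day conversion-rate difference and counts how often its sign flips.

```python
# Illustrative data only, not a WiderFunnel tool: is the daily conversion-rate
# difference consistently in the variation's favor, or does the sign keep flipping?
daily_results = [
    # (control_conversions, control_visitors, variant_conversions, variant_visitors)
    (40, 1000, 48, 1000),
    (38, 1020, 45, 990),
    (41, 980, 43, 1010),
    (37, 1005, 46, 1000),
]

differences = [vc / vv - cc / cv for cc, cv, vc, vv in daily_results]
sign_flips = sum(1 for a, b in zip(differences, differences[1:]) if (a > 0) != (b > 0))

print([f"{d:+.3%}" for d in differences])
print("Sign flips between days:", sign_flips)  # frequent flips suggest an unstable result
```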

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.


This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by 1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!).

[Figure: a funnel analysis showing how each variation fared against the Control.]
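To show the pattern in code, here is a minimal Python sketch with invented numbers (the post does not publish the client’s actual funnel data): step-to-step rates can drop early in the funnel while the overall conversion rate still improves.

```python
# Visitors reaching steps 1-4 of the funnel, then completed transactions (illustrative)
funnels = {
    "control":   [10_000, 6_200, 3_100, 1_400, 400],
    "variation": [10_000, 5_300, 2_900, 1_500, 430],
}

for name, steps in funnels.items():
    step_rates = [b / a for a, b in zip(steps, steps[1:])]  # step-to-step progression
    overall = steps[-1] / steps[0]                          # overall conversion rate
    print(name, [f"{r:.1%}" for r in step_rates], f"overall {overall:.2%}")

# A variation can lose visitors early (lower step 1 -> 2 rate) yet still win
# overall if the remaining visitors convert at a higher rate downstream.
```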

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the visitors dropping off were the ones we wanted to filter out (i.e. those without strong purchasing intent, who were more likely to request refunds).

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

[Figure: the Infinity Optimization Process. A/B testing is just a part of the marketing optimization machine.]

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

[Photo: the WiderFunnel strategy team gathers to share and discuss A/B testing insights.]

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering inform hypotheses, which are prioritized and turned into A/B tests (structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and these insights are fed back into the data gathering. It’s a cycle of continuous improvement.

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!

The post “The more tests, the better!” and other A/B testing myths, debunked appeared first on WiderFunnel Conversion Optimization.


Glossary: Bandit Testing


A term used to describe test methods or algorithms that continuously shift traffic in reaction to the real-time performance of the test. Also known as “multi-armed bandit testing”, the name is derived from the behavior of casino slot machine players who often play several machines at once in order to optimize their payout. Rather than stay with a single machine, the gambler will often play some percentage of the time on several other nearby machines. In this way, the new “hot” machine can be identified without leaving the original machine behind. When used in website testing, bandit testing represents a…
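The excerpt stops short of the mechanics, so here is a minimal epsilon-greedy sketch in Python, with simulated conversion rates, of how a bandit algorithm shifts traffic toward the best-performing variation while still exploring the others.

```python
import random

variations = ["A", "B", "C"]
true_rates = {"A": 0.04, "B": 0.05, "C": 0.045}  # unknown in a real test; simulated here
shown = {v: 0 for v in variations}
converted = {v: 0 for v in variations}
EPSILON = 0.1  # fraction of traffic reserved for exploration

for _ in range(20_000):
    if random.random() < EPSILON or not any(shown.values()):
        choice = random.choice(variations)  # explore: try any variation
    else:
        # exploit: send traffic to the variation with the best observed rate so far
        choice = max(variations, key=lambda v: converted[v] / shown[v] if shown[v] else 0.0)
    shown[choice] += 1
    if random.random() < true_rates[choice]:
        converted[choice] += 1

for v in variations:
    rate = converted[v] / shown[v] if shown[v] else 0.0
    print(v, shown[v], f"{rate:.2%}")  # most traffic should typically end up on B
```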

The post Glossary: Bandit Testing appeared first on The Daily Egg.


You Need More Than Analytics Data to Grow Your Business – You Need Systems


I have realized something after working with countless companies and helping them set up their analytics tools. They all want the hottest tool. They all want as much data as they can possibly gather. However, these companies don’t actually care about the data. This might sound crazy, but hear me out. They want the data, but not because of what you might think. They want access to data so they can improve something that matters to their business (e.g. conversion rates, user retention, revenue, etc.). They think that having data will make this easier (like their tools will magically make…

The post You Need More Than Analytics Data to Grow Your Business – You Need Systems appeared first on The Daily Egg.
