What You Should Do Right Now to Avoid Stressing Out for the Holiday E-commerce Explosion


Every year it feels like TV and radio stations, retailers and other businesses start the “holiday season” talk earlier and earlier. Some people complain. Some people love it. I’m usually thinking about other things — like how to stay sane. You see, online retailers (which many of my clients are) are basically going berserk during the months of November and December. Ever heard of Black Friday? Cyber Monday? E-commerce booms as the holidays near, especially among women and Millennials. Taking advantage of these search trends takes preparation, grit, timing, and even a little bit of luck. I’m not promising an…

The post What You Should Do Right Now to Avoid Stressing Out for the Holiday E-commerce Explosion appeared first on The Daily Egg.


Running an A/A Test Before A/B Testing – Wise or Waste?

To A/A test or not is a question that invites conflicting opinions. Enterprises faced with the decision of implementing an A/B testing tool often lack the context to judge whether they should run an A/A test first. Knowing the benefits and pitfalls of A/A testing can help organizations make better decisions.

In this blog post, we explore why some organizations practice A/A testing and what they need to keep in mind while doing so. We also discuss other methods that can help enterprises decide whether or not to invest in a particular A/B testing tool.

Why Some Organizations Practice A/A Testing

Organizations typically run A/A tests when they are implementing a new A/B testing tool. Running an A/A test at that point can help them with:

  • Checking the accuracy of an A/B Testing tool
  • Setting a baseline conversion rate for future A/B tests
  • Deciding a minimum sample size

Checking the Accuracy of an A/B Testing Tool

Organizations that are about to purchase an A/B testing tool, or want to switch to new testing software, may run an A/A test to ensure that the new software works correctly and has been set up properly.

Tomasz Mazur, an eCommerce Conversion Rate Optimization expert, explains further: “A/A testing is a good way to run a sanity check before you run an A/B test. This should be done whenever you start using a new tool or go for a new implementation. A/A testing in these cases will help check if there is any discrepancy in data, let’s say, between the number of visitors you see in your testing tool and the web analytics tool. Further, this helps ensure that your hypotheses are verified.”

In an A/A test, a web page is A/B tested against an identical variation. When there is absolutely no difference between the control and the variation, it is expected that the result will be inconclusive. However, in cases where an A/A test provides a winner between two identical variations, there is a problem. The reasons could be the following:

  • The tool has not been set up properly.
  • The test hasn’t been conducted correctly.
  • The testing tool itself is inaccurate or unreliable.

Here’s what Corte Swearingen, Director, A/B Testing and Optimization at American Eagle, has to say about A/A testing: “I typically will run an A/A test when a client seems uncertain about their testing platform, or needs/wants additional proof that the platform is operating correctly. There really is no better way to do this than to take the exact same page and test it against itself with no changes whatsoever. We’re essentially tricking the platform and seeing if it catches us! The bottom line is that while I don’t run A/A tests very often, I will occasionally use it as a proof of concept for a client, and to help give them confidence that the split testing platform they are using is working as it should.”

Determining the Baseline Conversion Rate

Before running any A/B test, you need to know the conversion rate that you will be benchmarking the performance results against. This benchmark is your baseline conversion rate.

An A/A test can help you set the baseline conversion rate for your website. Let’s explain this with an example. Suppose you are running an A/A test in which the control gives 303 conversions out of 10,000 visitors and the identical variation B gives 307 conversions out of 10,000 visitors. The conversion rate for A is 3.03% and that for B is 3.07%, even though there is no difference between the two variations. The range 3.03–3.07% can therefore be set as the benchmark for future A/B tests: if you later run an A/B test and get an uplift that falls within this range, the result is likely not significant.
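
To make the arithmetic above concrete, here is a minimal Python sketch (an illustration added for this write-up, not part of the original example) that computes the two conversion rates and runs a simple two-proportion z-test to confirm that a 3.03% vs. 3.07% gap is well within the range of random noise.

```python
from math import sqrt
from statistics import NormalDist

# Numbers from the example above: two identical variations of the same page.
visitors_a, conversions_a = 10_000, 303
visitors_b, conversions_b = 10_000, 307

rate_a = conversions_a / visitors_a  # 0.0303 -> 3.03%
rate_b = conversions_b / visitors_b  # 0.0307 -> 3.07%

# Two-proportion z-test: is the observed gap larger than chance would explain?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"baseline range: {rate_a:.2%} to {rate_b:.2%}")
print(f"z = {z:.2f}, p = {p_value:.2f}")  # p far above 0.05: no real difference
```

A p-value this large is exactly what a healthy A/A test should produce; the 3.03–3.07% spread is noise, not signal.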

Deciding a Minimum Sample Size

A/A testing can also help you estimate the minimum sample size you need from your website traffic. A small sample would not include sufficient traffic from all segments, so you might miss a few segments that materially affect your test results. With a larger sample size, you have a greater chance of accounting for every segment that influences the test.
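
To put a rough number on “large enough,” here is a back-of-the-envelope sample-size sketch in Python (illustrative only, using the standard two-proportion power calculation rather than any particular vendor’s calculator). It estimates how many visitors each variation needs before a given relative uplift from a 3% baseline can be detected reliably.

```python
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_uplift, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect the uplift in a two-sided test."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(variance * ((z_alpha + z_beta) / (p2 - p1)) ** 2) + 1

# Detecting a 10% relative uplift (3.0% -> 3.3%) needs roughly 53,000 visitors
# per variation; a 5% uplift needs over 200,000.
print(sample_size_per_variation(0.03, 0.10))
print(sample_size_per_variation(0.03, 0.05))
```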

Corte says, “A/A testing can be used to make a client understand the importance of getting enough people through a test before assuming that a variation is outperforming the original.” He explains this with an A/A testing case study run on Sales Training Program landing pages for one of his clients, Dale Carnegie. The A/A test, run on two identical landing pages, initially produced results indicating that one variation was delivering an 11.1% improvement over the control. The reason was that the sample size at that point was too small.

A/A test: initial results

After the A/A test had run for 19 days and gathered over 22,000 visitors, the conversion rates of the two identical versions were the same.

A/A test: results with more data

Michal Parizek, Senior eCommerce & Optimization Specialist at Avast, shares similar thoughts: “At Avast, we did a comprehensive A/A test last year. It gave us some valuable insights and was worth doing.” According to him, “It is always good to check the statistics before final evaluation.”

At Avast, they ran an A/A test on two main segments: customers using the free version of the product and customers using the paid version. They did so to compare the two groups.

The A/A test had been live for 12 days, and they managed to get quite a lot of data. Altogether, the test involved more than 10 million users and more than 6,500 transactions.

In the “free” segment, they saw a 3% difference in the conversion rate and 4% difference in Average Order Value (AOV). In the “paid” segment, they saw a 2% difference in conversion and 1% difference in AOV.

“However, all uplifts were NOT statistically significant,” says Michal. He adds, “Particularly in the ‘free’ segment, the 7% difference in sales per user (combining the differences in the conversion rate and AOV) might look trustworthy enough to a lot of people. And that would be misleading. Given these results from the A/A test, we have decided to implement internal A/B testing guidelines/lift thresholds. For example, if the difference in the conversion rate or AOV is lower than 5%, be very suspicious that the potential lift is not driven by the difference in the design but by chance.”

Michal sums up his opinion by saying, “A/A testing helps discover how A/B tests could be misleading if they are not taken seriously. And it is also a great way to spot any bugs in the tracking and setup.”

Problems with A/A Testing

In a nutshell, the two main problems inherent in A/A testing are:

  • The ever-present element of randomness in any experimental setup
  • The requirement of a large sample size

We will consider these one by one:

Element of Randomness

As pointed out earlier in the post, checking the accuracy of a testing tool is the main reason for running an A/A test. However, what if you find a difference in conversions between the control and an identical variation? Should you always attribute it to a bug in the A/B testing tool?

The problem (for lack of a better word) with A/A testing is that there is always an element of randomness involved. In some cases, the experiment reaches statistical significance purely by chance, which means that the difference in conversion rate between A and its identical copy is probabilistic and does not indicate any real effect.

Tomasz Mazur explains randomness with a real-world example: “Suppose you set up two absolutely identical stores in the same vicinity. It is likely, purely by chance or randomness, that there is a difference in results reported by the two. And it doesn’t always mean that the A/B testing platform is inefficient.”
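
The role of pure chance is easy to demonstrate with a simulation. The Python sketch below (an illustration, not taken from the post) runs many simulated A/A tests in which both “variations” share the exact same 3% conversion rate, then counts how often a naive 95%-confidence z-test declares a winner anyway. Expect a figure close to 5%.

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)

def fake_aa_test(true_rate=0.03, visitors=5_000):
    """One simulated A/A test: both arms draw from the same conversion rate."""
    conv_a = sum(random.random() < true_rate for _ in range(visitors))
    conv_b = sum(random.random() < true_rate for _ in range(visitors))
    pooled = (conv_a + conv_b) / (2 * visitors)
    se = sqrt(pooled * (1 - pooled) * (2 / visitors))
    z = (conv_b / visitors - conv_a / visitors) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p < 0.05  # a "significant" difference between identical variations

runs = 500
false_winners = sum(fake_aa_test() for _ in range(runs))
print(f"{false_winners / runs:.1%} of simulated A/A tests found a 'winner'")
# A result near 5% is pure randomness at work, not a broken testing tool.
```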

Requirement of a Large Sample Size

Following the case study provided by Corte above, one problem with A/A testing is that it can be time-consuming. When testing identical versions, you need a very large sample size to establish that neither version is genuinely preferred over the other, and collecting that sample takes a long time.

As explained in one of ConversionXL’s posts, “The amount of sample and data you need to prove that there is no significant bias is huge by comparison with an A/B test. How many people would you need in a blind taste testing of Coca-Cola (against Coca-Cola) to conclude that people liked both equally? 500 people, 5000 people?” Experts at ConversionXL explain that the entire purpose of an optimization program is to reduce wasted time, resources, and money. They believe that even though running an A/A test is not wrong, there are better ways to use your time when testing. In the post they mention, “The volume of tests you start is important but even more so is how many you *finish* every month and from how many of those you *learn* something useful from. Running A/A tests can eat into the ‘real’ testing time.”

VWO’s Bayesian Approach and A/A Testing

VWO uses a Bayesian-based statistical engine for A/B testing. This allows VWO to deliver smart decisions: it tells you which variation will minimize potential loss.

Chris Stucchio, Director of Data Science at VWO, shares his viewpoint on how A/A testing in VWO differs from that in typical frequentist A/B testing tools.

Most A/B testing tools are seeking truth. When running an A/A test in a frequentist tool, an erroneous “winner” should only be reported 5% of the time. In contrast, VWO’s SmartStats is attempting to make a smart business decision. We report a smart decision when we are confident that a particular variation is not worse than all the other variations, that is, we are saying “you’ll leave very little money on the table if you choose this variation now.” In an A/A test, this condition is always satisfied—you’ve got nothing to lose by stopping the test now.

The correct way to evaluate a Bayesian test is to check whether the credible interval for lift contains 0% (the true value).
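
As a hedged illustration of that check, and not VWO’s actual SmartStats implementation, the sketch below models each variation’s conversion rate with a Beta posterior, draws samples of the relative lift, and reports whether the 95% credible interval contains 0%. The visitor and conversion counts reuse the earlier A/A example.

```python
import numpy as np

rng = np.random.default_rng(7)

# A/A data: two identical variations (same illustrative numbers as earlier).
visitors_a, conversions_a = 10_000, 303
visitors_b, conversions_b = 10_000, 307

# A Beta(1, 1) prior updated with the observed data gives a Beta posterior.
post_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, 100_000)
post_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, 100_000)

lift = (post_b - post_a) / post_a             # relative lift of B over A
low, high = np.percentile(lift, [2.5, 97.5])  # 95% credible interval

print(f"95% credible interval for lift: [{low:.1%}, {high:.1%}]")
print("contains 0%:", low <= 0.0 <= high)  # expected to be True for an A/A test
```

If the interval excludes 0% on an A/A test, either you were unlucky (which, as noted below, happens a small percentage of the time) or something in the setup deserves a closer look.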

He also says that the simplest possible reason for an A/A test to report a winner is random chance: “With a frequentist tool, 5% of A/A tests will return a winner due to bad luck. Similarly, 5% of A/A tests in a Bayesian tool will report erroneous lifts. Another possible reason is a configuration error; perhaps the JavaScript or HTML is incorrectly configured.”

Other Methods and Alternatives to A/A Testing

A few experts believe that A/A testing is inefficient because it consumes time that could otherwise be spent running actual A/B tests. Others say that it is essential for running a health check on your A/B testing tool. That said, A/A testing alone is not sufficient to establish whether one testing tool should be preferred over another. When making a critical business decision such as buying a new A/B testing tool, there are a number of other things that should be considered.

Corte points out that though there is no replacement or alternative to A/A testing, there are other things that must be taken into account when a new tool is being implemented. These are listed as follows:

  1.  Will the testing platform integrate with my web analytics program so that I can further slice and dice the test data for additional insight?
  2.  Will the tool let me isolate specific audience segments that are important to my business and just test those audience segments?
  3.  Will the tool allow me to immediately allocate 100% of my traffic to a winning variation? This feature can be an important one for more complicated radical redesign tests where standardizing on the variation may take some time. If your testing tool allows immediate 100% allocation to the winning variation, you can reap the benefits of the improvement while the page is built permanently in your CMS.
  4. Does the testing platform provide ways to collect both quantitative and qualitative information about site visitors that can be used for formulating additional test ideas? These would be tools like heatmaps, scrollmaps, visitor recordings, exit surveys, page-level surveys, and visual form funnels. If the testing platform does not have these integrated, does it allow integration with third-party tools for these services?
  5. Does the tool allow for personalization? If test results are segmented and it is discovered that one type of content works best for one segment and another type works better for a second segment, does the tool allow you to permanently serve these different experiences to different audience segments?

That said, some experts would still opt for alternatives such as triangulating data rather than running an A/A test. This approach gives you two sets of performance data to cross-check against each other: use one analytics platform as the baseline to compare all other outcomes against, and check whether anything is wrong or needs fixing.

And then there is the argument: why run just an A/A test when you can get more meaningful insights from an A/A/B test? That way, you can still compare two identical versions while also testing some changes in the B variant.

Conclusion

When businesses face the decision of implementing new testing software, they need to run a thorough check on the tool. A/A testing is one method that some organizations use to check the accuracy of the tool. Along with evaluating personalization and segmentation capabilities and the other pointers mentioned in this post, this technique can help determine whether the software is a good fit for implementation.

Did you find the post insightful? Drop us a line in the comments section with your feedback.


The post Running an A/A Test Before A/B Testing – Wise or Waste? appeared first on VWO Blog.


Beyond Optimization: Email A/B Tests That Will Improve Your Entire Business

Those email metrics may provide you with more insight than you thought. Image via Shutterstock.

The components of an A/B test are pretty straightforward: change some stuff, compare key metrics, deploy winner, repeat.

So when you start an A/B test on your email, this is the sort of process you fall back on. You brainstorm a couple of alternate subject lines, test them on a small segment and send the winner to everyone else. This is a great way of making sure you’re sending the better of two ideas, but does it really mean you’re sending better email?
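
If you want that “send the winner” step to rest on more than a gut feeling, a quick check like the one below helps. This is a hedged Python sketch with made-up numbers, not a prescribed workflow: it compares open rates from the test segment and only trusts the winner when the gap is unlikely to be noise.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical test segment: 2,000 recipients per subject line.
sent = 2_000
opens = {"Subject A": 360, "Subject B": 410}

rate_a = opens["Subject A"] / sent
rate_b = opens["Subject B"] / sent

# Two-proportion z-test on open rates before rolling out the winner.
pooled = (opens["Subject A"] + opens["Subject B"]) / (2 * sent)
se = sqrt(pooled * (1 - pooled) * (2 / sent))
z = (rate_b - rate_a) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

winner = max(opens, key=opens.get)
print(f"{winner}: {max(rate_a, rate_b):.1%} open rate (p = {p:.3f})")
# Only send the winner to the full list if p is comfortably small.
```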

Instead, today we’re going to focus on the benefits of A/B testing for the future. That means turning your results into actionable guidance for feature planning, branding, sales and retention strategies.

Maximizing is not always optimizing.

Feature planning

It can be really tough figuring out which features need the most attention, not to mention prioritizing improvements your top users would be most excited for. Email can help!

A simple email teasing upcoming improvements to feature X or feature Y can give you valuable insight, ahead of your next product planning session, into which changes actually pique users’ interest.

Similarly, you can test something like, “What would you like to see added to feature X” vs “…feature Y.” Even if you get little to no feedback, the comparative open rates can tell you a lot about which features people want to see updated.

This can be especially insightful for startups, because setting the wrong priorities for your development team can hamstring your growth. In cases like this where the stakes are higher, it may be more powerful to subtly present options and observe responses than to straight up ask.

The problem with asking users what they want directly. Image via Frankiac.

Product branding

What if you’re getting ready to launch a new feature or plan an event, but you’re torn on what to call it? Simply run a test with a sneak peek email to your most engaged users and see what gets their attention.

This one may feel a bit weird, because branding of your product and features can feel really personal, but it’s also really important, so why leave it to your gut when you can test?

You don’t even have to build out a fancy announcement email, because you’re just looking for opens, indicating that initial spark of interest. The body can be a simple plain-text save-the-date, a link to a survey, or something similarly lightweight.


Sales materials development

Good email testing can also translate to benefits for your sales team. Imagine their eyes lighting up when you pass them a document illustrating how your highest value customers engage with different phrasings of your core features.

There are a couple of interesting ways to execute on this, but I think the most practical is to build an onboarding email with links to your features, and then test headlines for each section (bonus points if you randomize the order to satisfy the statisticians in the house).
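
One lightweight way to set that up, sketched below with hypothetical feature names and headlines rather than anything from the original post, is to shuffle the section order and pick a headline variant per section for each recipient, recording the assignment so clicks can be attributed later.

```python
import random

# Hypothetical features, each with two competing section headlines.
sections = {
    "reports": ["See every metric at a glance", "Reports your boss will love"],
    "integrations": ["Connect the tools you already use", "Plug in your stack"],
    "alerts": ["Never miss a spike again", "Real-time alerts, zero noise"],
}

def build_email(recipient_id: str) -> dict:
    """Randomize section order and headline variant for one recipient."""
    rng = random.Random(recipient_id)  # deterministic per recipient
    order = list(sections)
    rng.shuffle(order)                 # randomized order for the statisticians
    chosen = {feature: rng.choice(sections[feature]) for feature in order}
    # Store this assignment with the recipient so link clicks can be attributed.
    return {"recipient": recipient_id, "sections": [(f, chosen[f]) for f in order]}

print(build_email("user-42"))
```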

You could also stretch this across multiple emails in your onboarding drip campaign, or send a one-off “What’s new” update.

Life is about decisions. Image source.

Retention

Now that you’ve figured out which features resonate most with your high-touch users, it’s time to figure out what gets people hooked on your product or service in the first place.

There are a ton of ways to accomplish this in the traditional on-site manner, but how does email fit into the picture?

The most obvious option here is to use the information you gleaned to craft a killer onboarding campaign that introduces new users to the most beloved features first. That strategy, however, is really focused on top-of-funnel retention. Today, I want to take a look at the other end: churn prevention.

There are, of course, some users that were never a good fit to begin with and will churn regardless. But for those that just never got the hang of things, the most common move is to hit them with a “Hail Mary email” — one last-ditch effort to win them back.

A lot of times this comes in the form of a direct note from someone asking what they could have done better, but why not use that space to run some tests? Not just to squeeze out a few more opens on a low-converting email, but to see what actually gets people’s attention. Then you can take the stuff that works, and work it into your onboarding campaign to keep people from ever getting to the Hail Mary state.

Saying goodbye is hard. Image source.

Conclusion

These are, of course, not the only ways that you can incorporate your learnings from tests into other aspects of your marketing, but it’s a great start if you don’t have a process like this in place.

The structure you build out to track and share the results from tests like these can be tremendously helpful for the whole team — not just in the ways I’ve outlined above, but also in just keeping everyone on the same page and in line with what your customers want to hear.

Have anything to add? I’d love to hear about it in the comments.


Create a Case Study that Converts [INFOGRAPHIC]

We have a saying at Unbounce: “Put a customer on it.”

Whether it’s a blog post, conference talk or even our homepage, we take every opportunity possible to show how our tool is helping real marketers #dobetter (another common Unbounce phrase).

And it’s not just because we love our customers. I mean, we do love our customers, but putting a customer on it — particularly in the form of a case study — is a compelling way to inject social proof into your marketing. And persuasive social proof can be just the thing to convince your prospects that they need what you’re offering.

But how does one create a case study that provides social proof and ultimately wins you customers?

Our friends at JBH Agency (a UK-based content marketing agency) have the answer: a 34-point checklist for creating a case study that converts. It’ll take you through the whole process, from choosing the right customer to feature to selecting a case study format to documenting its impact.

Psst: If you’re more of a reader than a visual learner, check out the original article that inspired this infographic, written by Ayelet Weisz.

34-point case study checklist


6 Smart Ways to Get Quality Backlinks for SEO


Link building is one of the oldest and most effective SEO tactics. It’s also one of the most productive ways to grow organic search traffic. Oddly, though, link building can actually harm your traffic, too. Historically, links were how Google figured out which websites were good: a link was a recommendation, so websites with more links ranked higher. Google let the web decide how good each page was. Since then, Google updates have largely been about getting ahead of efforts to game this process by acquiring unearned links. We’re now at a point where only very white hat link building…

The post 6 Smart Ways to Get Quality Backlinks for SEO appeared first on The Daily Egg.


How To Poison The Mobile User

One of the most popular children’s television heroes here in the Czech Republic is The Little Mole, an innocent, speechless and cheerful creature who helps other animals in the forest.

How To Poison The Mobile User

TV heroes often fight against people who destroy their natural environment. When watching The Little Mole with my kids, I sometimes picture him as a mobile website user. Do you want to know why?

The post How To Poison The Mobile User appeared first on Smashing Magazine.


Your VWO campaigns will not be affected by the Dyn.com or any other DNS outage

Yesterday, a massive distributed denial-of-service (DDoS) attack on Dyn’s DNS infrastructure shook the internet, with popular websites like Twitter, Spotify, Reddit, AirBnB, Shopify, and thousands more being inaccessible for most of the day. The enormous scale of this outage impacted millions of users across the globe, with billions of dollars lost in revenue and business.

The incident didn’t impact VWO, since we don’t use Dyn as our DNS service provider. However, no web services provider is 100% immune to such attacks. A few months ago, a similar incident happened to our own DNS service provider. The issue took some time to get resolved, but VWO’s asynchronous SmartCode ensured that our customers didn’t have to worry about the threat at all.

The VWO SmartCode works in parallel with your website code and doesn’t get in the way of your website loading even if the SmartCode is unable to load for some reason. This means that while the VWO app service was down because of the attack on our DNS service provider, your website and landing pages continued to work as usual. The only impact our customers saw was their A/B test campaigns not loading properly.

Here’s a quick snapshot of how the VWO SmartCode works relative to your website:

how VWO SmartCode loads

VWO’s asynchronous SmartCode does not add to your page load time. With synchronous code, the browser has to wait for the test package to download and be processed before loading the rest of the page. If for any reason the tracking code can’t contact its servers, the browser will wait, usually 30 to 60 seconds, until the request times out. If your tracking code is in the <head> tags, your entire page won’t load and your visitor will be stuck with a blank page. Asynchronous code does not have this critical problem: if the asynchronous VWO SmartCode can’t contact our servers for any reason, your page will still download and render properly.
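
As a rough conceptual illustration only, in Python rather than the actual JavaScript snippet and with made-up timings, the asyncio sketch below shows why the non-blocking approach matters: the “page” renders immediately even when the tracking request eventually times out, whereas the synchronous style keeps the visitor waiting.

```python
import asyncio

async def load_tracking_code():
    """Stand-in for a tracking script whose servers are unreachable."""
    try:
        await asyncio.wait_for(asyncio.sleep(60), timeout=2)  # request times out
    except asyncio.TimeoutError:
        return "tracking code failed to load"
    return "tracking code loaded"

async def render_page():
    """Stand-in for the browser building the rest of the page."""
    await asyncio.sleep(0.1)
    return "page rendered"

async def synchronous_style():
    # The page cannot render until the tracking request finishes or times out.
    print(await load_tracking_code())
    print(await render_page())

async def asynchronous_style():
    # Fire off the tracking request, then render without waiting for it.
    tracking = asyncio.create_task(load_tracking_code())
    print(await render_page())  # the visitor sees the page right away
    print(await tracking)       # tracking resolves (or fails) in the background

asyncio.run(synchronous_style())   # ~2 s pause before anything appears
asyncio.run(asynchronous_style())  # page appears almost immediately
```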

At VWO, we define our success in terms of always serving the best interests of our customers. This becomes especially important at times like this, when web service providers face serious repercussions through no fault of their own.

The post Your VWO campaigns will not be affected by the Dyn.com or any other DNS outage appeared first on VWO Blog.
