Tag Archives: organizations

How To Increase Website Conversions With The Right Messaging

Note: This is a guest article written by Josh Mendelsohn, VP of Marketing at Privy. All opinions expressed in the post are Josh’s.

Let’s cut right to it: we all suck at conversion. According to eMarketer, 98% of online traffic leaves a site without filling out a form or completing a purchase. That means you have missed a chance to start building a relationship with a potential customer. It’s easy to shrug off a low on-site conversion rate, but imagine if you owned a physical store and 100 people walked in… and 98 walked out without talking to a representative or making a purchase. You’d be pretty sad, right? Yet that’s exactly what most of us let happen in our online stores, and it’s why we fail to increase website conversions.

Why Do We Do This To Ourselves?

For starters, most organizations are thinking about their product far more than they are thinking about conversion. If you’re a publisher, that might be the articles you are producing. If you’re an online store, it’s literally the products you are sourcing, merchandising, and selling. If you’re a non-profit, it’s the services you are providing to the world.

They are also likely thinking about how to drive site traffic, whether that is through building a social media presence, paid search, radio, or even print ads.

And they may have even hired someone to think about the customer or member experience and how to keep those people engaged and generating word of mouth. But they often forget the middle, critical piece of the funnel, which is on-site conversion.

For the (much) smaller group of organizations that are actively trying to drive conversion, most fall into one of two camps: they either take a very passive approach because they don’t want to come across as too salesy, or they take an overly aggressive approach, with forms coming at visitors from all angles and blocking the site’s core content. But that’s not what good salespeople do. They take what they know about a prospect (in this case, a site visitor) and use it to craft a message.

What We Know About Site Visitors

Through the magic of digital marketing, we know a lot about a site visitor without having to ask. While some people may find this creepy, for marketers it is an untapped goldmine of messaging opportunity. For example, we can usually answer questions like:

  • Where did they come from?
  • Is this their first visit?
  • What page are they on?
  • How many pages have they looked at?
  • What language do they speak?
  • What device are they on?
  • How much is in their cart?

What Do You Do With That Information?

Most organizations that have started thinking about conversion might show the same simple opt-in pop-up to every visitor. Those who are truly focused on it use what they already know about each visitor to create a more targeted experience, crafting different messages based on who the visitor is and what they have done. For the examples below, imagine I run an e-commerce store selling women’s clothing and want to offer a 10% discount to new customers who sign up for my email list. While you probably wouldn’t want to hit someone with ALL of these messages, you can see how your core message might change based on what you know about a visitor (a small code sketch of this kind of rule-based targeting follows the list).

  • Where did they come from? What we know: the visitor clicked on an Instagram ad featuring a specific blue swimsuit. Messaging strategy: feature the product they already expressed interest in. “Looking for a new swimsuit? Get 10% off your first purchase by entering your email below.”
  • Is this their first visit? What we know: they have visited before but have never bought anything from you. Messaging strategy: don’t treat them like a stranger! “Welcome back to my store! We’ve just launched a new product line. Sign up below to get 10% off your first purchase.”
  • What page are they on? What we know: they are on the “About” page of your site and not actually shopping. Messaging strategy: try a “stay in touch” message instead of a discount. “Sign up to hear about new products and special offers.”
  • How many pages have they looked at, and how much is in their cart? What we know: they have looked at 7 different pages in your store without adding anything to their cart, which means they are browsing but not yet sold. Messaging strategy: “Having trouble finding what you are looking for? Sign up and we’ll let you know when we launch new products and give you a 10% discount for your first purchase.”
  • What language do they speak? What we know: the visitor’s primary browser language is Spanish. Messaging strategy: “¡Bienvenidos a mi tienda! Regístrese abajo para obtener un 10% de descuento en su primera compra.”
  • What device are they on? What we know: the visitor is on a mobile device, which is a great cue to slim down your text. Messaging strategy: “Sign up today for 10% off your first purchase.”
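If it helps to see that logic in one place, here is a minimal Python sketch of rule-based message targeting. The visitor attributes, rule order, and helper function are hypothetical illustrations, not Privy’s (or any other tool’s) API; in practice a targeting tool lets you configure these rules without writing code.

```python
# Minimal sketch of rule-based message targeting. The attribute names and
# rule order below are hypothetical, not any particular tool's API.

def pick_message(visitor: dict) -> str:
    """Return the on-site message for a visitor, most specific rule first."""
    if visitor.get("language", "").startswith("es"):
        return ("¡Bienvenidos a mi tienda! Regístrese abajo para obtener "
                "un 10% de descuento en su primera compra.")
    if visitor.get("referrer_campaign") == "instagram_blue_swimsuit_ad":
        return ("Looking for a new swimsuit? Get 10% off your first "
                "purchase by entering your email below.")
    if visitor.get("current_page") == "/about":
        return "Sign up to hear about new products and special offers."
    if visitor.get("pages_viewed", 0) >= 7 and visitor.get("cart_value", 0) == 0:
        return ("Having trouble finding what you are looking for? Sign up and "
                "we'll let you know when we launch new products and give you "
                "a 10% discount for your first purchase.")
    if visitor.get("is_returning") and not visitor.get("has_purchased"):
        return ("Welcome back to my store! We've just launched a new product "
                "line. Sign up below to get 10% off your first purchase.")
    if visitor.get("device") == "mobile":
        return "Sign up today for 10% off your first purchase."
    return "Sign up below to get 10% off your first purchase."


# A browsing visitor with an empty cart gets the "having trouble finding" message.
print(pick_message({"pages_viewed": 8, "cart_value": 0}))
```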

How To Deliver The Message

There are two things that you need to think about when delivering the message to your site visitors: timing and format. Let’s look at format first:

1- Targeted displays – There are three categories of display types that drive the most on-site conversions.

– Popups: Popups, also known as lightboxes, typically display in the center of the website, or sometimes as “fly outs” in the corner.

– Bars: A full width bar that typically sits either on top of your site, or at the bottom.

– Banners: A more subtle interaction that sits at the top or bottom of a site, but starts in a “hidden” state until triggered, then rolls into sight at the desired time.

Pop-up used as a targeted display


2- Chat
More and more often, successful online stores are investing in automated and live chat to help reduce the anxiety that consumers feel before making a purchase from a new retailer. In fact, the availability of a “live” person on your site accomplishes two important goals:

– It allows people to ask any questions ahead of completing a purchase. Especially for larger-ticket items, this inspires confidence that they are making the right decision.

– It tells them that if something goes wrong with an order, there is a real person they can reach out to for help. The combination of those two factors makes shoppers more likely to hit the buy button.

Engaging visitors through chat

3- Video
The third way of delivering the message that can have a huge impact on conversion is the use of video. Unlike static images and text, video helps bring your products to life and gives you the chance to both explain why someone should buy and put the product in a real-life context. In some cases, it also lets you tell a broader story of how the product came to be in the first place. Here’s an example of one I love (and am desperate to own).

Product videos for capturing visitor attention

Triggering Your Messages

The second consideration is deciding when to trigger each of your messages. There are four primary ways you can trigger a campaign for your desired audience (see the sketch after this list):

  • Timer: The time trigger simply enables you to determine when to display your campaign, based on how long a visitor has been on your site. It could show immediately when a visitor lands, 10 seconds later, etc.
  • Exit intent: This trigger is growing in popularity. Exit-intent tracking follows your visitor’s mouse movement, and if the visitor appears to be leaving or “exiting” your site, you can use that as a trigger for your campaign.
  • Scroll percentage: Show your campaign once a visitor has scrolled down your page a certain percentage.
  • Tabs: Tabs, or other visual calls to action can be customized to fit in with your site layout, and when clicked, trigger your campaign to display.
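To make those options concrete, here is a small, hypothetical Python sketch of how a campaign’s trigger might be evaluated against a visitor’s on-page state. The field names and thresholds are assumptions for illustration, not any specific tool’s settings; in a real deployment the detection (mouse position, scroll depth, time on page) happens in front-end code that reports this state.

```python
# Illustrative trigger evaluation. Campaign settings and visitor-state fields
# are made-up names, not a real tool's configuration schema.

def should_trigger(campaign: dict, visitor_state: dict) -> bool:
    """Decide whether to show a campaign, given its trigger type and settings."""
    trigger = campaign["trigger"]
    if trigger == "timer":
        return visitor_state.get("seconds_on_page", 0) >= campaign.get("delay_seconds", 10)
    if trigger == "exit_intent":
        # The front end flags that the cursor is heading for the browser chrome.
        return visitor_state.get("exit_intent_detected", False)
    if trigger == "scroll":
        return visitor_state.get("scroll_percent", 0) >= campaign.get("scroll_threshold", 50)
    if trigger == "tab_click":
        return visitor_state.get("tab_clicked", False)
    return False


cart_saver = {"trigger": "exit_intent"}
print(should_trigger(cart_saver, {"exit_intent_detected": True}))  # True
```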

Which Converts Best?

Ultimately, any combination of targeted messaging delivered through displays, videos, and chat will improve your conversion rate. We’ve looked at thousands of campaigns and found that each of the display types and triggers can be effective. Because investing in video can take significant resources (time and money), I recommend starting with displays and chat to deliver the right message at the right time. Once you have videos on hand, you can embed them on your product pages to level up your product content and add them to your displays to get them in front of shoppers as they navigate your site.

In terms of display types, banners are actually the highest-converting format, largely because they are less subtle than a simple bar but less frustrating to visitors than pop-ups that interrupt the browsing experience before a visitor has had a chance to consume any of your content. In addition, we find that triggering a campaign within thirty seconds of a visitor landing on your site (or a specific page) is most effective in driving conversion.

Setting that data aside for a second, recent trends show that one of the most impactful things you can do if you operate an online store is to combine a pop-up with an exit-intent trigger that serves as a “cart saver.” Simply put, if someone visits your store and attempts to leave by closing the browser tab or clicking the back button, you can show a message with a special offer that gets them to sign up and/or keep shopping while giving you permission to market to them in the future.

Exit-intent pop-ups

Walk. Jog. Run.

So, where do you get started? You don’t need to craft custom messages for every audience and every page on your site right out of the gate. We suggest thinking about one or two of your most common audiences and creating targeted offers and messages just for them that you can track, test, and adapt before rolling out a full on-site conversion program.


Data-Driven Optimization: How The Moneyball Method Can Deliver Increased Revenues

Whether your current ROI is something to brag about or something to worry about, the secret to making it shine lies in a 2011 award-winning movie starring Brad Pitt.

Do you remember the plot?

The general manager of the downtrodden Oakland A’s meets a baseball-loving Yale economics graduate who maintains certain theories about how to assemble a winning team.

His unorthodox methods run contrary to scouting recommendations and are generated by computer analysis models.

Despite the ridicule from scoffers and naysayers, the geek proves his point. His data-driven successes may even have been the secret sauce, fueling Boston’s World Series title in 2004 (true story, and the movie is Moneyball).


What’s my point?

Being data-driven seemed a geeks-only game, or a far reach for many, just a few years ago. Today, it’s time to get on the data-driven bandwagon… or get crushed by it.

Let’s briefly look at the situation and the cure.

Being Data-Driven: The Situation

Brand awareness, test-drive, churn, customer satisfaction, and take rate—these are essential nonfinancial metrics, says Mark Jeffery, adjunct professor at the Kellogg School of Management.

Throw in a few more—payback, internal rate of return, transaction conversion rate, and bounce rate—and you’re well on your way to mastering Jeffery’s 15 essential metrics.

Why should you care?

Because Mark echoes the assessment of his peers from other top schools of management:

Organizations that embrace marketing metrics and create a data-driven marketing culture have a competitive advantage that results in significantly better financial performance than that of their competitors. – Mark Jeffery.

You don’t believe in taking marketing and business growth advice from a guy who earned a Ph.D. in theoretical physics? Search “data-driven stats” for a look at the research. Data-centric methods are leading the pack.

Being Data-Driven: The Problem

If learning to leverage data can help the Red Sox win the World Series, why are most companies still struggling to get on board, more than a decade later?

There’s one little glitch in the movement. We’ve quickly moved from “available data” to “abundant data” to “BIG data.”

CMOs are swamped with information and struggling to make sense of it all. It’s a matter of getting lost in the immensity of the forest and forgetting about the trees.

We want the fruits of a data-driven culture. We just aren’t sure where or how to pick them.

Data-Driven Marketing: The Cure

I’ve discovered that the answer to big data overload is hidden right in the problem, right there at the source.

Data is produced by scientific means. That’s why academics like Mark are the best interpreters of that data. They’re schooled in the scientific method.

That means I must either hire a data scientist or learn to approach the analytical part of business with the demeanor of a math major.

It turns out that it’s not that difficult to get started, which brings us to the most important part: the scientific approach to growth.

Scientific Method of Growth

You’re probably already familiar with the components of the scientific method. Here’s one way of describing it:

  1. Identify and observe a problem, then state it as a question.
  2. Research the topic and then develop a hypothesis that would answer the question.
  3. Create and run an experiment to test the hypothesis.
  4. Go over the findings to establish conclusions.
  5. Continue asking and continue testing.

The scientific method applied to growth and optimization

By focusing on one part of the puzzle at a time, neither the task nor the data will seem overwhelming. And because you design the experiment, you control it.

Here’s an example of how to apply the scientific method to data-driven growth and optimization, as online enterprises would know it (a small code sketch of the significance check in step 4 follows the list).

  1. Question: Say you have a product on your e-commerce site that’s not selling as well as you want. The category manager advises lowering the price. Is that a good idea?
  2. Hypothesis: Research tells you that similar products are selling at an average price that is about the same as yours. You hypothesize that lowering your price will increase sales.
  3. Test: You devise an A/B test that will offer the item at a lower price to half of your e-commerce visitors and at the same price to the other half. You run the test for one week.
  4. Conclusions: Results show that lowering the price did not significantly increase sales.
  5. Action: You create another hypothesis to explain the disappointing sales and test this hypothesis for accuracy.
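To make step 4 concrete, here is a minimal Python sketch of the significance check, using a standard two-proportion z-test. The visitor and order counts are invented for illustration; your testing tool normally does this math for you.

```python
# Two-proportion z-test on made-up numbers for the price experiment above.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (relative lift of B over A, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided
    return (p_b - p_a) / p_a, p_value

# Original price converted 300 of 10,000 visitors; lower price converted 320.
lift, p = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=320, n_b=10_000)
print(f"lift: {lift:+.1%}, p-value: {p:.2f}")  # p well above 0.05 -> not significant
```

With numbers like these, a 6.7% observed lift still has a p-value around 0.4, which is exactly the kind of “disappointing” result described in the conclusions step.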


You may think that the above example is an oversimplification, but we’ve seen our clients at The Good make impressive gains from data-driven decisions based on experiments even less complicated than this.

The scientific methodology applies to companies both large and small; we’ve used the same approach with everyone from Xerox to Adobe.

Big data certainly is big, but it doesn’t have to be scary. Step-by-step analysis of fundamental questions, followed by a data-driven optimization plan, is enough to get you large gains.

The scientific approach to growth can be best implemented with a platform that is connected and comprehensive. Such a platform, which shows business performance on its goals, from one stage of the funnel to another, can help save a lot of time, effort, and money.

Conclusion

Businesses need to be data-driven in order to optimize for growth and achieve business success. The scientific method can help you utilize data in the best possible way to attain larger gains. Take A/B testing, for example. Smart A/B testing is about more than testing random ideas; it is about following a scientific, data-driven approach. Follow the Moneyball method of data-driven testing and optimization, and you’ll be on your way to the World Series of increased revenues in no time.

Do you agree that a data-driven approach is a must for making your ROI shine? Share your thoughts and feedback in the comments section below.


Running an A/A Test Before A/B Testing – Wise or Waste?

To A/A test or not is a question that invites conflicting opinions. Enterprises faced with the decision of implementing an A/B testing tool often do not have enough context on whether they should A/A test. Knowing the benefits and loopholes of A/A testing can help organizations make better decisions.

In this blog post, we explore why some organizations practice A/A testing and what they need to keep in mind while A/A testing. We also discuss other methods that can help enterprises decide whether or not to invest in a particular A/B testing tool.

Why Some Organizations Practice A/A Testing

A/A testing is typically done when organizations are implementing a new A/B testing tool. Running an A/A test at that time can help them with:

  • Checking the accuracy of an A/B Testing tool
  • Setting a baseline conversion rate for future A/B tests
  • Deciding a minimum sample size

Checking the Accuracy of an A/B Testing Tool

Organizations that are about to purchase an A/B testing tool or want to switch to new testing software may run an A/A test to ensure that the new software works fine and has been set up properly.

Tomasz Mazur, an eCommerce Conversion Rate Optimization expert, explains further: “A/A testing is a good way to run a sanity check before you run an A/B test. This should be done whenever you start using a new tool or go for new implementation. A/A testing in these cases will help check if there is any discrepancy in data, let’s say, between the number of visitors you see in your testing tool and the web analytics tool. Further, this helps ensure that your hypotheses are verified.”

In an A/A test, a web page is A/B tested against an identical variation of itself. Because there is absolutely no difference between the control and the variation, the expected result is inconclusive. However, in cases where an A/A test produces a winner between two identical variations, there is a problem. The reasons could be the following:

  • The tool has not been set up properly.
  • The test hasn’t been conducted correctly.
  • The testing tool is inefficient.

Here’s what Corte Swearingen, Director, A/B Testing and Optimization at American Eagle, has to say about A/A testing: “I typically will run an A/A test when a client seems uncertain about their testing platform, or needs/wants additional proof that the platform is operating correctly. There really is no better way to do this than to take the exact same page and test it against itself with no changes whatsoever. We’re essentially tricking the platform and seeing if it catches us! The bottom line is that while I don’t run A/A tests very often, I will occasionally use it as a proof of concept for a client, and to help give them confidence that the split testing platform they are using is working as it should.”

Determining the Baseline Conversion Rate

Before running any A/B test, you need to know the conversion rate that you will be benchmarking the performance results against. This benchmark is your baseline conversion rate.

An A/A test can help you set the baseline conversion rate for your website. Let’s explain this with an example. Suppose you are running an A/A test where the control gets 303 conversions out of 10,000 visitors and the identical variation B gets 307 conversions out of 10,000 visitors. The conversion rate for A is 3.03% and for B 3.07%, even though there is no difference between the two variations. The range 3.03–3.07% can therefore be set as the benchmark for future A/B tests: if you later run an A/B test and get an uplift within this range, the result may not be significant.
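As a quick sketch of that arithmetic (using the made-up numbers above), you can also pool both arms to get a single baseline figure with a margin of error around it:

```python
# Turning the A/A numbers above into a baseline conversion rate.
from math import sqrt

conv_a, n_a = 303, 10_000   # control
conv_b, n_b = 307, 10_000   # identical variation

rate_a, rate_b = conv_a / n_a, conv_b / n_b
baseline = (conv_a + conv_b) / (n_a + n_b)
margin = 1.96 * sqrt(baseline * (1 - baseline) / (n_a + n_b))  # 95% margin of error

print(f"arm A: {rate_a:.2%}, arm B: {rate_b:.2%}")   # 3.03%, 3.07%
print(f"baseline: {baseline:.2%} ± {margin:.2%}")    # about 3.05% ± 0.24%
```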

Deciding a Minimum Sample Size

A/A testing can also give you an idea of the minimum sample size you need from your website traffic. A small sample would not include sufficient traffic from multiple segments, so you might miss a few segments that can potentially impact your test results. With a larger sample size, you have a greater chance of taking into account all the segments that impact the test.
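For context, here is a minimal sketch of a standard sample-size estimate for a conversion-rate test: how many visitors each arm needs to reliably detect a given relative lift at 95% confidence and 80% power. The 3% baseline and 10% lift are illustrative assumptions, not figures from the case studies in this post.

```python
# Standard two-proportion sample-size estimate (per arm).
from math import sqrt, ceil

def visitors_per_arm(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Visitors needed in each arm for a two-sided test at 5% alpha, 80% power."""
    p1, p2 = baseline, baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

print(visitors_per_arm(0.03, 0.10))  # roughly 53,000 visitors per variation
```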

Corte says, “A/A testing can be used to make a client understand the importance of getting enough people through a test before assuming that a variation is outperforming the original.” He explains this with an A/A test that was run on Sales Training Program landing pages for one of his clients, Dale Carnegie. The test, run on two identical landing pages, initially showed a variation producing an 11.1% improvement over the control. The reason was that the sample size being tested was too small.

Initial A/A test results

After the A/A test had run for 19 days and collected data from over 22,000 visitors, the conversion rates of the two identical versions were the same.

A/A test results with more data

Michal Parizek, Senior eCommerce & Optimization Specialist at Avast, shares similar thoughts: “At Avast, we did a comprehensive A/A test last year. It gave us some valuable insights and was worth doing!” According to him, “It is always good to check the statistics before the final evaluation.”

At Avast, they ran an A/A test on two main segments—customers using the free version of the product and customers using the paid version—to get a comparison between the two.

The A/A test had been live for 12 days, and they managed to get quite a lot of data. Altogether, the test involved more than 10 million users and more than 6,500 transactions.

In the “free” segment, they saw a 3% difference in the conversion rate and 4% difference in Average Order Value (AOV). In the “paid” segment, they saw a 2% difference in conversion and 1% difference in AOV.

“However, all uplifts were NOT statistically significant,” says Michal. He adds, “Particularly in the ‘free’ segment, the 7% difference in sales per user (combining the differences in the conversion rate and AOV) might look trustworthy enough to a lot of people. And that would be misleading. Given these results from the A/A test, we have decided to implement internal A/B testing guidelines/lift thresholds. For example, if the difference in the conversion rate or AOV is lower than 5%, be very suspicious that the potential lift is not driven by the difference in the design but by chance.”

Michal sums up his opinion by saying, “A/A testing helps discover how A/B tests could be misleading if they are not taken seriously. And it is also a great way to spot any bugs in the tracking and setup.”

Problems with A/A Testing

In a nutshell, the two main problems inherent in A/A testing are:

  • Ever-present element of randomness in any experimental setup
  • Requirement of a large sample size

We will consider these one by one:

Element of Randomness

As pointed out earlier in the post, checking the accuracy of a testing tool is the main reason for running an A/A test. But what if you do find a difference in conversions between the control and an identical variation? Does that always point to a bug in the A/B testing tool?

The problem (for lack of a better word) with A/A testing is that there is always an element of randomness involved. In some cases, the experiment reaches statistical significance purely by chance, which means that the difference in the conversion rate between A and its identical version is probabilistic and does not denote absolute certainty.

Tomasz Mazur explains randomness with a real-world example: “Suppose you set up two absolutely identical stores in the same vicinity. It is likely, purely by chance or randomness, that there is a difference in the results reported by the two. And it doesn’t always mean that the A/B testing platform is inefficient.”
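A quick simulation makes the same point. The sketch below (illustrative traffic numbers, standard two-proportion z-test) runs a few thousand A/A comparisons on identical 3% conversion rates and counts how often chance alone produces a “significant” winner; the share comes out close to the 5% you would expect at a 95% confidence threshold.

```python
# Simulating A/A tests: identical true conversion rates, yet ~5% "winners".
import numpy as np
from math import sqrt, erf

rng = np.random.default_rng(0)
true_rate, n, runs, false_winners = 0.03, 10_000, 2_000, 0

for _ in range(runs):
    conv_a = rng.binomial(n, true_rate)
    conv_b = rng.binomial(n, true_rate)
    pooled = (conv_a + conv_b) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = (conv_b / n - conv_a / n) / se
    if 1 - erf(abs(z) / sqrt(2)) < 0.05:  # two-sided p-value below 0.05
        false_winners += 1

print(f"A/A tests declaring a winner: {false_winners / runs:.1%}")  # close to 5%
```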

Requirement of a Large Sample Size

Following the example/case study provided by Corte above, one problem with A/A testing is that it can be time-consuming. When testing identical versions, you need a very large sample size to establish that neither version is actually preferred over the other, and gathering that much traffic takes a lot of time.

As explained in one of ConversionXL’s posts, “The amount of sample and data you need to prove that there is no significant bias is huge by comparison with an A/B test. How many people would you need in a blind taste testing of Coca-Cola (against Coca-Cola) to conclude that people liked both equally? 500 people, 5000 people?” Experts at ConversionXL explain that the entire purpose of an optimization program is to reduce wastage of time, resources, and money. They believe that even though running an A/A test is not wrong, there are better ways to use your time when testing. In the post, they mention, “The volume of tests you start is important but even more so is how many you *finish* every month and from how many of those you *learn* something useful from. Running A/A tests can eat into the ‘real’ testing time.”

VWO’s Bayesian Approach and A/A Testing

VWO uses a Bayesian-based statistical engine for A/B testing. This allows VWO to deliver smart decisions: it tells you which variation will minimize your potential loss.

Chris Stucchio, Director of Data Science at VWO, shares his viewpoint on how A/A testing in VWO differs from that in typical frequentist A/B testing tools:

Most A/B testing tools are seeking truth. When running an A/A test in a frequentist tool, an erroneous “winner” should only be reported 5% of the time. In contrast, VWO’s SmartStats is attempting to make a smart business decision. We report a smart decision when we are confident that a particular variation is not worse than all the other variations, that is, we are saying “you’ll leave very little money on the table if you choose this variation now.” In an A/A test, this condition is always satisfied—you’ve got nothing to lose by stopping the test now.

The correct way to evaluate a Bayesian test is to check whether the credible interval for lift contains 0% (the true value).

He also says that the simplest possible reason for an A/A test to produce a winner is random chance: “With a frequentist tool, 5% of A/A tests will return a winner due to bad luck. Similarly, 5% of A/A tests in a Bayesian tool will report erroneous lifts. Another possible reason is a configuration error; perhaps the JavaScript or HTML is incorrectly configured.”
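As a rough illustration of that check (not VWO’s SmartStats implementation), the sketch below puts a Beta posterior on each variation’s conversion rate, draws samples, and asks whether the 95% credible interval for the lift contains 0%. The conversion counts are the made-up A/A numbers used earlier in this post.

```python
# Bayesian check for an A/A test: does the credible interval for lift contain 0%?
import numpy as np

rng = np.random.default_rng(1)
conv_a, n_a = 303, 10_000   # control
conv_b, n_b = 307, 10_000   # identical variation

# Beta(1, 1) prior updated with observed conversions and non-conversions.
post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

lift = (post_b - post_a) / post_a
low, high = np.percentile(lift, [2.5, 97.5])  # 95% credible interval

print(f"95% credible interval for lift: {low:+.1%} to {high:+.1%}")
print("contains 0%?", low <= 0 <= high)  # expected: True for an A/A test
```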

Other Methods and Alternatives to A/A Testing

A few experts believe that A/A testing is inefficient, as it consumes time that could otherwise be used to run actual A/B tests. Others say that it is essential to run a health check on your A/B testing tool. That said, A/A testing alone is not sufficient to establish whether one testing tool should be preferred over another. When making a critical business decision such as buying a new A/B testing tool, there are a number of other things to consider.

Corte points out that though there is no replacement or alternative to A/A testing, there are other things that must be taken into account when a new tool is being implemented:

  1.  Will the testing platform integrate with my web analytics program so that I can further slice and dice the test data for additional insight?
  2.  Will the tool let me isolate specific audience segments that are important to my business and just test those audience segments?
  3.  Will the tool allow me to immediately allocate 100% of my traffic to a winning variation? This feature can be an important one for more complicated radical redesign tests where standardizing on the variation may take some time. If your testing tool allows immediate 100% allocation to the winning variation, you can reap the benefits of the improvement while the page is built permanently in your CMS.
  4. Does the testing platform provide ways to collect both quantitative and qualitative information about site visitors that can be used for formulating additional test ideas? These would be tools like heatmaps, scrollmaps, visitor recordings, exit surveys, page-level surveys, and visual form funnels. If the testing platform does not have these integrated, does it allow integration with third-party tools for these services?
  5. Does the tool allow for personalization? If test results are segmented and it is discovered that one type of content works best for one segment and another type works better for a second segment, does the tool allow you to permanently serve these different experiences to different audience segments?

That said, there is still a set of experts who would opt for alternatives, such as triangulating data, over an A/A test. Using this procedure means you have two sets of performance data to cross-check against each other: use one analytics platform as the base to compare all other outcomes against, and check whether there is something wrong or something that needs fixing.

And then there is the argument: why just run an A/A test when you can get more meaningful insights from an A/A/B test? Doing this, you can still compare two identical versions while also testing some changes in the B variant.

Conclusion

When businesses face the decision of implementing new testing software, they need to run a thorough check on the tool. A/A testing is one method that some organizations use for checking the efficiency of the tool. Along with evaluating personalization and segmentation capabilities and the other pointers mentioned in this post, this technique can help you check whether the software is a good fit before implementation.

Did you find the post insightful? Drop us a line in the comments section with your feedback.



Take A Digital Health Check

Have you ever wondered how digitally healthy the organization you work at really is? If you’re a freelancer or work at an agency, have you thought about the health of your clients?
I’m not talking about whether you have the latest mobile application or a responsive website. I’m talking about the organization that sits behind these digital tools. If the organization is not digitally healthy, then even the best technology and design will fail.


Five and a Half Habits of Highly Effective Designers

We have theories about everything: why the sky is blue, why apples fall, why bees buzz (and do other unmentionable things), why my boss said a certain thing, why that girl in the restaurant looked at me, why didn’t that girl in the restaurant look at me…. We’re wired to theorize. Theories make us feel secure. We can wrap our heads around them and explain them with little diagrams on whiteboards, or with equations, or even graphs.


10 Ways To Put Your Content In Front Of More People

Which is more important, driving traffic to your website or encouraging as many people as possible to see your content? Believe it or not, they are not one and the same. Too often, we as website owners live and die by web analytics applications. We fret about bounce rates, unique visitors, and dwell time. However, when we focus so heavily on the performance of our website, we miss a fundamental point: we should aim to expose users to our content, not our website.


10 Harsh Truths About Corporate Websites

We all make mistakes running our websites. However, the nature of those mistakes varies depending on the size of your company. As your organization grows, the mistakes change. This post addresses common mistakes among large organizations.
Most of the clients I work with are large organizations: universities, large charities, public sector institutions and large companies. Over the last 7 years, I have noticed certain recurring misconceptions among these organizations. This post aims to dispel these illusions and encourage people to face the harsh reality.
