There’s an ecommerce audience that doesn’t get talked about much, but they’re buying online in massive and ever-increasing numbers. More than three-quarters of them buy online regularly. They flock to major brands like Amazon, but there are ways you can get in on the action too. Interested? I’m talking about young people. Under-18 teens are one of the most ecommerce-savvy populations out there – unsurprising when you consider that a person who’s 17 this year literally can’t remember a time without the internet. ‘They are the first generation to always have the internet at their disposal. They grew up in…
(This is a sponsored post). As web design focuses more and more on good user experience, designers need to create the most usable and attractive websites possible. Carefully applied minimalist principles can help designers make attractive and effective websites with fewer elements, simplifying and improving users’ interactions.
In this article, I will discuss some examples of minimalism in web design, cover things to consider when designing minimalist interfaces, and explain why sometimes “less is more”.
Over the past few years, one message has been gaining momentum within the marketing world: customer experience is king.
“Customer experience” (CX) refers to your customer’s perception of her relationship with your brand—both conscious and subconscious—based on every interaction she has with your brand during her customer life cycle.
As conversion optimization specialists, we test in pursuit of the perfect customer experience, from that first email subject line, to the post-purchase conversation with a customer service agent.
We test because it is the best way to listen, and create ideal experiences that will motivate consumers to choose us over our competitors in the saturated internet marketplace.
Which leads me to the main question of this post: Which companies are currently providing the best customer experiences, and how can you apply their strategies in your business context?
Each year, the Temkin Group releases a list of the best and worst US companies by customer experience rating. The list is based on survey responses from 10,000 U.S. consumers regarding their recent experiences with those companies.
And over the past few years, supermarkets have topped that list: old school, brick-and-mortar, this-model-has-been-around-forever establishments.
In the digital world, we often focus on convenience, usability, efficiency, and accessibility…but are there elements at the core of a great customer experience that we may be missing?
A quick look at the research
First things first: Let’s look at how the Temkin Group determines its experience ratings.
Temkin surveys 10,000 U.S. consumers, asking them to rate their recent (past 60 days) interactions with 331 companies across 20 industries. The survey questions cover Temkin’s three components of experience:
Success: Were you, the consumer, able to accomplish what you wanted to do?
Effort: How easy was it for you to interact with the company?
Emotion: How did you feel about those interactions?
Respondents answer questions on a scale of 1 (worst) to 7 (best), and researchers score each company accordingly. For more details on how the research was conducted, you can download the full report here.
In this post, I am going to focus on one supermarket that has topped the list for the past three years: Publix. Not only does Publix top the Temkin ratings, it also often tops the supermarket rankings compiled by the American Customer Satisfaction Index.
Long story short: Publix is winning the customer experience battle.
So, what does Publix do right?
If you don’t know it, Publix Super Markets, Inc. is an American supermarket chain headquartered in Florida. Founded in 1930, Publix is a private corporation that is wholly owned by present and past employees; it is considered the largest employee-owned company in the world.
In an industry that has seen recent struggles, Publix has seen steady growth over the past 10 years. So, what is this particular company doing so very right?
1. World-class customer service
Publix takes great care to provide the best possible customer service.
From employee presentation (no piercings, no unnatural hair color, no facial hair), to the emphasis on “engaging the customer”, to the bread baked fresh on-site every day, the company’s goal is to create the most pleasurable shopping experience for each and every customer.
When you ask “Where is the peanut butter?” at another supermarket, an employee might say, “Aisle 4.” But at Publix, you will be led to the peanut butter by a friendly helper.
The store’s slogan: “Make every customer’s day a little bit better because they met you.”
2. Employee ownership

Because Publix is employee-owned, employees are not referred to as employees, but as associates. As owners, associates share in the store’s success: if the company does well, so do they.
“Our culture is such that we believe if we take care of our associates, they in turn will take care of our customers. Associate ownership is our secret sauce,” said Publix spokeswoman, Maria Brous. “Our associates understand that their success is tied to the success of our company and therefore, we must excel at providing legendary service to our customers.”
3. Quality over quantity
While Publix is one of the largest food retailers in the country by revenue, they operate a relatively small number of stores: 1,110 stores across six states in the southeastern U.S. (For context, Wal-Mart operates more than 4,000 stores).
Each of Publix’s store locations must meet a set of standards. From the quality of the icing on a cake in the bakery, to the “Thanks for shopping at Publix. Come back and see us again soon!” customer farewell, customers should have a delightful experience at every Publix store.
In the Temkin Experience Ratings, emotion was the weakest component for the 331 companies evaluated. But Publix was among the few organizations to receive an “excellent” emotion rating. (In fact, it ranks in the top three in this category.)
As marketers, we should be changing the mantra from ‘always be closing’ to ‘always be helping’.
– Jonathan Lister, LinkedIn
In the digital marketing world, it is easy to get lost in acronyms (UX, UI, SEO, CRO, PPC) and forget about the actual customer experience: the experience that each individual shopper has with your brand.
Beyond usability, beyond motivation tactics, beyond button colors and push notifications, are you creating delight?
To create delight, you need to understand your customer’s reality. It may be time to think about how much you spend on website traffic, maintenance, analytics, and tools vs. how much you spend to understand your customers…and flip the ratio.
It’s important to understand the complexity of how your users interact with your website. We say, ‘I want to find problems with my website by looking at the site itself, or at my web traffic’. But that doesn’t lead to results. You have to understand your user’s reality.
– André Morys, Founder & CEO, WebArts
Publix is winning with its customer-centric approach because it is fully committed to it. While the tactics differ between a brick-and-mortar store and an e-commerce website, the goals overlap:
1. Keep your customer at the core of every touch point
From your Facebook ad, to your product landing page, to your product category page, checkout page, confirmation email, and product tracking emails, you have an opportunity to create the best experience for your customers at each step.
2. Make your customers feel something.
Humans don’t buy things. We buy feelings. What are you doing to make your shoppers feel? How are you highlighting the intangible benefits of your value proposition?
3. Keep your employees motivated.
Happy, satisfied employees deliver happy, satisfying customer experiences, whether they’re creating customer-facing content for your website or speaking to customers on the phone. For more on building a motivated, high-performance marketing team, read this post!
Testing to improve your customer experience
Of course, this wouldn’t be a WiderFunnel blog post if I didn’t recommend testing your customer experience improvements.
If you have an idea for how to inject emotion into the shopping experience, test it. If you believe a particular tweak will make the shopping experience easier and your shoppers more successful, test it.
Your customers will show you what an ideal customer experience looks like with their actions, if you give them the opportunity.
Here’s an example.
During our partnership with e-commerce platform provider Magento, we ran a test, aimed at improving the customer experience, on the product page for the company’s Enterprise Edition software.
The main call-to-action on this page was “Get a free demo”—a universal SaaS offering. The assumption was that potential customers would want to experience and explore the platform on their own (convenient, right?), before purchasing the platform.
Looking at click map data, however, our Strategists noticed that visitors to this page were engaging with informational tabs lower on the page. It seemed that potential customers needed more information to successfully accomplish their goals on the page.
Unfortunately, once visitors had finished browsing tabs, they had no option other than trying the demo, whether they were ready or not.
So, our Strategists tested adding a secondary “Talk to a specialist” call-to-action. Potential customers could connect directly with a Magento sales representative, and get answers to all of their questions.
This call-to-action hadn’t existed prior to this test, so the technically infinite lift Magento saw in qualified sales calls (any conversions, measured against a baseline of zero) was not surprising.
What was surprising was the phone call we received six months later: Turns out the “Talk to a specialist” leads were 8x more valuable than the “Get a free demo” leads.
After several subsequent test rounds, “Talk to a specialist” became the main call-to-action on that product page. Magento’s most valuable prospects had demonstrated that the ideal customer experience included the opportunity to get more information from a specialist.
While Publix’s success reminds us of the core components of a great customer experience, actually creating a great customer experience can be tricky.
You might be wondering:
What is most important to my customers: Success, Effort, or Emotion?
What improvements should I make first?
How will I know these improvements are actually working?
A test-and-learn strategy will help you answer these questions, and begin working toward a truly great customer experience.
Don’t get lost in the guesswork of tweaks, fixes, and best practices. Get obsessed with understanding your customer, instead.
How do you create the ideal customer experience?
Please share your thoughts in the comments section below!
I’ve been thinking a lot about speech for the last few years. In fact, it’s been a major focus in several of my talks of late, including my well-received Smashing Conference talk “Designing the Conversation.” As such, I’ve been keenly interested in the development of the Web Speech API.
Recently, I decided to rebuild my personal website, because it was six years old and looked — politely speaking — a little bit “outdated.” The goal was to include some information about myself, a blog area, a list of my recent side projects, and upcoming events.
As I do client work from time to time, there was one thing I didn’t want to deal with — databases! Previously, I built WordPress sites for everyone who wanted me to. The programming part was usually fun for me, but the releases, moving of databases to different environments, and actual publishing, were always annoying.
If you’ve been following the web development community these last few months, chances are you’ve read about progressive web apps (PWAs). It’s an umbrella term used to describe web experiences so advanced that they compete with ever-so-rich and immersive native apps: full offline support, installability, “Retina,” full-bleed imagery, sign-in support for personalization, fast, smooth in-app browsing, push notifications and a great UI.
But even though the new Service Worker API allows you to cache all of your website’s assets for an almost instant subsequent load, the first impression is what counts, just as when meeting someone new. The latest DoubleClick study shows that if the first load takes more than 3 seconds, more than 53% of all users will drop off.
Whether or not to A/A test is a question that invites conflicting opinions. Enterprises faced with the decision of implementing an A/B testing tool often do not have enough context on whether they should A/A test. Knowing the benefits and pitfalls of A/A testing can help organizations make better decisions.
In this blog post, we explore why some organizations practice A/A testing and the things they need to keep in mind while running A/A tests. We also discuss other methods that can help enterprises decide whether or not to invest in a particular A/B testing tool.
Why Some Organizations Practice A/A Testing
A/A testing is typically done when organizations are implementing a new A/B testing tool. Running an A/A test at that time can help them with:
Checking the accuracy of an A/B Testing tool
Setting a baseline conversion rate for future A/B tests
Deciding a minimum sample size
Checking the Accuracy of an A/B Testing Tool
Organizations that are about to purchase an A/B testing tool, or want to switch to new testing software, may run an A/A test to ensure that the new software works correctly and has been set up properly.
Tomasz Mazur, an eCommerce Conversion Rate Optimization expert, explains further: “A/A testing is a good way to run a sanity check before you run an A/B test. This should be done whenever you start using a new tool or go for a new implementation. A/A testing in these cases will help check if there is any discrepancy in data, let’s say, between the number of visitors you see in your testing tool and the web analytics tool. Further, this helps ensure that your hypotheses are verified.”
In an A/A test, a web page is A/B tested against an identical variation. When there is absolutely no difference between the control and the variation, it is expected that the result will be inconclusive. However, in cases where an A/A test produces a winner between two identical variations, there is a problem. The reasons could be the following:
The tool has not been set up properly.
The test hasn’t been conducted correctly.
The testing tool is inefficient.
Here’s what Corte Swearingen, Director of A/B Testing and Optimization at American Eagle, has to say about A/A testing: “I typically will run an A/A test when a client seems uncertain about their testing platform, or needs/wants additional proof that the platform is operating correctly. There really is no better way to do this than to take the exact same page and test it against itself with no changes whatsoever. We’re essentially tricking the platform and seeing if it catches us! The bottom line is that while I don’t run A/A tests very often, I will occasionally use it as a proof of concept for a client, and to help give them confidence that the split testing platform they are using is working as it should.”
Determining the Baseline Conversion Rate
Before running any A/B test, you need to know the conversion rate that you will be benchmarking the performance results against. This benchmark is your baseline conversion rate.
An A/A test can help you set the baseline conversion rate for your website. Let’s explain this with an example. Suppose you run an A/A test in which the control produces 303 conversions out of 10,000 visitors, and the identical variation B produces 307 conversions out of 10,000 visitors. The conversion rate for A is 3.03% and for B is 3.07%, even though there is no difference between the two variations. The range 3.03–3.07% can therefore be set as the benchmark for future A/B tests. If you run an A/B test later and get an uplift within this range, the result may not be significant.
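The arithmetic behind this benchmark is simple; here is a short Python sketch using the hypothetical numbers from the example above:

```python
# Baseline conversion rate from a hypothetical A/A test:
# 303 and 307 conversions out of 10,000 visitors per arm,
# matching the example above.
visitors_a, conversions_a = 10_000, 303
visitors_b, conversions_b = 10_000, 307

rate_a = conversions_a / visitors_a  # 3.03%
rate_b = conversions_b / visitors_b  # 3.07%

# The benchmark range for future A/B tests spans the two arms.
baseline_low, baseline_high = sorted([rate_a, rate_b])
print(f"Baseline range: {baseline_low:.2%} to {baseline_high:.2%}")
```

Any future uplift that falls inside this range is indistinguishable from the noise the A/A test already exhibited.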
Deciding a Minimum Sample Size
A/A testing can also help you get an idea of the minimum sample size you need from your website traffic. A small sample would not include sufficient traffic from multiple segments, so you might miss out on a few segments that can potentially impact your test results. With a larger sample size, you have a greater chance of taking into account all segments that impact the test.
Corte says, “A/A testing can be used to make a client understand the importance of getting enough people through a test before assuming that a variation is outperforming the original.” He explains this with an A/A testing case study done on Sales Training Program landing pages for one of his clients, Dale Carnegie. The A/A test, run on two identical landing pages, initially produced results indicating that one variation was delivering an 11.1% improvement over the control. The reason: the sample size was too small.
After the A/A test had run for 19 days with over 22,000 visitors, the conversion rates of the two identical versions were the same.
Michal Parizek, Senior eCommerce & Optimization Specialist at Avast, shares similar thoughts. He says, “At Avast, we did a comprehensive A/A test last year. And it gave us some valuable insights and was worth doing it!” According to him, “It is always good to check the statistics before final evaluation.”
At Avast, they ran an A/A test on two main segments for comparison: customers using the free version of the product and customers using the paid version.
The A/A test had been live for 12 days, and they managed to get quite a lot of data. Altogether, the test involved more than 10 million users and more than 6,500 transactions.
In the “free” segment, they saw a 3% difference in the conversion rate and 4% difference in Average Order Value (AOV). In the “paid” segment, they saw a 2% difference in conversion and 1% difference in AOV.
“However, all uplifts were NOT statistically significant,” says Michal. He adds, “Particularly in the ‘free’ segment, the 7% difference in sales per user (combining the differences in the conversion rate and AOV) might look trustworthy enough to a lot of people. And that would be misleading. Given these results from the A/A test, we have decided to implement internal A/B testing guidelines/lift thresholds. For example, if the difference in the conversion rate or AOV is lower than 5%, be very suspicious that the potential lift is not driven by the difference in the design but by chance.”
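A standard two-proportion z-test makes Michal’s point concrete. The numbers below are hypothetical, chosen only to resemble the scale of the Avast example (a roughly 3% relative difference across millions of users), not Avast’s actual data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# A ~3% relative difference in conversion rate between identical arms.
z, p = two_proportion_z(3200, 5_000_000, 3300, 5_000_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is well above 0.05: not significant
```

Even at this scale, the apparent lift is consistent with pure chance, which is exactly why Avast adopted internal lift thresholds.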
Michal sums up his opinion by saying, “A/A testing helps discover how A/B testing could be misleading if they are not taken seriously. And it is also a great way to spot any bugs in the tracking and setup.”
Problems with A/A Testing
In a nutshell, the two main problems inherent in A/A testing are:
An ever-present element of randomness in any experimental setup
The requirement of a large sample size
We will consider these one by one:
Element of Randomness
As pointed out earlier in the post, checking the accuracy of a testing tool is the main reason for running an A/A test. But what if you find a difference between the conversions of the control and an identical variation? Does that always indicate a bug in the A/B testing tool?
The problem (for lack of a better word) with A/A testing is that there is always an element of randomness involved. In some cases, the experiment reaches statistical significance purely by chance, which means that the change in the conversion rate between A and its identical version is probabilistic and does not denote absolute certainty.
Tomasz Mazur explains randomness with a real-world example: “Suppose you set up two absolutely identical stores in the same vicinity. It is likely, purely by chance or randomness, that there will be a difference in the results reported by the two. And it doesn’t always mean that the A/B testing platform is inefficient.”
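This randomness is easy to demonstrate with a simulation. The sketch below is illustrative only; it assumes a naive fixed-horizon z-test, not any particular vendor’s statistics engine. It runs many A/A tests on identical 3% conversion rates and counts how often a “winner” appears by chance:

```python
import random
from math import sqrt
from statistics import NormalDist

def aa_test_finds_winner(n=5_000, rate=0.03, alpha=0.05):
    """Simulate one A/A test: both arms share the same true rate.
    Return True if a z-test (wrongly) declares significance."""
    conv_a = sum(random.random() < rate for _ in range(n))
    conv_b = sum(random.random() < rate for _ in range(n))
    p_pool = (conv_a + conv_b) / (2 * n)
    se = sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = ((conv_b - conv_a) / n) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha

random.seed(42)
trials = 200
false_winners = sum(aa_test_finds_winner() for _ in range(trials))
# Roughly 5% of identical comparisons reach "significance" by chance.
print(f"{false_winners} of {trials} A/A tests declared a winner")
```

The false winners here are not bugs in any tool; they are the expected cost of working with probabilities.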
Requirement of a Large Sample Size
Following the case study provided by Corte above, one problem with A/A testing is that it can be time-consuming. When testing identical versions, you need a very large sample size to establish that neither version outperforms the other, and collecting that sample takes a long time.
As explained in one of ConversionXL’s posts, “The amount of sample and data you need to prove that there is no significant bias is huge by comparison with an A/B test. How many people would you need in a blind taste testing of Coca-Cola (against Coca-Cola) to conclude that people liked both equally? 500 people, 5000 people?” Experts at ConversionXL explain that the entire purpose of an optimization program is to reduce wasted time, resources, and money. They believe that even though running an A/A test is not wrong, there are better ways to use your testing time. In the post, they mention, “The volume of tests you start is important but even more so is how many you *finish* every month and from how many of those you *learn* something useful from. Running A/A tests can eat into the ‘real’ testing time.”
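The sample-size problem can be quantified with the standard power calculation for comparing two proportions. This is a generic textbook approximation, not ConversionXL’s method, and the 3% baseline rate is an assumed figure for illustration:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(base_rate, rel_lift, alpha=0.05, power=0.8):
    """Approximate visitors per arm needed to detect a relative lift
    in conversion rate (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(variance * (z_alpha + z_beta) ** 2 / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 3% baseline is feasible;
# "proving" a 1% difference (close to proving equality) is not.
print(sample_size_per_arm(0.03, 0.10))  # tens of thousands per arm
print(sample_size_per_arm(0.03, 0.01))  # millions per arm
```

The smaller the difference you need to rule out, the more the required sample explodes, which is exactly the Coca-Cola-vs-Coca-Cola problem.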
VWO’s Bayesian Approach and A/A Testing
VWO uses a Bayesian statistical engine for A/B testing. This allows VWO to deliver smart decisions: it tells you which variation will minimize your potential loss.
Chris Stucchio, Director of Data Science at VWO, shares his viewpoint on how A/A testing is different in VWO than typical frequentist A/B testing tools.
Most A/B testing tools are seeking truth. When running an A/A test in a frequentist tool, an erroneous “winner” should only be reported 5% of the time. In contrast, VWO’s SmartStats is attempting to make a smart business decision. We report a smart decision when we are confident that a particular variation is not worse than all the other variations, that is, we are saying “you’ll leave very little money on the table if you choose this variation now.” In an A/A test, this condition is always satisfied—you’ve got nothing to lose by stopping the test now.
The correct way to evaluate a Bayesian test is to check whether the credible interval for lift contains 0% (the true value).
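As a rough illustration of that check, the sketch below computes a Monte Carlo credible interval for relative lift using simple independent Beta posteriors. This is a generic Bayesian sketch, not VWO’s actual SmartStats model, and the counts reuse the earlier hypothetical 303-vs-307 example:

```python
import random

def lift_credible_interval(conv_a, n_a, conv_b, n_b,
                           draws=20_000, cred=0.95):
    """Monte Carlo credible interval for the relative lift of B over A,
    using independent Beta(1 + conversions, 1 + failures) posteriors."""
    lifts = []
    for _ in range(draws):
        ra = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rb = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        lifts.append(rb / ra - 1)
    lifts.sort()
    lo = lifts[int((1 - cred) / 2 * draws)]
    hi = lifts[int((1 + cred) / 2 * draws)]
    return lo, hi

random.seed(7)
lo, hi = lift_credible_interval(303, 10_000, 307, 10_000)
# For a healthy A/A test, the interval comfortably contains 0%.
print(f"95% credible interval for lift: [{lo:.1%}, {hi:.1%}]")
```

If the credible interval from an A/A test excludes 0%, that is a red flag for the setup rather than a real winner.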
He also says that the simplest possible reason for an A/A test to provide a winner is pure chance.
Other Methods and Alternatives to A/A Testing
A few experts believe that A/A testing is inefficient, as it consumes a lot of time that could otherwise be used for running actual A/B tests. However, others say that it is essential to run a health check on your A/B testing tool. That said, A/A testing alone is not sufficient to establish whether one testing tool should be preferred over another. When making a critical business decision, such as buying a new A/B testing tool, a number of other things should be considered.
Corte points out that though there is no replacement or alternative to A/A testing, there are other things that must be taken into account when a new tool is being implemented. These are listed as follows:
Will the testing platform integrate with my web analytics program so that I can further slice and dice the test data for additional insight?
Will the tool let me isolate specific audience segments that are important to my business and just test those audience segments?
Will the tool allow me to immediately allocate 100% of my traffic to a winning variation? This feature can be an important one for more complicated radical redesign tests where standardizing on the variation may take some time. If your testing tool allows immediate 100% allocation to the winning variation, you can reap the benefits of the improvement while the page is built permanently in your CMS.
Does the testing platform provide ways to collect both quantitative and qualitative information about site visitors that can be used for formulating additional test ideas? These would be tools like heatmaps, scrollmaps, visitor recordings, exit surveys, page-level surveys, and visual form funnels. If the testing platform does not have these integrated, does it allow integration with third-party tools for these services?
Does the tool allow for personalization? If test results are segmented and it is discovered that one type of content works best for one segment and another type of content works better for a second segment, does the tool allow you to permanently serve these different experiences to different audience segments?
That said, there is still a set of experts who would opt for alternatives, such as triangulating data, over an A/A test. This procedure gives you two sets of performance data to cross-check against each other: use one analytics platform as the base to compare all other outcomes against, to check whether there is something wrong or something that needs fixing.
And then there is the argument: why just A/A test when you can get more meaningful insights by running an A/A/B test? Doing this, you can still compare two identical versions while also testing some changes in the B variant.
When businesses face the decision of implementing a new testing software application, they need to run a thorough check on the tool. A/A testing is one method some organizations use to check the accuracy of the tool. Along with the personalization and segmentation capabilities and other pointers mentioned in this post, this technique can help determine whether the software application is a good fit.
Did you find the post insightful? Drop us a line in the comments section with your feedback.
With the tools getting more user-friendly and affordable, virtual reality (VR) development is easier to get involved in than ever before. Our team at Clearbridge Mobile recently jumped on the opportunity to develop immersive VR content for the Samsung Gear VR, using Samsung’s 360 camera.
The result is ClearVR, a mobile application demo that enables users to explore the features, pricing, interiors and exteriors of listed vehicles. Developing this demo project gave us a better understanding of VR development for our future projects, including scaling, stereoscopic display and motion-tracking practices. This article is an introductory guide to developing for VR, with the lessons we learned along the way.
For the last few years, whenever somebody wants to start building an HTTP API, they pretty much exclusively use REST as the go-to architectural style, over alternative approaches such as XML-RPC, SOAP and JSON-RPC. REST is made out by many to be ultimately superior to the other “RPC-based” approaches, which is a bit misleading because they are just different.
This article discusses these two approaches in the context of building HTTP APIs, because that is how they are most commonly used. REST and RPC can both be used via other transportation protocols, such as AMQP, but that is another topic entirely.
A few months ago, Jason Grigsby’s post about autocompletion in forms made the rounds. I loved the idea of allowing users to fill in their credit card details by taking a picture of their card. What I didn’t love was learning all of the possible values for autofill by heart. I’m getting lazy in my old age.
Lately, I’ve gotten spoiled from using an editor that does intelligent autocompletion for me, something that in the past only massive complex IDEs offered. Opening my editor of choice, I created an input element and added an autocomplete attribute, only to find that the code completion offered me the state of on or off. Disappointing.