Tag Archives: because

Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Security of a WordPress (or any) website is a multi-faceted problem. The most important step anyone can take to make sure that a site is secure is to keep in mind that no single process or method is sufficient to ensure nothing bad happens. But there are things you can do to help. One of them is to be on the watch, in the code you write and the code from others you deploy, for functions that can have negative consequences.

Taken from: 

Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Optimizing Sketch Files: Lessons Learned In Creating The Reduce App (Case Study)

Sketch brought entirely new standards for file sizes. You no longer see 10 GB Photoshop files all over the place. Nevertheless, huge Sketch files exist, and they slow down Sketch. As a result, your productivity slows down as well.
Let’s be honest: It’s not the design files that become bigger by magic. It’s designers who fill their files with unused, unoptimized and hidden elements that take up unnecessary space. We faced this problem in our startup, Flawless App.

See the original post – 

Optimizing Sketch Files: Lessons Learned In Creating The Reduce App (Case Study)

Why are You Neglecting the Highest-Traffic Lowest-Converting Page on Your Website?

I’m not talking about your home page. Sure, that gets the most traffic, but notice the qualifier in the post title: highest-traffic, “lowest-converting”.

But why would you care about a low converting page? Because chances are, it’s not converting because you forgot to add a call to action (CTA).

I’m sure you know about some pages like this on your website, but you’re using one of the following excuses to do nothing about it:

  1. I don’t have the bandwidth to deal with it.
  2. It’s not my responsibility.
  3. I don’t know what to do with it.
  4. I’ll get to it later.

The last excuse is the absolute worst. Because you never will “get to it”.

It’s 2018 – Stop Wasting Time Ignoring This Page

Don’t start this year with yet another failed attempt to go to the gym. Commit one day to optimizing just one page.

For Unbounce, that page is “What is a landing page?”. We’ve held the #1 spot in Google for this term since early 2010, and guess what? We haven’t updated it since early 2010.

Every time we look at Google Analytics, we see this:

10,000 unique visitors every month to that page. And 84.15% of them are NEW visitors. That’s an incredible amount of value.

What does the page look like?

It was embarrassing, to say the least. Spoiler alert: I updated it last night. But here’s a screenshot of the abomination that spent the previous 8 years letting visitors down.

A few observations

  • The content is ancient and has a lot of useless information, some of which is fundamentally wrong.
  • The CSS is all broken, making the layout and reading experience terrible.
  • It links to a bad blog post I wrote in 2010 that has a photo of Miley Cyrus wearing a carrot costume.

You read that right. Miley Cyrus in a carrot costume is the call to action on the highest traffic page on our website (aside from our homepage). #facepalm

How to Convert Top-of-Funnel (TOFU) Traffic

“What is a Landing Page?” is the most TOFU page on our website, which means we need to choose carefully when we ask people to do something.

I decided to go with three options in a choose-your-own-adventure format, as a learning exercise so we can study what these visitors are actually looking for.

Option 1: “I’m new to landing pages, and want to learn more.”
CTA >> [ Watch The Landing Page Sessions Video Series ]

Option 2: “I have a landing page, but I’m not sure how good it is.”
CTA >> [ Grade Your Page With The Landing Page Analyzer ]

Option 3: “I need to build a landing page.”
CTA >> [ Try The Unbounce Builder in Preview Mode ]

The New “What is a…” Page

(Screenshot: the full-length new page.)

High-Traffic, Yes. High-Converting? We’ll see.

I’ll be looking at Hotjar click and scroll heatmaps, Google Analytics (changes in basic behavior), and Kissmetrics (changes in signups), and I’ll report back with the results later in Product Awareness Month.

Find your highest-traffic lowest-converting page, now

Do it.

Cheers
Oli Gardner

p.s.

Originally posted here: 

Why are You Neglecting the Highest-Traffic Lowest-Converting Page on Your Website?

Your mobile website optimization guide (or, how to stop pissing off your mobile users)

Reading Time: 15 minutes

One lazy Sunday evening, I decided to order Thai delivery for dinner. It was a Green-Curry-and-Crispy-Wonton kind of night.

A quick Google search from my iPhone turned up an ad for a food delivery app. In that moment, I wanted to order food fast, without having to dial a phone number or speak to a human. So, I clicked.

From the ad, I was taken to the company’s mobile website. There was a call-to-action to “Get the App” below the fold, but I didn’t want to download a whole app for this one meal. I would just order from the mobile site.

Dun, dun, duuuun.

Over the next minute, I had one of the most frustrating ordering experiences of my life. Unlabeled hamburger menus, the inability to edit my order, and an overall lack of guidance through the ordering process led me to believe I would never be able to adjust my order from ‘Chicken Green Curry’ to ‘Prawn Green Curry’.

After 60 seconds of struggling, I gave up, utterly defeated.

I know this wasn’t a life-altering tragedy, but it sure was an awful mobile experience. And I bet you have had a similar experience in the last 24 hours.

Let’s think about this for a minute:

  1. This company paid good money for my click
  2. I was ready to order online: I was their customer to lose
  3. I struggled for about 30 seconds longer than most mobile users would have
  4. I gave up and got a mediocre burrito from the Mexican place across the street.

Not only was I frustrated, but I didn’t get my tasty Thai. The experience left a truly bitter taste in my mouth.

10 test ideas for optimizing your mobile website!

Get this checklist of 10 experiment ideas you should test on your mobile website.




Why is mobile website optimization important?

In 2017, every marketer ‘knows’ the importance of the mobile shopping experience. Americans spend more time on mobile devices than on any other device. But we are still failing to meet our users where they are on mobile.

Americans spend 54% of online time on mobile devices. Source: KPCB.

For most of us, it is becoming more and more important to provide a seamless mobile experience. But here’s where it gets a little tricky…

“Conversion optimization”, and the term “optimization” in general, often imply improving conversion rates. But a seamless mobile experience does not necessarily mean a high-converting mobile experience. It means one that meets your user’s needs and propels them along the buyer journey.

I am sure there are improvements you can test on your mobile experience that will lift your mobile conversion rates, but you shouldn’t hyper-focus on a single metric. Instead, keep in mind that mobile may just be a step within your user’s journey to purchase.

So, let’s get started! First, I’ll delve into your user’s mobile mindset, and look at how to optimize your mobile experience. For real.

You ready?

What’s different about mobile?

First things first: let’s acknowledge that your user is the same human being whether they are shopping on a mobile device, a desktop computer, a laptop, or in-store. Agreed?

So, what’s different about mobile? Well, back in 2013, Chris Goward said, “Mobile is a state of being, a context, a verb, not a device. When your users are on mobile, they are in a different context, a different environment, with different needs.”

Your user is the same person when she is shopping on her iPhone, but she is in a different context. She may be in a store comparing product reviews on her phone, or she may be on the go looking for a good cup of coffee, or she may be trying to order Thai delivery from her couch.

Your user is the same person on mobile, but in a different context, with different needs.

This is why many mobile optimization experts recommend having a dedicated mobile website rather than relying on responsive design alone.

Responsive design is not an optimization strategy. We should stop treating mobile visitors as ‘mini-desktop visitors’. People don’t use mobile devices instead of desktop devices; they use them in addition to desktop, in a whole different way.

– Talia Wolf, Founder & Chief Optimizer at GetUplift

Step one, then, is to understand who your target customer is, and what motivates them to act in any context. This should inform all of your marketing and the creation of your value proposition.

(If you don’t have a clear picture of your target customer, you should re-focus and tackle that question first.)

Step two is to understand how your user’s mobile context affects their existing motivation, and how to facilitate their needs on mobile to the best of your ability.

Understanding the mobile context

To understand the mobile context, let’s start with some stats and work backwards.

  • Americans spend more than half (54%) of their online time on mobile devices (Source: KPCB, 2016)
  • Mobile accounts for 60% of time spent shopping online, but only 16% of all retail dollars spent (Source: ComScore, 2015)

Insight: Americans are spending more than half of their online time on their mobile devices, but there is a huge gap between time spent ‘shopping’ online, and actually buying.

  • 29% of smartphone users will immediately switch to another site or app if the original site doesn’t satisfy their needs (Source: Google, 2015)
  • Of those, 70% switch because of lagging load times and 67% switch because it takes too many steps to purchase or get desired information (Source: Google, 2015)

Insight: Mobile users are hypersensitive to slow load times, and too many obstacles.

So, why the heck are our expectations for immediate gratification so high on mobile? I have a few theories.

We’re reward-hungry

Mobile devices provide constant access to the internet, which means a constant expectation for reward.

“The fact that we don’t know what we’ll find when we check our email, or visit our favorite social site, creates excitement and anticipation. This leads to a small burst of pleasure chemicals in our brains, which drives us to use our phones more and more.” – TIME, “You asked: Am I addicted to my phone?”

If non-stop access has us primed to expect non-stop reward, is it possible that having a negative mobile experience is even more detrimental to our motivation than a negative experience in another context?

When you tap into your Facebook app and see three new notifications, you get a burst of pleasure. And you do this over, and over, and over again.

So, when you tap into your Chrome browser and land on a mobile website that is difficult to navigate, it makes sense that you would be extra annoyed. (No burst of fun reward chemicals!)

A mobile device is a personal device

Another facet to mobile that we rarely discuss is the fact that mobile devices are personal devices. Because our smartphones and wearables are with us almost constantly, they often feel very intimate.

In fact, our smartphones are almost like another limb. According to research from dscout, the average cellphone user touches his or her phone 2,167 times per day. Our thumbprints are built into them, for goodness’ sake.

Just think about your instinctive reaction when someone grabs your phone and starts scrolling through your pictures…

It is possible, then, that our expectations are higher on mobile because the device itself feels like an extension of us. Any experience you have on mobile should speak to your personal situation. And if the experience is cumbersome or difficult, it may feel particularly dissonant because it’s happening on your mobile device.

User expectations on mobile are extremely high. And while you can argue that mobile apps are doing a great job of meeting those expectations, the mobile web is failing.

If yours is one of the millions of organizations without a mobile app, your mobile website has got to work harder. Because a negative experience with your brand on mobile may have a stronger effect than you can anticipate.

Even if you have a mobile app, you should recognize that not everyone is going to use it. You can’t completely disregard your mobile website. (As illustrated by my extremely negative experience trying to order food.)

You need to think about how to meet your users where they are in the buyer journey on your mobile website:

  1. What are your users actually doing on mobile?
  2. Are they just seeking information before purchasing from a computer?
  3. Are they seeking information on your mobile site while in your actual store?

The great thing about optimization is that you can test to pick off low-hanging fruit, while you are investigating more impactful questions like those above. For instance, while you are gathering data about how your users are using your mobile site, you can test usability improvements.

Usability on mobile websites

If you are looking to get a few quick wins to prove the importance of a mobile optimization program, usability is a good place to begin.

The mobile web presents unique usability challenges for marketers. And given your users’ ridiculously high expectations, your mobile experience must address these challenges.

This image represents just a few mobile usability best practices.

Below are four of the core mobile limitations, along with recommendations from the WiderFunnel Strategy team around how to address (and test) them.

Note: For this section, I relied heavily on research from the Nielsen Norman Group; see their article “Mobile User Experience: Limitations and Strengths” for more details.

1. The small screen struggle

No surprise here. Compared to desktop and laptop screens, even the biggest smartphone screen is smaller, which means it displays less content.

“The content displayed above the fold on a 30-inch monitor requires 5 screenfuls on a small 4-inch screen. Thus mobile users must (1) incur a higher interaction cost in order to access the same amount of information; (2) rely on their short-term memory to refer to information that is not visible on the screen.” – Nielsen Norman Group, “Mobile User Experience: Limitations and Strengths”

Strategist recommendations:

Consider persistent navigation and calls-to-action. Because of the smaller screen size, your users often need to do a lot of scrolling. If your navigation and main call-to-action aren’t persistent, you are asking your users to scroll down for information, and scroll back up for relevant links.

Note: Anything persistent takes up screen space as well. Make sure to test this idea before implementing it to make sure you aren’t stealing too much focus from other important elements on your page.

2. The touchy touchscreen

Two main issues with the touchscreen (an almost universal trait of today’s mobile devices) are typing and target size.

Typing on a soft keyboard, like the one on your user’s iPhone, requires them to constantly divide their attention between what they are typing, and the keypad area. Not to mention the small keypad and crowded keys…

Target size refers to a clickable target, which needs to be a lot larger on a touchscreen than it does when your user has a mouse.

So, you need to make space for larger targets (bigger call-to-action buttons) on a smaller screen.

Strategist recommendations:

Test increasing the size of your clickable elements. Google provides recommendations for target sizing:

You should ensure that the most important tap targets on your site—the ones users will be using the most often—are large enough to be easy to press, at least 48 CSS pixels tall/wide (assuming you have configured your viewport properly).

Less frequently-used links can be smaller, but should still have spacing between them and other links, so that a 10mm finger pad would not accidentally press both links at once.

You may also want to test improving the clarity around what is clickable and what isn’t. This can be achieved through styling, and is important for reducing ‘exploratory clicking’.

When a user has to click an element to 1) determine whether or not it is clickable, and 2) determine where it will lead, this eats away at their finite motivation.

Another simple tweak: Test your call-to-action placement. Does it match with the motion range of a user’s thumb?

3. Mobile shopping experience, interrupted

As the term mobile implies, mobile devices are portable. And because we can use ‘em in many settings, we are more likely to be interrupted.

“As a result, attention on mobile is often fragmented and sessions on mobile devices are short. In fact, the average session duration is 72 seconds […] versus the average desktop session of 150 seconds.” – Nielsen Norman Group

Strategist recommendations:

You should design your mobile experience for interruptions, prioritize essential information, and simplify tasks and interactions. This goes back to meeting your users where they are within the buyer journey.

According to research by SessionM (published in 2015), 90% of smartphone users surveyed used their phones while shopping in a physical store to 1) compare product prices, 2) look up product information, and 3) check product reviews online.

You should test adjusting your page length and messaging hierarchy to facilitate your user’s main goals. This may be browsing and information-seeking versus purchasing.

4. One window at a time

As I’m writing this post, I have 11 tabs open in Google Chrome, split between two screens. If I click on a link that takes me to a new website or page, it’s no big deal.

But on mobile, your user is most likely viewing one window at a time. They can’t split their screen to look at two windows simultaneously, so you shouldn’t ask them to. Mobile tasks should be easy to complete in one app or on one website.

The more your user has to jump from page to page, the more they have to rely on their memory. This increases cognitive load, and decreases the likelihood that they will complete an action.

Strategist recommendations:

Your navigation should be easy to find and it should contain links to your most relevant and important content. This way, if your user has to travel to a new page to access specific content, they can find their way back to other important pages quickly and easily.

In e-commerce, we often see people “pogo-sticking”—jumping from one page to another continuously—because they feel that they need to navigate to another page to confirm that the information they have provided is correct.

A great solution is to ensure that your users can view key information that they may want to confirm (prices / products / address) on any page. This way, they won’t have to jump around your website and remember these key pieces of information.

Implementing mobile website optimization

As I’m sure you’ve noticed by now, the phrase “you should test” is peppered throughout this post. Because understanding the mobile context and reviewing usability challenges and recommendations are only the first steps.

If you can, you should test any recommendation made in this post. Which brings us to mobile website optimization. At WiderFunnel, we approach mobile optimization just like we would desktop optimization: with process.

You should evaluate and prioritize mobile web optimization in the context of all of your marketing. If you can achieve greater Return on Investment by optimizing your desktop experience (or another element of your marketing), you should start there.

But assuming your mobile website ranks high within your priorities, you should start examining it from your user’s perspective. The WiderFunnel team uses the LIFT Model framework to identify problem areas.

The LIFT Model allows us to identify barriers to conversion, using the six factors of Value Proposition, Clarity, Relevance, Anxiety, Distraction, and Urgency. For more on the LIFT Model, check out this blog post.

A LIFT illustration

I asked the WiderFunnel Strategy team to do a LIFT analysis of the food delivery website that gave me so much grief that Sunday night. Here are some of the potential barriers they identified on the checkout page alone:

This wireframe is based on the food delivery app’s checkout page. Each of the numbered LIFT points corresponds with the list below.
  1. Relevance: There is valuable page real estate dedicated to changing the language, when a smartphone will likely detect your language on its own.
  2. Anxiety: There are only 3 options available in the navigation: Log In, Sign Up, and Help. None of these are helpful when a user is trying to navigate between key pages.
  3. Clarity: Placing the call-to-action at the top of the page creates disjointed eyeflow. The user must scan the page from top to bottom to ensure their order is correct.
  4. Clarity: The “Order Now” call-to-action and “Allergy & dietary information links” are very close together. Users may accidentally tap one, when they want to tap the other.
  5. Anxiety: There is no confirmation of the delivery address.
  6. Anxiety: There is no way to edit an order within the checkout. A user has to delete items, return to the menu and add new items.
  7. Clarity: Font size is very small, making the content difficult to read.
  8. Clarity: The “Cash” and “Card” icons have no context. Is a user supposed to select one, or are these just the payment options available?
  9. Distraction: The dropdown menus in the footer include many links that might distract a user from completing their order.

Needless to say, my frustrations were confirmed. The WiderFunnel team ran into the same obstacles I had run into, and identified dozens of barriers that I hadn’t.

But what does this mean for you?

When you are first analyzing your mobile experience, you should try to step into your user’s shoes and actually use your experience. Give your team a task and a goal, and walk through the experience using a framework like LIFT. This will allow you to identify usability issues within your user’s mobile context.

Every LIFT point is a potential test idea that you can feed into your optimization program.

Case study examples

This wouldn’t be a WiderFunnel blog post without some case study examples.

This is where we put ‘best mobile practices’ to the test. Because the smallest usability tweak may make perfect sense to you, but be off-putting to your users.

In the following three examples, we put our recommendations to the test.

Mobile navigation optimization

In mobile design in particular, we tend to assume our users understand ‘universal’ symbols.

The ‘Hamburger Menu’ is a fixture on mobile websites. But does that mean it’s a universally understood symbol?

But, that isn’t always the case. And it is certainly worth testing to understand how you can make the navigation experience (often a huge pain point on mobile) easier.

You can’t just expect your users to know things. You have to make it as clear as possible. The more you ask your user to guess, the more frustrated they will become.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

This example comes from an e-commerce client that sells artwork. In this experiment, we tested two variations against the original.

In the first, we increased font and icon size within the navigation and menu drop-down. This was a usability update meant to address the small, difficult to navigate menu. Remember the conversation about target size? We wanted to tackle the low-hanging fruit first.

With variation B, we dug a little deeper into the behavior of this client’s specific users.

Qualitative Hotjar recordings had shown that users were trying to navigate the mobile website using the homepage as a homebase. But this site actually has a powerful search functionality, and it is much easier to navigate using search. Of course, the search option was buried in the hamburger menu…

So, in the second variation (built on variation A), we removed Search from the menu and added it right into the main Nav.

Wireframes of the control navigation versus our variations.

Results

Both variations beat the control. Variation A led to a 2.7% increase in transactions, and a 2.4% increase in revenue. Variation B decreased clicks to the menu icon by 24%, increased transactions by 8.1%, and lifted revenue by 9.5%.

Never underestimate the power of helping your users find their way on mobile. But be wary! Search worked for this client’s users, but it is not always the answer, particularly if what you are selling is complex, and your users need more guidance through the funnel.

Mobile product page optimization

Let’s look at another e-commerce example. This client is a large sporting goods store, and this experiment focused on their product detail pages.

On the original page, our Strategists noted a worst mobile practice: The buttons were small and arranged closely together, making them difficult to click.

There were also several optimization blunders:

  1. Two calls-to-action were given equal prominence: “Find in store” and “+ Add to cart”
  2. “Add to wishlist” was also competing with “Add to cart”
  3. Social icons were placed near the call-to-action, which could be distracting

We had evidence from an experiment on desktop that removing these distractions, and focusing on a single call-to-action, would increase transactions. (In that experiment, we saw transactions increase by 6.56%).

So, we tested addressing these issues in two variations.

In the first, we de-prioritized competing calls-to-action, and increased the ‘Size’ and ‘Qty’ fields. In the second, we wanted to address usability issues, making the color options, size options, and quantity field bigger and easier to click.

The control page versus our variations.

Results

Both of our variations lost to the Control. I know what you’re thinking…what?!

Let’s dig deeper.

Looking at the numbers, users responded in the way we expected, with significant increases to the actions we wanted, and a significant reduction in the ones we did not.

Visits to “Reviews”, “Size”, “Quantity”, “Add to Cart” and the Cart page all increased. Visits to “Find in Store” decreased.

And yet, although the variations were more successful at moving users through to the next step, there was not a matching increase in motivation to actually complete a transaction.

It is hard to say for sure why this result happened without follow-up testing. However, it is possible that this client’s users have different intentions on mobile: Browsing and seeking product information vs. actually buying. Removing the “Find in Store” CTA may have caused anxiety.

This example brings us back to the mobile context. If an experiment wins within a desktop experience, this certainly doesn’t guarantee it will win on mobile.

I was shopping for shoes the other day, and was actually browsing the store’s mobile site while I was standing in the store. I was looking for product reviews. In that scenario, I was information-seeking on my phone, with every intention to buy…just not from my phone.

Are you paying attention to how your unique users use your mobile experience? It may be worthwhile to take the emphasis off of ‘increasing conversions on mobile’ in favor of researching user behavior on mobile, and providing your users with the mobile experience that best suits their needs.

Note: When you get a test result that contradicts usability best practices, it is important that you look carefully at your experiment design and secondary metrics. In this case, we have a potential theory, but would not recommend any large-scale changes without re-validating the result.

Mobile checkout optimization

This experiment was focused on one WiderFunnel client’s mobile checkout page. It was an insight-driving experiment, meaning the focus was on gathering insights about user behavior rather than on increasing conversion rates or revenue.

Evidence from this client’s business context suggested that users on mobile may prefer alternative payment methods, like Apple Pay and Google Wallet, to the standard credit card and PayPal options.

To make things even more interesting, this client wanted to determine the desire for alternative payment methods before implementing them.

The hypothesis: By adding alternative payment methods to the checkout page in an unobtrusive way, we can determine by the percent of clicks which new payment methods are most sought after by users.

We tested two variations against the Control.

In variation A, we pulled the credit card fields and call-to-action higher on the page, and added four alternative payment methods just below the CTA: PayPal, Apple Pay, Amazon Payments, and Google Wallet.

If a user clicked on one of the four alternative payment methods, they would see a message:

“Google Wallet coming soon!
We apologize for any inconvenience. Please choose an available deposit method.
Credit Card | PayPal”

In variation B, we flipped the order. We featured the alternative payment methods above the credit card fields. The focus was on increasing engagement with the payment options to gain better insights about user preference.

The control against variations testing alternative payment methods.

Note: For this experiment, iOS devices did not display the Google Wallet option, and Android devices did not display Apple Pay.

Results

On iOS devices, Apple Pay received 18% of clicks, and Amazon Pay received 12%. On Android devices, Google Wallet received 17% of clicks, and Amazon Pay also received 17%.

The client can use these insights to build the best experience for mobile users, offering Apple Pay and Google Wallet as alternative payment methods rather than PayPal or Amazon Pay.

Unexpectedly, both variations also increased transactions! Variation A led to an 11.3% increase in transactions, and variation B led to an 8.5% increase.

Because your user’s motivation is already limited on mobile, you should try to create an experience with the fewest possible steps.

You can ask someone to grab their wallet, decipher their credit card number, expiration date, and CVV code, and type it all into a small form field. Or, you can test leveraging the digital payment options that may already be integrated with their mobile devices.

The future of mobile website optimization

Imagine you are in your favorite outdoor goods store, and you are ready to buy a new tent.

You are standing in front of piles of tents: 2-person, 3-person, 4-person tents; 3-season and extreme-weather tents; affordable and pricey tents; light-weight and heavier tents…

You pull out your smartphone, and navigate to the store’s mobile website. You are looking for more in-depth product descriptions and user reviews to help you make your decision.

A few seconds later, a store employee asks if they can help you out. They seem to know exactly what you are searching for, and they help you choose the right tent for your needs within minutes.

Imagine that while you were browsing products on your phone, that store employee received a notification that you are 1) in the store, 2) looking at product descriptions for tent A and tent B, and 3) standing by the tents.

Mobile optimization in the modern era is not about increasing conversions on your mobile website. It is about providing a seamless user experience. In the scenario above, the in-store experience and the mobile experience are inter-connected. One informs the other. And a transaction happens because of each touch point.

Mobile experiences cannot live in a vacuum. Today’s buyer switches seamlessly between devices [and] your optimization efforts must reflect that.

– Yonny Zafrani, Mobile Product Manager, Dynamic Yield

We wear the internet on our wrists. We communicate via chat bots and messaging apps. We spend our leisure time on our phones: streaming, gaming, reading, sharing.

And while I’m not encouraging you to shift your optimization efforts entirely to mobile, you must consider the role mobile plays in your customers’ lives. The online experience is mobile. And your mobile experience should be an intentional step within the buyer journey.

What does your ideal mobile shopping experience look like? Where do you think mobile websites can improve? Do you agree or disagree with the ideas in this post? Share your thoughts in the comments section below!

The post Your mobile website optimization guide (or, how to stop pissing off your mobile users) appeared first on WiderFunnel Conversion Optimization.

See original: 

Your mobile website optimization guide (or, how to stop pissing off your mobile users)

Beyond A vs. B: How to get better results with better experiment design

Reading Time: 7 minutes

You’ve been pushing to do more testing at your organization.

You’ve heard that your competitors at ______ are A/B testing, and that their customer experience is (dare I say it?) better than yours.

You believe in marketing backed by science and data, and you have worked to get the executive team at your company on board with a tested strategy.

You’re excited to begin! To learn more about your customers and grow your business.

You run one A/B test. And then another. And then another. But you aren’t seeing that conversion rate lift you promised. You start to hear murmurs and doubts. You start to panic a little.

You could start testing as fast as you can, trying to get that first win. (But you shouldn’t).

Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,


If your results are disappointing, it may not only be what you are testing – it is definitely how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).

This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.

For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.

Discover the best experiment structure for you!

Compare the pros and cons of different Design of Experiments tactics with this simple download. The method you choose is up to you!



By entering your email, you’ll receive bi-weekly WiderFunnel Blog updates and other resources to help you become an optimization champion.


Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.

The basics: Defining A/B, MVT, and factorial

Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are multiple different ways to structure your experiments. A/B testing is just one of them.

Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.

A/B test

In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a multitude of changes (i.e. a ‘cluster’), or a single isolated change.

When you change multiple elements in a single variation, you might see lift, but what about insights?

In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.

Multivariate test (MVT)

With multivariate testing, you are testing each individual change, isolated one against another, by mixing and matching every possible combination available.

Imagine you want to test a homepage re-design with four changes in a single variation:

  • Change A: New hero banner
  • Change B: New call-to-action (CTA) copy
  • Change C: New CTA color
  • Change D: New value proposition statement

Hypothetically, let’s assume that each change has the following impact on your conversion rate:

  • Change A = +10%
  • Change B = +5%
  • Change C = -25%
  • Change D = +5%

If you were to run a classic A/B test―your current control page (A) versus a combination of all four changes at once (B)―you would get a hypothetical decrease of 5% overall (10% + 5% – 25% + 5%). You would assume that your re-design did not work and most likely discard the ideas.

With a multivariate test, however, each of the following would be a variation:

(Image: each of the 15 possible combinations of changes A, B, C, and D as its own variation.)

Multivariate testing is great because it shows you the positive or negative impact of every single change, and every single combination of those changes, resulting in the ideal combination (in this theoretical example: A + B + D).

However, this strategy is kind of impossible in the real world. Even if you have a ton of traffic, it would still take more time than most marketers have for a test with 15 variations to reach any kind of statistical significance.

The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
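To make the combinatorics concrete, here is a minimal Python sketch (an illustration only, not something from the original post) that enumerates the 15 variations a full multivariate test of these four hypothetical changes would require. It uses the post’s simplifying assumption that individual lifts simply add up.

# Enumerate every non-empty combination of the four hypothetical changes
# (the 15 variations of a full multivariate test) and estimate each
# combination's lift, assuming lifts are additive as in the example above.
from itertools import combinations

changes = {"A": 0.10, "B": 0.05, "C": -0.25, "D": 0.05}

variations = []
for size in range(1, len(changes) + 1):
    for combo in combinations(changes, size):
        combined_lift = sum(changes[c] for c in combo)
        variations.append((" + ".join(combo), combined_lift))

print(f"{len(variations)} variations in a full multivariate test")
for name, lift in sorted(variations, key=lambda v: v[1], reverse=True):
    print(f"{name:<16} {lift:+.0%}")

Run as written, the top of the list is A + B + D at +20%, while the full cluster A + B + C + D shows the -5% that the A/B example above would report.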

Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.

Factorial design: The middle ground

Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.

The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.

With basic factorial experiment design, you could set up the following variations in our hypothetical example:

VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%

In this basic example, variation A features a single change; VarB is built on VarA, and VarC is built on VarB.

NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.

Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.

When you estimate the value of change A, you are using your control as a baseline.

Variation A versus the control.

Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.

The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%

But, when estimating the value of change B, variation A must become your new baseline.

Variation B versus variation A.

The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%

As you can see, the “value” of change B is slightly different from the 5% difference shown above.

When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
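Here is a small Python sketch of that back-calculation (an illustration using the hypothetical rates above, not WiderFunnel’s tooling): with nested variations, each change’s lift is estimated against the variation it was built on rather than against the control.

# Hypothetical conversion rates from the example above: the control converts
# at 10%, variation A adds change A, and variation B adds change B on top of A.
conversion_rates = [
    ("Control", 0.10),
    ("VarA (change A)", 0.11),
    ("VarB (changes A + B)", 0.115),
]

# Each change is compared against the variation it was built on.
for (base_name, base_cr), (var_name, var_cr) in zip(conversion_rates, conversion_rates[1:]):
    isolated_lift = var_cr / base_cr - 1
    print(f"{var_name}: {isolated_lift:+.1%} vs {base_name}")

# Prints +10.0% for change A (vs the control) and +4.5% for change B
# (vs variation A), matching the two formulas above.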


We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.

– Michael St Laurent, Optimization Strategist, WiderFunnel

Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.

Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.

But, wait…

It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?

The Strategist’s Exploration

The answer lies in the Explore (or research gathering) phase of your testing process.

At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.

Experience is the other side to this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which changes should be isolated due to risk or potential insights to be gained.

At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.

Factorial design in action: A case study

Once upon a time, we were testing with Annie Selke, a retailer of luxury home-ware goods. This story follows two experiments we ran on Annie Selke’s product category page.

(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)

Experiment 4.7

In the first experiment, we tested three variations against the control. As the experiment number suggests, this was not the first test we ran with Annie Selke, in general. But it is the ‘first’ test in this story.

Experiment 4.7 control product category page.

Variation A featured an isolated change to the “Sort By” filters below the image, making it a drop down menu.

Replaced original ‘Sort By’ categories with a more traditional drop-down menu.

Evidence?

This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.

Variation B was also built on the control, and featured another isolated change to reduce the left navigation.

Reduced left-hand navigation.

Evidence?

Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to removing distractions. Plus, the persuasion principle known as the Paradox of Choice theorizes that more choice = more anxiety for visitors.

Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.

Collapsed left-hand filter (built on VarA).

Evidence?

This variation was informed by the same evidence as variation B.

Results

Variation A (built on the control) saw a 23.2% decrease in transactions.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a 1.9% decrease in transactions.

But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed filter) was 19.1%.

The next step was to validate our estimated lift of 19.1% in a follow up experiment.

Experiment 4.8

The follow-up test also featured three variations versus the original control. Because, you should never waste the opportunity to gather more insights!

Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.

Collapsed filter & original ‘Sort By’ functionality.

Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.

Updated “palette” to “color”, and added color imagery. (A variation featuring two clustered changes).

Evidence?

Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.

Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.

Made the collapsed filter persistent.

Evidence?

Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.

Results

Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!

Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.

Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for an 11.2% decrease in transactions.

This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.

The best testing framework for you

What are your testing goals?

If you are in a situation where potential revenue gains outweigh the potential insights to be gained or your test has little long-term value, you may want to go with a standard A/B cluster test.

If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.

If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.

A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want. Which will make every subsequent hypothesis smarter, and every test more impactful.

One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.

– Michael St Laurent

The post Beyond A vs. B: How to get better results with better experiment design appeared first on WiderFunnel Conversion Optimization.

More – 

Beyond A vs. B: How to get better results with better experiment design

“The more tests, the better!” and other A/B testing myths, debunked

Reading Time: 8 minutes

Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom-line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.

Avoid the pitfalls of ad-hoc A/B testing…

Get this guide, and learn how to build an optimization machine at your company. Discover how to use A/B testing as part of your bigger marketing optimization strategy!



By entering your email, you’ll receive bi-weekly WiderFunnel Blog updates and other resources to help you become an optimization champion.



But never fear! With the help of WiderFunnel Optimization Strategist, Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.


The key to sustainable A/B testing output is to find a balance between short-term (maximum testing speed) and long-term (testing for data collection and insights).

Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.


An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment, when building out the strategy for another experiment. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once without taking into account these past insights can result in your testing program being more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it within a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

You should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.

– Claire Vignon Keser

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning that there was now only an 88% likelihood that the difference in conversion rates was not due to chance – below our threshold for statistical significance.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.
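To make the interplay between confidence and sample size concrete, here is a rough Python sketch (an illustration only, not the test duration calculator Claire mentions; all figures in the example are made up). It computes the confidence level of an observed difference with a two-proportion z-test and estimates the visitors needed per variation using a common approximation.

import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def confidence(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided confidence that the difference in conversion rate between
    A and B is not due to chance (two-proportion z-test)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = abs(rate_b - rate_a) / se
    return 1 - 2 * (1 - normal_cdf(z))

def visitors_per_variation(base_rate, relative_lift):
    """Common approximation for the sample size needed per variation to detect
    relative_lift over base_rate at 95% significance and 80% power."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha, z_beta = 1.96, 0.84  # two-sided alpha = 0.05, power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Made-up example: 100 vs 120 conversions out of 1,000 visitors each reaches
# only about 85% confidence, well short of the 95% threshold.
print(f"Confidence: {confidence(100, 1000, 120, 1000):.1%}")

# Detecting a 10% relative lift on a 10% baseline needs on the order of
# 15,000 visitors per variation, which is why tests should run for full
# business cycles rather than stopping at the first significant reading.
print(f"Visitors per variation: {visitors_per_variation(0.10, 0.10):,}")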

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test, completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator.
Check your graphs! Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys?
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation based on the observed trends, abandon the variation based on observed trends, and/or create a follow-up test!

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.


This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by 1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!).

You can see how each variation fared against the Control in this funnel analysis.

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the visitors who were dropping off were the right ones to lose.

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

A/B testing is just a part of the marketing optimization machine.

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

The WiderFunnel strategy team gathers to share and discuss A/B testing insights.

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering informs hypotheses that are prioritized and created into A/B tests (that are structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and these insights are fed back into the data gathering. It’s a cycle of continuous improvement.

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!

The post “The more tests, the better!” and other A/B testing myths, debunked appeared first on WiderFunnel Conversion Optimization.

Excerpt from:

“The more tests, the better!” and other A/B testing myths, debunked

How to Track Conversions & ROI With These Content Marketing Metrics

If you ever want to make a marketer nervous, ask them how effective their content marketing is. Even I would sweat a little if you asked me that question. It’s not because I don’t know the answer, or where to look to find the answer, it’s just because the process of answering the question can be a little complex. I could throw any manner of numbers out at you, but some of them are just vanity metrics and most are meaningless without also talking about the benchmarks, past performance and the goals that I’m reaching for. Because metrics can be…

The post How to Track Conversions & ROI With These Content Marketing Metrics appeared first on The Daily Egg.

Original post:

How to Track Conversions & ROI With These Content Marketing Metrics

Content-First Prototyping

Content is the core commodity of the digital economy. It is the gold we fashion into luxury experience, the diamond we encase in loyalty programs and upsells. Yet, as designers, we often plug it in after the fact. We prototype our interaction and visual design to exhaustion, but accept that the “real words” can just be dropped in later. There is a better way.
More and more, the digital goods we create operate within a dynamic system of content, functionality, code and intent.

Continue reading:  

Content-First Prototyping

A Better iOS Architecture: A Deep Look At The Model-View-Controller Pattern

If you’ve ever written an iOS app beyond a trivial “Hello world” app with just one screen and a few views, then you might have noticed that a lot of code seems to “naturally” go into view controllers. Because view controllers in iOS carry many responsibilities and are closely related to the app screens, a lot of code ends up being written in them because it’s just easier and faster that way.

Continued here:  

A Better iOS Architecture: A Deep Look At The Model-View-Controller Pattern


3 Things That The “Used Car Salesman” Can Teach Us About Online Marketing

“Used car salesman” is synonymous with sleazy, pushy, crooked sales. It’s too bad, really. Sure, there may be some dishonest used car salespeople, but certainly the entire industry can’t be that bad. Unfortunately, the stereotype persists. Why? Because we’ve all experienced the greasy, false friendship of that kind of salesperson. It’s off-putting, to say the least. Is it possible that online marketing can come across in the same way as the annoying used car salesman? The answer is yes. Because of its digital facade, most of us aren’t aware that some of the marketing techniques we’re using can come…

The post 3 Things That The “Used Car Salesman” Can Teach Us About Online Marketing appeared first on The Daily Egg.

See original:

3 Things That The “Used Car Salesman” Can Teach Us About Online Marketing