
Why are You Neglecting the Highest-Traffic Lowest-Converting Page on Your Website?

I’m not talking about your home page. Sure, that gets the most traffic, but notice the qualifier in the post title: highest-traffic, “lowest-converting”.

But why would you care about a low-converting page? Because chances are it’s not converting simply because you forgot to add a call to action (CTA).

I’m sure you know about some pages like this on your website, but you’re using one of the following excuses to do nothing about it:

  1. I don’t have the bandwidth to deal with it.
  2. It’s not my responsibility.
  3. I don’t know what to do with it.
  4. I’ll get to it later.

The last excuse is the absolute worst. Because you never will “get to it”.

It’s 2018 – Stop Wasting Time Ignoring This Page

Don’t start this year with yet another failed attempt to go to the gym. Commit one day to optimizing just one page.

For Unbounce, that page is “What is a landing page?”. We’ve held the #1 spot in Google for this term since early 2010, and guess what? We haven’t updated it since early 2010.

Every time we look at Google Analytics, we see this:

10,000 unique visitors every month to that page. And 84.15% of them are NEW visitors. That’s an incredible amount of value.

What does the page look like?

It was embarrassing, to say the least. Spoiler alert: I updated it last night. But here’s a screenshot of the abomination that spent the previous 8 years letting visitors down.

A few observations

  • The content is ancient and full of useless information, some of which is fundamentally wrong.
  • The CSS is all broken, making the layout and reading experience terrible.
  • It links to a bad blog post I wrote in 2010 that has a photo of Miley Cyrus wearing a carrot costume.

You read that right. Miley Cyrus in a carrot costume is the call to action on the highest traffic page on our website (aside from our homepage). #facepalm

How to Convert Top-of-Funnel (TOFU) Traffic

“What is a Landing Page?” is the most TOFU page on our website, which means we need to choose carefully when we ask people to do something.

I decided to go with three options in a choose-your-own-adventure format, as a learning exercise so we can study what these visitors are actually looking for.

Option 1: “I’m new to landing pages, and want to learn more.”
CTA >> [ Watch The Landing Page Sessions Video Series ]

Option 2: “I have a landing page, but I’m not sure how good it is.”
CTA >> [ Grade Your Page With The Landing Page Analyzer ]

Option 3: “I need to build a landing page.”
CTA >> [ Try The Unbounce Builder in Preview Mode ]

The New “What is a…” Page


High-Traffic, Yes. High-Converting? We’ll see.

I’ll be looking at Hotjar click and scroll heatmaps, Google Analytics (changes in basic behavior), and KISS Metrics (changes in signups), and I’ll report back with the results later in Product Awareness Month.

Find your highest-traffic lowest-converting page, now

Do it.

Cheers
Oli Gardner


Your mobile website optimization guide (or, how to stop pissing off your mobile users)

Reading Time: 15 minutes

One lazy Sunday evening, I decided to order Thai delivery for dinner. It was a Green-Curry-and-Crispy-Wonton kind of night.

A quick google search from my iPhone turned up an ad for a food delivery app. In that moment, I wanted to order food fast, without having to dial a phone number or speak to a human. So, I clicked.

From the ad, I was taken to the company’s mobile website. There was a call-to-action to “Get the App” below the fold, but I didn’t want to download a whole app for this one meal. I would just order from the mobile site.

Dun, dun, duuuun.

Over the next minute, I had one of the most frustrating ordering experiences of my life. Unlabeled hamburger menus, the inability to edit my order, and an overall lack of guidance through the ordering process led me to believe I would never be able to adjust my order from ‘Chicken Green Curry’ to ‘Prawn Green Curry’.

After 60 seconds of struggling, I gave up, utterly defeated.

I know this wasn’t a life-altering tragedy, but it sure was an awful mobile experience. And I bet you have had a similar experience in the last 24 hours.

Let’s think about this for a minute:

  1. This company paid good money for my click
  2. I was ready to order online: I was their customer to lose
  3. I struggled for about 30 seconds longer than most mobile users would have
  4. I gave up and got a mediocre burrito from the Mexican place across the street.

Not only was I frustrated, but I didn’t get my tasty Thai. The experience left a truly bitter taste in my mouth.


Why is mobile website optimization important?

In 2017, every marketer ‘knows’ the importance of the mobile shopping experience. Americans spend more time on mobile devices than on any other device. But we are still failing to meet our users where they are on mobile.

Americans spend 54% of online time on mobile devices. Source: KPCB.

For most of us, it is becoming more and more important to provide a seamless mobile experience. But here’s where it gets a little tricky…

“Conversion optimization”, and the term “optimization” in general, often imply improving conversion rates. But a seamless mobile experience does not necessarily mean a high-converting mobile experience. It means one that meets your user’s needs and propels them along the buyer journey.

I am sure there are improvements you can test on your mobile experience that will lift your mobile conversion rates, but you shouldn’t hyper-focus on a single metric. Instead, keep in mind that mobile may just be a step within your user’s journey to purchase.

So, let’s get started! First, I’ll delve into your user’s mobile mindset, and look at how to optimize your mobile experience. For real.

You ready?

What’s different about mobile?

First things first: let’s acknowledge that your user is the same human being whether they are shopping on a mobile device, a desktop computer, a laptop, or in-store. Agreed?

So, what’s different about mobile? Well, back in 2013, Chris Goward said, “Mobile is a state of being, a context, a verb, not a device. When your users are on mobile, they are in a different context, a different environment, with different needs.”

Your user is the same person when she is shopping on her iPhone, but she is in a different context. She may be in a store comparing product reviews on her phone, or she may be on the go looking for a good cup of coffee, or she may be trying to order Thai delivery from her couch.

Your user is the same person on mobile, but in a different context, with different needs.

This is why many mobile optimization experts recommend having a mobile website versus using responsive design.

Responsive design is not an optimization strategy. We should stop treating mobile visitors as ‘mini-desktop visitors’. People don’t use mobile devices instead of desktop devices, they use it in addition to desktop in a whole different way.

– Talia Wolf, Founder & Chief Optimizer at GetUplift

Step one, then, is to understand who your target customer is, and what motivates them to act in any context. This should inform all of your marketing and the creation of your value proposition.

(If you don’t have a clear picture of your target customer, you should re-focus and tackle that question first.)

Step two is to understand how your user’s mobile context affects their existing motivation, and how to facilitate their needs on mobile to the best of your ability.

Understanding the mobile context

To understand the mobile context, let’s start with some stats and work backwards.

  • Americans spend more than half (54%) of their online time on mobile devices (Source: KPCB, 2016)
  • Mobile accounts for 60% of time spent shopping online, but only 16% of all retail dollars spent (Source: ComScore, 2015)

Insight: Americans are spending more than half of their online time on their mobile devices, but there is a huge gap between time spent ‘shopping’ online, and actually buying.

  • 29% of smartphone users will immediately switch to another site or app if the original site doesn’t satisfy their needs (Source: Google, 2015)
  • Of those, 70% switch because of lagging load times and 67% switch because it takes too many steps to purchase or get desired information (Source: Google, 2015)

Insight: Mobile users are hypersensitive to slow load times, and too many obstacles.

So, why the heck are our expectations for immediate gratification so high on mobile? I have a few theories.

We’re reward-hungry

Mobile devices provide constant access to the internet, which means a constant expectation for reward.

“The fact that we don’t know what we’ll find when we check our email, or visit our favorite social site, creates excitement and anticipation. This leads to a small burst of pleasure chemicals in our brains, which drives us to use our phones more and more.” – TIME, “You asked: Am I addicted to my phone?”

If non-stop access has us primed to expect non-stop reward, is it possible that having a negative mobile experience is even more detrimental to our motivation than a negative experience in another context?

When you tap into your Facebook app and see three new notifications, you get a burst of pleasure. And you do this over, and over, and over again.

So, when you tap into your Chrome browser and land on a mobile website that is difficult to navigate, it makes sense that you would be extra annoyed. (No burst of fun reward chemicals!)

A mobile device is a personal device

Another facet to mobile that we rarely discuss is the fact that mobile devices are personal devices. Because our smartphones and wearables are with us almost constantly, they often feel very intimate.

In fact, our smartphones are almost like another limb. According to research from dscout, the average cellphone user touches his or her phone 2,617 times per day. Our thumbprints are built into them, for goodness’ sake.

Just think about your instinctive reaction when someone grabs your phone and starts scrolling through your pictures…

It is possible, then, that our expectations are higher on mobile because the device itself feels like an extension of us. Any experience you have on mobile should speak to your personal situation. And if the experience is cumbersome or difficult, it may feel particularly dissonant because it’s happening on your mobile device.

User expectations on mobile are extremely high. And while you can argue that mobile apps are doing a great job of meeting those expectations, the mobile web is failing.

If yours is one of the millions of organizations without a mobile app, your mobile website has got to work harder. Because a negative experience with your brand on mobile may have a stronger effect than you can anticipate.

Even if you have a mobile app, you should recognize that not everyone is going to use it. You can’t completely disregard your mobile website. (As illustrated by my extremely negative experience trying to order food.)

You need to think about how to meet your users where they are in the buyer journey on your mobile website:

  1. What are your users actually doing on mobile?
  2. Are they just seeking information before purchasing from a computer?
  3. Are they seeking information on your mobile site while in your actual store?

The great thing about optimization is that you can test to pick off low-hanging fruit, while you are investigating more impactful questions like those above. For instance, while you are gathering data about how your users are using your mobile site, you can test usability improvements.

Usability on mobile websites

If you are looking to get a few quick wins to prove the importance of a mobile optimization program, usability is a good place to begin.

The mobile web presents unique usability challenges for marketers. And given your users’ ridiculously high expectations, your mobile experience must address these challenges.

This image represents just a few mobile usability best practices.

Below are four of the core mobile limitations, along with recommendations from the WiderFunnel Strategy team around how to address (and test) them.

Note: For this section, I relied heavily on research from the Nielsen Norman Group. For more details, click here.

1. The small screen struggle

No surprise here. Compared to desktop and laptop screens, even the biggest smartphone screen is smaller, which means it displays less content.

“The content displayed above the fold on a 30-inch monitor requires 5 screenfuls on a small 4-inch screen. Thus mobile users must (1) incur a higher interaction cost in order to access the same amount of information; (2) rely on their short-term memory to refer to information that is not visible on the screen.” – Nielsen Norman Group, “Mobile User Experience: Limitations and Strengths”

Strategist recommendations:

Consider persistent navigation and calls-to-action. Because of the smaller screen size, your users often need to do a lot of scrolling. If your navigation and main call-to-action aren’t persistent, you are asking your users to scroll down for information, and scroll back up for relevant links.

Note: Anything persistent takes up screen space as well. Test this idea before implementing it to make sure you aren’t stealing too much focus from other important elements on your page.

2. The touchy touchscreen

Two main issues with the touchscreen (an almost universal trait of today’s mobile devices) are typing and target size.

Typing on a soft keyboard, like the one on your user’s iPhone, requires them to constantly divide their attention between what they are typing, and the keypad area. Not to mention the small keypad and crowded keys…

Target size refers to the size of a clickable element, which needs to be a lot larger on a touchscreen than it does when your user has a mouse.

So, you need to make space for larger targets (bigger call-to-action buttons) on a smaller screen.

Strategist recommendations:

Test increasing the size of your clickable elements. Google provides recommendations for target sizing:

You should ensure that the most important tap targets on your site—the ones users will be using the most often—are large enough to be easy to press, at least 48 CSS pixels tall/wide (assuming you have configured your viewport properly).

Less frequently-used links can be smaller, but should still have spacing between them and other links, so that a 10mm finger pad would not accidentally press both links at once.

You may also want to test improving the clarity around what is clickable and what isn’t. This can be achieved through styling, and is important for reducing ‘exploratory clicking’.

When a user has to click an element to 1) determine whether or not it is clickable, and 2) determine where it will lead, this eats away at their finite motivation.

Another simple tweak: Test your call-to-action placement. Does it match with the motion range of a user’s thumb?

3. Mobile shopping experience, interrupted

As the term mobile implies, mobile devices are portable. And because we can use ‘em in many settings, we are more likely to be interrupted.

“As a result, attention on mobile is often fragmented and sessions on mobile devices are short. In fact, the average session duration is 72 seconds […] versus the average desktop session of 150 seconds.” – Nielsen Norman Group

Strategist recommendations:

You should design your mobile experience for interruptions, prioritize essential information, and simplify tasks and interactions. This goes back to meeting your users where they are within the buyer journey.

According to research by SessionM (published in 2015), 90% of smartphone users surveyed used their phones while shopping in a physical store to 1) compare product prices, 2) look up product information, and 3) check product reviews online.

You should test adjusting your page length and messaging hierarchy to facilitate your user’s main goals. This may be browsing and information-seeking versus purchasing.

4. One window at a time

As I’m writing this post, I have 11 tabs open in Google Chrome, split between two screens. If I click on a link that takes me to a new website or page, it’s no big deal.

But on mobile, your user is most likely viewing one window at a time. They can’t split their screen to look at two windows simultaneously, so you shouldn’t ask them to. Mobile tasks should be easy to complete in one app or on one website.

The more your user has to jump from page to page, the more they have to rely on their memory. This increases cognitive load, and decreases the likelihood that they will complete an action.

Strategist recommendations:

Your navigation should be easy to find and it should contain links to your most relevant and important content. This way, if your user has to travel to a new page to access specific content, they can find their way back to other important pages quickly and easily.

In e-commerce, we often see people “pogo-sticking”—jumping from one page to another continuously—because they feel that they need to navigate to another page to confirm that the information they have provided is correct.

A great solution is to ensure that your users can view key information that they may want to confirm (prices / products / address) on any page. This way, they won’t have to jump around your website and remember these key pieces of information.

Implementing mobile website optimization

As I’m sure you’ve noticed by now, the phrase “you should test” is peppered throughout this post. That’s because understanding the mobile context and reviewing usability challenges and recommendations are only the first steps.

If you can, you should test any recommendation made in this post. Which brings us to mobile website optimization. At WiderFunnel, we approach mobile optimization just like we would desktop optimization: with process.

You should evaluate and prioritize mobile web optimization in the context of all of your marketing. If you can achieve greater Return on Investment by optimizing your desktop experience (or another element of your marketing), you should start there.

But assuming your mobile website ranks high within your priorities, you should start examining it from your user’s perspective. The WiderFunnel team uses the LIFT Model framework to identify problem areas.

The LIFT Model allows us to identify barriers to conversion, using the six factors of Value Proposition, Clarity, Relevance, Anxiety, Distraction, and Urgency. For more on the LIFT Model, check out this blog post.

A LIFT illustration

I asked the WiderFunnel Strategy team to do a LIFT analysis of the food delivery website that gave me so much grief that Sunday night. Here are some of the potential barriers they identified on the checkout page alone:

This wireframe is based on the food delivery app’s checkout page. Each of the numbered LIFT points corresponds with the list below.
  1. Relevance: There is valuable page real estate dedicated to changing the language, when a smartphone will likely detect your language on its own.
  2. Anxiety: There are only 3 options available in the navigation: Log In, Sign Up, and Help. None of these are helpful when a user is trying to navigate between key pages.
  3. Clarity: Placing the call-to-action at the top of the page creates disjointed eyeflow. The user must scan the page from top to bottom to ensure their order is correct.
  4. Clarity: The “Order Now” call-to-action and “Allergy & dietary information” links are very close together. Users may accidentally tap one when they want to tap the other.
  5. Anxiety: There is no confirmation of the delivery address.
  6. Anxiety: There is no way to edit an order within the checkout. A user has to delete items, return to the menu and add new items.
  7. Clarity: Font size is very small, making the content difficult to read.
  8. Clarity: The “Cash” and “Card” icons have no context. Is a user supposed to select one, or are these just the payment options available?
  9. Distraction: The dropdown menus in the footer include many links that might distract a user from completing their order.

Needless to say, my frustrations were confirmed. The WiderFunnel team ran into the same obstacles I had run into, and identified dozens of barriers that I hadn’t.

But what does this mean for you?

When you are first analyzing your mobile experience, you should try to step into your user’s shoes and actually use your experience. Give your team a task and a goal, and walk through the experience using a framework like LIFT. This will allow you to identify usability issues within your user’s mobile context.

Every LIFT point is a potential test idea that you can feed into your optimization program.

Case study examples

This wouldn’t be a WiderFunnel blog post without some case study examples.

This is where we put ‘best mobile practices’ to the test. Because the smallest usability tweak may make perfect sense to you, and be off-putting to your users.

In the following three examples, we put our recommendations to the test.

Mobile navigation optimization

In mobile design in particular, we tend to assume our users understand ‘universal’ symbols.

The ‘Hamburger Menu’ is a fixture on mobile websites. But does that mean it’s a universally understood symbol?

But, that isn’t always the case. And it is certainly worth testing to understand how you can make the navigation experience (often a huge pain point on mobile) easier.

You can’t just expect your users to know things. You have to make it as clear as possible. The more you ask your user to guess, the more frustrated they will become.

– Dennis Pavlina, Optimization Strategist, WiderFunnel

This example comes from an e-commerce client that sells artwork. In this experiment, we tested two variations against the original.

In the first, we increased font and icon size within the navigation and menu drop-down. This was a usability update meant to address the small, difficult-to-navigate menu. Remember the conversation about target size? We wanted to tackle the low-hanging fruit first.

With variation B, we dug a little deeper into the behavior of this client’s specific users.

Qualitative Hotjar recordings had shown that users were trying to navigate the mobile website using the homepage as a homebase. But this site actually has a powerful search functionality, and it is much easier to navigate using search. Of course, the search option was buried in the hamburger menu…

So, in the second variation (built on variation A), we removed Search from the menu and added it right into the main Nav.

Wireframes of the control navigation versus our variations.

Results

Both variations beat the control. Variation A led to a 2.7% increase in transactions, and a 2.4% increase in revenue. Variation B decreased clicks to the menu icon by 24%, increased transactions by 8.1%, and lifted revenue by 9.5%.

Never underestimate the power of helping your users find their way on mobile. But be wary! Search worked for this client’s users, but it is not always the answer, particularly if what you are selling is complex, and your users need more guidance through the funnel.

Mobile product page optimization

Let’s look at another e-commerce example. This client is a large sporting goods store, and this experiment focused on their product detail pages.

On the original page, our Strategists noted a worst mobile practice: The buttons were small and arranged closely together, making them difficult to click.

There were also several optimization blunders:

  1. Two calls-to-action were given equal prominence: “Find in store” and “+ Add to cart”
  2. “Add to wishlist” was also competing with “Add to cart”
  3. Social icons were placed near the call-to-action, which could be distracting

We had evidence from an experiment on desktop that removing these distractions, and focusing on a single call-to-action, would increase transactions. (In that experiment, we saw transactions increase by 6.56%).

So, we tested addressing these issues in two variations.

In the first, we de-prioritized competing calls-to-action and enlarged the ‘Size’ and ‘Qty’ fields. In the second, we addressed usability issues, making the color options, size options, and quantity field bigger and easier to click.

The control page versus our variations.

Results

Both of our variations lost to the Control. I know what you’re thinking…what?!

Let’s dig deeper.

Looking at the numbers, users responded in the way we expected, with significant increases to the actions we wanted, and a significant reduction in the ones we did not.

Visits to “Reviews”, “Size”, “Quantity”, “Add to Cart” and the Cart page all increased. Visits to “Find in Store” decreased.

And yet, although the variations were more successful at moving users through to the next step, there was not a matching increase in motivation to actually complete a transaction.

It is hard to say for sure why this result happened without follow-up testing. However, it is possible that this client’s users have different intentions on mobile: Browsing and seeking product information vs. actually buying. Removing the “Find in Store” CTA may have caused anxiety.

This example brings us back to the mobile context. If an experiment wins within a desktop experience, this certainly doesn’t guarantee it will win on mobile.

I was shopping for shoes the other day, and was actually browsing the store’s mobile site while I was standing in the store. I was looking for product reviews. In that scenario, I was information-seeking on my phone, with every intention to buy…just not from my phone.

Are you paying attention to how your unique users use your mobile experience? It may be worthwhile to take the emphasis off of ‘increasing conversions on mobile’ in favor of researching user behavior on mobile, and providing your users with the mobile experience that best suits their needs.

Note: When you get a test result that contradicts usability best practices, it is important that you look carefully at your experiment design and secondary metrics. In this case, we have a potential theory, but would not recommend any large-scale changes without re-validating the result.

Mobile checkout optimization

This experiment was focused on one WiderFunnel client’s mobile checkout page. It was an insight-driving experiment, meaning the focus was on gathering insights about user behavior rather than on increasing conversion rates or revenue.

Evidence from this client’s business context suggested that users on mobile may prefer alternative payment methods, like Apple Pay and Google Wallet, to the standard credit card and PayPal options.

To make things even more interesting, this client wanted to determine the desire for alternative payment methods before implementing them.

The hypothesis: By adding alternative payment methods to the checkout page in an unobtrusive way, we can determine by the percent of clicks which new payment methods are most sought after by users.

We tested two variations against the Control.

In variation A, we pulled the credit card fields and call-to-action higher on the page, and added four alternative payment methods just below the CTA: PayPal, Apple Pay, Amazon Payments, and Google Wallet.

If a user clicked on one of the four alternative payment methods, they would see a message:

“Google Wallet coming soon!
We apologize for any inconvenience. Please choose an available deposit method.
Credit Card | PayPal”

In variation B, we flipped the order. We featured the alternative payment methods above the credit card fields. The focus was on increasing engagement with the payment options to gain better insights about user preference.

The control against variations testing alternative payment methods.

Note: For this experiment, iOS devices did not display the Google Wallet option, and Android devices did not display Apple Pay.

Results

On iOS devices, Apple Pay received 18% of clicks, and Amazon Pay received 12%. On Android devices, Google Wallet received 17% of clicks, and Amazon Pay also received 17%.

The client can use these insights to build the best experience for mobile users, offering Apple Pay and Google Wallet as alternative payment methods rather than PayPal or Amazon Pay.

Unexpectedly, both variations also increased transactions! Variation A led to an 11.3% increase in transactions, and variation B led to an 8.5% increase.

Because your user’s motivation is already limited on mobile, you should try to create an experience with the fewest possible steps.

You can ask someone to grab their wallet, decipher their credit card number, expiration date, and CVV code, and type it all into a small form field. Or, you can test leveraging the digital payment options that may already be integrated with their mobile devices.

The future of mobile website optimization

Imagine you are in your favorite outdoor goods store, and you are ready to buy a new tent.

You are standing in front of piles of tents: 2-person, 3-person, 4-person tents; 3-season and extreme-weather tents; affordable and pricey tents; light-weight and heavier tents…

You pull out your smartphone, and navigate to the store’s mobile website. You are looking for more in-depth product descriptions and user reviews to help you make your decision.

A few seconds later, a store employee asks if they can help you out. They seem to know exactly what you are searching for, and they help you choose the right tent for your needs within minutes.

Imagine that while you were browsing products on your phone, that store employee received a notification that you are 1) in the store, 2) looking at product descriptions for tent A and tent B, and 3) standing by the tents.

Mobile optimization in the modern era is not about increasing conversions on your mobile website. It is about providing a seamless user experience. In the scenario above, the in-store experience and the mobile experience are inter-connected. One informs the other. And a transaction happens because of each touch point.

Mobile experiences cannot live in a vacuum. Today’s buyer switches seamlessly between devices [and] your optimization efforts must reflect that.

– Yonny Zafrani, Mobile Product Manager, Dynamic Yield

We wear the internet on our wrists. We communicate via chat bots and messaging apps. We spend our leisure time on our phones: streaming, gaming, reading, sharing.

And while I’m not encouraging you to shift your optimization efforts entirely to mobile, you must consider the role mobile plays in your customers’ lives. The online experience is mobile. And your mobile experience should be an intentional step within the buyer journey.

What does your ideal mobile shopping experience look like? Where do you think mobile websites can improve? Do you agree or disagree with the ideas in this post? Share your thoughts in the comments section below!


Beyond A vs. B: How to get better results with better experiment design

Reading Time: 7 minutes

You’ve been pushing to do more testing at your organization.

You’ve heard that your competitors at ______ are A/B testing, and that their customer experience is (dare I say it?) better than yours.

You believe in marketing backed by science and data, and you have worked to get the executive team at your company on board with a tested strategy.

You’re excited to begin! To learn more about your customers and grow your business.

You run one A/B test. And then another. And then another. But you aren’t seeing that conversion rate lift you promised. You start to hear murmurs and doubts. You start to panic a little.

You could start testing as fast as you can, trying to get that first win. (But you shouldn’t).

Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,


If your results are disappointing, it may not only be what you are testing – it is definitely how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).

This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.

For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.



Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.

The basics: Defining A/B, MVT, and factorial

Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are multiple different ways to structure your experiments. A/B testing is just one of them.

Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.

A/B test

In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a multitude of changes (i.e. a ‘cluster’ of changes), or a single isolated change.

When you change multiple elements in a single variation, you might see lift, but what about insights?

In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.

Multivariate test (MVT)

With multivariate testing, you are testing each individual change, isolated one against another, by mixing and matching every possible combination available.

Imagine you want to test a homepage re-design with four changes in a single variation:

  • Change A: New hero banner
  • Change B: New call-to-action (CTA) copy
  • Change C: New CTA color
  • Change D: New value proposition statement

Hypothetically, let’s assume that each change has the following impact on your conversion rate:

  • Change A = +10%
  • Change B = +5%
  • Change C = -25%
  • Change D = +5%

If you were to run a classic A/B test―your current control page (A) versus a combination of all four changes at once (B)―you would get a hypothetical decrease of 5% overall (10% + 5% − 25% + 5%). You would assume that your re-design did not work and most likely discard the ideas.

With a multivariate test, however, each of the following would be a variation:


Multivariate testing is great because it shows you the positive or negative impact of every single change, and every single combination of changes, revealing the ideal combination (in this theoretical example: A + B + D).

However, this strategy is kind of impossible in the real world. Even if you have a ton of traffic, it would still take more time than most marketers have for a test with 15 variations to reach any kind of statistical significance.

The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
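To make the combinatorics concrete, here is a minimal Python sketch that enumerates all 15 MVT variations using the simplified additive lift figures from the example above. The numbers are hypothetical, and real changes interact with each other, so treat the additive assumption as purely illustrative:

```python
from itertools import combinations

# Hypothetical isolated lifts from the example above (additive toy assumption;
# real changes interact, so this is only an illustration of the combinatorics).
changes = {"A": 0.10, "B": 0.05, "C": -0.25, "D": 0.05}

# Every non-empty combination of the four changes is one MVT variation: 15 in total.
variations = [
    combo
    for size in range(1, len(changes) + 1)
    for combo in combinations(changes, size)
]

for combo in variations:
    lift = sum(changes[c] for c in combo)
    print(f"{' + '.join(combo):<16} hypothetical lift: {lift:+.0%}")

print(f"\nTotal variations: {len(variations)}")  # 15
best = max(variations, key=lambda combo: sum(changes[c] for c in combo))
print("Best combination:", " + ".join(best))      # A + B + D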

Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.

Factorial design: The middle ground

Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.

The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.

With basic factorial experiment design, you could set up the following variations in our hypothetical example:

VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%

In this basic example, variation A features a single change; VarB is built on VarA, and VarC is built on VarB.

NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.

Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.

When you estimate the value of change A, you are using your control as a baseline.

Variation A versus the control.

Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.

The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%

But, when estimating the value of change B, variation A must become your new baseline.

Variation B versus variation A.

The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%

As you can see, the “value” of change B is slightly different from the 5% difference shown above.

When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
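Working backwards from nested variations is a small calculation. The sketch below uses the hypothetical figures quoted above (10% control, 11% variation A, 11.5% variation B); the labels are made up for illustration:

```python
# Nested factorial variations: each builds on the previous one, so the isolated
# effect of a change is estimated against the variation that came before it.
# Conversion rates are the hypothetical figures from the example above.
conversion_rates = {
    "control": 0.10,   # baseline
    "A":       0.11,   # control + change A
    "A+B":     0.115,  # variation A + change B
}

def estimated_lift(variant_cr: float, baseline_cr: float) -> float:
    """Relative lift of a variation over its baseline."""
    return variant_cr / baseline_cr - 1

# Change A is compared against the control...
lift_a = estimated_lift(conversion_rates["A"], conversion_rates["control"])
# ...but change B is compared against variation A, its new baseline.
lift_b = estimated_lift(conversion_rates["A+B"], conversion_rates["A"])

print(f"Estimated lift of change A: {lift_a:.1%}")  # 10.0%
print(f"Estimated lift of change B: {lift_b:.1%}")  # ~4.5%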


We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.

– Michael St Laurent, Optimization Strategist, WiderFunnel

Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.

Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.

But, wait…

It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?

The Strategist’s Exploration

The answer lies in the Explore (or research gathering) phase of your testing process.

At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.

Experience is the other side to this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which changes should be isolated due to risk or potential insights to be gained.

At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.

Factorial design in action: A case study

Once upon a time, we were testing with Annie Selke, a retailer of luxury home goods. This story follows two experiments we ran on Annie Selke’s product category page.

(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)

Experiment 4.7

In the first experiment, we tested three variations against the control. (As the experiment number suggests, this was not the first test we ran with Annie Selke overall, but it is the ‘first’ test in this story.)

Experiment 4.7 control product category page.

Variation A featured an isolated change to the “Sort By” filters below the image, converting them into a drop-down menu.

Replaced original ‘Sort By’ categories with a more traditional drop-down menu.

Evidence?

This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.

Variation B was also built on the control, and featured another isolated change to reduce the left navigation.

Reduced left-hand navigation.

Evidence?

Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to removing distractions. Plus, the persuasion principle known as the Paradox of Choice theorizes that more choice = more anxiety for visitors.

Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.

Collapsed left-hand filter (built on VarA).

Evidence?

This variation was informed by the same evidence as variation B.

Results

Variation A (built on the control) saw a decrease in transactions of 23.2%.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a decrease in transactions of 1.9%.

But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed filter) was 19.1%.

The next step was to validate our estimated lift of 19.1% in a follow up experiment.

Experiment 4.8

The follow-up test also featured three variations versus the original control. Because you should never waste the opportunity to gather more insights!

Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.

Collapsed filter & original ‘Sort By’ functionality.

Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.

Updated “palette” to “color”, and added color imagery. (A variation featuring two clustered changes).

Evidence?

Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.

Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.

Made the collapsed filter persistent.

Evidence?

Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.

Results

Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!

Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.

Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for a decrease in transactions of 11.2%.

This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.

The best testing framework for you

What are your testing goals?

If you are in a situation where potential revenue gains outweigh the potential insights to be gained or your test has little long-term value, you may want to go with a standard A/B cluster test.

If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.

If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.

A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want. Which will make every subsequent hypothesis smarter, and every test more impactful.

One 10% win without insights may turn heads your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.

– Michael St Laurent


“The more tests, the better!” and other A/B testing myths, debunked

Reading Time: 8 minutes

Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom-line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.


But never fear! With the help of WiderFunnel Optimization Strategist Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.


The key to sustainable A/B testing output is to find a balance between short-term (maximum testing speed), and long-term (testing for data-collection and insights).

Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.


An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment, when building out the strategy for another experiment. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once without taking into account these past insights can result in your testing program being more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.
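For readers who want to see what sits behind a confidence number, here is a rough sketch of a two-proportion z-test, one common way to compute it. The visitor and conversion counts are hypothetical, and your testing tool may use a different statistic under the hood:

```python
import math

def confidence_level(conversions_a: int, visitors_a: int,
                     conversions_b: int, visitors_b: int) -> float:
    """Confidence that two conversion rates truly differ,
    via a two-sided two-proportion z-test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return 1 - p_value

# Hypothetical numbers: 10.0% vs 11.5% conversion rates on 5,000 visitors per arm
conf = confidence_level(500, 5000, 575, 5000)
print(f"Confidence: {conf:.1%}")  # ~98%, above the 95% threshold in this example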

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it within a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

You should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.

– Claire Vignon Keser

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning that there was now only an 88% likelihood that the difference in conversion rates was not due to chance – below our threshold for statistical significance.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.
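If you don’t have a test duration calculator handy, the rough logic looks like the sketch below. It uses the standard two-proportion sample-size approximation at 95% confidence and 80% power, with assumed values for the baseline conversion rate, minimum detectable effect, variation count, and daily traffic:

```python
import math

def sample_size_per_variation(baseline_cr: float, relative_mde: float) -> int:
    """Rough visitors needed per variation to detect a relative lift of
    `relative_mde` over `baseline_cr` at ~95% confidence and 80% power
    (z-scores 1.96 and 0.84). A simplified version of what test duration
    calculators compute."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_mde)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((1.96 + 0.84) ** 2 * variance) / (p2 - p1) ** 2)

# Assumed inputs: 3% baseline conversion rate, a 10% relative lift we want to detect,
# a control plus one variation, and 4,000 visitors per day entering the test.
n_per_variation = sample_size_per_variation(baseline_cr=0.03, relative_mde=0.10)
total_visitors = n_per_variation * 2
days = math.ceil(total_visitors / 4000)

print(f"Visitors needed per variation: {n_per_variation:,}")
print(f"Estimated duration: {days} days -- then round up to full business cycles")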

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator.
Check your graphs! Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys?
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation based on the observed trends, abandon the variation based on observed trends, and/or create a follow-up test!

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.


This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by 1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!)

You can see how each variation fared against the Control in this funnel analysis.

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the visitors dropping off were the ‘right’ ones to lose (i.e. those without strong purchasing intent).

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

A/B testing is just a part of the marketing optimization machine.

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

The WiderFunnel strategy team gathers to share and discuss A/B testing insights.

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.
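If you want to put a number on "objectively", one common, vendor-neutral approach is a two-proportion z-test comparing a variation's conversion rate against the control's. Here is a minimal sketch, assuming you already have visitor and conversion counts per variation; it is a generic statistical check, not any particular vendor's method.

```python
# Generic two-proportion z-test sketch: is variation B's conversion rate higher than control A's?
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (relative lift, one-sided p-value) for variation B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # one-sided: is B better than A?
    lift = (p_b - p_a) / p_a
    return lift, p_value

lift, p = two_proportion_z_test(conv_a=400, n_a=20_000, conv_b=470, n_b=20_000)
print(f"lift: {lift:.1%}, p-value: {p:.4f}")     # small p-value -> likely a real win
```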

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering informs hypotheses that are prioritized and created into A/B tests (that are structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and these insights are fed back into the data gathering. It’s a cycle of continuous improvement.
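One lightweight way to prioritize hypotheses before they become tests is to score each idea against a few criteria and rank the results. The criteria and scores below are illustrative assumptions, not a description of WiderFunnel's or DMV.org's actual framework.

```python
# Toy hypothesis-prioritization sketch; criteria and scores are invented for illustration.
hypotheses = [
    {"idea": "Clarify value proposition on step 1", "potential": 8, "importance": 9, "ease": 6},
    {"idea": "Shorten the signup form",             "potential": 6, "importance": 7, "ease": 9},
    {"idea": "Redesign the footer",                 "potential": 3, "importance": 2, "ease": 8},
]

def priority(h):
    # Simple average of the three scores; weight them differently if one matters more to you.
    return (h["potential"] + h["importance"] + h["ease"]) / 3

for h in sorted(hypotheses, key=priority, reverse=True):
    print(f"{priority(h):.1f}  {h['idea']}")
```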

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!

The post “The more tests, the better!” and other A/B testing myths, debunked appeared first on WiderFunnel Conversion Optimization.

Excerpt from:

“The more tests, the better!” and other A/B testing myths, debunked

How to Track Conversions & ROI With These Content Marketing Metrics

If you ever want to make a marketer nervous, ask them how effective their content marketing is. Even I would sweat a little if you asked me that question. It’s not because I don’t know the answer, or where to look to find the answer; it’s just that the process of answering the question can be a little complex. I could throw any manner of numbers out at you, but some of them are just vanity metrics, and most are meaningless without also talking about the benchmarks, past performance and the goals that I’m reaching for. Because metrics can be…

The post How to Track Conversions & ROI With These Content Marketing Metrics appeared first on The Daily Egg.

Original post:

How to Track Conversions & ROI With These Content Marketing Metrics

Content-First Prototyping

Content is the core commodity of the digital economy. It is the gold we fashion into luxury experience, the diamond we encase in loyalty programs and upsells. Yet, as designers, we often plug it in after the fact. We prototype our interaction and visual design to exhaustion, but accept that the “real words” can just be dropped in later. There is a better way.
More and more, the digital goods we create operate within a dynamic system of content, functionality, code and intent.

Continue reading:  

Content-First Prototyping

A Better iOS Architecture: A Deep Look At The Model-View-Controller Pattern

If you’ve ever written an iOS app beyond a trivial “Hello world” app with just one screen and a few views, then you might have noticed that a lot of code seems to “naturally” go into view controllers. Because view controllers in iOS carry many responsibilities and are closely related to the app screens, a lot of code ends up being written in them because it’s just easier and faster that way.

Continued here:  

A Better iOS Architecture: A Deep Look At The Model-View-Controller Pattern

3 Things That The “Used Car Salesman” Can Teach Us About Online Marketing

“Used car salesman” is synonymous with sleazy, pushy, crooked sales. It’s too bad, really. Sure, there may be some dishonest used car salespeople, but certainly the entire industry can’t be that bad. Unfortunately, the stereotype persists. Why? Because we’ve all experienced the greasy, false friendship of that kind of salesperson. It’s off-putting, to say the least. Is it possible that online marketing can come across in the same way as the annoying used car salesman? The answer is yes. Because of its digital facade, most of us aren’t aware that some of the marketing techniques we’re using can come…

The post 3 Things That The “Used Car Salesman” Can Teach Us About Online Marketing appeared first on The Daily Egg.

See original:

3 Things That The “Used Car Salesman” Can Teach Us About Online Marketing

Lessons Learned from Year 1 of the Call to Action Podcast [PODCAST]

Image by Morrowlight via Shutterstock.

The first year of becoming a parent is a rollercoaster of emotions, from terror to pure joy and everything in between. You can’t sleep, you don’t have time to eat and you go from laughing to crying in mere seconds. But in the end it’s all worth it, because like every proud parent, your podcast’s achievements become your own.

Nope, that wasn’t a typo. We’re talking about a different kind of baby — one you can download on iTunes.

That’s right, folks, the Call to Action podcast just turned one, and we’re celebrating as many others have celebrated their first birthday: by diving headfirst into a volleyball-sized cupcake. (I wish!)

Join me, Unbounce’s Multimedia Producer, Stephanie Saretsky (a.k.a. Beansie), and Content Strategist Dan Levy as we chat about the lessons learned during the podcast’s first year, including how dang tricky it is to measure the value of a podcast and why not all ideas are good ideas. On top of that, we chat about what’s to come for season two, and how you can get involved.

Season 1 highlights:

  • Cracking the iTunes ranking algorithm — ranking first in Marketing, first in Business and fourth in the entire iTunes store.
  • Learning that it was Aaliyah and not Destiny’s Child that taught us to “…dust [ourselves] off and try again.”
  • Answering the big question: Are you more of a Tom Haverford or a Ron Swanson?

Listen to the podcast

Mentioned in the podcast

Read the transcript

Stephanie: This is kind of a special episode. As you know, I’m Stephanie Saretsky. I’m the host of Call to Action. But with me, I have our content strategist, Dan Levy.

Dan: Hey, Stephanie.

Stephanie: How’s it going, Dan?

Dan: It’s going pretty well. We don’t usually talk to each other, do we?

Stephanie: No, I usually kind of just introduce you and then let you interview all of our awesome guests.

Dan: Of course, the secret is that you are in the room next to me when I interview those guests, so it’s not like we don’t actually talk in real life.

Stephanie: It’s true. Behind the scenes, we’re actually always together.

Dan: Crazy.

Stephanie: So today, we thought it would be fun because it’s been one whole year since we launched the Call to Action podcast. It first went live last January 28, which was a Wednesday. So we thought it would be fun to get together and chat about what went well, what didn’t go so well and what we’re excited about doing for this year.

Dan: Yeah, it’s a good opportunity to take a step back and to also look ahead. And of course, a big part of that is to get your feedback on what you’ve enjoyed and what you think we could do better, and what you’d like to see for the rest of the year and beyond that. So we hope that you’ll enjoy this walk down memory lane. We hope there will also be some valuable lessons and insights for you in how to launch a podcast and everything from ranking on iTunes, to how to get guests, to how to treat your guests.

One thing that’s been I think a huge success so far is that we’ve had a lot of guests on, who’ve come to us later and said that they really enjoyed the experience and asking us how we manage things on our end so that they could make sure to give their guests as good an experience on their podcast. So that’s been a huge win for us and we hope to share some of that love.

Stephanie: Yeah, it’s definitely something that we’ve been trying to spread awareness of since podcasting kind of exploded in the marketing world last year. So from posts on our blog to just well-crafted episodes every week, it’s something that Dan and I are trying to really bring, an excellent product. So again, any feedback would be appreciated. You can always contact us at podcast.unbounce.com.

Dan: So talking about when we launched, bring me back. Why did we launch a podcast again?

Stephanie: So this is kind of funny. Something that Unbounce does every quarter is a ShipIt Day. And I’m sure if you work at a tech startup, you’re probably somewhat familiar with the concept. So it’s two days, and one day you spend planning what projects you’re going to work on, and then the second day you ship it. So the marketing team decided that we were going to do our own ShipIt Day because at that point in the company’s history, we weren’t really involved and ShipIt Day was something that the dev teams more did on their own, which has since changed. But our marketing manager at the time thought it would be fun for us to try and do it ourselves.

Dan: It’s funny you mention our marketing manager at the time. That’s Corey, and he actually plays a pretty important part in the genesis of the podcast. I might be skipping ahead here, but one of the reasons we did launch a podcast, before we get to the actual KPIs and everything, one of the reasons, to be totally honest, was because Corey told us that he didn’t have time to read our blog, and that he didn’t really like to read. And so if somebody could just speak our blog post to him into his ear every morning, he’d be a happy camper. And that was actually the seed of the Call to Action podcast.

Stephanie: Yeah, because the Unbounce blog is something that’s been like a flagship at Unbounce since it was started. I came from the radio world so I had a background in audio. And since I started at Unbounce, I was like, okay, a podcast would be something that I would really like to do and kind of throwing around ideas. And then Dan, when we had this ShipIt Day idea, he was like: “Hey, okay, so Corey doesn’t like to read. Why don’t we try to put together a podcast where we’re actually synthesizing popular blog posts of ours?” And I was like: “Sweet, that’s awesome; that’s a super concise idea.”

We have a huge bank of really awesome posts that we can pull from and it should be fairly easy editing-wise because there’s not a lot of post production that needs to be done. So Dan called up Elizabeth Martsen, who at the time was at Portent, Inc. as their PPC manager. And she had written a really awesome post for us about comparing PPC to online dating, which was really fun. So she was super awesome. She was like: “Yeah, I’ll totally do it.” She was able to do it within the next three days. So it was super fast. We got the interview done, cut and edited.

And then we presented the episode to our team, and it went really well. So we were like, okay, I think this needs to be an actual thing. So this was in October, I believe. And then we proceeded to interview six other people. We did that in a span of two weeks.

Dan: And that was one of my faults. I proceeded to go on my honeymoon for a month to Thailand. And so I was like, yeah, let’s do this podcast thing, and I’m leaving for a month. So we rushed to do six interviews really quickly, and then I said to Stephanie: “Have fun.” And when I got back, like a little, nicely wrapped present, they were all edited and ready to go, which was awesome.

Stephanie: Yeah. So that was really – yeah, fast paced but it was really good. We had some really awesome first contributors. And then at the same time, I spent the time putting together the launch brief. So at this time in Unbounce, we had about eight of us on the marketing team so we were still in the instance where I would put together a brief, and I would send it to our marketing manager. And then we would have a meeting and talk about all the strategies, whether we hit the key points and then he would send it back to me, and then I would iterate on it. And then we would finally get to the last iteration of the brief.

Initially, we had wanted to launch at the beginning of January as like a new year, new podcast thing. However, it became clear, because of the pace that we had to record a bunch so that we could launch with a certain amount, and we’ll get to that later. And just the size of the launch that this project was going to entail, that the first of January wasn’t going to be feasible for us. So we ended up moving it to January 28 and the rest is history.

Dan: Yeah, of course when we launch any piece of content marketing at Unbounce, we try to not start – in this case, we did notably want to do a podcast but we do try to start with a goal. And besides getting Corey to listen to our blog, remind me what was the goal on that brief that we set out to accomplish?

Stephanie: The initial goal had two goals. The primary goal was awareness. We really wanted this podcast to reach a new audience. And the way that we saw this happening was through the iTunes store. So right away, the biggest thing for us was to get into the New and Noteworthy section in iTunes and to rank in the top 10 in iTunes. So that was huge. So I spent an entire month researching on how to do that.

And I will let you in on a little secret: it was so hard – or at least a year ago, it was so hard – to find any definitive points on how to do well in iTunes. There’s so much conflicting information. iTunes is notorious for having that stuff on lockdown. Like you can’t do keywords anymore, there’s no –

Dan: You thought the Google algorithm and something like Google quality score was hard to unpack; wait ’til you encounter the mystery of iTunes.

Stephanie: Yeah, and there’s so much conflicting information. So one thing that people say is huge is rating velocity; so how many stars or reviews you get. So that’s one thing. So try and get as many reviews and as many stars as you can. Try and get as many people downloading as many episodes on the same day as possible. So download speed, so launching with more than three. Some people say one is fine; some people say you need at least five.

Some people say that you should be posting your podcast once a day; some people are like once a week. Initially, we had thought we would do biweekly but then we decided to go for a week just in case this download velocity was a huge deal. And we found that at the time, we did have enough in our bank and enough capacity to produce once a week.

Dan: That’s one of the reasons we both recorded a bunch of episodes right off the bat was so that we could launch with several episodes. And also one of the reasons that we did go for the MVP — the minimum viable product — we decided that it was important to keep it as lean as possible. So we interviewed blog authors and limited the scope of the podcast initially to people who we had interviewed on the blog, who we had a relationship with, and that there was a post that we could easily write some questions around and jump right into the content. Rather than creating totally fresh content, doing fresh reporting, for example. That would have added to our workload.

Stephanie: And so this is where something that kind of comes in stats-wise is interesting and something that we’ll unpack a little bit later on is because we were so concerned with our ratings velocity, our download speed and just getting as many people to listen to it as possible, when it came time to launch, we put a lot of effort into an email campaign, a social campaign. And really, even though the main goal was awareness and getting it to a new audience, in retrospect we were actually launching to our current audience, and we were really banking on also hooking the people that were reading our blog and being like: “Hey, this is a post that you liked; here is an episode.”

We’re going to go more in-depth on this post. You’re going to hear a little bit of new information from the author’s mouth. And so that was something that we were banking on so that we could have a really awesome launch.

Dan: It’s a bit of the chicken and the egg scenario because we needed that critical mass of people listening to our podcast right away in order to rank in the iTunes store and reach that new audience. And in order to do that, we had to leverage our existing audience. So it wasn’t perfect because we were marketing to existing leads, but we did get our podcast ranking really quickly, which we hoped and we think did reach a whole slew of Coreys out there who don’t read the blog but who like to listen to podcasts.

Stephanie: And launch day was amazing. We quickly went to number one in marketing. We went to number one in business, and we were at number four in the entire iTunes store after This American Life, Serial and –

Dan: Radio Lab.

Stephanie: No, it was Invisibilia, the new NPR podcast. Which is, if you’re a podcast fan – and I’m sure you are if you’re listening to this episode right now, like that is huge. Dan and I were freaking out.

Dan: Those are the three biggest podcasts in the world –

Stephanie: Ever.

Dan: – and number four was us.

Stephanie: It was awesome. I still have that screenshot and I just look at it when I feel sad. Yeah. So that was great. We had an amazing launch and yeah.

Dan: Yeah, the launch was really exciting. Of course, we wanted to then keep our momentum going, and we soon realized that the format that we had originally launched with was limiting in some ways, right?

Stephanie: Yes. Because even though we launched with this MVP, because it meant that it was consistent, it was narrow and we could do a lot of it quickly, which is important if you’re doing a weekly show; it has to be somewhat easy for the producer, which is myself, to actually edit it and be able to do all my other work. It soon became apparent that it was limiting in what we could actually think about. And also, initially, like the very, very first iteration of the podcast, we were trying to promote another core piece of content that we had just published, which was our marketing glossary. So we were starting every episode with a definition, read by our cofounder, Oli Gardner, of a marketing term that would then be featured in the actual interview itself.

Dan: Yeah. So we were excited about that idea. Some of the feedback that we got was that people didn’t necessarily see the connection between that word and then the interview afterwards. They thought that it was filler, or it was just a roadblock on the way to the interview, which is what they really wanted to get at. And so we quickly – I don’t know, how long were we doing that for?

Stephanie: We did that for at least two to three months, actually. I think we moved onto our second format change in about May.

Dan: Yeah, I think once we realized that it was even a stress for us to find words that connected to the interview, that it was time to stop. That yeah, it was convenient in the sense that we were leveraging existing content and that we were promoting it, but it didn’t quite work so we moved away from that.

Stephanie: And we just got so much feedback being like this seems like it’s just thrown in here. So we were like, okay, let’s try and give it more of a story because Dan and I both are super interested in podcasts that have a lot of story content. So we were kind of like, okay, how can we make this more podcast-y, which sounds a little weird but like how do we make this sound like it’s not your typical marketing podcast?

Dan: The podcasters that we were looking up to were those three other podcasts, This American Life, Serial and Invisibilia, Radio Lab — lofty goals because these are radio professionals who this is their full-time job. But there’s also other podcasts that are lower production but that really connect with their listeners in maybe a more personal way and a more informal way. And so we wanted to make sure that we were honoring the tradition. As new as it is, there is a podcasting tradition already and expectations of podcast audience; we wanted to make sure we were honoring those.

Stephanie: But the challenge was then also making sure that the interview was actionable at the time.

Dan: Exactly. Because something that we’ve always talked about is that Unbounce content needs to be actionable. And if you read our blog posts, they’re super tactical, they’re really in depth. We really break down a marketing problem and how to solve it. And that’s great in blog form. In podcast form, I think there are limits to it because people listen to podcasts at the gym, washing dishes, in the car; they don’t necessarily have a pen and paper.

They’re not in deep learning mode. They want to learn something, they want to get something out of it for sure but it’s not necessarily the same type of – they’re not looking for the same type of content that a blog reader would. So the challenge was how to keep it actionable without getting too bogged down into tactics and details.

Stephanie: And that was something that we noticed when we were able to suss out what made a really good episode last year, was we had a few episodes that were super technical; topics like PPC come to mind, where it’s a lot of great information but pulling that out and making that interesting to listen to was difficult.

Dan: And interesting for us.

Stephanie: Yeah. I won’t – never mind.

Dan: Yeah.

Stephanie: Whereas, say, some of our really awesome episodes last year, and one that comes to mind for me is an episode that we did with HubSpot’s Ginny Soskey, which is one of my favorite episodes today. Was that it was actionable but it also was very conversational and you guys were actually discussing the state of content marketing and the thought of publishing a lot of blog posts, or publishing a few blog posts. But it went beyond here was our experiment and this is what we saw.

Dan: That’s it. Because Ginny posted this amazing, in-depth report on this blog publishing experiment that they’d run. And the numbers were there, and the charts were there, and it’s just a really great post, but just recounting that is not nearly as interesting as the way she unpacks it in the post itself. So we realized that this wasn’t just about talking blog posts, but it was talking around them and getting a little bit deeper into the bigger ideas and the bigger issues behind the posts. So there might be a post about a blog publishing experiment, but what’s the interview about?

Well, maybe it’s actually more about what is this content marketing stuff about?  How do you stay on goal while still providing value to the audience?  That’s a much more interesting conversation, I think, to have than charts and numbers, which could get a little bit tedious in the verbal form.

Stephanie: Yeah. So it’s just not as fun, and then it wasn’t as fun for Dan and I. So we found that they were received better by our audience, but then also more enjoyable for us to actually work on.

Dan: Yeah, and the other thing that you hint at there is that we moved beyond just talking about our own blog posts; just talking to authors who had written for our own blog. We realized that there’s a whole ecosystem of really smart, amazing marketing content out there and we wanted to speak to those authors, as well. So we started to talk to the HubSpots and the Buffers and really great marketing thought leaders who may have published elsewhere to bring those insights to our audience.

Stephanie: Actually, what are some of your favorite moments from the last year?

Dan: Good question. Somehow Parks and Recreation keeps coming up, and I actually didn’t even watch that show until really recently. One of my favorite moments was when Allison Otting from Disruptive Advertising asked me if I was more of a Tom Haverford or a Ron Swanson. And I kind of like played along for a little bit and then I was like, I don’t actually know who these characters are. I thought that was pretty funny.

Stephanie: Yeah, that was a really good episode. I actually had forgotten about that at this point.

Dan: How about you?

Stephanie: I think one of my favorites… that’s a hard question. Actually –

Dan: If at first you don’t succeed…

Stephanie: Oh, yeah. Oh, my gosh, yeah. This was the best. Jonathan Dane was on and actually, at this point, this has been one of our most popular podcasts because he really – he can take something like PPC and make it sound like the most fun thing in the entire world. Actually in that title was a huge come around for us. It was something like why PPC is just like Nerf guns or something?

Dan: Right, PPC as explained through Nerf guns.

Stephanie: Yes, that was it. It was awesome. And so at the very end, we ended off on this kind of inspirational note of like if at first you don’t succeed, dust yourself off and try again. And then –

Dan: I think I said – what did I say?  I said something like in the wise words of Destiny’s Child?

Stephanie: Yeah, Destiny’s Child. And then Jonathan was like no, no, I think that’s Taylor Swift. And then – oh, no. Did you say Taylor Swift and then he said Destiny’s Child?  Anyway –

Dan: You know what he said – I’ll tell you what happened. I got this.

Stephanie: Tell me, Dan.

Dan: So he said something about shaking it off, which is a Taylor Swift reference. What I heard was dust yourself off, which of course is Destiny’s Child reference. However, Stephanie kept her mouth shut, you know, like a good professional, until she couldn’t take it anymore and she set that straight.

Stephanie: Yeah, so it was actually Aaliyah.

Dan: We were both wrong.

Stephanie: Which was hilarious, and then we actually put the song into the end of the episode and it was, yeah, really funny and a really excellent way to end it. But then I also think that one of just the more enjoyable interviews that we had was when we had our own Haley Mullen, who is our community manager on the show. And Haley’s hilarious, if you’ve interacted with the Unbounce Twitter, ever. She’s so funny and it was just a really awesome interview to produce because listening to you and her talk was just fun.

Dan: Yeah, and I realized talking to somebody that you do have a previous relationship with, but you don’t necessarily have these specific conversations, they go in really interesting, unexpected places.

Stephanie: Also another good one that we did for us, we did our April Fool’s episode.

Dan: We did, yes.

Stephanie: Which was pretty funny, actually. So usually how a Call to Action episode gets started is that I’ll pull questions from a post and then Dan will edit them to be in his own voice. And then we’ll actually interview the guest, usually on a Thursday. And so what we actually did for this one is we did a full script with read-throughs and everything, and then we went in and actually recorded it like a radio play.

Dan: Yeah, and that actually went through several iterations because the first time we played it for some people and they were like: we don’t get the joke. We thought it was hilarious. But then we realized that – I think we played it a little bit too straight. And we rerecorded it where I was a bit more of a proxy for the audience in asking – being a bit more skeptical myself and slowly getting irritated by this character I was interviewing, who was like this total, arrogant blowhard marketer. And we think that the result was a lot better in the end.

Stephanie: Yeah, which is actually a really important content lesson. That something that you might think is really funny, or even really just awesome, it may just be you. Just run it past some people and be like, how does this sound?  And they’ll tell you: “We don’t get it. Is this actually a thing that’s happening?” And we’re like: “No, obviously we’re not developing landing pages to infinity or the Uber for landing pages; that’s silly.” They’re like: “No, it sounds real.”

Dan: Well, that’s it. And it goes to show how far off the rails digital marketing sometimes can get when something that’s so absurd could actually sound plausible to people.

Another episode, on a more serious note, that I really, really liked was my interview with Kevin Lee from Buffer. Where suddenly, the tables turned. I forget what we were talking about exactly but I asked him a question and he got, like, really quiet. He’s a really thoughtful guy, Kevin, and he’s the kind of person that doesn’t say anything without really thinking it through.

And if he doesn’t know the answer, then he’s really, in true Buffer style, kind of transparent about it and really humble. And so he said something like: I don’t know, what do you think?  And I got really quiet because, you know, I’m the interviewer; I’m not really used to being asked that. And then suddenly, I started kind of pouring out my guts to him and it became this back and forth; it was almost like content marketing therapy.

Stephanie: Yeah, I think you guys were talking about how do you tell people what you do.

Dan: Right.

Stephanie: And what is content marketing, basically.

Dan: Yeah, it got super existential.

Stephanie: Which, as we were talking about before, is a place that we actually do want to take the podcast to. Because you know, we want to be actionable but at the same time, the podcast is really one of the mediums at Unbounce that we can address these existential questions that we maybe can’t really do on the blog or we can’t really do in, say, like a video marketing or any other content form that we have.

Dan: Yeah, I think we’re always – as marketers, we’re often moving really quickly; we’re in campaign mode. There isn’t always the time to take a step back and reflect on what we do as a profession and on the craft of marketing. And I think that’s an area that we really enjoy exploring. We’re marketers talking to marketers. We have a tool for marketers, which helps them with their marketing. It’s all very meta and we think this is a good forum to take a step back to sort of share best practices, to be open about where we maybe have made mistakes, about things that we’re not quite sure of yet and to be able to talk those things through with each other in what we hope is a safe space.

Stephanie: Yeah, which actually brings up something that I addressed earlier that I kind of want to go into a little bit more, is the stats problem with podcasts. Because that’s actually something that we’re at right now, is we’re kind of evaluating how the podcast is performing as a company tool. And it’s really hard if you are familiar with podcasts, or if you have one yourself, you know what I’m talking about. Because podcasts are almost impossible to track as a KPI. Like you can get download rates; if you have awesome analytics, you can get download rates.

You can see what country they’re from, what device they’re on but it’s just a download. You don’t know if they listened to it. You can’t see how many subscribers you have. So basically, my rule of thumb would be to just track the numbers for the first couple of days and if they’re standard, I assume that’s how many subscribers that we have, which is very nebulous; it’s not an actual –

Dan: By subscribers on iTunes, right?

Stephanie: Exactly. So in my head, I’m like, okay, say, the morning of, like two hours after it launches we have 300 downloads every week. I can assume that at least 30 people are downloading this podcast automatically, meaning that they’re a subscriber. But iTunes isn’t telling me this. There’s no stat that says how many subscribers you have. So it’s not really – you can’t tag an individual listener and you can’t tell if they’ve actually listened to the episode; you can only just see that they’ve actually downloaded it onto their device.

Dan: Yeah, and that’s just like the most high-level KPI: how many people are subscribing and listening to your podcast. Once you get further down into the funnel, into like generating leads and even to tracking conversions down the line, it gets really, really dicey. And I’m not saying it’s impossible but I think we’ve made a decision here that we’re going to treat the podcast very much as a top of the funnel discovery channel. And so it really is about speaking to a fresh, new audience; getting them aware of all these marketing problems that we talk about and, of course, how Unbounce might help them find that solution.

But for us, it’s not a direct conversion channel. And I think that’s okay. We’re conversion centered marketers but we’re also inbound marketers who really trust and believe in our overall strategy. And we know that we have tons of pieces of content: we’ve got PPC, we’ve got email marketing, we’ve got lead nurturing, we’ve got much more conversion centered content that we create that’s doing that job for us. And so that frees us up to treat the podcast at what we think a podcast is good at, which is just communicating with people, engaging them and making these new relationships that hopefully we could then nurture further down the line.

Stephanie: So we’re kind of entering into this brave new world of not relying on our email list, as we had talked about was a big thing for us at launch. And distribution, trying to figure out where we need to be posting this to, who we need to get onto the podcast so that they can share it with our audience – tactics like that. But then internally, as well, now we’re just trying to figure out if our KPI is awareness, how do we actually move the needle on that?  So, say, if we’re getting 2,000 downloads on an episode, does that provide as much value as, say, 200 hits does on the blog post?  How much more engaged is a podcast listener compared to a blog reader?

So we’re really trying to make sure that we’re measuring the podcast against our awareness blog posts because those are the posts that are more in line with what the podcast goal is, and so we’re going to have a better chance of figuring out whether or not the podcast is providing value.

Dan: Right. In that case, we’re comparing apples to apples, right?  We’re not comparing a podcast against something like a webinar, which is much more conversion centric; but to compare it against a piece of awareness content that lives on the blog, for example, or a guest post does make sense. And so we’re trying to make sure that we’re still data driven and that we’re still measuring results, but that we’re measuring the right things and not getting distracted by: hey, we’re not able to track this to sales and conversions. Well, that’s not necessarily the point but it doesn’t mean that we shouldn’t be tracking it at all.

Stephanie: Yeah, because that’s the thing. Because there’s always this knee-jerk reaction to be like: oh, if you can’t track something definitively, we should cut it, or it’s probably not valuable. But when you have something that’s purely an awareness channel, and something that is unable to be tracked exactly like podcast, that’s where it becomes a little bit more grey and where, say, we’re kind of campaigning to be like: no, we swear that this has value. We’ve gotten feedback and we believe in it. Podcasting blew up last year and so obviously there is something there. And so it just comes down to actually figuring it out; how to show that.

Dan: Exactly, yeah. There’s a reason so many brands are tripping over themselves to advertise on podcasts like Serial or This American Life. I heard a film ad on Serial the other week.

Stephanie: Really?

Dan: So like a major motion picture, Hollywood studio.

Stephanie: Like a trailer?

Dan: Yeah.

Stephanie: Cool.

Dan: Yeah, I think it was the Coen brothers, the new Coen brothers’ movie. And I was like: holy shit, like that’s entering –

Stephanie: That’s new.

Dan: Yeah, that’s new. And that’s like, to me, podcast entering the big time when they have Hollywood studios advertising. So the value of a podcast listener from a human standpoint, first, we don’t take that for granted because we realize how valuable your time is and how tenuous – how much content is out there and how we want to make sure to never break that trust. But I think that also bears out in business value, that we’re seeing in the industry that the value of a podcast listener compared to, let’s say, a blog post reader; if you compare podcast advertising rates to banner ads or even native advertising, that there’s a huge difference there.

And so we do not underestimate the value of this podcast. Just like anything else; a matter of figuring out how to measure that in a way that makes sense to the medium.

Stephanie: So along with figuring out the way that we want to evaluate it, we’re also having discussions on where we want to see the podcast going this year. So Dan, would you want to share some thoughts that you’ve had around where you’d like to take the podcast?

Dan: Yeah. I would love to talk to even broaden up the scope even more in terms of who we’ve talked to. So we’ve talked to the writers and editors of some of our favorite marketing blogs; some of our favorite SaaS blogs in particular. I’d love to talk to all sorts of thought leaders in the agency world, in the design world, brand marketers, also people in very specific industries like law and real estate. I want to know how they approach marketing differently. I just want to talk to as many marketers as possible to I think just broaden the scope of our understanding of things.

I think that, like anything else, the marketing, the digital marketing world, it could sometimes feel a little bit small, a little bit like an echo chamber. Everybody’s reading the same blog posts and looking to the same stuff. But I think that there are connections to be drawn to other industries. I think the world is actually a lot bigger than sometimes a cursory glance at like your Twitter feed or your Facebook feed would make it seem. And so we really want to make connections throughout the marketing world to help marketers do better and try new things that haven’t just been blogged about over and over again.

Stephanie: Yeah, and speaking on new things, too, just even playing with the format a little bit is something that I’m excited about. Like this episode is something we’ve never done before; just having an actual conversation and not like a standard interview, like actually –

Dan: I’d like to talk to you more.

Stephanie: Yeah. So from now on, we’re not having guests. It’s just gonna be Dan and I.

Dan: Just – you know, just chilling.

Stephanie: Just shooting, you know, the stuff. That was me censoring myself for iTunes.

Dan: I was gonna say and then I stopped myself so I said just chilling.

Stephanie: Because that’s something. If you swear on iTunes, you will have to have an adult rating. The more you know.

Dan: You know what?  We should test that. I wonder if having an adult rating would actually increase our listens. Maybe there’s a certain cache to that.

Stephanie: Because people would be like, wow, that is a naughty marketing podcast.

Dan: I feel like a naughty marketing podcast would be something else, but…

Stephanie: Yeah, so like aside from just a standard interview format, having more chats, more discussions. Something I’ve even kind of toyed with is having debates or just really having more actual kind of documentary style, journalism style, reporting, potentially.

Dan: One thing I’d like to do more of is share our experiences here at Unbounce. Because I think we’re very wary of being too self reflective or too self centered, which is I think why even this episode, talking about ourselves, feels like a little bit weird or against our nature. But you know, in the last two years since I’ve been here, our marketing team has gone from five people to 35. And there have been so many lessons along the way. There’s been some pain, there have been some triumphs. We’re constantly trying to improve on our structure, on our processes. And so I think that there probably are a lot of lessons that we could share.

And one of our values as a company is to be transparent and generous in terms of what we share with the world. And I think there’s an opportunity in this podcast to do that, as well. Plus, like we have all these amazing thought leaders within the company that we never had before. Like we never had a PPC specialist, an email marketing specialist, a CRO – the fifth top ranked CRO works for our company, now, Michael Aagaard. So I think we should be tapping that expertise more than we have been.

Stephanie: Yeah, and it’s something that we toyed a little bit at one point when we moved from definitions. We did a little, quick Unbounce employee story, which I actually really liked and I thought it was kind of an interesting way to segue into the interview. But we got some feedback that it kind of seemed a bit more like filler, again. So I think there is something to be said from talking about the kind of roadblocks and solutions that we have experienced as a company.

Because it is – again, we get that more intimate feel in the interview itself, and it’s something that we also know intimately which can allow for fun format changes. We’ve experienced all these issues that people are writing blog posts about so we may as well just talk about it in a real situation.

Dan: Yeah, and we also want to know what you guys want to hear more of. Like, does that sound like insufferable to you, to hear us go on about ourselves?  Is that something that you’re interested in hearing more of?  Is there anybody in particular you’d like us to have on the podcast?  Would you like to be on the podcast?  Let us know because we’re obviously doing this for business value but, like any good piece of content marketing, we’re doing this for our audience, first. And if it doesn’t resonate with you, then there’s just no point in doing it.

The feedback that we’ve gotten so far has been amazing. The reviews and the ratings have been great. We’re so appreciative of all the downloads every week. But we want to, like true conversion centered marketers, we want to keep optimizing and keep improving. And so please let us know how we could do better.

Stephanie: Yeah, so you can do that by either emailing us at podcast@unbounce.com. If you’re not a super big fan of email, you can tweet at us. I am @msbeansie, that’s M-S-B-E-A-N-S-I-E.

Dan: I am @DanJL, D-A-N-J-L.

Stephanie: So email, Twitter, you can – well, you can’t really phone us because we don’t really have phone numbers but yeah, just –

Dan: Look us up on Skype. There’s a lot of Dan Levys but you could find me.

Stephanie: If you find the right one. Yeah, please reach out to us. We’d love to hear your feedback. It’s super important for us. And like we said, this is the year that we really want to play around with the format and get a lot of new people on, so we would love to hear what you want to listen to.

Dan: Hey, Stephanie?

Stephanie: Yes, Dan?

Dan: Is that your call to action?

Stephanie: I think that was my call to action.

Dan: All right, then. Play the music. Thanks so –

Stephanie: Thanks for listening.

Stephanie: One, two, three.

Both: Thanks for listening.


View this article: 

Lessons Learned from Year 1 of the Call to Action Podcast [PODCAST]

A-ha! Users don’t care if it’s ‘ugly’

A brief introduction

DMV.org homepage.

DMV.org is a non-government content-based website. They provide visitors with information that they might need for a DMV visit in each of the 50 U.S. states. The company earns revenue through performance-based advertising on their content pages.

The site spans thousands of pages of information.

Because we’ve been working with DMV.org for several years, we were able to pitch the following design test for their auto insurance conversion funnel (and they didn’t laugh in our faces!).

The ultimate mini-banner showdown

When users on DMV.org are looking for insurance rate estimates ― and thereby entering the auto insurance conversion funnel ― they are prompted to enter their zip code into a mini-banner. Once they’ve entered their zip, they are shown a list of auto insurance providers in their area.

We wanted to isolate various elements on this mini-banner to see if we could increase conversions (in this case, the product click-through rate) with slight, or not-so-slight, design changes.

The control banner featured a small car crash image:

In our variations, we decided to change things up a bit.

We tested a different image of a car crash in one variation and an animated GIF of a fender-bender in another. We tested a photo of a sad-looking driver in one and ― at the request of our pet-loving CEO ― we tested an image of a driving dog in another.

With our final variation, we tried something a little nuts: we changed the shape of the banner completely, making it the silhouette of a car.

A mini-banner shaped like a car? Why not?

Our design team hated this variation.


My initial reaction was ‘Oh no.’ I didn’t think it was a bad idea per se, I just thought it was so ugly.

Jules Skopp, CRO Experience Designer

As you might’ve guessed (given the build up) the car-shaped mini-banner walloped the competition.

Much to the dismay of our design team and the surprise of our strategy team, the little car-shaped banner that could increased the product click-through rate by 89.86% (compared to less than 10% for the other images), which led to a revenue per visitor increase of 74.92%.
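For clarity, lifts like these are relative changes against the control. The sketch below shows the calculation with invented per-visitor numbers; only the formula reflects how percentages like the ones above are derived.

```python
# Relative-lift calculation with hypothetical metrics (the real underlying figures aren't public).
def relative_lift(variant, control):
    """Relative change of a variant metric versus the control."""
    return (variant - control) / control

# Invented per-visitor metrics for the control banner and the car-shaped banner
ctr_control, ctr_car = 0.0412, 0.0782   # product click-through rate
rpv_control, rpv_car = 0.215, 0.376     # revenue per visitor (dollars)

print(f"CTR lift: {relative_lift(ctr_car, ctr_control):.2%}")
print(f"RPV lift: {relative_lift(rpv_car, rpv_control):.2%}")
```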


We had an office-wide poll going to see which banner would win. Nobody chose the car.

Nick So, Optimization Strategist

A-ha!

The design of this banner was totally out of left field, so much so that members of both teams thought it was a waste of time to test. But users loved it!

We had a million questions: Did users feel that the car-shaped banner was more relevant somehow? Or did the atypical design and shape simply make this banner more attention-grabbing?


This result spawned some crazy questions: What if the car was shaped like a Ferrari? Would different car shapes perform better in different states? The possibilities are endless.

Nick So, Optimization Strategist

Most importantly, this test reminded us that our opinions don’t matter. Users will sometimes behave in completely unpredictable ways – if you have a gut feeling, test it.

The data doesn’t lie.

This post is fourth in a 5-part series. If you missed any previous ‘A-ha!’ moments, check out the links below:

Stay tuned for our last post in this series!

The post A-ha! Users don’t care if it’s ‘ugly’ appeared first on WiderFunnel.

Continue reading: 

A-ha! Users don’t care if it’s ‘ugly’