Tag Archives: marketing

7 Ways To Accelerate Product Adoption (Without Spamming Your User Base)


We tend to make a big deal about leads in the marketing space, and not without good reason. Everything starts with leads. However, for software companies, the real goal is product adoption. We need people actively and consistently using our product. Regardless of our business model, success occurs when users experience that “aha” moment that takes our product from an experiment to a core part of their day-to-day work. So how do we move people from lead to product adopter? How do we give them that “aha” moment? Two words: Strategic Repetition. Repetition is a POWERFUL psychological force. Studies have…


How to Micro Test New Product/Service Ideas Using AdWords

Launching a new business idea or deciding to develop a new product for your company is not without risk. Many of the best business ideas have come from inspiration, intuition or in-depth insight into an industry. While some of these ideas have risen to dominate the modern world, such as search engines, barcodes and credit card readers, many fine ideas still end in bankruptcy for their company, due to insufficient demand or a failure to properly research customer desire. If you build it, will they come? Even smart entrepreneurs can make big mistakes. With new product, service or business…


The Part-Time Nihilist’s Guide to Marketing Terms You Hate, But Need

It’s about time that we take a step back and have a little chuckle at ourselves. Image via Shutterstock.

Plenty of products and services help people, making them healthier and happier. For those things, marketing is great — but sometimes, the way we talk about ourselves is absurd. Yeah, I said it, it’s absurd, but it’s all right because this post has a happy ending (stay tuned).

If you work in any sort of marketing role, you might have noticed that as a collective, we’ve done something incredible:

We’ve turned buzzwords into real, salaried jobs.  

You can be a Growth Hacker these days, or a Content Marketer. If you work somewhere really cool, you might even be a Conversion Ninja. Plenty of people do these jobs (myself included) and one day we’ll have the awkward pleasure of explaining to our grandchildren what it was like being paid to be a Solutions Architect, or a Dev Mogul.

“Neat, grandpa! Did you invent a new form of calculus?”

“No, son. But I had over 25,000 Twitter followers. I was an influencer.”

This is the part-time nihilist’s guide to all those marketing terms you hate (but need). It might also clarify why your parents will never understand what the heck your job is.

Homer gets back to basics with marketing. Video: Fox.
Disclaimer: This post tears down marketing terms and the idea of becoming an influencer. We hope that it is popular and that you share it. We see the irony, and we’re disgusted by it, so just move on, okay?

Being considered an “expert” or a “genius”

To be considered an expert in most other professions, you need to have studied and practiced for years and years and years. You study, you’re tested, you pass, you advance. After what feels like a lifetime of this, people trust you as a voice of authority, as an expert.

Pro tip: Inclusion in a listicle or roundup guarantees automatic employment — should you want it — with some of the most prestigious companies in Silicon Valley.

There are expert marketers, of course: people who have been to school, who dedicate their lives to the craft of combining insight and communication into the most irresistible calls to action. But if you’ve got a profile photo, maybe a LinkedIn Premium account, and a byline somewhere like Unbounce (Hey, that’s me!), you might be considered an expert.

This will do one of two things to you:

  1. It’ll make you lazy, because you’ll think that you’ve reached the top of the mountain. (By the way, there’s no top. There’s no mountain either.)
  2. It’ll scare the crap out of you, and you’ll work your ass off to become a genuine expert, or at least, someone with useful insights.

I hope for everyone’s sake that it’s the second one.

Bonus option: You’ll develop a nasty case of Imposter Syndrome, where you’ll live in constant fear of being called out. It’ll make you triple your efforts, but it’ll never be enough.

Pursuing “thought leadership”

As a marketer, when you have a good idea, you call it a thought leadership piece and you milk it until it’s red and sore. Never mind that “thought leadership” sounds like some sort of mind control; it’s just damned impressive that we managed to turn the act of having ideas into a tool for marketing.

In a way, being considered a thought leader is a lot like being considered an expert. Not so long ago there were real thought leaders, people like Albert Einstein and Martin Luther King Jr. Now, all you need to do is tip that scale from 9,999 followers to 10,000 and praise be! You’re a thought leader.

“One of us, one of us, one of us.” Video: Fox

Free infographics and ebooks

The only real way to tell whether a post is legitimate — whether the author’s really serious about the information they’re giving you — is to check for an associated infographic or ebook. At Unbounce, they call these in-post giveaways Conversion Carrots. Some other places call them Lead Magnets. I call them a necessary evil.


“Can we make it go viral?”

I once worked at a place where a department, armed with five grand, asked us if we could make them a viral video. In their defense, they didn’t understand the process of how something becomes viral (another gross marketing term), so points at least for the thought. But directly asking for a viral video, or setting out with the intention of making a viral video, is like marrying a stranger for the tax benefits, and not because you love them.

Influencer marketing

Hey bud, if you RT me, I’ll RT you.

As a marketer, you want eyeballs. You’re hungry for eyeballs, you want to pour them all over your website. Some people have lots of eyeballs looking at them; those people are called influencers, and if you’re kind to them, sometimes they’ll let you borrow their eyeball collections.

People with a lot of eyeballs in their collection tend to be good at making things go viral. They often make infographics and eBooks, as well. They are the Aaron Orendorffs of the world (Hey, man!), and they are all-powerful.

“We simply could not function without his tireless efforts.” Video: Fox

“Epic,” “unicorn,” “guru,” etc.

No, it’s not. No, they’re not. No, you’re not.

“That’s hilaaaaaarious.”

“We need more user-generated content.”

The idea behind user-generated content is sound; it’s word-of-mouth for a digital age. Having a strategy to develop user-generated content, though?

Do you ever watch those videos that publications like Gothamist make about some donut shop in Brooklyn that’s been around for 140 years? You think, “Wow, they must have a lot of user-generated content!” No, they just make great donuts. If you want your users to generate more content, just make stuff they like.

“Can’t get enough of that Sugar Crisp!” Video: Fox

Time to follow in mommy and daddy’s footsteps?

For over 20 years my dad spent most of his days with his hands plunged into ice water, gutting and slicing one fish at a time. I spend my days trying to get prospects to type their names into a CTA form field. In those final years before the sun explodes and we’re all plunged into an every-man-for-himself scenario, who’s going to be more useful? My money’s on the old man.

I told you that there was a happy ending, and in a way, the sun exploding and annihilating everything from Mercury out past Pluto is a happy ending. It’s a reminder that we’re all in this together, from your parents and their grinding manual labor jobs, to us word-pickers and graph-checkers who moan when we can’t find the right long-tail keywords to optimize conversion rates. One day everyone that’s left will go together, burning up with all the finest email lists, and all the leads. It’s all going to be fine.

People make some great stuff, and for the short time we’re here, it’s up to us to help get it in front of as many of the right people as possible. That’s your job, and it’s a fun one.

What are some of the marketing terms you hate to need? Drop them in the comments below, then download this free infographic. Jokes, there’s no infographic.


Glossary: Value Proposition


A value proposition is what you guarantee or promise to deliver to your potential buyers in exchange for their money. It’s also the main reason why people choose one product over another. Done right, it can give you a competitive edge and help you grow your business. The value proposition is vital to conversion optimization because it allows you to build a perception of the value a user is getting. So if you test it, these few sentences might have a significant impact on your conversion rate and sales. What the value proposition does when done right:…


Lessons Learned From 2,345,864 Exit Overlay Visitors


Back in 2015, Unbounce launched its first ever exit overlay on this very blog.

Did it send our signup rate skyrocketing 4,000%? Nope.

Did it turn our blog into a conversion factory for new leads? Not even close — our initial conversion rate was barely over 1.25%.

But what it did do was start us down the path of exploring the best ways to use this technology; of furthering our goals by finding ways to offer visitors relevant, valuable content through overlays.

Overlays are modal lightboxes that launch within a webpage and focus attention on a single offer.

In this post, we’ll break down all the wins, losses and “holy smokes!” moments from our first 2,345,864 exit overlay viewers.

Psst: Towards the end of these experiments, Unbounce launched Convertables, and with it a whole toolbox of advanced triggers and targeting options for overlays.

Goals, tools and testing conditions

Our goal for this project was simple: Get more people to consume more Unbounce content — whether it be blog posts, ebooks, videos, you name it.

We invest a lot in our content, and we want it read by as many marketers as possible. All our research, everything we know about that elusive thing called conversion, exists in our content.

Our content also allows readers to find out whether Unbounce is a tool that can help them. We want more customers, but only if they can truly benefit from our product. Those who experience ‘lightbulb’ moments when reading our content definitely fit the bill.

As for tools, the first four experiments were conducted using Rooster (an exit-intent tool purchased by Unbounce in June 2015). It was a far less sophisticated version of what is now Unbounce Convertables, which we used in the final experiment.

Testing conditions were as follows:

  1. All overlays were triggered on exit, meaning they launched only when abandoning visitors were detected (see the sketch after this list).
  2. For the first three experiments, we compared sequential periods to measure results. For the final two, we ran makeshift A/B tests.
  3. When comparing sequential periods, testing conditions were isolated by excluding new blog posts from showing any overlays.
  4. A “conversion” was defined as either a completed form (lead gen overlay) or a click (clickthrough overlay).
  5. All experiments were conducted between January 2015 and November 2016.
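
What counts as “triggered on exit”? Exit-intent tools generally watch for the cursor leaving through the top of the viewport, where the back button, tabs and address bar live. Here is a minimal TypeScript sketch of that idea; the element ID and function names are hypothetical, not Unbounce’s or Rooster’s actual code.

```typescript
// Minimal exit-intent trigger: fire once when the cursor leaves
// through the top of the viewport (a common proxy for abandonment).
// All names here are illustrative, not any vendor's implementation.
let overlayShown = false;

function showOverlay(): void {
  // Placeholder: launch your modal overlay here.
  document.getElementById("exit-overlay")?.classList.add("visible");
}

document.addEventListener("mouseout", (e: MouseEvent) => {
  const leavingWindow = e.relatedTarget === null;
  // clientY <= 0 means the cursor exited toward the browser chrome.
  if (!overlayShown && leavingWindow && e.clientY <= 0) {
    overlayShown = true;
    showOverlay();
  }
});
```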

Experiment #1: Content Offer vs. Generic Signup

Our first exit overlay had a simple goal: Get more blog subscribers. It looked like this.

[Image: the original blog subscription overlay]

It was viewed by 558,488 unique visitors over 170 days, 1.27% of which converted to new blog subscribers. Decent start, but not good enough.

To improve the conversion rate, we posed the following.

HYPOTHESIS
Because online marketing offers typically convert better when a specific, tangible offer is made (versus a generic signup), we expect that by offering a free ebook to abandoning visitors, we will improve our conversion rate beyond the current 1.27% baseline.

Whereas the original overlay asked visitors to subscribe to the blog for “tips”, the challenger overlay offered visitors The 23 Principles of Attention-Driven Design.

[Image: the challenger overlay offering the Attention-Driven Design ebook]

After 96 days and over 260,000 visitors, we had enough conversions to call this experiment a success. The overlay converted at 2.65%, and captured 7,126 new blog subscribers.

[Image: experiment #1 results]

Since we didn’t A/B test these overlays, our results were merely observations. Seasonality is one of many factors that can sway the numbers.

We couldn’t take it as gospel, but we were seeing double the subscribers we had previously.

Observations

  • Offering tangible resources (versus non-specific promises, like a blog signup) can positively affect conversion rates.


Experiment #2: Four-field vs. Single-field Overlays

Data people always spoil the party.

The early success of our first experiment caught the attention of Judi, our resident marketing automation whiz, who wisely reminded us that collecting only an email address on a large-scale campaign was a missed opportunity.

For us to fully leverage this campaign, we needed to find out more about the individuals (and organizations) who were consuming our content.

Translation: We needed to add three more form fields to the overlay.

[Image: the single-field overlay next to the new four-field overlay]

Since filling out forms is a universal bummer, we safely assumed our conversion rate would take a dive.

But something else happened that we didn’t predict. Notice a difference (besides the form fields) between the two overlays above? Yup, the new version was larger: 900x700px vs. 750x450px.

Adding three form fields made our original 750x450px design feel too cramped, so we arbitrarily increased the size — never thinking there may be consequences. More on that later.

Anyways, we launched the new version, and as expected the results sucked.

Things weren’t looking good after 30 days.

For business reasons, we decided to end the test after 30 days, even though we didn’t run the challenger overlay for an equal time period (96 days).

Overall, the conversion rate for the 30-day period was 48% lower than the previous 96-day period. I knew it was for good reason: Building our data warehouse is important. Still, a small part of me died that day.

Then it got worse.

It occurred to us that for a 30-day period, the sample size of viewers for the new overlay (53,460) looked awfully small.

A closer inspection revealed that our previous overlay averaged 2,792 views per day, while this new version was averaging 1,782. So basically our 48% conversion drop was served a la carte with a 36% plunge in overall views. Fun!

But why?

It turns out that increasing the size of the overlay wasn’t so harmless. The new design was too large for many visitors’ browser windows, so the overlay fired on only two out of every three visits, even when targeting rules matched.
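
In hindsight, a simple guard would have caught this: compare the overlay’s dimensions to the visitor’s viewport before firing. A hedged sketch (the pixel values are from our designs above; the function names are hypothetical):

```typescript
// Only fire the overlay if it actually fits the visitor's viewport.
// Dimensions are from the overlay designs discussed above; the
// surrounding function names are hypothetical.
const OVERLAY_WIDTH = 900;  // px, the enlarged design
const OVERLAY_HEIGHT = 700; // px

function viewportFitsOverlay(): boolean {
  return (
    window.innerWidth >= OVERLAY_WIDTH &&
    window.innerHeight >= OVERLAY_HEIGHT
  );
}

function maybeShowOverlay(show: () => void): void {
  if (viewportFitsOverlay()) {
    show();
  }
  // Otherwise skip: better to lose one impression than to render
  // a modal the visitor cannot close or read.
}
```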

We conceded, and redesigned the overlay in 800x500px format.

[Image: the redesigned 800x500px overlay]

Daily views rose back to their normal numbers, and our new baseline conversion rate of 1.25% remained basically unchanged.

Large gap between “loads” and “views” on June 4th; narrower gap on June 5th.

Observations

  • Increasing the number of form fields in overlays can cause friction that reduces conversion rates.
  • Overlay sizes exceeding 800×500 can be too large for some browsers and reduce load:view ratio (and overall impressions).

Experiment #3: One Overlay vs. 10 Overlays

It seemed like such a great idea at the time…

Why not get hyper relevant and build a different exit overlay for each of our blog categories?

With our new baseline conversion rate reduced to 1.25%, we needed an improvement that would help us overcome “form friction” and get us back to that healthy 2%+ range we enjoyed before.

So with little supporting data, we hypothesized that increasing “relevance” was the magic bullet we needed. It works on landing pages, so why not on overlays?

HYPOTHESIS  
Since “relevance” is key to driving conversions, we expect that by running a unique exit overlay on each of our blog categories — whereby the free resource is specific to the category — we will improve our conversion rate beyond the current 1.25% baseline.


We divide our blog into categories according to the marketing topic they cover (e.g., landing pages, copywriting, design, UX, conversion optimization). Each post is tagged by category.

So to increase relevance, we created a total of 10 exit overlays (each offering a different resource) and assigned each overlay to one or two categories, like this:

[Image: overlay-to-category assignments]

Creating all the new overlays would take some time (approximately three hours), but since we already had a deep backlog of resources on all things online marketing, finding a relevant ebook, course or video to offer in each category wasn’t difficult.

And since our URLs contain category tags (e.g., all posts on “design” live under unbounce.com/design), making sure the right overlay ran on the right post was easy.

URL Targeting rule for our Design category; the “include” rule automatically excludes the overlay from running in other categories.

But there was a problem: We’d established a strict rule that our readers would only ever see one exit overlay… no matter how many blog categories they browsed. It’s part of our philosophy on using overlays in a way that respects the user experience.

When we were just using one overlay, that was easy — a simple “Frequency” setting was all we needed.

[Image: the overlay “Frequency” setting]

…but not so easy with 10 overlays running on the same blog.

We needed a way to exclude anyone who saw one overlay from seeing any of the other nine.

Cookies were the obvious answer, so we asked our developers to build a temporary solution that could:

  • Pass a cookie from an overlay to the visitor’s browser
  • Exclude that cookie in our targeting settings

They obliged.

[Image: advanced targeting with cookie exclusion]

We used “incognito mode” to repeatedly test the functionality, and after that we were go for launch.
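
In spirit, the temporary solution behaves like the sketch below: every overlay writes the same cookie when it is viewed, and every overlay’s targeting excludes visitors who already carry it. The cookie name and helpers are hypothetical, not the code our developers actually shipped.

```typescript
// One-overlay-per-visitor across 10 category overlays, via a shared
// cookie. Cookie name and helpers are illustrative assumptions.
const SEEN_COOKIE = "seen_exit_overlay";

function hasSeenAnyOverlay(): boolean {
  return document.cookie
    .split("; ")
    .some((c) => c.startsWith(`${SEEN_COOKIE}=`));
}

function markOverlaySeen(days = 365): void {
  const expires = new Date(Date.now() + days * 86_400_000).toUTCString();
  document.cookie = `${SEEN_COOKIE}=1; expires=${expires}; path=/`;
}

function maybeShowCategoryOverlay(show: () => void): void {
  if (hasSeenAnyOverlay()) return; // excluded by targeting
  show();
  markOverlaySeen(); // now every other overlay is suppressed too
}
```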

Then this happened.

[Image: the Rooster dashboard]
Ignore the layout… the Convertables dashboard is much prettier now :)

After 10 days of data, our conversion rate was a combined 1.36%, 8.8% higher than the baseline. It eventually crept its way to 1.42% after an additional 250,000 views. Still nowhere near what we’d hoped.

So what went wrong?

We surmised that just because an offer is “relevant” doesn’t mean it’s compelling. Admittedly, not all of the 10 resources were on par with The 23 Principles of Attention-Driven Design, the ebook we originally offered in all categories.

That said, this experiment provided an unexpected benefit: we could now see our conversion rates by category instead of just one big number for the whole blog. This would serve us well on future tests.

Observations

  • Just because an offer is relevant doesn’t mean it’s good.
  • Conversion rates vary considerably between categories.

Experiment #4: Resource vs. Resource

“Just because it’s relevant doesn’t mean it’s good.”

This lesson inspired a simple objective for our next task: Improve the offers in our underperforming categories.

We decided to test new offers across five categories that had low conversion rates and high traffic volume:

  1. A/B Testing and CRO (0.57%)
  2. Email (1.24%)
  3. Lead Gen and Content Marketing (0.55%)
Note: We used the same overlay for the A/B Testing and CRO categories, as well as for the Lead Gen and Content Marketing categories.

Hypothesis
Since we believe the resources we’re offering in the categories of A/B testing, CRO, Email, Lead Gen and Content Marketing are less compelling than resources we offer in other categories, we expect to see increased conversion rates when we test new resources in these categories.

For the previous experiments in this post, we compared sequential periods. For this one, we took things a step further and jury-rigged an A/B testing system using Visual Website Optimizer and two Unbounce accounts.

And after finding what we believed to be more compelling resources to offer, the new test was launched.

[Image: resource vs. resource test results]

We saw slightly improved results in the A/B Testing and CRO categories, although they were not statistically significant. For the Email category, we saw a large drop-off.

In the Lead Gen and Content Marketing categories, however, there was a dramatic uptick in conversions, and the results were statistically significant. Progress!

Observations

  • Not all content is created equal; some resources are more desirable to our audience.

Experiment #5: Clickthrough vs. Lead Gen Overlays

Although progress was made in our previous test, we still hadn’t solved the problem from our second experiment.

While having the four fields made each conversion more valuable to us, it still reduced our conversion rate by a relative 48% (from 2.65% to 1.25% back in experiment #2).

We’d now worked our way up to a baseline of 1.75%, but still needed a strategy for reducing form friction.

The answer lay in a new tactic for using overlays that we dubbed traffic shaping.

Traffic Shaping: Using clickthrough overlays to incentivize visitors to move from low-converting to high-converting pages.

Here’s a quick illustration:

[Image: traffic shaping diagram]

Converting to this format would require us to:

  1. Redesign our exit overlays
  2. Build a dedicated landing page for each overlay
  3. Collect leads via the landing pages

Basically, we’d be using the overlays as a bridge to move readers from “ungated” content (a blog post) to “gated” content (a free video that required a form submission to view). Kinda like playing ‘form field hot potato’ in a modern-day version of Pipe Dream.

Hypothesis
Because “form friction” reduces conversions, we expect that removing form fields from our overlays will increase engagement (enough to offset the drop off we expect from adding an extra step). To do this, we will redesign our overlays to clickthrough (no fields), create a dedicated landing page for each overlay and add the four-field form to the landing page. We’ll measure results in Unbounce.

By this point, we were using Unbounce to build the entire campaign. The overlays were built in Convertables, and the landing pages were created with the Unbounce landing page builder.

We decided to test this out in our A/B Testing and CRO as well as Lead Gen and Content Marketing categories.

[Image: the clickthrough overlays]

After filling out the form, visitors would either be given a secure link for download (PDF) or taken to a resource page where their video would play.

Again, for this to be successful the conversion rate on the overlays would need to increase enough to offset the drop off we expected by adding the extra landing page step.

These were our results after 21 days.

[Image: clickthrough overlay results after 21 days]

Not surprisingly, engagement with the overlays increased significantly. I stress the word “engagement” and not “conversion,” because our goal had changed from a form submission to a clickthrough.

In order to see a conversion increase, we needed to factor in the percentage of visitors who would drop off once they reached the landing page.

A quick check in Unbounce showed us landing page drop-off rates of 57.7% (A/B Testing/CRO) and 25.33% (Lead Gen/Content Marketing). Time for some grade 6 math…

[Image: net lead calculations]

Even with significant drop-off in the landing page step, overall net leads still increased.
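
The underlying arithmetic is worth spelling out: net conversion rate = overlay clickthrough rate × (1 − landing page drop-off). A sketch using the measured drop-off rates above and hypothetical view and clickthrough figures:

```typescript
// Net leads through the two-step (overlay -> landing page) funnel.
// Drop-off rates are the measured ones quoted above; the view and
// clickthrough figures are hypothetical placeholders.
function netLeads(views: number, overlayCtr: number, dropOff: number): number {
  return views * overlayCtr * (1 - dropOff);
}

// A/B Testing & CRO: 57.7% of landing page visitors dropped off.
console.log(netLeads(100_000, 0.05, 0.577));  // 2115 leads
// Lead Gen & Content Marketing: 25.33% dropped off.
console.log(netLeads(100_000, 0.05, 0.2533)); // 3733.5 leads
```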

Our next step would be applying the same format to all blog categories, and then measuring overall results.

Onward!

All observations

  • Offering specific, tangible resources (vs. non-specific promises) can positively affect conversion rates.
  • Increasing the number of form fields in overlays can cause friction that reduces conversion rates.
  • Overlay sizes exceeding 800×500 can be too large for some browsers and reduce load:view ratio (and overall impressions).
  • Just because an offer is relevant doesn’t mean it’s good.
  • Conversion rates vary considerably between blog categories.
  • Not all content is created equal; some resources are more desirable to our audience.
  • “Form friction” can vary significantly depending on where your form fields appear.

Stay tuned…

We’re continuing to test new triggers and targeting options for overlays, and we want to tell you all about it.

So what’s in store for next time?

  1. The Trigger Test — What happens when we test our “on exit” trigger against a 15-second time delay?
  2. The Referral Test — What happens when we show different overlays to users from different traffic sources (e.g., social vs. organic)?
  3. New vs. Returning Visitors — Do returning blog visitors convert better than first-time visitors?



Building an App or Online Business in 2017? Here’s A DIY Resource Kit of Free Tools & Tips!


Last year, I started working on an idea for a platform, called Counsell, currently available as an app on iOS and Android devices, that lets all professionals give and get paid advice. As a designer, I was fortunate to be working with an incredible developer from the very start so we knew we could turn the idea into a working product. However, it was only when I, bolstered by my marketing background, decided to build a business around the app that I realized how haphazard and unsystematic the realities of setting up a new online business could be. Thanks to…


5 Conversion Rate Optimization Challenges For Enterprises To Solve

Although the interest in conversion rate optimization is increasing over time, organizations are unable to adopt it fully. To ensure its smooth adoption and implementation, certain challenges and misconceptions need to be addressed.

[Image: Google Trends, interest in “conversion rate optimization”]

In this post, we will talk about 5 such conversion optimization challenges that enterprises face and ways to overcome them.

Challenge 1. Politics and People—A Cultural Challenge

An organization’s culture is made up of 2 core components—people (skill and mindset) and their interpersonal relationships (power to influence and politics). Creating a conversion optimization culture becomes challenging either when people lack the understanding and skill or when influential people in the organization want their opinions to be valued more than what data and facts indicate.

Political

Brian Massey, Founder, Conversion Sciences shares his view on the political challenge as follows:

Brian Massey

Why has Donald Trump’s top-down, opinion-driven leadership style been accepted by the white-collar working public in the US? Because enterprise businesses have trained us that this is how leadership works. We have a name for this leadership style: “HiPPO,” or Highest Paid Person’s Opinion. Joel Harvey calls it Helicopter Management. This is the management style of charismatic or autocratic leaders who drive action in their organizations by helicoptering in, expressing a lightly-informed opinion, and enforcing their opinion in one of the following two ways:

* They bestow budget upon the loyal.

* They threaten the jobs of the disloyal.

So marketing teams can grab the budget and buy the latest tools. But they then struggle to find the man-hours necessary to make the tools effective.

Like all big business problems, it’s a cultural issue.

James Spittal, Chief Executive Officer, Web Marketing ROI also talks about the HiPPO effect and the political challenge that obstructs a culture of conversion rate optimization.

James Spittal

Only a small portion of changes are A/B tested, kind of like the “HiPPO” effect. The typically small and under-resourced internal CRO team madly tries to work with an agency to get as many A/B tests launched as possible and keep up their A/B test velocity while talking to everyone about CRO. Meanwhile, a C-level executive asks for a change to be pushed straight into the source code base without it being tested, potentially costing the organization millions of dollars, simply because they don’t know any better.

Keith Hagen, VP & Director of Conversion Services at Inflow views politics as an obstacle in the implementation of quality insights for any CRO program.

Keith Hagen

Not all insights are equal. One insight can be worth millions; another may not move the needle at all, even as the enterprise pays its employees to test and implement it.

Defining what an insight actually is matters as well. Insights come from customers and identify a customer obstacle or opportunity. If you are not making something better for the customer or capitalizing better on what you have, it should not be worked on. Enterprise organizations have a lot of voices, and the higher-paid voices tend to influence what optimizations are made to a site.

The solution he proposes: score insights based on their potential.

Every insight should be scored on its potential and shared across the organization. Whether the insight is about an obstacle to a purchase or an opportunity to sell more, the potential should be assigned a dollar value so that it is clear what NOT working on the insight will cost.
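
One plausible way to attach that dollar value, sketched below. The formula (affected sessions × expected lift × average order value) and every input are illustrative assumptions, not Hagen’s actual scoring model:

```typescript
// Rough expected annual value of acting on an insight:
// affected sessions x expected conversion lift x average order value.
// The formula and all inputs are illustrative assumptions.
interface Insight {
  name: string;
  affectedSessionsPerYear: number;
  expectedLift: number; // absolute lift in conversion rate, e.g. 0.002
  averageOrderValue: number;
}

function potentialValue(i: Insight): number {
  return i.affectedSessionsPerYear * i.expectedLift * i.averageOrderValue;
}

const insights: Insight[] = [
  { name: "Clarify shipping costs", affectedSessionsPerYear: 2_000_000, expectedLift: 0.002, averageOrderValue: 80 },
  { name: "Reorder footer links", affectedSessionsPerYear: 300_000, expectedLift: 0.0001, averageOrderValue: 80 },
];

// Sort so the organization sees what NOT working on each insight costs.
insights
  .sort((a, b) => potentialValue(b) - potentialValue(a))
  .forEach((i) => console.log(i.name, `$${potentialValue(i).toFixed(0)}`));
```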

People

James Spittal, Chief Executive Officer, Web Marketing ROI, attributes the challenge people pose to creating a culture of CRO to a lack of skill—technical or development.

James Spittal

This challenge simply occurs because of people in an enterprise not having the knowledge, talent, or skills. Often, we see people with a graphic design, pure web design, pure analytics, or pure UX background become the “de facto” CRO team. But they struggle because it’s unlikely that they have the technical skills or development skills to be able to implement advanced A/B test ideas (major layout changes, modals, segmentation, changing cart flows, doing tests on pricing, etc.). Often, they also struggle to get resources internally or externally and build a strong business case to increase the CRO budget.

Johann Van Tonder, COO, AWA Digital, shares similar views regarding people and the lack of talent to implement conversion optimization.

Johann Van Tonder

The challenge is to find good optimization talent. While there is no shortage of people marketing themselves as CRO practitioners, only a small percentage of the candidates we screen make it into our organization. This is the same pool that enterprises are recruiting from.  

A good optimizer is both analytical and creative, with a solid grasp of disciplines as diverse as psychology, copywriting, marketing, and statistics. They are brilliant communicators with an entrepreneurial drive and at least basic coding skills. Finding them is not easy.

Solution

The first step of creating a culture of data-driven conversion optimization in any organization is to educate the people about its benefits. Any enterprise planning to implement such a shift—moving from random A/B testing to scientific conversion optimization—must first understand the “why” behind it. That’s why we have 15 conversion rate experts share why they feel it is important to step up from A/B testing to conversion optimization.

Any cultural change requires the complete support of the top management. That’s why it is all the more important to convince management of the value of conversion optimization. Here’s how you can use data to convince your top management that they need conversion optimization:

  • Highlight improved user experience as a double win.
  • Present a competitive analysis.
  • Stress the gaps in your current approach.
  • Show the money.
  • Show the data.

Challenge 2. No Defined Structure that Supports CRO

It’s a huge challenge for enterprises to put together a structure that supports conversion optimization effectively. There are a number of questions that arise when addressing this challenge. Would it be beneficial to hire a dedicated conversion optimization team, or would it mean only additional expenditure? Who is responsible for conversion optimization?

With regard to this challenge, some interesting observations were listed by ConversionXL’s report on State of Conversion Optimization 2016. One of the findings quoted in the report mentions, “…only 29% of people said that there’s a single dedicated person who does optimization. 30% more said there’s a team in charge of optimization, but 41% of respondents had no one in particular that was accountable for optimization efforts.”

Some companies have internal conversion optimization teams that comprise an analyst, designer, marketer, and project manager. However, should these people invest all of their time on conversion optimization? One way of dealing with this is to have all team members allocate time between core job functions and conversion optimization.

Another challenge, related to the lack of a structured process for conversion optimization, as explained by Tim Ash, CEO of SiteTuners and a digital marketing keynote speaker, is the isolation of the CRO team from the other teams.

Tim Ash

The biggest problem that an enterprise CRO faces is the siloing emblematic of big companies. All job functions and even departments are compartmentalized and do not communicate well with each other. So even though a CRO group or team exists within the company, it is only able to focus on limited tactical objectives and simple split testing. Typically, CRO initiatives pass through compliance and approval reviews, get watered down by the branding gatekeepers, and then languish in the IT development queue to get implemented.

At SiteTuners, we have developed our Conversion Maturity Model to grade organizations on key aspects of their optimization effectiveness. Dimensions include culture and processes, organizational structure and skill set, measurement and accountability, the marketing technology stack, and of course the user experience across all channels.

One of the biggest determiners of success is whether there is active and consistent support for CRO from high-ranking executives. If there is political air-cover and the CRO team reports high up in the company, this team can work across the silos to tackle fundamental business issues involving products and services, the business model, back-end operational efficiencies, and fundamental user experience redesigns.

Solution

Lay down a clear process for conversion optimization that everyone in the organization follows. Create a dashboard or platform where all conversion optimization activities are planned, updated, and reported, and share it with everyone in the organization. Encourage a culture where everyone contributes to conversion optimization, but make decisions based only on data. For example, while deciding what to test and optimize, follow a scientific hypothesis prioritization framework. The benefit: though everyone gets to share their observations and hypotheses, only the most relevant of those are tested.

Challenge 3. Inefficient Methodology for Implementing Conversion Optimization

Paul Rouke, Founder and CEO, PRWD, points out that a lack of user research is one problem in the conversion optimization methodology followed by most enterprises.

Paul Rouke

Among enterprises, the lack of an intelligent and robust optimization methodology is a major barrier to making experimentation a trusted and valued part of their growth strategy. A lack of user research in developing test hypotheses, alongside a lack of innovative and strategic testing in favor of simple A/B testing, is one of the biggest barriers preventing enterprises from harnessing the strategic impact conversion optimization could have on their business.

As shown below, the interest in A/B testing is far more widespread than in conversion optimization.

[Image: Google Trends, interest in A/B testing vs. conversion optimization]

It is important to understand that testing random ideas based on opinions is not a smart way of testing. You may get a winning variation even by testing “ideas,” but this will not help solve the real pain points that users face. The challenge, therefore, is to eliminate guesswork; and the solution is to focus on data instead.

Here’s what Brian Massey has to say about eliminating guesswork and relying on a behavioral data-based methodology.

Brian Massey

Enterprises are missing out on an area that is following Moore’s Law in terms of increasing capability and decreasing costs. Behavioral data collection is dropping precipitously in price, and new capabilities are coming online weekly. Just as Microsoft didn’t realize that the mobile phone market would follow Moore’s Law, enterprises run the risk of missing the growth in Behavioral Science, a discipline designed to eliminate guessing from business strategy and tactics.

Mathilde Boyer, Head of CXO, House of Kaizen, and Peter Figueredo, Founding Partner, House of Kaizen, also talk about what is inefficient in the conversion optimization methodology currently followed by some enterprises.

Mathilde Boyer

Opinion-based A/B testing is the gangrene of CRO programs. It hinders objective creation and prioritization of test hypotheses. This tendency can lead to situations where a high level of resources is invested in low-impact optimization activities. Generation and prioritization of test hypotheses need to be data-driven, systematic, repeatable, and teachable to allow for expansion of optimization activities across a business.

Peter Figueredo

Companies who invest in CRO typically rush to get testing started and overlook the importance of conducting research. Without proper research informing testing, the CXO design process has lower chances of success. If your doctors do not know the root cause of your ailment, then they are likely only treating the symptoms, not curing the disease. Research should never be ignored; it is a critical component of House of Kaizen’s CXO success.

Solution

Data-driven optimization is focused on identifying friction, understanding the why behind user behavior, and testing hypotheses based on that data/information. Here’s what a formalized conversion optimization methodology would comprise:

  1. Researching the existing data
  2. Finding gaps in the conversion funnel
  3. Planning and developing testable hypotheses
  4. Creating test variations and executing those tests
  5. Analyzing the tests and using the analysis in subsequent tests

You can read more about the scientific methodology for conversion optimization in this post.

Andre Morys, CEO of Web Arts, in one of his interviews, talks about what’s wrong with the methodology. According to him, 80–90% of big companies do not aim for bigger goals, such as a change in the growth rate. This is another methodology-related drawback, as the goals being set do not take profitability into account. Andre’s interview answers many other questions related to business growth.

Challenge 4. Choosing the Right Tool to Meet the Business Goals

The decision-makers in an organization have a variety of tools to choose from for meeting their business goals. For example, when deciding on an A/B testing tool, they have to make a choice between a:

  • Frequentist-based statistical engine
  • Bayesian statistical engine
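
The practical difference between the two engines is the question each answers: a frequentist engine asks how unlikely the observed difference would be if the variations were identical, while a Bayesian engine asks how probable it is that B beats A. A minimal sketch of the Bayesian quantity, using a normal approximation to the posteriors (illustrative only; production engines use more careful models):

```typescript
// P(B beats A), using a normal approximation to each variation's
// posterior conversion rate. Illustrative only; real Bayesian
// engines use more careful models than this.
function normalCdf(z: number): number {
  // Abramowitz & Stegun 7.1.26 approximation of erf, via x = |z|/sqrt(2).
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly =
    t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

function probBBeatsA(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const variance = (pA * (1 - pA)) / nA + (pB * (1 - pB)) / nB;
  // Posterior of (pB - pA) is roughly Normal(pB - pA, variance).
  return normalCdf((pB - pA) / Math.sqrt(variance));
}

// 400 conversions on 20,000 visitors vs. 460 on 20,000:
console.log(probBBeatsA(400, 20_000, 460, 20_000)); // ≈ 0.98
```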

Moreover, there are multiple tools that help accomplish specific objectives. Enterprises might use Hotjar for heatmap reports, VWO for A/B testing, and some other tool for on-page surveys. Reporting becomes a pain when, instead of using one connected platform, enterprises use multiple tools to execute their conversion optimization program. If enterprises switch to a single connected platform, they can save a lot of time and resources.

Another problem with not using a single tool for testing and optimization is that it becomes difficult to explain instances of success and failure to the top management. This could be confusing for managers who are not in touch with day-to-day implementation of the conversion optimization program.

Solution

For selecting the correct tool, decision-makers need to weigh the pros and cons of their actions. They need to evaluate the tool based on how effectively and efficiently it can solve their specific business problems. For enterprises looking to invest in a tool for business growth, here’s a post on what decision-makers need to know before investing in CRO or A/B testing software.

Challenge 5. Insufficient and Incorrect Budget Allocation

Back in 2013, most companies spent less than 5% of their total marketing budget on conversion optimization.

[Image: share of marketing budget spent on conversion optimization]

Moving on to 2014, a report from Adobe says that top-converting companies spend more than 5% of their budgets on optimization. Per the conversion optimization report 2016 by ConversionXL, businesses have increased their spend on optimization. The problem, however, lies in correct allocation.

Paul Rouke talks about inefficient budget allocation as follows:

Paul Rouke

Budgets for conversion optimization within enterprises are continuing to increase, but typically in the wrong direction. Enterprises focus far too much of their marketing investment in enterprise technology. As a result, there’s little investment in people and their skills to actually harness the technology—whether building their in-house team or harnessing specialist agencies.

Enterprises which invest in Human Intelligence (HI), above and beyond technology and AI, are the ones positioning themselves for significant and sustainable growth. Growth is about people.

Solution

Before deciding the amount that enterprises should spend on conversion optimization, they should think about the return on investment from CRO. Organizations need to budget for the conversion optimization tool while analyzing their goals and actual gains. To read more on how to budget for conversion optimization, read this post by Formstack.

Summary 

Although interest in conversion optimization is growing, certain challenges keep enterprises from adopting it fully. The drawbacks this post talks about relate to organizational culture, structure, methods and processes, tools for conversion optimization, and budget. These challenges concern either the adoption of conversion optimization or its smooth implementation. Solving them can help enterprises deploy conversion optimization efficiently and effectively to achieve growth and success.

Hope you found this post insightful. We’d love to hear your thoughts on challenges that enterprises face when implementing conversion optimization. Send in your feedback and views in the comments section below.




Infographic: How To Turn Browsers Into Buyers


98% of website visitors don’t convert. That’s right. On average, only 2% of your website visitors convert and drive your online revenue. Imagine if you could increase that number just a little bit. You could potentially increase your online revenue by 50 to 100 percent or more. But too many people think conversion rate optimization is testing different button colors on landing pages. In reality, it’s so much bigger than that. If you want to better understand the powerful process it takes to increase your conversion rate, then you’ll love this infographic. It’s a simple visual that will…


Wow Your Clients, Grow Your Agency – Register for Digital Agency Day 2017

If you could get in a room with digital marketing experts from Google, AdRoll and LinkedIn, what would you ask them? Better yet, what if you could rub shoulders with them without having to leave your desk?

We’re not trying to torture you with hypotheticals. For the second year in a row, Unbounce and HubSpot have teamed up to present Digital Agency Day: a full day of virtual and in-person events dedicated to the digital agency professional.

And it’s happening very soon: on March 16th, 2017. Completely free.

Register for Digital Agency Day here.

Join expert speakers from the world’s top agencies and agency partners as they share actionable, agency-tailored advice on analytics, reporting, growing retainers, new business strategy, content marketing, conversion rate optimization and much more.

Here’s just a taste of some of the presentations you can expect:

  • Rethinking Retainers & Other Pricing Issues
  • What Your Agency Needs to Execute Content Marketing the Right Way
  • Grow Your Agency With LinkedIn Sponsored Content
  • Extreme Growth with Google AdWords: For Agencies
  • Unifying your Customer Journey: Unlocking the Power of Cross-Device Marketing


See you then? Click here to register.


“The more tests, the better!” and other A/B testing myths, debunked


Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom-line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.


But never fear! With the help of WiderFunnel Optimization Strategist, Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.

Mike St Laurent

The key to sustainable A/B testing output is to find a balance between short-term (maximum testing speed) and long-term (testing for data collection and insights).

Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.

Dennis Pavlina

An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment, when building out the strategy for another experiment. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once without taking into account these past insights can result in your testing program being more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.
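
For the curious, the check behind that definition boils down to a two-proportion z-test. A minimal sketch with hypothetical numbers (not WiderFunnel’s actual tooling):

```typescript
// Two-proportion z-test: is the difference in conversion rates
// significant at 95% confidence? A sketch, not WiderFunnel's tooling.
function isSignificantAt95(convA: number, nA: number, convB: number, nB: number): boolean {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPooled = (convA + convB) / (nA + nB);
  const stdErr = Math.sqrt(pPooled * (1 - pPooled) * (1 / nA + 1 / nB));
  const z = Math.abs(pB - pA) / stdErr;
  return z >= 1.96; // two-sided, 95% confidence
}

// 1,000 conversions on 50,000 views vs. 1,100 on 50,000:
console.log(isSignificantAt95(1_000, 50_000, 1_100, 50_000)); // true (z ≈ 2.2)
```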

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it within a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

Claire Vignon Keser

You should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.
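
Claire’s calculated sample size can be made concrete with a textbook approximation: visitors needed per variation for a two-sided test at 95% confidence and 80% power. The baseline and lift inputs below are hypothetical, and this is not any vendor’s exact calculator:

```typescript
// Visitors needed per variation: two-sided test, 95% confidence
// (z = 1.96) and 80% power (z = 0.84). A textbook approximation,
// not any vendor's exact duration calculator.
function sampleSizePerVariation(baselineRate: number, relativeLift: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((1.96 + 0.84) ** 2 * variance) / (p2 - p1) ** 2);
}

// Detecting a 10% relative lift on a 3% baseline conversion rate:
console.log(sampleSizePerVariation(0.03, 0.1)); // 53148 visitors per arm
```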

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning there was now only an 88% likelihood that the difference in conversion rates was not due to chance. Below the 95% threshold, the result was no longer statistically significant.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test, completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator.
Check your graphs! Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys?
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation based on the observed trends, abandon the variation based on observed trends, and/or create a follow-up test!

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.

Dennis Pavlina

This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by 1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!)

You can see how each variation fared against the Control in this funnel analysis.

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the right visitors (i.e. the wrong visitors) were the ones who were dropping off.

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

A/B testing is just a part of the marketing optimization machine.

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

The WiderFunnel strategy team gathers to share and discuss A/B testing insights.

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering informs hypotheses that are prioritized and created into A/B tests (that are structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and these insights are fed back into the data gathering. It’s a cycle of continuous improvement.

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!
