All posts by Natasha Wahid

‘Get past personas’, and other takeaways from CTA Conf 17

Reading Time: 7 minutes

This week, I spent two jam-packed days at Unbounce’s fourth-ever Call To Action Conference. The one-track event featured some of today’s most influential digital marketing speakers like Mitch Joel, Kindra Hall, and Rand Fishkin.

WiderFunnel team at CTA Conf 17
The WiderFunnel team and I having a ball at the CTA Conf afterparty.

Session topics ranged from integrity in marketing, to performance marketing success, to the marriage of SEO and conversion optimization. But most shared a common theme: Don’t forget about the real person behind that click.

Knowledge bombs were dropped, important conversations were had, and actionable insights were shared. So, in today’s post, I’m going to share some of my most important takeaways from CTA Conf.

If you attended the conference, please share your favorite takeaways in the comments below!

1. Don’t be trendy, be data-driven

Featured Speaker: Oli Gardner

Unbounce Co-Founder, Oli Gardner, kicked things off on the first day.

Fun fact: Due to technical difficulties, Oli ended up acting out his entire opening video sequence (and most of the subsequent videos in his presentation). He handled the hiccup like a pro, of course, and launched into a great session on data-driven design.

One of the strongest points that Oli made was that digital marketing trends self-perpetuate, regardless of whether or not they are helpful to a user.

I know we, as data-driven marketers, ‘know’ this fact. We complain about ‘best practices’, and buzzwords, and yet we still get totally caught up in trends.

Remember when explainer videos became the end-all, be-all for homepages?

WiderFunnel CTA Conf Recap Oli Gardner
Oli pointing out the flaws in an old Unbounce explainer video at CTA Conf.

What happened? Hundreds of blog posts were written about explainer videos, and hundreds of explainer videos were produced to talk about how great explainer videos are. And then, every homepage on the internet featured an explainer video.

But…were all of those explainer videos really what customers needed? In some cases, but certainly not in all.

Instead, Oli spoke about the need to “mend trends”, and make design decisions based on data, rather than the most popular trend at the time.

We hold the same view at WiderFunnel. You can A/B test explainer video after explainer video. But to create truly impactful experiences, you have to go back to the research phase.

Use the data you have to drill into what you think are your most important business problems. And test hypotheses that attempt to solve for those problems.

2. Choose people, not personas

I’m not a big fan of personas. I’ve never kicked it with a persona.

– Wil Reynolds

But, without personas, how do I write the right copy for my customers at the right time?!

Don’t panic.

Focus on motivation instead

Featured Speaker: Joel Klettke

Conversion copywriter extraordinaire, Joel Klettke, spoke about how to read your customer’s mind. He emphasized the need to get past user personas and keywords, and focus on customer motivation instead.

Joel Klettke at CTA Conf
Joel Klettke on stage at CTA Conf.

We get stuck behind our screens, and start writing about ‘synergies’ and features that our customers really don’t care about.

– Joel Klettke

He outlined a framework for getting your customers to tell you about their pain points, anxieties, desired outcomes, and priorities, in their own words:

  1. Ask
  2. Record
  3. Analyze
  4. Feed
  5. Measure

Note: I didn’t dig too deeply into the framework, here. But Joel put together a resource for CTA Conf attendees, and graciously gave me the green light to share it. Check it out here!

Jobs To Be Done vs. Personas

Featured Speaker: Claire Suellentrop

On Day 2, Claire Suellentrop built on this idea of the dated persona.

She explained that marketers collect many data points about our prospects, like…

  • Gender, age, location
  • Title, company, industry
  • Married, no kids, one puppy

…but asked whether or not all of that data actually helps us determine why a real human being just bought a new backpack from Everlane.

As an alternative, she suggested the Jobs To Be Done framework. JTBD refers to your customer’s struggle to make progress on something. When your customer overcomes that struggle, the job is done, and they have made progress.

The framework looks a little something like this:

“When ____________ (event that triggers the struggle), help me ______ (struggle / job) so I can __________ (better life / done).”

To identify your customers’ struggle, Claire suggests actually asking your customers. She outlined several sample questions:

  • “Take me back to life before [product]. What was it like?”
  • “What happened that compelled you to start looking for something different?”
  • “What happened when you tried [product] that made you confident it was right for you?”
  • “What can you do now that you couldn’t do before?”

3. Tell the story, don’t just allude to it

Featured Speaker: Kindra Hall

One of my favorite speakers on Day 1 of CTA Conf was Kindra Hall. (Not surprising, as she is the storytelling expert).

Kindra dug into strategic storytelling in marketing. According to her, you should use a story every time you need to communicate value in your marketing.

Kindra Hall at CTA Conf
Kindra Hall sets out to define storytelling in marketing.

Storytelling is powerful because real-life humans are attracted to great stories. (And marketers talk to people, after all.)

Stories, according to Kindra, stick with us and make us do stuff because storytelling is a co-creative process.

“As I am telling you my story, you are creating your own in your mind. I am giving you my words, but you are meeting me half way, and we are creating a shared memory,” Kindra explained.

The most powerful moment in her talk came when she challenged the audience with the biggest storytelling mistake:

Too often, we allude to the story, but don’t actually tell it.

– Kindra Hall

She showed two example videos to illustrate her point. In the first, a company founder almost told her compelling story about losing both of her parents, but glossed over the details. The result was a pretty video, with pretty music that almost created feeling.

In the second video, the founder told her full story, explaining how losing her parents shaped her company and product. The difference in emotional impact was kind of incredible.

And making your customers feel is a huge part of making your customers act. Because we — consumers, people, humans — don’t buy products or services…we buy feelings.

4. Pay attention to people signals

For goodness’ sake, solve the searcher’s problem

Featured Speaker: Wil Reynolds

Founder of Seer Interactive, Wil Reynolds, danced his way onto the stage, and delivered a really strong talk on SEO, conversion optimization, and the importance of people signals.

Wil Reynolds at CTA Conf
Wil remembers when he f*ed up, and forgot about the HUMAN element.

He didn’t mince words, explaining that marketers too often put conversions before customers. We ask “how do I get?” when we should be asking, “how do I help my customer get what they need?”

When you do an amazing job on search, you get to help people who are lost solve their problems.

– Wil Reynolds

Wil painted a picture of how we, as marketers, are letting our own wants override solving our customers’ problems. In the world of search, Wil pointed out that Google rewards pages that solve the searcher’s query. So solve the searcher’s query!

Much like we allude to stories, but often don’t tell them, we talk about listening to our customers, but often don’t really listen.

Instead of showing them product comparisons when they search “best CRM platform”, we pay to show them a landing page that claims “My product is the best! Get in my funnel!”

This isn’t just an issue in search or performance marketing. In conversion optimization, there is an emphasis on velocity over user research. There is pressure to test more, and test faster.

But, we must take the time to do the research. To get as close as possible to our customers’ problems, and tailor our marketing experiences to their needs.

Win at SEO and CRO with a long-term vision

Featured Speaker: Rand Fishkin

Building on Wil’s session on Day 1, SEO wizard, Rand Fishkin, gave the audience actionable tips on how to optimize for searcher intent.

Rand pointed to conversion optimization.

At its core, conversion optimization is about getting into your customers’ minds, and testing changes to get closer to the best possible customer experience. To give your customer what they need, you must soothe their pain points, and provide a solution.

You can apply this same concept to SEO: If you 1) gain a deep understanding of what searchers are seeking, and 2) determine why some searchers come away unsatisfied, you can optimize for searcher task accomplishment.

Rand Fishkin at CTA Conf
Rand demonstrates how establishing trust leads to ROI.

Unfortunately, Rand pointed out, there is still a conflict between SEO and CRO, because conversion rate and searcher satisfaction are sometimes in direct opposition.

For example, let’s say you want to get more blog subscriptions, so you add a pop-up to your blog post. This may lead to a higher conversion rate on the page, but lower searcher satisfaction. Some readers might bounce, which may lead to lower organic traffic.

But, Rand ended on a high note:

You can win with long-term thinking. By always asking, ‘are we building a brand that’s helping people succeed?’

– Rand Fishkin

5. Don’t fear disruption. Own it.

Featured Speaker: Mitch Joel

One of the final speakers on Day 1 was marketing thought-leader, Mitch Joel, who shook things up a bit. Mitch spoke about what it means to be disruptive (and how to not fear disruption).

Mitch Joel at CTA Conf
“For a seed to achieve its greatest expression, it must come completely undone.”

When I ask C-Suite marketers to define disruption, the definition is never consistent. In fact, I often don’t get a definition of disruption, I get a definition of destruction.

– Mitch Joel

He asked, if disruption is the big bad wolf, who are the heroes in this marketing story?

Well, like the three little pigs, Mitch discussed three ways to be disruptive rather than be disrupted:

  1. Transformation: Business transformation is not about your products or services; it happens from the inside out. And it starts with technology: you need to be using the same tech, and the same forms of communication, that your customers are using.
  2. Innovative marketing: Innovation is not re-allocation of resources. It isn’t investing more in Google AdWords versus another channel. Real innovation is about making and creating new products and experiences that we can use to market with.
  3. Micro-transactions: Marketers and businesses get caught up in the macro transaction, in the purchase. But we live in a world of micro-transactions. This is the customer journey, and it is extremely important to understand.

Mitch Joel emphasized that if you can apply these ‘three little pigs’ to your business model, you will be in a great place, though he recognized that it’s not always easy.

But nothing great is ever easy.

6. Be bold enough to be wrong

Featured Speaker: Michael Aagaard

Senior Conversion Optimizer at Unbounce, Michael Aagaard, closed out the two-day conference. His message was a simple but powerful warning against the trap of confirmation bias.

We, as humans, are not interested in information, but confirmation.

– Michael Aagaard

Confirmation bias refers to our tendency to search for and recall information in ways that confirm our existing beliefs, hypotheses, and expectations. And it is a threat to data-driven marketing.

Michael Aagaard at CTA Conf
Michael takes us back to ye olde London to make a point about the enduring power of confirmation bias.

When you A/B test, you are searching for objectivity. You are trying to figure out which variation your users prefer, outside of your own opinions and beliefs about what works best.

But it’s rarely that simple, even if you are a pro.

Michael showed us a landing page that he analyzed for a client, featuring a stock photo hero image. He said he had railed against the photo, and shown the client examples of the hundreds of other stock photos featuring the same model.

But, when he tested the landing page, he found that the original version, featuring the ‘terrible’ stock photo, was the clear winner.

“Maybe,” he said, “users don’t spend hours scouring the internet for stock photo sinners like I do.”

He urged the audience to be bold enough to be wrong, to challenge our hypotheses, and get out of the marketing bubble when we are trying to solve problems.

If we don’t get out of the marketing bubble, we end up making assumptions, and designing experiences for ourselves.

– Michael Aagaard

Go hang out with your customer success teams and sales teams; get outsider input on your ‘great’ ideas. Go find your own natural skeptic, and challenge your hypotheses.

Were you at CTA Conf 17? What were your most important takeaways? Who were your favorite speakers, and why? Let us know in the comments!


How do ad agencies win a Cannes Lion award?

Reading Time: 2 minutes

As the Cannes Lions Festival is wrapping up this week, we’re seeing the annual breathless, self-congratulatory statements coming out of agencies with photos of their awards and sun-tanned creative teams sipping champagne.

Cannes Lions
Thanks for the trip to the south of France, clients!
We’d like to thank the little people who made this possible.

They should feel proud. They’ve earned a distinction that has been the recognized stamp of credibility for advertising creativity since 1954.

How do agencies win at the Cannes Lions festival?

When I worked at the big ad agencies, I was often shocked at how they used clients’ budgets for the purpose of winning awards and self-promotion.

I’ve seen ad agency executives planning how to maximize their billings for minimal work and use their clients’ budgets to submit campaigns for awards.

I vividly remember, shortly before I walked away from my ad agency career, being part of a team that created a poster to promote a lightbulb.

It involved an elaborate set rental, professional photography shoot, intensive image editing, and ultimately cost the client $17,000. For a poster.

It did nothing to communicate the benefits of the lightbulb for consumers. And there was not a single conversation at the agency about how we should measure results, or even what the goal was for the poster.

Was it a failed poster campaign?

It certainly didn’t achieve the goals in the official creative brief.

But, it did win a prestigious award for that agency and the creative director.

It was certainly a clever (if not esoteric) concept with beautiful, subtle photography, but it was entirely useless as an ad.

I watched as the client contacts turned a blind eye to the waste, knowing that they would be repaid with lavish expense account dinners in exchange for handing over their company’s cash.

CMOs are turning against award-obsessed agencies

That’s why today’s CMOs are rejecting traditional award-seeking agencies. They know those agencies don’t care about their clients. Much less their clients’ customers.

Today’s CMOs know award-seeking agencies don’t care about their clients. Much less their clients’ customers.

They know that too-clever ads often don’t achieve results. Their digital transformation is changing their priorities. Data-informed ad campaigns are now revealing how ineffective the old gut-feeling approach can be.

They are seeking alternatives, and finding them in the Zen Marketing approach that balances intuition with data, big ideas with bold experiments, inspiration with rigorous validation.

The alternative to cleverness is customer insights that are validated by robust data.

The alternative to awards for cleverness is a measurable lift in results.

I firmly believe that creativity is still required for advertising. And a rigorous experimentation program is enabling today’s marketing innovation.

I’m reminded again, in this Cannes Lions Festival season, of why I started WiderFunnel to be the “anti-agency.” And again, why we will never make a recommendation if we haven’t tested its ability to lift the client’s revenue.

So, the next time you’re in an agency pitch where they’re bragging about their awards, don’t walk; run away from hiring them. They’re telling you they don’t care about you.

Why we will never win a Cannes Lion award

Short answer: Because we will never submit for one.


[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

Reading Time: 2 minutes

What would you do with 21% more sales this month?

I bet you’d walk into your next meeting with your boss with an extra spring in your step, right?

Well, when you implement a strategic marketing optimization program, results like this are not only possible, they are probable.

In this new case study, you’ll discover how e-commerce software supplier, Ecwid, ran one experiment for four weeks, and saw a 21% increase in paid upgrades.

Get the full Ecwid case study now!

Download a PDF version of the Ecwid case study, featuring experiment details, supplementary takeaways and insights, and a testimonial from Ecwid’s Sr. Director, Digital Marketing.




A little bit about Ecwid

Ecwid provides easy-to-use online store setup, management, and payment solutions. The company was founded in 2009, with the goal of enabling business-owners to add online stores to their existing websites, quickly and without hassle.

The company has a freemium business model: Users can sign up for free, and unlock more features as they upgrade to paid packages.

Ecwid’s partnership with WiderFunnel

In November 2016, Ecwid partnered with WiderFunnel with two primary goals:

  1. To increase initial signups for their free plan through marketing optimization, and
  2. To increase the rate of paid upgrades, through platform optimization

This case study focuses on a particular experiment cycle that ran on Ecwid’s step-by-step onboarding wizard.

The methodology

Last winter, the WiderFunnel Strategy team did an initial LIFT Analysis of the onboarding wizard, and identified several potential barriers to conversion. (Both in terms of completing the steps to set up a new store, and in terms of upgrading to a paid plan.)

The lead WiderFunnel Strategist for Ecwid, Dennis Pavlina, decided to create an A/B cluster test to 1) address the major barriers simultaneously, and 2) get major lift for Ecwid, quickly.

The overarching goal was to make the onboarding process smoother. The WiderFunnel and Ecwid optimization teams hoped that enhancing the initial user experience, and exposing users to the wide range of Ecwid’s features, would result in more users upgrading to paid plans.

Dennis Pavlina

Ecwid’s two objectives ended up coming together in this test. We thought that if more new users interacted with the wizard and were shown the whole ‘Ecwid world’ with all the integrations and potential it has, they would be more open to upgrading. People needed to be able to see its potential before they would want to pay for it.

Dennis Pavlina, Optimization Strategist, WiderFunnel

The Results

This experiment ran for four weeks, at which point the variation was determined to be the winner with 98% confidence. The variation resulted in a 21.3% increase in successful paid account upgrades for Ecwid.

Read the full case study for:

  • The details on the initial barriers to conversion
  • How this test was structured
  • Which secondary metrics we tracked, and
  • The supplementary takeaways and customer insights that came from this test


Capturing supermarket magic and providing the ideal customer experience

Reading Time: 6 minutes

The customer-centric focus

Over the past few years, one message has been gaining momentum within the marketing world: customer experience is king.

“Customer experience” (CX) refers to your customer’s perception of her relationship with your brand—both conscious and subconscious—based on every interaction she has with your brand during her customer life cycle.

Customer experience is king
How do your customers feel about your brand?

Companies are obsessing over CX, and for good reason(s):

  • It is 6-7x more expensive to attract a new customer than it is to retain an existing customer
  • 67% of consumers cite ‘bad experiences’ as a reason for churn
  • 66% of consumers who switch brands do so because of poor service

Across sectors, satisfied customers spend more, exhibit deeper loyalty to companies, and create conditions that allow companies to have lower costs and higher levels of employee engagement.

As conversion optimization specialists, we test in pursuit of the perfect customer experience, from that first email subject line, to the post-purchase conversation with a customer service agent.

We test because it is the best way to listen, and create ideal experiences that will motivate consumers to choose us over our competitors in the saturated internet marketplace.

Create the perfect personalized customer experience!

Your customers are unique, and their ideal experiences are unique. Create the perfect customer experience with this 4-step guide to building the most effective personalization strategy.





Which leads me to the main question of this post: Which companies are currently providing the best customer experiences, and how can you apply their strategies in your business context?

Each year, the Tempkin Group releases a list of the best and worst US companies, by customer experience rating. The list is based on survey responses from 10,000 U.S. consumers, regarding their recent experiences with companies.

And over the past few years, supermarkets have topped that list: old school, brick-and-mortar, this-model-has-been-around-forever establishments.

Customer experience - brick-mortar vs. ecommerce
What are supermarkets doing so right, and how can online retailers replicate it?

In the digital world, we often focus on convenience, usability, efficiency, and accessibility…but are there elements at the core of a great customer experience that we may be missing?

A quick look at the research

First things first: Let’s look at how the Tempkin Group determines their experience ratings.

Tempkin surveys 10,000 U.S. consumers, asking them to rate their recent (past 60 days) interactions with 331 companies across 20 industries. The survey questions cover Tempkin’s three components of experience:

  1. Success: Were you, the consumer, able to accomplish what you wanted to do?
  2. Effort: How easy was it for you to interact with the company?
  3. Emotion: How did you feel about those interactions?

Respondents answer questions on a scale of 1 (worst) to 7 (best), and researchers score each company accordingly. For more details on how the research was conducted, you can download the full report, here.

In this post, I am going to focus on one supermarket that has topped the list for the past three years: Publix. Not only does Publix top the Tempkin ratings, it also often tops the supermarket rankings compiled by the American Customer Satisfaction Index.

Long story short: Publix is winning the customer experience battle.

WiderFunnel Customer Experience Ratings Tempkin 2017
2017 Customer Experience ratings from Tempkin.
WiderFunnel Customer Experience Ratings Tempkin 2016
2016 Customer Experience ratings from Tempkin.

So, what does Publix do right?

Publix growth - WiderFunnel customer experience
Publix growth trends (Source).

If you don’t know it, Publix Super Markets, Inc. is an American supermarket chain headquartered in Florida. Founded in 1930, Publix is a private corporation that is wholly owned by present and past employees; it is considered the largest employee-owned company in the world.

In an industry that has seen recent struggles, Publix has seen steady growth over the past 10 years. So, what is this particular company doing so very right?

1. World-class customer service

Publix takes great care to provide the best possible customer service.

From employee presentation (no piercings, no unnatural hair color, no facial hair), to the emphasis on “engaging the customer”, to the bread baked fresh on-site every day, the company’s goal is to create the most pleasurable shopping experience for each and every customer.

When you ask “Where is the peanut butter?” at another supermarket, an employee might say, “Aisle 4.” But at Publix, you will be led to the peanut butter by a friendly helper.

The store’s slogan: “Make every customer’s day a little bit better because they met you.”

2. The most motivated employees

Publix associates are famously “pleased-as-punch, over-the-moon, [and] ridiculously contented”.

Note the term “associates”: Because Publix is employee-owned, employees are not referred to as employees, but associates. As owners, associates share in the store’s success: If the company does well, so do they.

“Our culture is such that we believe if we take care of our associates, they in turn will take care of our customers. Associate ownership is our secret sauce,” said Publix spokeswoman, Maria Brous. “Our associates understand that their success is tied to the success of our company and therefore, we must excel at providing legendary service to our customers.”

3. Quality over quantity

While Publix is one of the largest food retailers in the country by revenue, they operate a relatively small number of stores: 1,110 stores across six states in the southeastern U.S. (For context, Wal-Mart operates more than 4,000 stores).

Each of Publix’s store locations must meet a set of standards. From the quality of the icing on a cake in the bakery, to the “Thanks for shopping at Publix. Come back and see us again soon!” customer farewell, customers should have a delightful experience at every Publix store.

4. An emotional shopping experience

In the Tempkin Experience Ratings, emotion was the weakest component for the 331 companies evaluated. But, Publix was among the few organizations to receive an “excellent” emotion rating. (In fact, they are ranked top 3 in this category.)

widerfunnel customer delight
Are you creating delight for the individuals who are your customers?

They are able to literally delight their customers. And, as a smart marketer, I don’t have to tell you how powerful emotion is in the buying process.

Great for Publix. What does this mean for me?

As marketers, we should be changing the mantra from ‘always be closing’ to ‘always be helping’.

– Jonathan Lister, LinkedIn

In the digital marketing world, it is easy to get lost in acronyms: UX, UI, SEO, CRO, PPC…and forget about the actual customer experience. The experience that each individual shopper has with your brand.

Beyond usability, beyond motivation tactics, beyond button colors and push notifications, are you creating delight?

To create delight, you need to understand your customer’s reality. It may be time to think about how much you spend on website traffic, maintenance, analytics, and tools vs. how much you spend to understand your customers…and flip the ratio.

It’s important to understand the complexity of how your users interact with your website. We say, ‘I want to find problems with my website by looking at the site itself, or at my web traffic’. But that doesn’t lead to results. You have to understand your user’s reality.

– André Morys, Founder & CEO, WebArts

Publix is winning with their customer-centric approach because they are fully committed to it. While the tactics may be different with a brick-and-mortar store and an e-commerce website, the goals overlap:

1. Keep your customer at the core of every touch point

From your Facebook ad, to your product landing page, to your product category page, checkout page, confirmation email, and product tracking emails, you have an opportunity to create the best experience for your customers at each step.

customer service and customer experience
Great customer service is one component of a great customer experience.

2. Make your customers feel something.

Humans don’t buy things. We buy feelings. What are you doing to make your shoppers feel? How are you highlighting the intangible benefits of your value proposition?

3. Keep your employees motivated.

Happy, satisfied employees deliver happy, satisfying customer experiences, whether they’re creating customer-facing content for your website, or speaking to customers on the phone. For more on building a motivated, high-performance marketing team, read this post!

Testing to improve your customer experience

Of course, this wouldn’t be a WiderFunnel blog post if I didn’t recommend testing your customer experience improvements.

If you have an idea for how to inject emotion into the shopping experience, test it. If you believe a particular tweak will make the shopping experience easier and your shoppers more successful, test it.

Your customers will show you what an ideal customer experience looks like with their actions, if you give them the opportunity.

Here’s an example.

During our partnership with e-commerce platform provider, Magento, we ran a test on the product page for the company’s Enterprise Edition software, meant to improve the customer experience.

The main call-to-action on this page was “Get a free demo”—a universal SaaS offering. The assumption was that potential customers would want to experience and explore the platform on their own (convenient, right?) before purchasing.

Magento_CTA_Get
The original Magento Enterprise Edition product page, featuring the “Get a free demo” CTA.

Looking at click map data, however, our Strategists noticed that visitors to this page were engaging with informational tabs lower on the page. It seemed that potential customers needed more information to successfully accomplish their goals on the page.

Unfortunately, once visitors had finished browsing tabs, they had no option other than trying the demo, whether they were ready or not.

So, our Strategists tested adding a secondary “Talk to a specialist” call-to-action. Potential customers could connect directly with a Magento sales representative, and get answers to all of their questions.

Magento_CTA
Today’s Magento Enterprise Edition product page features a “Talk to a specialist” CTA.

This call-to-action hadn’t existed prior to this test, so the (literally) infinite conversion rate lift Magento saw in qualified sales calls was not surprising.

What was surprising was the phone call we received six months later: Turns out the “Talk to a specialist” leads were 8x more valuable than the “Get a free demo” leads.

After several subsequent test rounds, “Talk to a specialist” became the main call-to-action on that product page. Magento’s most valuable prospects had demonstrated that the ideal customer experience included the opportunity to get more information from a specialist.

While Publix’s success reminds us of the core components of a great customer experience, actually creating a great customer experience can be tricky.

You might be wondering:

  • What is most important to my customers: Success, Effort, or Emotion?
  • What improvements should I make first?
  • How will I know these improvements are actually working?

A test-and-learn strategy will help you answer these questions, and begin working toward a truly great customer experience.

Don’t get lost in the guesswork of tweaks, fixes, and best practices. Get obsessed with understanding your customer, instead.

How do you create the ideal customer experience?

Please share your thoughts in the comments section below!


How to do server-side testing for SPA optimization

Reading Time: 5 minutes

Gettin’ technical.

We talk a lot about marketing strategy on this blog. But today, we are getting technical.

In this post, I team up with WiderFunnel front-end developer, Thomas Davis, to cover the basics of server-side testing from a web development perspective.

The alternative to server-side testing is client-side testing, which has arguably been the dominant testing method for many marketing teams, due to ease and speed.

But modern web applications are becoming more dynamic and technically complex. And testing within these applications is becoming more technically complex.

Server-side testing is a solution to this increased complexity. It also allows you to test much deeper. Rather than being limited to testing images or buttons on your website, you can test algorithms, architectures, and re-brands.

Simply put: If you want to test on an application, you should consider server-side testing.

Let’s dig in!

Note: Server-side testing is a tactic that is linked to single page applications (SPAs). Throughout this post, I will refer to web pages and web content within the context of a SPA. Applications such as Facebook, Airbnb, Slack, BBC, Codecademy, eBay, and Instagram are SPAs.


Defining server-side and client-side rendering

In web development terms, “server-side” refers to “occurring on the server side of a client-server system.”

The client refers to the browser, and client-side rendering occurs when:

  1. A user requests a web page,
  2. The server finds the page and sends it to the user’s browser,
  3. The page is rendered on the user’s browser, and any scripts run during or after the page is displayed.
Static app server
A basic representation of server-client communication.

The server is where the web page and other content live. With server-side rendering, the requested web page is sent to the user’s browser in final form:

  1. A user requests a web page,
  2. The server interprets the script in the page, and creates or changes the page content to suit the situation
  3. The page is sent to the user in final form and then cannot be changed using server-side scripting.

To talk about server-side rendering, we also have to talk a little bit about JavaScript. JavaScript is a scripting language that adds functionality to web pages, such as a drop-down menu or an image carousel.

Traditionally, JavaScript has been executed on the client side, within the user’s browser. However, with the emergence of Node.js, JavaScript can also be run on the server side. In this setup, all JavaScript executing on the server runs through Node.js.

*Node.js is an open-source, cross-platform JavaScript runtime environment, used to execute JavaScript code server-side. It uses the Chrome V8 JavaScript engine.

In layman’s (ish) terms:

When you visit a SPA web application, the content you are seeing is either being rendered in your browser (client-side), or on the server (server-side).

If the content is rendered client-side, JavaScript builds the application HTML content within the browser, and requests any missing data from the server to fill in the blanks.

Basically, the page is incomplete upon arrival, and is completed within the browser.

If the content is being rendered server-side, your browser receives the application HTML, pre-built by the server. It doesn’t have to fill in any blanks.
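
To make the contrast concrete, here is a minimal, hypothetical sketch of server-side rendering with Node.js and Express. The route and markup are invented for illustration; the point is that the server assembles the finished HTML and the browser receives it with no blanks left to fill in.

  import express from 'express';

  const app = express();

  // Server-side rendering in miniature: the server builds the complete HTML
  // for the requested page and sends it to the browser in final form.
  app.get('/hello/:name', (req, res) => {
    const html = `<html><body><h1>Hello, ${req.params.name}!</h1></body></html>`;
    res.send(html);
  });

  app.listen(3000);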

Why do SPAs use server-side rendering?

There are benefits to both client-side rendering and server-side rendering, but render performance and page load time are two huge pros for the server side.

(A 1 second delay in page load time can result in a 7% reduction in conversions, according to Kissmetrics.)

Server-side rendering also enables search engine crawlers to find web content, improving SEO; and because social crawlers (like the crawlers used by Facebook) do not evaluate JavaScript, server-side rendering is beneficial for social sharing as well.

With client-side rendering, the user’s browser must download all of the application JavaScript, and wait for a response from the server with all of the application data. Then, it has to build the application, and finally, show the complete HTML content to the user.

All of which to say, with a complex application, client-side rendering can lead to sloooow initial load times. And, because client-side rendering relies on each individual user’s browser, the developer only has so much control over load time.

Which explains why some developers are choosing to render their SPAs on the server side.

But, server-side rendering can disrupt your testing efforts, if you are using a framework like Angular or React.js. (And the majority of SPAs use these frameworks).

The disruption occurs because the version of your application that exists on the server becomes out of sync with the changes being made by your test scripts on the browser.

NOTE: If your web application uses Angular, React, or a similar framework, you may have already run into client-side testing obstacles. For more on how to overcome these obstacles, and successfully test on AngularJS apps, read this blog post.


Testing on the server side vs. the client side

Client-side testing involves making changes (the variation) within the browser by injecting JavaScript after the original page has already loaded.

The original page loads, the content is hidden, the necessary elements are changed in the background, and the ‘new’ version is shown to the user post-change. (Because the page is hidden while these changes are being made, the user is none the wiser.)

As I mentioned earlier, the advantages of client-side testing are ease and speed. With a client-side testing tool like VWO, a marketer can set up and execute a simple test using a WYSIWYG editor without involving a developer.

But for complex applications, client-side testing may not be the best option: Layering more JavaScript on top of an already-bulky application means even slower load time, and an even more cumbersome user experience.

A Quick Hack

There is a workaround if you are determined to do client-side testing on a SPA. Web developers can take advantage of features like Optimizely’s conditional activation mode to make sure that testing scripts are only executed when the application reaches a desired state.

However, this can be difficult, as developers will have to take many variables into account, like location changes performed by the $routeProvider, or triggering interaction-based goals.

To avoid flicker, you may need to hide content until the front-end application has initialized in the browser, voiding the performance benefits of using server-side rendering in the first place.

WiderFunnel - client side testing activation mode
Activation Mode waits until the framework has loaded before executing your test.
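
As a rough sketch of that pattern, assuming an Angular app: the application waits until the router has settled, then activates the test and reveals the hidden content. The activateClientSideTest() stub, the experiment key, and the CSS class name below are placeholders, not a real testing-tool API.

  import { Component } from '@angular/core';
  import { Router, NavigationEnd } from '@angular/router';

  // Placeholder for whatever manual/conditional activation hook your
  // client-side testing tool provides.
  function activateClientSideTest(experimentKey: string): void {
    // e.g. notify the testing tool that the page is ready to be evaluated
  }

  @Component({
    selector: 'app-root',
    template: '<router-outlet></router-outlet>',
  })
  export class AppComponent {
    constructor(router: Router) {
      router.events.subscribe(event => {
        if (event instanceof NavigationEnd) {
          // The application has reached the desired state: activate the test,
          // then un-hide the content that was hidden to avoid flicker.
          activateClientSideTest('homepage_experiment');
          document.body.classList.remove('wf-hide-until-ready');
        }
      });
    }
  }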



When you do server-side testing, there are no modifications being made at the browser level. Rather, the parameters of the experiment variation (‘User 1 sees Variation A’) are determined at the server route level, and hooked straight into the JavaScript application through a service provider.

Here is an example where we are testing a pricing change:
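
A minimal sketch of how that might look, assuming the Optimizely Full Stack Node SDK and Express with cookie-parser; the experiment key, variation key, cookie name, and prices below are made up for illustration:

  import * as fs from 'fs';
  import express from 'express';
  import cookieParser from 'cookie-parser';
  import * as optimizelySDK from '@optimizely/optimizely-sdk';

  const app = express();
  app.use(cookieParser());

  // The Optimizely datafile is normally fetched from Optimizely's CDN; a local
  // copy is assumed here to keep the sketch short.
  const datafile = JSON.parse(fs.readFileSync('./optimizely-datafile.json', 'utf8'));
  const optimizelyClient = optimizelySDK.createInstance({ datafile });

  app.get('/pricing', (req, res) => {
    // Each visitor needs a stable, unique ID so they always see the same variation.
    const userId = req.cookies['uid'] || 'anonymous-visitor';

    // The variation is decided here, at the route level, on the server.
    const variation = optimizelyClient.activate('pricing_experiment', userId);
    const monthlyPrice = variation === 'reduced_price' ? 39 : 49;

    // The page is rendered in final form, with the chosen price baked in.
    // (Assumes a view engine, e.g. the Angular Universal express engine.)
    res.render('pricing', { monthlyPrice });
  });

  app.listen(3000);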

“Ok, so, if I want to do server-side testing, do I have to involve my web development team?”

Yep.

But, this means that testing gets folded into your development team’s workflow. And, it means that it will be easier to integrate winning variations into your code base in the end.

If yours is a SPA, server-side testing may be the better choice, despite the work involved. Not only does server-side testing embed testing into your development workflow, it also broadens the scope of what you can actually test.

Rather than being limited to testing page elements, you can begin testing core components of your application’s usability like search algorithms and pricing changes.

A server-side test example!

For web developers who want to do server-side testing on a SPA, Tom has put together a basic example using the Optimizely SDK. This example is an illustration, and is not functional.

In it, we are running a simple experiment that changes the color of a button. The example is built using Angular Universal and Express. A global service provider is used to fetch the user’s variation from the Optimizely SDK.

Here, we have simply hard-coded the user ID. However, Optimizely requires that each user have a unique ID. Therefore, you may want to use the user ID that already exists in your database, or store one in a cookie through Express’ cookie middleware.
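
As a rough, hypothetical sketch of that setup (building on the Express and Optimizely pieces above, and assuming the @nguniversal/express-engine), the ‘global service provider’ idea might look something like this; the injection token, experiment key, cookie name, and component are all illustrative:

  // experiment.tokens.ts: a global token that any component or service can inject
  import { InjectionToken } from '@angular/core';

  export const EXPERIMENT_VARIATION = new InjectionToken<string>('experiment.variation');

  // server.ts (excerpt): assign the variation per request, then hand it to Angular
  app.get('*', (req, res) => {
    // A cookie set by cookie-parser gives each visitor a stable ID, with a
    // hard-coded fallback (as in the original illustration).
    const userId = req.cookies['wf_uid'] || 'hard-coded-user-id';
    const variation =
      optimizelyClient.activate('button_color_experiment', userId) || 'control';

    // The variation is provided to the Angular app at render time, so the
    // server-rendered HTML and the client app that boots later see the same value.
    res.render('index', {
      req,
      providers: [{ provide: EXPERIMENT_VARIATION, useValue: variation }],
    });
  });

  // cta-button.component.ts (excerpt): the button color follows the variation
  import { Component, Inject } from '@angular/core';
  import { EXPERIMENT_VARIATION } from './experiment.tokens';

  @Component({
    selector: 'app-cta-button',
    template: '<button [style.background]="color">Sign up</button>',
  })
  export class CtaButtonComponent {
    color: string;
    constructor(@Inject(EXPERIMENT_VARIATION) variation: string) {
      this.color = variation === 'treatment' ? '#2ecc71' : '#3498db';
    }
  }

Because the variation is assigned before the page is rendered, the HTML the browser receives and the client-side app that boots afterwards agree on the same variation: no flicker, and no post-load content swap.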

Are you currently doing server-side testing?

Or, are you client-side testing on a SPA? What challenges (if any) have you faced? How have you handled them? Do you have any specific questions? Let us know in the comments!

How to get evergreen results from your landing page optimization

Reading Time: 7 minutes

Landing page optimization is old news.

Seriously. A quick google will show you that Unbounce, QuickSprout, Moz, Qualaroo, Hubspot, Wordstream, Optimizely, CrazyEgg, VWO (and countless others), have been writing tips and guides on how to optimize your landing pages for years.

Not to mention the several posts we have already published on the WiderFunnel blog since 2008.

And yet. This conversation is so not over.

Warning: If your landing page optimization goals are short-term, or completely focused on conversion rate lift, this post may be a waste of your time. If your goal is to continuously have the best-performing landing pages on the internet, keep reading.



Marketers are funnelling more and more money into paid advertising, especially as Google allocates more and more SERP space to ads.

In fact, as an industry, we are spending upwards of $92 billion annually on paid search advertising alone.

landing-page-optimization-SERP-space
The prime real estate on a Google search results page often goes to paid.

And it’s not just search advertising that is seeing an uptick in spend, but social media advertising too.

It makes sense that marketers are still obsessing over their landing page conversion rates: this traffic is costly and curated. These are visitors that you have sought out, that share characteristics with your target market. It is extremely important that these visitors convert!

But, there comes a time in every optimizer’s life, when they face the cruel reality of diminishing returns. You’ve tested your landing page hero image. You’ve tested your value proposition. You’ve tested your form placement. And now, you’ve hit a plateau.

So, what next? What’s beyond the tips and guides? What is beyond the optimization basics?

1) Put on your customer’s shoes.

First things first: Let’s do a quick sanity check.

When you test your hero image, or your form placement, are you testing based on tips and recommended best practices? Or, are you testing based on a specific theory you have about your page visitors?

landing-page-optimization-customer-shoes
Put on your customer’s shoes.

Tips and best practices are a fine place to start, but the insight behind why those tactics work (or don’t work) for your visitors is where you find longevity.

The best way to improve experiences for your visitors is to think from their perspective. And the best way to do that is to use frameworks, and framework thinking, to get robust insights about your customers.

– Chris Goward, Founder & CEO, WiderFunnel

Laying the foundation

It’s very difficult to think from a different perspective. This is true in marketing as much as it is in life. And it’s why conversion optimization and A/B testing have become so vital: We no longer have to guess at what our visitors want, but can test instead!

That said, a test requires a hypothesis. And a legitimate hypothesis requires a legitimate attempt to understand your visitor’s unique perspective.

To respond to this need for understanding, WiderFunnel developed the LIFT Model® in 2008: our foundational framework for identifying potential barriers to conversion on a page from the perspective of the page visitor.

Get optimization ideas with the LIFT poster!

Get the LIFT Model poster, and challenge yourself to keep your visitor’s perspective in mind at all times. Use the six conversion factors to analyze your pages, and get optimization ideas!





The LIFT Model attempts to capture the idea of competing forces in communication, narrowing them down to the most salient aspects of communication that marketers should consider.

I wanted to apply the principles of Relevance, Clarity, Distraction, Urgency and Anxiety to what we were delivering to the industry and not just to our clients. And the LIFT Model is a part of that: making something as simple as possible but no simpler.

– Chris Goward

When you look at your page through a lens like the LIFT Model, you are forced to question your assumptions about what your visitors want when they land on your page.

landing-page-optimization-LIFT-Model
View your landing pages through a framework lens.

You may love an interactive element, but is it distracting your visitors? You may think that your copy creates urgency, but is it really creating anxiety?

If you are an experienced optimizer, you may have already incorporated a framework like the LIFT Model into your optimization program. But, after you have analyzed the same page multiple times, how do you continue to come up with new ideas?

Here are a few tips from the WiderFunnel Strategy team:

  1. Bring in fresh eyes from another team to look at and use your page
  2. User test, to watch and record how actual users are using your page
  3. Sneak a peek at your competitors’ landing pages: Is there something they’re doing that might be worth testing on your site?
  4. Do your page analyses as a team: many heads are better than one
  5. Brainstorm totally new, outside-the-box ideas…and test one!

You should always err on the side of “This customer experience could be better.” After all, it’s a customer-centric world, and we’re just marketing in it.

2) Look past the conversion rate.

“Landing page optimization”, like “conversion rate optimization”, is a limiting term. Yes, on-page optimization is key, but mature organizations view “landing page optimization” as the optimization of the entire experience, from first to last customer touchpoint.

Landing pages are only one element of a stellar, high-converting marketing campaign. And focusing all of your attention on optimizing only one element is just foolish.

From testing your featured ads, to tracking click-through rates of Thank You emails, to tracking returns and refunds, to tracking leads through the rest of the funnel, a better-performing landing page is about much more than on-page conversion rate lift.

landing-page-optimization-big-picture
On-page optimization is just one part of the whole picture.

An example is worth 1,000 words

One of our clients is a company that provides an online consumer information service—visitors type in a question and get an Expert answer. One of the first zones (areas on their website) that we focused on was a particular landing page funnel.

Visitors come from an ad, and land on a page where they can ask their question. They then enter a 4-step funnel: Step 1: Ask the question > Step 2: Add more information > Step 3: Pick an Expert > Step 4: Get an answer (aka the checkout page)

Our primary goal was to increase transactions, meaning we had to move visitors all the way through the funnel. But we were also tracking refunds and chargebacks, as well as revenue per visitor.

More than pushing a visitor to ‘convert’, we wanted to make sure those visitors went on to be happy, satisfied customers.

In this experiment, we focused on the value proposition statements. The control landing page exclaimed, “A new question is answered every 9 seconds!”. Our Strategy team had determined (through user testing) that “speed of answers” was the 8th most valuable element of the service for customers, and that “peace of mind / reassurance” was the most important.

So, they tested two variations, featuring two different value proposition statements meant to create more peace of mind for visitors:

  • “Join 6,152,585 satisfied customers who got professional answers…”
  • “Connect One on One with an Expert who will answer your question”

Both of these variations ultimately increased transactions, by 6% and 9.4% respectively. But! We also saw large decreases in refunds and chargebacks, and large increases in net revenue per visitor, with both variations.

By following visitors past the actual conversion, we were able to confirm that these initial statements set an impactful tone: visitors were more satisfied with their purchases, and comfortable investing more in their expert responses.

3) Consider the big picture.

Just as you should think of landing page optimization as the optimization of a complete digital experience, you should also treat it as part of your overall digital optimization strategy.

When you discover an insight about visitors to your product page, feed it into a test on your landing page. When you discover an insight about visitor behavior on your landing page, feed it into a test on your website.

It’s true that your landing pages most likely cater to specific visitor segments, who may behave totally differently than your organic visitors. But, it is also true that landing page wins may be overall wins.

Plus, landing page insights can be very valuable, because they are often new visitor insights. And now, a little more advice from Chris Goward, optimization guru:

“Your best opportunities for testing your value proposition are with first impression visitors. These are usually new visitors to your high traffic landing pages or your home page […]

By split testing your alternative value propositions with new visitors, you’ll reduce your exposure to existing customers or prospects who are already in the consideration phase. New prospects have a blank canvas for you to present your message variations and see what sticks.

Then, from the learning gained on landing pages, you can validate insights with other target audience groups and with your customers to leverage the learning company-wide.

Landing page testing can do more than just improve conversion rates on landing pages. When done strategically, it can deliver powerful, high-leverage marketing insights.”



Just because your landing pages are separate from your website does not mean that your landing page optimization should be separate from your other optimization efforts. A landing page is just another zone, and you are free to (and should) use insights from one zone when testing on another.

4) Go deeper, explore further.

A lot of marketers talk about landing page design: how to build the right landing page, where to position each element, what color scheme and imagery to use, etc.

But when you dig into the why behind your test results, it’s like breaking into a piñata of possibilities, or opening a box of idea confetti.

landing-page-optimization-ideas
Discovering the reason behind the result is like opening a box of idea confetti!

Why do your 16-25 year old, mobile users respond so favorably to a one-minute video testimonial from a past-purchaser? Do they respond better to this indicator of social proof than another?

Why do your visitors prefer one landing page under normal circumstances, and a different version when external factors change (like a holiday, or a crisis)? Can you leverage this insight throughout your website?

Why does one type of urgency phrasing work, while slightly different wording decreases conversions on your page? Are your visitors sensitive to overly salesy copy? Why or why not?

Not only are there hundreds of psychological principles to explore within your landing page testing, but landing page optimization is also intertwined with your personalization strategy.

For many marketers, personalized landing pages are becoming the norm. And personalization opens the door to even more potential customer insights. Assuming you already have visitor segments, you should test the personalized experiences on your landing pages.

For example, imagine you have started using your visitors’ first names in the hero banner of your landing page. Have you validated that this personalized experience is more effective than another, like moving a social proof indicator above the fold? Both can be deemed personalization, but they tap into very different motivations.

From psychological principles, to validating your personalized experiences, the possibilities for testing on your landing pages are endless.

Just keep testing, Dory-style

Your landing page(s) will never be “optimized”. That is the beauty and cruelty of optimization: we are always chasing unattainable perfection.

But your landing pages can definitely be better than they are now. Even if you have a high-converting page, even if your page is listed by HubSpot as one of the 16 best designed landing pages, even if you’ve followed all of the rules…your landing page can be better.

Because I’m not just talking about conversions, I’m talking about your entire customer experience. If you give them the opportunity, your new users will tell you what’s wrong with your page.

They’ll tell you where it is unclear and where it is distracting.

They’ll tell you what motivates them.

They’ll tell you how personal you should get.

They’ll tell you how to set expectations so that they can become satisfied customers or clients.

A well-designed landing page is just the beginning of landing page optimization.

Beyond A vs. B: How to get better results with better experiment design

Reading Time: 7 minutes

You’ve been pushing to do more testing at your organization.

You’ve heard that your competitors at ______ are A/B testing, and that their customer experience is (dare I say it?) better than yours.

You believe in marketing backed by science and data, and you have worked to get the executive team at your company on board with a tested strategy.

You’re excited to begin! To learn more about your customers and grow your business.

You run one A/B test. And then another. And then another. But you aren’t seeing that conversion rate lift you promised. You start to hear murmurs and doubts. You start to panic a little.

You could start testing as fast as you can, trying to get that first win. (But you shouldn’t).

Instead, you need to reexamine how you are structuring your tests. Because, as Alhan Keser writes,

Alhan Keser

If your results are disappointing, it may not only be what you are testing – it is definitely how you are testing. While there are several factors for success, one of the most important to consider is Design of Experiments (DOE).

This isn’t the first (or even the second) time we have written about Design of Experiments on the WiderFunnel blog. Because that’s how important it is. Seriously.

For this post, I teamed up with Director of Optimization Strategy, Nick So, to take a deeper look at the best ways to structure your experiments for maximum growth and insights.


Warning: Things will get a teensy bit technical, but this is a vital part of any high-performing marketing optimization program.

The basics: Defining A/B, MVT, and factorial

Marketers often use the term ‘A/B testing’ to refer to marketing experimentation in general. But there are multiple different ways to structure your experiments. A/B testing is just one of them.

Let’s look at a few: A/B testing, A/B/n testing, multivariate (MVT), and factorial design.

A/B test

In an A/B test, you are testing your original page / experience (A) against a single variation (B) to see which will result in a higher conversion rate. Variation B might feature a cluster of multiple changes, or a single isolated change.

ab test widerfunnel
When you change multiple elements in a single variation, you might see lift, but what about insights?

In an A/B/n test, you are testing more than two variations of a page at once. “N” refers to the number of versions being tested, anywhere from two versions to the “nth” version.

Multivariate test (MVT)

With multivariate testing, you test each individual change in isolation against the others, mixing and matching every possible combination available.

Imagine you want to test a homepage re-design with four changes in a single variation:

  • Change A: New hero banner
  • Change B: New call-to-action (CTA) copy
  • Change C: New CTA color
  • Change D: New value proposition statement

Hypothetically, let’s assume that each change has the following impact on your conversion rate:

  • Change A = +10%
  • Change B = +5%
  • Change C = -25%
  • Change D = +5%

If you were to run a classic A/B test, pitting your current control page (A) against a combination of all four changes at once (B), you would see a hypothetical net change of -5% overall (10% + 5% - 25% + 5%). You would assume that your re-design did not work and most likely discard the ideas.

With a multivariate test, however, each of the following would be a variation:

mvt widerfunnel

Multivariate testing is great because it shows you the positive or negative impact of every single change, and every combination of those changes, revealing the ideal combination (in this theoretical example: A + B + D).

However, this strategy is rarely practical in the real world. Even if you have a ton of traffic, it would still take more time than most marketers have for a test with 15 variations to reach any kind of statistical significance.

The more variations you test, the more your traffic will be split while testing, and the longer it will take for your tests to reach statistical significance. Many companies simply can’t follow the principles of MVT because they don’t have enough traffic.
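
To see how quickly the variation count balloons, here is a quick sketch in Python. It is illustrative only: it uses the hypothetical per-change lifts above, plus the simplifying assumption that the effects of individual changes simply add.

```python
# A quick sketch (illustrative only) of why multivariate variation counts balloon.
# It uses the hypothetical per-change lifts from above and the simplifying
# assumption that the effects of individual changes simply add.
from itertools import combinations

change_lifts = {"A": 0.10, "B": 0.05, "C": -0.25, "D": 0.05}

variations = []
for r in range(1, len(change_lifts) + 1):
    for combo in combinations(change_lifts, r):          # iterate over change names
        expected_lift = sum(change_lifts[c] for c in combo)
        variations.append((combo, expected_lift))

print(len(variations))                     # 15 variations, plus the control
best_combo, best_lift = max(variations, key=lambda v: v[1])
print(best_combo, round(best_lift, 2))     # ('A', 'B', 'D') 0.2
```

Fifteen variations means your traffic is split fifteen-plus ways, which is exactly the problem described above.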

Enter factorial experiment design. Factorial design allows for the speed of pure A/B testing combined with the insights of multivariate testing.

Factorial design: The middle ground

Factorial design is another method of Design of Experiments. Similar to MVT, factorial design allows you to test more than one element change within the same variation.

The greatest difference is that factorial design doesn’t force you to test every possible combination of changes.

Rather than creating a variation for every combination of changed elements (as you would with MVT), you can design your experiment to focus on specific isolations that you hypothesize will have the biggest impact.

With basic factorial experiment design, you could set up the following variations in our hypothetical example:

VarA: Change A = +10%
VarB: Change A + B = +15%
VarC: Change A + B + C = -10%
VarD: Change A + B + C + D = -5%

Factorial design widerfunnel
In this basic example, variation A features a single change; VarB is built on VarA, VarC is built on VarB, and VarD is built on VarC.

NOTE: With factorial design, estimating the value (e.g. conversion rate lift) of each change is a bit more complex than shown above. I’ll explain.

Firstly, let’s imagine that our control page has a baseline conversion rate of 10% and that each variation receives 1,000 unique visitors during your test.

When you estimate the value of change A, you are using your control as a baseline.

factorial testing widerfunnel
Variation A versus the control.

Given the above information, you would estimate that change A is worth a 10% lift by comparing the 11% conversion rate of variation A against the 10% conversion rate of your control.

The estimated conversion rate lift of change A = (11 / 10 – 1) = 10%

But, when estimating the value of change B, variation A must become your new baseline.

factorial testing widerfunnel
Variation B versus variation A.

The estimated conversion rate lift of change B = (11.5 / 11 – 1) = 4.5%

As you can see, the “value” of change B is slightly different from the 5% difference shown above.

When you structure your tests with factorial design, you can work backwards to isolate the effect of each individual change by comparing variations. But, in this scenario, you have four variations instead of 15.
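
If it helps to see that back-calculation spelled out, here is a minimal sketch in Python using the hypothetical conversion rates from the example above (10% control, 11% variation A, 11.5% variation B). It is a sketch of the arithmetic only, not WiderFunnel tooling.

```python
# A minimal sketch of the back-calculation above, using the hypothetical
# conversion rates from the example (10% control, 11% VarA, 11.5% VarB).
# In a factorial ("built-on") design, each variation is compared against
# the variation it was built on, not against the control.
variations = [
    ("Control", 0.100),
    ("VarA (adds change A)", 0.110),
    ("VarB (adds change B)", 0.115),
    # VarC, VarD, etc. would follow the same pattern.
]

for (baseline_name, baseline_cr), (name, cr) in zip(variations, variations[1:]):
    isolated_lift = cr / baseline_cr - 1   # lift attributable to the newly added change
    print(f"{name}: {isolated_lift:+.1%} vs. {baseline_name}")

# VarA (adds change A): +10.0% vs. Control
# VarB (adds change B): +4.5% vs. VarA (adds change A)
```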

Mike St Laurent

We are essentially nesting A/B tests into larger experiments so that we can still get results quickly without sacrificing insights gained by isolations.

– Michael St Laurent, Optimization Strategist, WiderFunnel

Then, you would simply re-validate the hypothesized positive results (Change A + B + D) in a standard A/B test against the original control to see if the numbers align with your prediction.

Factorial allows you to get the best potential lift, with five total variations in two tests, rather than 15 variations in a single multivariate test.

But, wait…

It’s not always that simple. How do you hypothesize which elements will have the biggest impact? How do you choose which changes to combine and which to isolate?

The Strategist’s Exploration

The answer lies in the Explore (or research gathering) phase of your testing process.

At WiderFunnel, Explore is an expansive thinking zone, where all options are considered. Ideas are informed by your business context, persuasion principles, digital analytics, user research, and your past test insights and archive.

Experience is the other side of this coin. A seasoned optimization strategist can look at the proposed changes and determine which changes to combine (i.e. cluster), and which should be isolated due to risk or the potential insights to be gained.

At WiderFunnel, we don’t just invest in the rigorous training of our Strategists. We also have a 10-year-deep test archive that our Strategy team continuously draws upon when determining which changes to cluster, and which to isolate.

Factorial design in action: A case study

Once upon a time, we were testing with Annie Selke, a retailer of luxury homeware goods. This story follows two experiments we ran on Annie Selke’s product category page.

(You may have already read about what we did during this test, but now I’m going to get into the details of how we did it. It’s a beautiful illustration of factorial design in action!)

Experiment 4.7

In the first experiment, we tested three variations against the control. As the experiment number suggests, this was not the first test we ran with Annie Selke. But it is the ‘first’ test in this story.

ab testing marketing control
Experiment 4.7 control product category page.

Variation A featured an isolated change to the “Sort By” filters below the image, converting them into a drop-down menu.

ab testing marketing example
Replaced original ‘Sort By’ categories with a more traditional drop-down menu.

Evidence?

This change was informed by qualitative click map data, which showed low interaction with the original filters. Strategists also theorized that, without context, visitors may not even know that these boxes are filters (based on e-commerce best practices). This variation was built on the control.

Variation B was also built on the control, and featured another isolated change to reduce the left navigation.

ab testing marketing example
Reduced left-hand navigation.

Evidence?

Click map data showed that most visitors were clicking on “Size” and “Palette”, and past testing had revealed that Annie Selke visitors were sensitive to removing distractions. Plus, the persuasion principle known as the Paradox of Choice theorizes that more choice = more anxiety for visitors.

Unlike variation B, variation C was built on variation A, and featured a final isolated change: a collapsed left navigation.

Collapsed left-hand filter (built on VarA).

Evidence?

This variation was informed by the same evidence as variation B.

Results

Variation A (built on the control) saw a decrease in transactions of -23.2%.
Variation B (built on the control) saw no change.
Variation C (built on variation A) saw a decrease in transactions of -1.9%.

But wait! Because variation C was built on variation A, we knew that the estimated value of change C (the collapsed filter) was 19.1%.

The next step was to validate our estimated lift of 19.1% in a follow-up experiment.

Experiment 4.8

The follow-up test also featured three variations versus the original control. Because you should never waste an opportunity to gather more insights!

Variation A was our validation variation. It featured the collapsed filter (change C) from 4.7’s variation C, but maintained the original “Sort By” functionality from 4.7’s control.

ab testing marketing example
Collapsed filter & original ‘Sort By’ functionality.

Variation B was built on variation A, and featured two changes emphasizing visitor fascination with colors. We 1) changed the left nav filter from “palette” to “color”, and 2) added color imagery within the left nav filter.

ab testing marketing example
Updated “palette” to “color”, and added color imagery. (A variation featuring two clustered changes).

Evidence?

Click map data suggested that Annie Selke visitors are most interested in refining their results by color, and past test results also showed visitor sensitivity to color.

Variation C was built on variation A, and featured a single isolated change: we made the collapsed left nav persistent as the visitor scrolled.

ab testing marketing example
Made the collapsed filter persistent.

Evidence?

Scroll maps and click maps suggested that visitors want to scroll down the page, and view many products.

Results

Variation A led to a 15.6% increase in transactions, which is pretty close to our estimated 19% lift, validating the value of the collapsed left navigation!

Variation B was the big winner, leading to a 23.6% increase in transactions. Based on this win, we could estimate the value of the emphasis on color.

Variation C resulted in a 9.8% increase in transactions, but because it was built on variation A (not on the control), we learned that the persistent left navigation was actually responsible for a decrease in transactions of -11.2%.

This is what factorial design looks like in action: big wins, and big insights, informed by human intelligence.

The best testing framework for you

What are your testing goals?

If you are in a situation where potential revenue gains outweigh the potential insights to be gained, or your test has little long-term value, you may want to go with a standard A/B cluster test.

If you have lots and lots of traffic, and value insights above everything, multivariate may be for you.

If you want the growth-driving power of pure A/B testing, as well as insightful takeaways about your customers, you should explore factorial design.

A note of encouragement: With factorial design, your tests will get better as you continue to test. With every test, you will learn more about how your customers behave, and what they want. Which will make every subsequent hypothesis smarter, and every test more impactful.

One 10% win without insights may turn heads in your direction now, but a test that delivers insights can turn into five 10% wins down the line. It’s similar to the compounding effect: collecting insights now can mean massive payouts over time.

– Michael St Laurent

“The more tests, the better!” and other A/B testing myths, debunked

Reading Time: 8 minutes

Will the real A/B testing success metrics please stand up?

It’s 2017, and most marketers understand the importance of A/B testing. The strategy of applying the scientific method to marketing to prove whether an idea will have a positive impact on your bottom-line is no longer novel.

But, while the practice of A/B testing has become more and more common, too many marketers still buy into pervasive A/B testing myths. #AlternativeFacts.

This has been going on for years, but the myths continue to evolve. Other bloggers have already addressed myths like “A/B testing and conversion optimization are the same thing”, and “you should A/B test everything”.

As more A/B testing ‘experts’ pop up, A/B testing myths have become more specific. Driven by best practices and tips and tricks, these myths represent ideas about A/B testing that will derail your marketing optimization efforts if left unaddressed.


But never fear! With the help of WiderFunnel Optimization Strategist, Dennis Pavlina, I’m going to rebut four A/B testing myths that we hear over and over again. Because there is such a thing as a successful, sustainable A/B testing program…

Into the light, we go!

Myth #1: The more tests, the better!

A lot of marketers equate A/B testing success with A/B testing velocity. And I get it. The more tests you run, the faster you run them, the more likely you are to get a win, and prove the value of A/B testing in general…right?

Not so much. Obsessing over velocity is not going to get you the wins you’re hoping for in the long run.

Mike St Laurent

The key to sustainable A/B testing output, is to find a balance between short-term (maximum testing speed), and long-term (testing for data-collection and insights).

Michael St Laurent, Senior Optimization Strategist, WiderFunnel

When you focus solely on speed, you spend less time structuring your tests, and you will miss out on insights.

With every experiment, you must ensure that it directly addresses the hypothesis. You must track all of the most relevant goals to generate maximum insights, and QA all variations to ensure bugs won’t skew your data.

Dennis Pavlina

An emphasis on velocity can create mistakes that are easily avoided when you spend more time on preparation.

Dennis Pavlina, Optimization Strategist, WiderFunnel

Another problem: If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

You can often apply the insights gained from one experiment when building out the strategy for another. Plus, those insights provide additional evidence for testing a particular concept. Lining up a huge list of experiments at once, without taking these past insights into account, can leave your testing program more scattershot than evidence-based.

While you can make some noise with an ‘as-many-tests-as-possible’ strategy, you won’t see the big business impact that comes from a properly structured A/B testing strategy.

Myth #2: Statistical significance is the end-all, be-all

A quick definition

Statistical significance: The probability that a certain result is not due to chance. At WiderFunnel, we use a 95% confidence level. In other words, we can say that there is a 95% chance that the observed result is because of changes in our variation (and a 5% chance it is due to…well…chance).

If a test has a confidence level of less than 95% (positive or negative), it is inconclusive and does not have our official recommendation. The insights are deemed directional and subject to change.

Ok, here’s the thing about statistical significance: It is important, but marketers often talk about it as if it is the only determinant for completing an A/B test. In actuality, you cannot view it within a silo.

For example, a recent experiment we ran reached statistical significance three hours after it went live. Because statistical significance is viewed as the end-all, be-all, a result like this can be exciting! But, in three hours, we had not gathered a representative sample size.

Claire Vignon Keser

You should not wait for a test to be significant (because it may never happen) or stop a test as soon as it is significant. Instead, you need to wait for the calculated sample size to be reached before stopping a test. Use a test duration calculator to understand better when to stop a test.

After 24 hours, the same experiment had dropped to a confidence level of 88%, meaning that there was now only an 88% likelihood that the difference in conversion rates was not due to chance: below our 95% threshold for statistical significance.

Traffic behaves differently over time for all businesses, so you should always run a test for full business cycles, even if you have reached statistical significance. This way, your experiment has taken into account all of the regular fluctuations in traffic that impact your business.

For an e-commerce business, a full business cycle is typically a one-week period; for subscription-based businesses, this might be one month or longer.
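
To make “confidence level” and “calculated sample size” concrete, here is a minimal sketch in Python. It is illustrative only: the two-proportion z-test, the function names, and the default 95% confidence / 80% power are my assumptions, not WiderFunnel’s tooling or the test duration calculator Claire mentions.

```python
# A minimal sketch of both numbers: the confidence level of an observed
# difference (two-proportion z-test) and the sample size to plan for before
# calling a test. Defaults of 95% confidence and 80% power are assumptions
# to adjust for your own business.
from math import sqrt, ceil
from scipy.stats import norm

def confidence_level(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided confidence that the variations truly differ."""
    p_a, p_b = conversions_a / visitors_a, conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 1 - 2 * norm.sf(abs(z))          # e.g. 0.88 means 88% confidence

def sample_size_per_variation(baseline_cr, min_relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect a minimum relative lift."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

print(confidence_level(100, 1_000, 125, 1_000))    # ~0.92 (not yet significant)
print(sample_size_per_variation(0.10, 0.10))       # ~14,751 visitors per variation
```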

Myth #2, Part II: You have to run a test until it reaches statistical significance

As Claire pointed out, this may never happen. And it doesn’t mean you should walk away from an A/B test, completely.

As I said above, anything below 95% confidence is deemed subject to change. But, with testing experience, an expert understanding of your testing tool, and by observing the factors I’m about to outline, you can discover actionable insights that are directional (directionally true or false).

  • Results stability: Is the conversion rate difference stable over time, or does it fluctuate? Stability is a positive indicator.
ab testing results stability
Check your graphs! Are conversion rates crossing? Are the lines smooth and flat, or are there spikes and valleys?
  • Experiment timeline: Did I run this experiment for at least a full business cycle? Did conversion rate stability last throughout that cycle?
  • Relativity: If my testing tool uses a t-test to determine significance, am I looking at the hard numbers of actual conversions in addition to conversion rate? Does the calculated lift make sense?
  • LIFT & ROI: Is there still potential for the experiment to achieve X% lift? If so, you should let it run as long as it is viable, especially when considering the ROI.
  • Impact on other elements: If elements outside the experiment are unstable (social shares, average order value, etc.) the observed conversion rate may also be unstable.

You can use these factors to make the decision that makes the most sense for your business: implement the variation, abandon it, or create a follow-up test based on the observed trends!
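
If you like to codify your decision rules, here is a rough sketch of that checklist as a single helper function. The thresholds and the recommendation wording are illustrative assumptions, not a WiderFunnel rule; the point is simply that the stop/continue call weighs several signals, not confidence alone.

```python
# A rough sketch that codifies the checklist above into one helper. The thresholds
# and recommendation wording are illustrative assumptions, not a WiderFunnel rule.
def inconclusive_test_recommendation(confidence, results_stable, full_business_cycle,
                                     lift_still_possible, external_metrics_stable):
    if confidence >= 0.95:
        return "Significant: act on the direction of the result."
    if not full_business_cycle:
        return "Keep running: a full business cycle has not been covered yet."
    if lift_still_possible and external_metrics_stable:
        return "Keep running: the potential ROI still justifies the traffic."
    if results_stable and external_metrics_stable:
        return "Directional: consider acting on the trend, and plan a follow-up test."
    return "Unstable: treat the result with caution and design a follow-up test."

print(inconclusive_test_recommendation(
    confidence=0.88, results_stable=True, full_business_cycle=True,
    lift_still_possible=False, external_metrics_stable=True))
# Directional: consider acting on the trend, and plan a follow-up test.
```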

Myth #3: An A/B test is only as good as its effect on conversion rates

Well, if conversion rate is the only success metric you are tracking, this may be true. But you’re underestimating the true growth potential of A/B testing if that’s how you structure your tests!

To clarify: Your main success metric should always be linked to your biggest revenue driver.

But, that doesn’t mean you shouldn’t track other relevant metrics! At WiderFunnel, we set up as many relevant secondary goals (clicks, visits, field completions, etc.) as possible for each experiment.

Dennis Pavlina

This ensures that we aren’t just gaining insights about the impact a variation has on conversion rate, but also the impact it’s having on visitor behavior.

– Dennis Pavlina

When you observe secondary goal metrics, your A/B testing becomes exponentially more valuable because every experiment generates a wide range of secondary insights. These can be used to create follow up experiments, identify pain points, and create a better understanding of how visitors move through your site.

An example

One of our clients provides an online consumer information service — users type in a question and get an Expert answer. This client has a 4-step funnel. With every test we run, we aim to increase transactions: the final, and most important conversion.

But, we also track secondary goals, like click-through-rates, and refunds/chargebacks, so that we can observe how a variation influences visitor behavior.

In one experiment, we made a change to step one of the funnel (the landing page). Our goal was to set clearer visitor expectations at the beginning of the purchasing experience. We tested 3 variations against the original, and all 3 won, resulting in increased transactions (hooray!).

The secondary goals revealed important insights about visitor behavior, though! Firstly, each variation resulted in substantial drop-offs from step 1 to step 2…fewer people were entering the funnel. But, from there, we saw gradual increases in clicks to steps 3 and 4.

Our variations seemed to be filtering out visitors without strong purchasing intent. We also saw an interesting pattern with one of our variations: It increased clicks from step 3 to step 4 by almost 12% (a huge increase), but decreased actual conversions by -1.6%. This result was evidence that the call-to-action on step 4 was extremely weak (which led to a follow-up test!)

ab testing funnel analysis
You can see how each variation fared against the Control in this funnel analysis.

We also saw large decreases in refunds and chargebacks for this client, which further supported the idea that the visitors dropping off were the ‘wrong’ visitors: the ones without strong purchasing intent.
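
Here is a hedged illustration of this kind of secondary-goal funnel analysis, in Python and with made-up numbers (not this client’s data). Tracking how many visitors reach each step shows you where a variation changes behavior, not just whether the final conversion rate moved.

```python
# A hedged illustration with made-up numbers (not this client's data) of the kind
# of secondary-goal funnel analysis described above.
funnels = {
    # visitors reaching steps 1-4, then completed transactions
    "Control":     [10_000, 6_000, 3_000, 1_500, 450],
    "Variation A": [10_000, 5_200, 2_900, 1_600, 470],
}

for name, counts in funnels.items():
    step_rates = [later / earlier for earlier, later in zip(counts, counts[1:])]
    overall = counts[-1] / counts[0]
    formatted = " > ".join(f"{rate:.0%}" for rate in step_rates)
    print(f"{name}: step-through {formatted} | overall {overall:.1%}")

# Control: step-through 60% > 50% > 50% > 30% | overall 4.5%
# Variation A: step-through 52% > 56% > 55% > 29% | overall 4.7%
```

In this hypothetical, the variation lets fewer visitors into step 2 but moves a higher share of them through the later steps, mirroring the “filtering out low-intent visitors” pattern described above.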

This is just a taste of what every A/B test could be worth to your business. The right goal tracking can unlock piles of insights about your target visitors.

Myth #4: A/B testing takes little to no thought or planning

Believe it or not, marketers still think this way. They still view A/B testing on a small scale, in simple terms.

But A/B testing is part of a greater whole—it’s one piece of your marketing optimization program—and you must build your tests accordingly. A one-off, ad-hoc test may yield short-term results, but the power of A/B testing lies in iteration, and in planning.

ab testing infinity optimization process
A/B testing is just a part of the marketing optimization machine.

At WiderFunnel, a significant amount of research goes into developing ideas for a single A/B test. Even tests that may seem intuitive, or common-sensical, are the result of research.

ab testing planning
The WiderFunnel strategy team gathers to share and discuss A/B testing insights.

Because, with any test, you want to make sure that you are addressing areas within your digital experiences that are the most in need of improvement. And you should always have evidence to support your use of resources when you decide to test an idea. Any idea.

So, what does a revenue-driving A/B testing program actually look like?

Today, tools and technology allow you to track almost any marketing metric. Meaning, you have an endless sea of evidence that you can use to generate ideas on how to improve your digital experiences.

Which makes A/B testing more important than ever.

An A/B test shows you, objectively, whether or not one of your many ideas will actually increase conversion rates and revenue. And, it shows you when an idea doesn’t align with your user expectations and will hurt your conversion rates.

And marketers recognize the value of A/B testing. We are firmly in the era of the data-driven CMO: Marketing ideas must be proven, and backed by sound data.

But results-driving A/B testing happens when you acknowledge that it is just one piece of a much larger puzzle.

One of our favorite A/B testing success stories is that of DMV.org, a non-government content website. If you want to see what a truly successful A/B testing strategy looks like, check out this case study. Here are the high level details:

We’ve been testing with DMV.org for almost four years. In fact, we just launched our 100th test with them. For DMV.org, A/B testing is a step within their optimization program.

Continuous user research and data gathering inform hypotheses, which are prioritized and built into A/B tests (structured using proper Design of Experiments). Each A/B test delivers business growth and/or insights, and those insights are fed back into the data gathering. It’s a cycle of continuous improvement.

And here’s the kicker: Since DMV.org began A/B testing strategically, they have doubled their revenue year over year, and have seen an over 280% conversion rate increase. Those numbers kinda speak for themselves, huh?

What do you think?

Do you agree with the myths above? What are some misconceptions around A/B testing that you would like to see debunked? Let us know in the comments!

How to build a high-performance marketing team

Reading Time: 9 minutes

Build a marketing team that gets results

Marketers always want to hear about results: 100% conversion rate lift, doubled revenue year over year, 89% increase in qualified leads, etc.

It makes sense: Results are promising, they’re easy to sell, they encourage you to imagine yourself in that person’s shoes, and to imagine those results at your company.

At WiderFunnel, we obsess about results. That’s why our clients continue to be our clients: we consistently deliver profitable ‘A-ha!’ moments in the form of insights and revenue lift. In the end, results are what matter, right?

The effort it takes to get great results is less sexy. But it’s what separates the good from the great.

Humans appreciate ease. People love the promise of the silver bullet. We are prone to the cognitive shortcut called Satisficing, which gives us sub-optimal results. It’s difficult for people to push through to the best result.

This is why best practices, tool-centric strategies, ‘expert’ opinions, and 10-steps-to-guaranteed-success blog posts will always be popular.

Satisficing is a cognitive heuristic that encourages a person to stop considering alternatives once they’ve found one that meets the lowest acceptable criteria. It’s why people buy a product when they don’t feel the additional effort of searching for a better alternative is worth it. It can actually be an effective method for optimizing all costs, if it’s done consciously.

The reality is that you reap what you sow: The best results come from a solid foundation. You’ve heard me talk about process and framework thinking as being crucial to getting great marketing results…

…and today, I’m going to talk about another pillar for success: building a high-performance marketing team.

The people who you hire are at the core of what you can achieve. If you want to achieve growth, you have to build a high-performance team. I have spent the last 10 years building the WiderFunnel team; they are a group of experts who deliver consistently amazing results for our clients.

Victoria Petriw

If you have no team, you have no business. People often overlook that simple fact. We want to say it’s the ideas, marketing, sales, etc. that are the number one priority. But in order to achieve any results in any of these areas, you need a solid team.

Victoria Petriw, Manager of Operations, WiderFunnel

In this post, I am going to walk through how to build and maintain your high-performance, results-driving, ‘A-ha!’-creating marketing team.

Let’s start at the beginning.

Lay the foundation

You can’t build anything without a solid foundation, so first things first: Who are you as a company? What is your company identity, the glue that holds everything together?

If you can’t answer that question immediately, you may find it very difficult to hire the right people.

Agnes Tseng

You can have a candidate that is extremely skilled, but if they are not on the same page as you, it won’t be a winning relationship for either of you.

Agnes Tseng, Human Resources, WiderFunnel

Most companies today have created some form of a mission or vision statement, and company values. But I’d argue that most don’t use them to really define what their company does.

Without a shared belief in the types of decisions and behaviors you won’t accept, you’ll accept anything. If you don’t stand for something, you’ll fall for anything.

If you have a clear purpose that is written, repeated, and used for decision-making, you’ll be more likely to attract and retain people that resonate together. When people resonate, the added energy from the coherence multiplies their effect.

Strong core values are a proven way of finding people that resonate with each other. At WiderFunnel, five values sit at the core of our company identity. This is who we are.

We created these values as a team to reflect how we work.

marketing team - WiderFunnel values
Our values are at the core of every decision we make.

These values are embedded into everything we do. They are integral to our hiring decisions, reviewed during onboarding, called out in our weekly team shoutouts, and used to decide on client fit.

Often, companies will grow to a certain number of employees, realize that their company culture is waning, and then scramble to define their identity. But, by then it may be too late.

If you don’t intentionally build the culture you want, the culture you don’t want will create itself.

So, start by identifying your purpose and the values you’ll live by. And, build all of your decisions on that foundation.

Then…

Build the structure

So, you are happy with your team ‘why’, and have begun the hiring process. How do you maintain a satisfied, productive, and high-performing marketing team?

marketing team - build the structure
It takes a solid foundation, and intentional frameworks to build a high-performance marketing team.

I’ve always recognized the value of framework thinking for conversion optimization. And when developing our human resources process, I’ve sought out the best frameworks for that area of the business too.

The best frameworks simplify difficult decisions, focus attention on the right pieces of data, and align team members on the salient criteria.

How to get the right butts in the right seats

For the first couple years at WiderFunnel, I struggled with our hiring failure rate. It was painful to hire and train promising people only to see them flame out in disappointment.

I knew there had to be a better process for improving our success rate. When I found the Topgrading book back in 2009, it gave me the tools I needed to separate the gold from the quartz in those mountains.

The Topgrading process incorporates very specific questions that are meant to reveal whether someone is an A-player, a B-player, a C-player, etc. The secret is in the exact wording of the questions and steps in the process to reveal insights about the candidate.

There’s also a newer and more approachable (i.e. shorter) book that describes the process, called Who.

We have tweaked the framework slightly to fit our needs, but the premise is to filter out the B-players and C-players, and to only engage with A-players. The process looks a little something like this:

  1. Screening call (Conducted by HR)
  2. In-person in-depth “Topgrading” interview (HR)
  3. In-person culture interview (Team Lead)
  4. Team interview (Team)

Only the most promising candidates make it through to meet with a team leader.

On top of the interview process, we use a lightweight work-style behavior and motivation profiling tool called Predictive Index.

This allows us to create behavior profiles for each position, to identify what behaviors define success in any position. I call it “lightweight” because it only takes a few minutes for a candidate to fill out, but the insights it reveals are stunning.

Once a candidate has passed their Topgrading interview, they fill out a quick Predictive Index quiz, which shows us their natural behavioral patterns and motivations.

marketing team - employee trust
What is the candidate motivated by? What does she like? How does she work, naturally?

This tells us whether the person will naturally be a great fit for that position. If a candidate doesn’t ‘fit’ the profile, we don’t necessarily remove them from consideration. But, we know which questions to ask to ensure we are creating a position that person will be happy with.

Because WiderFunnel is a data-focused marketing agency (as I hope yours is a data-focused marketing team), we also require most candidates to complete various technical tests.

Yes, it takes effort to hire the right people

If this sounds intense, that’s because it is. But it’s worth it!

There is a lot at stake when you are talking about a person’s job and livelihood (not to mention the well-being of your business), and these upfront processes will help you get the right personalities on your team from the outset.

Not only does hiring the right people save you a lot of money on mis-hires, but a team of A-players also wants you to hire more A-players. Someone who can’t match the pace of the team’s thinking and work is frustrating to everyone else. A team of stallions doesn’t invite ponies to their party.

Our team members are proud of the day they pass their 90-day probationary period and receive their full-fledged WiderFunnel team jacket. They know they have joined an elite team.

marketing team - WiderFunnel jackets
Thumbs up! It was a good day when James and Agnes got their WiderFunnel jackets.

Keep people at the center

All this talk about A-players, stringent hiring processes, and the cost of mis-hires may sound like people are just cost items. But that is the opposite of how I see our people. And it wouldn’t be the best way to create any high-functioning team.

Your team members don’t leave their personal lives at the door when they enter your workplace. They are whole people and all areas of their lives affect how they show up in their day.

I’m a long-time member of Entrepreneur’s Organization (EO) and other similar mastermind groups. At EO, I belong to a small forum group of entrepreneurs who meet monthly to discuss the best and worst things that are happening in our businesses and personal lives. I have learned how important it is to have people I trust that can relate to my experiences. In that forum, I have also learned how tightly business and personal life are intertwined.

A few years ago, I brought some of EO’s perspectives into WiderFunnel’s team. It began as part of our Friday afternoon happy hour, where everyone shares their weekly “Awesomes” with the rest of the team.

At 4:00pm every Friday, we stop working, pour a few beers, and every team member shares a professional awesome and a personal awesome from that week. It’s contagious: If you’ve had a rough week, hearing 25 “Awesomes” is a pretty cool pick-me-up.

Building on my insight from EO, I also encouraged people to share if they have a weekly “Awful” and the result was powerful. The laughter and tears shared within this forum of support encourage our team to be Real with each other.

Victoria Petriw

I am a firm believer that all people want is to be heard, and to be loved. Companies often act like this doesn’t translate into the professional realm, that it only lives in the personal realm. And that is, I think, the number one mistake a lot of companies make.

– Victoria Petriw

It’s important to create structures that help meet your team’s needs.

Individual needs

How often do each of your team members get a check-in with their boss? As you might have guessed by now, I’m going to recommend a structured process for regular check-ins.

marketing team - check in
Real, 1-on-1 conversation can solve emerging issues before they become real problems.

A few years ago, we implemented the Entrepreneurial Operating System (EOS), based on Gino Wickman’s book Traction, which shows a structure for communicating throughout the company. Part of that system defines a regular check-in rhythm.

Some companies take an ad hoc or “as needed” approach to meetings, but I’ve found that team members often feel neglected if they aren’t regularly scheduled.

If you are not checking in with the people on your team, regularly, you should rethink your management strategy. We ensure that each WiderFunnel team member has, at the very least, a monthly check-in with their team lead.

These check-ins are a space for personal and professional review, for project updates, and value-based feedback. Are your team members being heard? Do they feel appreciated and successful?

In tough times, real, 1-on-1 conversation can solve emerging issues before they become real problems.

Team needs

To make sure your team as a whole is in sync, you need to facilitate the right meetings at the right times.

marketing team - alignment
Keeping your team aligned requires intentional meetings.

Within the Traction system, we’ve set up daily huddles, weekly working meetings, quarterly priority-setting meetings, and annual planning meetings for each team. This creates a consistent rhythm and flow of information for the entire company.

This system helps us make sure that the projects each individual is working on come to fruition.

Agnes Tseng

Many of our meetings are recurring, but they all have a specific ‘why’. No one here has time to waste, and each meeting has a purpose, agenda, and priorities.

– Agnes Tseng

I encourage you to look at your meeting schedule and ask yourself whether each meeting is intentional. Does it have a clear purpose? If not, it may be worth your while to test a system like Traction.

A culture of personal ownership

The Dilbert era is over, for big and small companies alike. People want to love where they work. So, how do you make your team attractive to A-players? And how do you retain your A-players?

Do you need more perks? Beer on Fridays? Exotic company retreats? Company bowling night? It can feel overwhelming to keep up with the perks some companies offer. And last week’s perks are today’s entitlement.

Some time ago, we decided to change how WiderFunnel-ers view company culture. In the past, the task of planning fun, culture-stimulating social activities fell to the Operations team.

But it began to feel like team members were sitting back, waiting for Operations to deliver happiness. And if they didn’t like what was happening, morale waned.

Victoria Petriw

It felt like everyone was sitting around the dinner table waiting to be served, expecting ‘culture’ to be provided on a silver platter. But culture is like happiness: You can’t inject it into a company. It has to live in each individual.

– Victoria Petriw

So, we decided to shift the perspective. We encouraged team members to contribute to company culture and activities. Now, we have a WiderFun initiative with team-planned monthly fun-tivities, and the change is palpable.

From events like WiderFunnel-themed jeopardy, to WiderFun-lympics, to spontaneous game nights and jam sessions, I have seen the team commit to creating the culture they want to work in. And, they love it even more because they’ve had a part in creating it. (Which, by the way, is a great example of the IKEA Effect cognitive bias.)

The IKEA Effect says that people are more likely to love something if they’ve had a part in creating it. I’m no longer surprised when my daughters’ most-raved-about meals are the ones that they’ve helped cook.

Rather than taking a top-down approach to culture, challenge your team to own it!

What does this mean for your bottom-line?

A happy, smart, engaged team wants to deliver great work. Structures and frameworks like the ones I’ve shared are a starting point. You may find others that work for you, but the principle is the same.

When you have put rigorous thought into building a well-oiled machine, when individuals are in the right jobs, in the right culture, you will see the effects in your bottom-line.

So, now it’s your turn.

What do you do to build and maintain a high-performing marketing team? How does your company create, maintain and enhance culture? Add your comments below.

And, if you know someone you think would be a great fit for our team, please send them our way. WiderFunnel is hiring!
