Tag Archives: director

Data-Backed Advice for High-Converting Real Estate Landing Page Design [+ FREE TEMPLATE]

You’re designing a landing page for your Real Estate client, and you turn to “best practice” advice articles to help guide the way.

But there’s a nagging voice at the back of your mind:

Does this “best practice” advice apply indiscriminately to my industry? Does this author really know anything about my audience at all?

“Best practices” become “better practices” when they are industry-specific.

When our design team was recently wireframing new landing page templates for the Unbounce builder, they set out to create industry-specific templates that addressed this truth: different audiences belonging to different industries behave differently. They have different pains, different motivators and different disincentives.

Firm believers that data needs to inform design, our design team sourced their research in two key areas:

  1. Data from the Unbounce Conversion Benchmark Report: The report includes average conversion rates for 10 popular industries, as well as Machine Learning-powered recommendations around reading ease, page length, emotion and sentiment.
  2. High-converting customer landing pages: Our designers looked at the top 10 highest-converting Unbounce landing pages in those industries, and analyzed common design and copy elements across the pages.

Our design team then combined insight from these two key areas of research to build out content and design requirements for the best possible landing page template for each of the 10 industries.

One of these industries was Real Estate, and now we want to share their findings with you.

See a breakdown of their process for designing the Real Estate page template at the bottom of this post, or read on for their key findings about what converts in the Real Estate industry.

Which copy elements convert best in the Real Estate industry?

Word count

The data scientists and conversion rate optimizers who put together the Unbounce Conversion Benchmark Report found that for Real Estate lead capture landing pages, short n’ sweet is better: overall, they saw 33% lower conversion rates for longer landing pages.

This chart shows how the word count relates to conversion rates for the Real Estate vertical. On the x-axis we have word count — on the y-axis, conversion rate.

This was consistent with what the design team saw across high-converting Unbounce customer landing pages in Real Estate: pages were relatively short with concise, to-the-point copy.

Reading ease

The Unbounce Conversion Benchmark Report also revealed that in the Real Estate vertical, prospects want simple and accessible language. The predicted conversion rate for a landing page written with 6th grade level language was nearly double that of a page written at the university level.

This chart shows how conversion rates trend with changes to reading ease for the Real Estate Industry. On the x-axis we have the Flesch Reading Ease score — on the y-axis, conversion rate.
According to the Unbounce Conversion Benchmark Report, 41.6% of marketers in the Real Estate industry have at least one page that converts at less than 1.3% (in the 25th percentile for this industry). Download the report here to see the full data story on Real Estate and get recommendations for copy, sentiment, page length and more for nine additional industries.
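If you want to sanity-check your own copy against that recommendation before publishing, a few lines of Python can estimate reading ease. This is a minimal sketch, assuming the third-party textstat package and made-up sample copy; it isn’t part of the report’s methodology:

# Minimal reading-ease check, assuming the third-party "textstat" package
# (pip install textstat). The sample copy below is invented.
import textstat

copy = (
    "Find your dream home in Huntington Beach. "
    "Tell us what you need and we will send you a free list of matching properties."
)

score = textstat.flesch_reading_ease(copy)   # higher = easier to read
grade = textstat.flesch_kincaid_grade(copy)  # approximate U.S. school grade level

print(f"Flesch Reading Ease: {score:.1f}")
print(f"Flesch-Kincaid grade: {grade:.1f}")

# Roughly speaking, 90+ reads at about a 5th-6th grade level, 60-70 around
# 8th-9th grade, and 30 or below at university level.
if grade > 7:
    print("Consider simplifying: the Benchmark Report favours simpler, ~6th grade copy.")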

Fear-inducing language

The Unbounce Conversion Benchmark Report used an Emotion Lexicon and Machine Learning to determine whether words associated with eight basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise and trust) affected overall conversion rates.

While these emotions did not seem to dramatically correlate with conversion rate in the Real Estate vertical, fear-based language was the exception. We saw a slight negative trend for pages using more fear-inducing terms:

This chart shows how the percentage of copy that evokes fear is related to conversion rates for the Real Estate vertical. On the x-axis we have the percentage of copy that uses words related to fear — on the y-axis, conversion rate.

If more than half a percent of your copy evokes feelings of fear, you could be hurting your conversion rates.

Here are some words commonly associated with fear on Real Estate lead capture landing pages: highest, fire, problem, watch, change, confidence, mortgage, eviction, cash, risk…

See the full list in the Unbounce Conversion Benchmark Report.
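To get a rough sense of whether your own copy crosses that half-percent line, you can count how many of its words appear in a fear-associated list. The sketch below uses a tiny hypothetical word list for illustration; the report itself relies on a full emotion lexicon and machine learning, not this shortcut:

# Rough fear-language check. FEAR_WORDS is a tiny hypothetical sample list,
# not the full lexicon used in the Benchmark Report.
import re

FEAR_WORDS = {"fire", "problem", "eviction", "risk", "mortgage", "cash", "watch", "change"}

def fear_ratio(copy: str) -> float:
    words = re.findall(r"[a-z']+", copy.lower())
    if not words:
        return 0.0
    fearful = sum(1 for word in words if word in FEAR_WORDS)
    return fearful / len(words)

page_copy = "Avoid the risk of eviction. Lock in your mortgage rate today."
ratio = fear_ratio(page_copy)
print(f"{ratio:.2%} of words are fear-associated")
if ratio > 0.005:  # more than half a percent
    print("This may be enough fear language to hurt conversions.")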

Calls to action

When our designers looked at the top 10 highest-converting Unbounce customer landing pages in the Real Estate vertical, they took a close look at the calls to action and found that:

  • Every page provided a detailed description of the offer
  • Almost all had a “request a call back” or “call us” option (other CTAs included “get more info,” “apply now” and “get the pricelist”)
  • Most did an excellent job of including button copy that reinforces what prospects get by submitting the form
If you use a “call us” CTA on your landing pages, make sure you try out our CallRail integration. This will help you track which calls are a result of your paid spend and landing pages!

Here are some examples of the forms and calls to action on some of our highest-converting Real Estate lead capture landing pages:

The usual suspects (benefits, social proof, UVP…)

Without much exception, the pages featured a lot of the copywriting elements that one would expect to see on any high-converting landing page (regardless of vertical):

  • Detailed benefits listed as bullet points
  • A tagline that reinforces the unique value proposition or speaks to a pain point
  • And not surprisingly, testimonials. One page went above and beyond with a video testimonial

Which design elements convert best in the Real Estate industry?

The highest-converting Real Estate landing pages included lots of imagery:

  • Beautiful hero shots of the interior and exterior of properties
  • Maps
  • Full-width photography backgrounds
  • Floor plans

Some examples:

Our designers also studied other design features as basic guidelines for the template they were then going to create.

While these specifics are meant to be taken with a grain of salt (you may already have brand colors and fonts!) they could serve as a good starting point if you’re starting completely from scratch and want to know what others are up to.

Many of the high-converting pages had:

  • Sans-serif fonts
  • Palettes of deep navy and forest green
  • Orange (contrasting) call to action buttons
The highest-converting landing pages in the Real Estate industry convert at 11.2%. If your Real Estate page converts at over 8.7%, you’re beating 90% of your competitors’ pages. See the breakdown of median and top conversion rates (and where you stand!) via the Unbounce Conversion Benchmark Report.

Behold, the template our designers created

After synthesizing all that research, our Senior Art Director Cesar Martínez took to his studio (okay, his desk), and drafted up this beautiful Real Estate landing page template:

Not only is the template beautiful, it was also created by analyzing actual data: what makes for a high-performing landing page in the Real Estate industry, according to the Unbounce Conversion Benchmark Report and high-converting customer pages.

Footnote: The design process

Curious about the process our designers used to develop this data-backed Real Estate landing page template? Here are the steps they followed:

  1. For the 10 highest-converting customer landing pages, they analyzed all common elements (such as the form, what type of information is collected, what type of offer is made, and whether there are any testimonials). This allowed them to build their content requirements.
  2. They referred to the word count recommendations in the Unbounce Conversion Benchmark Report and designed for that word count limit.
  3. They referred to reading ease level recommendations for that specific industry from the Benchmark Report and shared the information with their copywriter.
  4. They sketched out a rough idea of their potential landing page template.
  5. They selected typography and colors relevant to the industry based on what was popular in the 10 examples.
  6. They named their imaginary company in the industry and sketched out some potential logos. They picked photography and built out a moodboard.
  7. That helped them gather all the information they needed to build out their template!

See the article here: 

Data-Backed Advice for High-Converting Real Estate Landing Page Design [+ FREE TEMPLATE]

How Your PPC Strategy Should Differ on the AdWords Search VS Display Network

As we ramp up for Unbounce’s upcoming PPC week, we thought we’d revisit some of our favorite PPC posts from the archives. This post was originally published in June 2015 but still rings true. Enjoy!

Have you ever been kicking so much AdWords Search Network butt that it made you raise your chest and gave you instant super powers?

You know, the type of confidence that makes you walk with a pep in your step and hair bouncing around?

Kinda like this mini-horse. Image source.

Feels AMAZING.

But sometimes you hit a ceiling with the keywords you’re bidding on, and there’s literally no more Search Network traffic out there (since your impression shares are all around 98%).

You immediately think of using the AdWords Display Network, simply because you know there’s more traffic, cheaper clicks and much more potential ROI just waiting to be grabbed.

Actually, don’t do that. It won’t get you conversions. Image source.

As you may already know, the AdWords Display Network (also known as the Google Display Network/GDN) is the biggest digital ad network in the world. It allows you to advertise on publisher properties like websites, mobile apps, Gmail, YouTube and more.

Compared to the AdWords Search Network, the Display Network also houses the largest viewership of any online platform. YouTube itself has a monthly viewership equivalent to 10 Super Bowls – so it shouldn’t come as a surprise that display advertising is said to capture 34% of all online ad spend and about 10% of all marketing budgets.

But with new channels come different strategies.

What you’re doing on the AdWords Search Network will not perform the same way on the Display Network.

If the Display Network is uncharted territory for you, here’s how you need to adjust your current PPC strategy to get the results you want.

Different user behavior calls for a different strategy

The biggest difference between the AdWords Search Network and Display Network can be seen in the sweet visual I had my designer custom-make below.

[Illustration: the “Chuck Norris” action cycle]

In the “Chuck Norris” action cycle above, you can see how the power of keyword intent in the Search Network can put people really close to taking action (AKA converting), but the Display Network typically has visitors who are a few steps behind.

This is because people who are on the Display Network aren’t actively searching for what you offer. As Erin Sagin puts it, they’re rarely in “shopping mode.”

Instead, Display Network visitors are most likely in the research phase when your display ads hit them. They’re on forums, blog posts, or watching that YouTube vid, trying to gather enough information to make a decision. They don’t know what they need yet, so your job is to create awareness.

If you’re selling more of an “emergency” service like being a locksmith or roadside assistance, then you’ll have a hard time using the Display Network to your advantage.

This is simply because ads on the Display Network are not triggered from a search engine like text ads on the Search Network are. The Search Network works as a demand harvester (your ads are grabbing the intent), while the Display Network works as a demand generator (your ads are creating awareness).

So how do you change your strategy from the Search Network to also make the AdWords Display Network a money making machine?

Create trust and deliver value

As I mentioned, your Display Network ads could be interrupting someone who’s reading the news, reading a blog or watching a video.

Because of that, the level of commitment it takes for someone to stop what they’re doing, click your ad, then call you or fill out your landing page form is high, and conversion is much less likely than on the Search Network. In other words, you can’t expect the same campaign conversion rates on the Display Network as you see on the Search Network.

If you’re offering “Free Quotes” on the Search Network because people are actively searching for someone who can relieve their problem, it might actually be better for you to lead with valuable educational material (i.e., your content) on the Display Network.

A perfect example of this is my crush of an email marketing company, Emma.

Emma uses the AdWords Search Network to drive sign ups, but they use the Display Network to give you great, fun and actionable value. Here’s what some of their Display Ads look like (click on them to go to the accompanying landing page):

[Three examples of Emma’s animated display ads]

I reached out to Cynthia Price (the Director of Marketing at Emma) and she gave me this golden nugget about how they use the AdWords Display Network:

We get that someone seeing a display ad isn’t necessarily interested in learning more about our product just yet. It’s all about brand awareness, and more importantly for us, trust-building.

So we offer content that we think will be valuable and helpful to our audience’s marketing efforts. It starts our brand relationship off on the right foot, helps them understand the strength of our expertise and paves the way for us to nurture or retarget them in the future.

You already know that content marketing’s core foundation is about adding true value.

Your display ads should be no different.

On the Display Network, your first goal is to establish trust by giving value, and then nurture the visitors down the road to become paying customers.

Revisit your targeting options

Once you have a great piece of content that delivers value and educates your audience, it’s time to figure out how to target it to people who actually want it.

Let’s have a look at the five targeting options that’ve been found to drive the biggest impact on the Display Network.

To illustrate how each one works, let’s pretend you’re a dog walker. Your name is Lori and you live in Huntington Beach, CA. You’ve been advertising on the AdWords search network and this is your landing page:

[Lori the Dog Walker’s landing page]

What are your best targeting options?

Placement targeting

Placement targeting allows you to advertise directly on certain publisher sites. This means you could have your ad show up on Forbes or CNN if you’d like.

Best practice advice: Make sure the website or page’s audience is relevant to what you’re offering. Don’t shotgun approach all of CNN – sniper shot individual placements within CNN if you can.

Contextual/Keyword targeting

Contextual/Keyword targeting allows you to give Google your keywords and have it automatically find relevant placements for your ads.

Best practice advice: Mix this with placement targeting to be even more laser focused with your targeting.

Topic targeting

Topic targeting allows you to go broader than regular placement targeting.

For this, you could target the topic of Pets & Animals directly and cast a wider net, with the possibility of your ads showing up on FerretLovers.com (yes, that’s a real site).

Best practice advice: See what Topic targeting gives you, then exclude unwanted placements from your campaign once things are running and data is coming in.

Interest targeting

Interest targeting is kind of similar to topic targeting, but instead of judging the context of websites, interest targeting tracks behaviors of web users. This targeting method can be even more vague than topic targeting.

Best practice advice: Every industry is different, so always test things out and see the performance. Be quick to pause and exclude irrelevant placements once data comes in.

Combining targeting methods

This is where you’ll have a lot of fun and potentially get better results.

You’re not locked into using just one targeting method with the AdWords Display Network. In fact, Alistair Dent over at Search Engine Watch and many others highly recommend never going with just one targeting option, but combining multiple together.

You can target certain placements with the addition of contextual/keyword targeting to tell Google that you only want your ads to show when a visitor is on CNN and reading an article about dog walking.

Or you can target different interests with contextual/keyword targeting as well.

Create multiple ad groups, each with their own targeting specifications, and see how they perform against each other. Once you’ve hit your stride and conversions are coming in, pause the ad groups that aren’t working, and make variations of the ad group targeting that is working for you, so that you can squeeze more out of your PPC dollars.

Wrapping up

Wow! Quite a bit of info huh?

Now that you clearly know why your Display Network strategy has to be different from your Search Network strategy, what do you have to lose? Get started now. Try different targeting combinations, and never forget to offer true value.

What have you found to be the best driver of conversions on the AdWords Display Network? How different are your strategies compared to the ones we talked about?

Read More: 

How Your PPC Strategy Should Differ on the AdWords Search VS Display Network

How to Make Facebook Ads Work for Your B2B Company With a Simple Google Form Survey


Facebook is still primarily a leisure social network: people browse it to connect with their friends, find interesting news and, of course, check out cat pictures. Therefore, most marketers believe that advertising on Facebook is useless for B2B. They’ll point to lower click-through rates for B2B Facebook ads, and higher costs per click, and go back to focusing on Google Adwords. Facebook is a great tool for B2C promotions, where marketers can offer discounts, promote sales and retarget buyers. But these tactics are not always suitable for the B2B crowd. That’s fair. But guess what? Companies are made of people….

The post How to Make Facebook Ads Work for Your B2B Company With a Simple Google Form Survey appeared first on The Daily Egg.

Excerpt from: 

How to Make Facebook Ads Work for Your B2B Company With a Simple Google Form Survey

Google AdWords Launches Greater Visibility Into Quality Score Components (And What This Means For You)

A recent update to Google AdWords is changing the way performance marketers understand their landing pages’ Quality Scores. Image via Shutterstock.

While Quality Score is a critical factor in your ad performance, it’s always been a bit of a mystery wrapped in an enigma. Marketers have never been able to natively view changes to Quality Score components in AdWords directly. That is — even though expected click through rate, ad relevance and landing page experience scores are the elements contributing to your Quality Score, you haven’t been able to see these individual scores at scale (or for given timeframes) within your AdWords account, or export them into Excel.

Which is why, up until now, some especially savvy marketers have had to improvise workarounds, using third-party scripts to take daily snapshots of Quality Score to have some semblance of historical record — and a better-informed idea as to changes in performance.

Fortunately, an AdWords reporting improvement has brought new visibility into Quality Score components that could help you diagnose some real wins with your ads and corresponding landing pages.

What’s different now?

As you may have already noticed, there are now seven new columns added to your menu of Quality Score metrics including three optional status columns:

  • Expected CTR
  • Ad Relevance
  • Landing Page Experience

And four columns revealing historical keyword quality:

  • Quality Score (hist.)
  • Landing Page Experience (hist.)
  • Ad Relevance (hist.)
  • Expected Click Through Rate (hist.)
Image courtesy of Google’s Inside AdWords blog

This is not new data per se (it’s been around in a different, less accessible form), but as of this month you can now see everything in one spot and understand when certain changes to Quality Score have occurred.

So how can you take advantage?

There are two main ways you can use this AdWords improvement to your advantage as a performance marketer:

1. Now you can see whether your landing page changes are positively influencing Quality Score

Now, after you make changes to a landing page — you can use AdWords’ newest reporting improvement to see if you have affected the landing page experience portion of your Quality Score over time.

This gives you a chance to prove certain things are true about the performance of your landing pages, whereas before you may have had to use gut instinct about whether a given change to a landing page was affecting overall Quality Score (or whether it was a change to the ad, for example).

As Blaize Bolton, Team Strategist at Performance Marketing Agency Thrive Digital told me:

As agency marketers, we don’t like to assume things based on the nature of our jobs. We can now pinpoint changes to Quality Score to a certain day, which is actual proof of improvement. To show this to a client is a big deal.

Overall, if your CPC drops, now you can better understand whether it may be because of changes made to a landing page.

2. You can identify which keywords can benefit most from an updated landing page

Prior to this AdWords update, ad relevancy, expected click through rate and landing page relevancy data existed, but you had to mouse over each keyword to get this data to pop up on a keyword-by-keyword basis. Because you couldn’t analyze the data at scale, you couldn’t prioritize your biggest opportunities for improvement.

Hovering over individual keywords
Image courtesy of Brad Geddes and Search Engine Land

However, now that you can export this data historically (for dates later than January 22, 2016), you can do a deep dive into your campaigns and identify where a better, more relevant landing page could really help.

You can now pull every keyword in your AdWords account — broken out by campaign — and identify any underperforming landing pages.

An Excel Quality Score Deep Dive
Now, an Excel deep dive into your AdWords campaigns can help you reveal landing page weaknesses.

Specifically, here’s what Thrive Digital’s Managing Director Ross McGowan recommends:

You can break down which of your landing pages are above average, or those that require tweaking. For example, you might index your campaigns by the status AdWords provides, assigning anything “Above Average” as 3, “Average” as 2 and “Below Average” as 1. You can then find a weighted average for each campaign or ad group and make a call on what to focus on from there.
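As a rough illustration of the indexing approach Ross describes, here is a short pandas sketch that maps the AdWords status labels to 3/2/1 and computes an impression-weighted average per campaign. The column names, sample data and choice of impression weighting are assumptions for illustration, not a prescribed format:

# Index landing page experience per campaign from a keyword-level export.
# Column names and the impression weighting are illustrative assumptions.
import pandas as pd

STATUS_INDEX = {"Above average": 3, "Average": 2, "Below average": 1}

df = pd.DataFrame({
    "campaign": ["Brand", "Brand", "Generic", "Generic"],
    "keyword": ["acme shoes", "acme boots", "running shoes", "cheap shoes"],
    "landing_page_experience": ["Above average", "Average", "Below average", "Below average"],
    "impressions": [1200, 800, 5000, 3000],
})

df["lp_index"] = df["landing_page_experience"].map(STATUS_INDEX)

# Impression-weighted average landing page experience per campaign
summary = (
    df.groupby("campaign")
      .apply(lambda g: (g["lp_index"] * g["impressions"]).sum() / g["impressions"].sum())
      .rename("weighted_lp_index")
      .sort_values()
)
print(summary)  # the lowest-scoring campaigns are the best candidates for better landing pages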

What should you do when you notice a low landing page experience score?

As Google states, landing page experience score is an indication of how useful the search engine believes your landing page is to those who click on your ad. They recommend that you “make sure your landing page is clear and useful… and that it is related to your keyword and what customers are searching for.”

In short, it’s very important that your landing pages are highly relevant to your ad. Sending traffic to generic pages on your website may not cut it. Moreover, once you notice low landing page experience scores, it’s time to optimize those pages with some quick wins.

In the words of Thrive’s Ross McGowan:

Figure out what a user wants, and do everything you can to tailor the on-page experience to them. Whether that be [using] Dynamic Text Replacement, A/B testing elements to get the best user experience, or spending less time on technical issues and more on writing great content.

Finally, for more on AdWords’ latest improvements, AdAlysis founder Brad Geddes has written a great article on Search Engine Land. His company had enough data on hand to attempt to reverse-engineer the formula for Quality Score and get a sense of how changes to one of the QS components would impact the overall score. His recommendation is much the same as Ross’: if a landing page’s score is particularly low, your best bet is to focus on increasing users’ interaction with the page.

Want to optimize your landing pages?

Read more: 

Google AdWords Launches Greater Visibility Into Quality Score Components (And What This Means For You)

Marketing Machines: Is Machine Learning Helping Marketers or Making Us Obsolete?

Hollywood paints a grim picture of a future populated by intelligent machines. Terminator, 2001: A Space Odyssey, The Matrix and countless other films show us that machines are angry, they’re evil and — if given the opportunity — they will not hesitate to overthrow the human race.

Films like these serve as cautionary tales about what could happen if machines gain consciousness (or some semblance of it). But in order for that to happen, humans need to teach machines to think for themselves. This may sound like science fiction, but it’s an actual discipline known as machine learning.

The machines are coming. But fear not — they could help you become a better marketer. Image via Shutterstock.

Though still in its infancy, machine learning is already being applied to everything from filtering spam emails, to suggesting the next series to binge-watch and even matching up folks looking for love.

For digital marketers, machine learning may be especially helpful in getting products or services in front of the right prospects, rather than blanket-marketing to everyone and adding to the constant noise that is modern advertising. Machine learning will also be key to predicting customer churn and attribution: two thorns in many digital marketers’ sides.

Despite machine learning’s positive impact on the digital marketing field, there are questions about job security and ethics that cannot be swept under the rug. Will marketing become so automated that professional marketers become obsolete? Is there potential for machine learning systems to do harm, whether by targeting vulnerable prospects or manipulating people’s emotions?

These aren’t just rhetorical questions. They get to the heart of what the future of marketing will look like — and what role marketers will play in it.

What is Machine Learning?

Machine learning is a complicated subject, involving advanced math, code and overwhelming amounts of data. Luckily, Tommy Levi, Director of Data Science at Unbounce, has a PhD in Theoretical Physics. He distills machine learning down to its simplest definition:

You can think of machine learning as using a computer or mathematics to make predictions or see patterns in data. At the end of the day, you’re really just trying to either predict something or see patterns, and then you’re just using the fact that a computer is really fast at calculating.

You may not know it, but you likely interact with machine learning systems on a daily basis. Have you ever been sucked into a Netflix wormhole prompted by recommended titles? Or used Facebook’s facial recognition tool when uploading and tagging an image? These are both examples of machine learning in action. They use the data you input (by rating shows, tagging friends, etc.) to produce better and more accurate suggestions over time.

Other examples of machine learning include spell check, spam filtering… even internet dating — yes, machine learning has made its way into the love lives of many, matching up singles using complicated algorithms that take into consideration personality traits and interests.


How Machine Learning Works

While it may seem like witchcraft to the layperson, running in the background of every machine learning system we encounter is a human-built machine that would have gone through countless iterations to develop.

Facebook’s facial recognition tool, which can recognize your face with 98% accuracy, took several years of research and development to produce what is regarded as cutting-edge machine learning.

So how exactly does machine learning work? Spoiler alert: it’s complicated. So without going into too much detail, here’s an introduction to machine learning, starting with the two basic techniques.

Supervised learning

Supervised learning systems rely upon humans to label the incoming data — at least to begin with — in order for the systems to better predict how to classify future input data.

Gmail’s spam filter is a great example of this. When you label incoming mail as either spam or not spam, you’re not only cleaning up your inbox, you’re also training Gmail’s filter (a machine learning system) to identify what you consider to be spam (or not spam) in the future.
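To make the idea concrete, here is a toy supervised-learning sketch in the spirit of the spam-filter example: a handful of human-labeled messages train a classifier, which then predicts labels for new messages. It assumes scikit-learn is installed, and the training data is obviously invented:

# Toy supervised learning: humans supply the labels, the model learns to classify.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a free cruise, claim your prize now",
    "Lowest mortgage rates, act today",
    "Lunch tomorrow to review the landing page copy?",
    "Here are the meeting notes from Tuesday",
]
labels = ["spam", "spam", "not spam", "not spam"]  # the human-supplied labels

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize today"]))       # likely 'spam'
print(model.predict(["Notes from the strategy meeting"]))   # likely 'not spam'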

Unsupervised learning

Unsupervised learning systems use unlabeled incoming data, which is then organized into clusters based on similarities and differences in the data. Whereas supervised learning relies upon human-labeled feedback, unsupervised learning has no such feedback. Instead, data scientists will often layer on a reward/punishment signal to indicate success or failure (an approach formally known as reinforcement learning).

According to Tommy, this type of machine learning can be likened to the relationship between a parent and a young child. When a child does something positive they’re rewarded. Likewise, when “[a machine] gets it right — like it makes a good prediction — you kind of give it a little pat on the back and you say good job.”

Like any child (or person for that matter), the system ends up trying to maximize the positive reinforcement, thus getting better and better at predicting.

The Power of Machine Learning

A lot of what machine learning can do is yet to be explored, but the main benefit is its ability to wade through and sort data far more quickly and efficiently than any human could, no matter how clever.

Tommy is currently experimenting with an unsupervised learning system that clusters landing pages with similar features. Whereas one person could go through a few hundred pages in a day, this model can run through 300,000 pages in 20 minutes.
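For a concrete (if heavily simplified) picture of what clustering landing pages can look like, here is a sketch that groups pages by a few invented numeric features using k-means. It assumes scikit-learn and illustrates the general technique only; it is not Unbounce’s model:

# Toy unsupervised learning: cluster landing pages by structural features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one landing page: [word_count, form_fields, images, has_video]
pages = np.array([
    [180, 3, 6, 0],
    [210, 4, 5, 0],
    [950, 8, 2, 1],
    [1020, 9, 1, 1],
    [400, 5, 10, 0],
    [380, 4, 12, 0],
])

features = StandardScaler().fit_transform(pages)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(clusters)  # pages with similar structure land in the same cluster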

How do your landing page conversion rates compare against your industry competitors?

We analyzed the behavior of 74,551,421 visitors to 64,284 lead generation landing pages. Now we want to share average industry conversion rates with you in the Unbounce Conversion Benchmark Report.

The advantage is not just speed, it’s also retention and pattern recognition. Tommy explains:

To go through that many pages and see those patterns and hold it all in memory and be able to balance that — that’s where the power is.

For some marketers, this raises a troubling question: If machine learning systems solve problems by finding patterns that we can’t see, does this mean that marketers should be worried about job security?

The answer is more nuanced than a simple yes or no.

Machine Learning and the Digital Marketer

As data becomes the foundation for more and more marketing decisions, digital marketers have been tasked with sorting through an unprecedented amount of data.

This process usually involves hours of digging through analytics, collecting data points from marketing campaigns that span several months. And while focusing on data analysis and post-mortems is incredibly valuable, doing so takes a significant amount of time and resources away from future marketing initiatives.

As advancements in technology scale exponentially, the divide between teams that do and those that don’t will become more apparent. Those that don’t evolve will stumble and those that embrace data will grow — this is where machine learning can help.


That being said, machine learning isn’t something digital marketers can implement themselves after reading a quick tutorial. It’s more comparable to having a Ferrari in your driveway when you don’t know how to drive standard… or maybe you can’t even drive at all.

Until the day when implementing a machine learning system is just a YouTube video away, digital marketers could benefit from keeping a close eye on the companies that are incorporating machine learning into their products, and assessing whether they can help with their department’s pain points.

So how are marketers currently implementing machine learning to make decisions based on data rather than gut instinct? There are many niches in marketing that are becoming more automated. Here are a few that stand out.

Lead scoring and machine learning

Lead scoring is a system that allows marketers to gauge whether a prospect is a qualified lead and thus worth pursuing. Once marketing and sales teams agree on the definition of a “qualified lead,” they can begin assigning values to different qualified lead indicators, such as job title, company size and even interaction with specific content.

These indicators paint a more holistic picture of a lead’s level of interest, beyond just a form submission typically associated with lead generation content like ebooks. And automating lead scoring takes the pressure off marketers having to qualify prospects via long forms, freeing them up to work on other marketing initiatives.

Once the leads have reached the “qualified” threshold, sales associates can then focus their efforts on those prospects — ultimately spending their time and money where it matters most.
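Here is a bare-bones sketch of how such a scoring scheme might be wired up. The indicators, point values and threshold are arbitrary examples agreed between marketing and sales, not a recommended configuration:

# Simplified lead scoring: add up points for agreed indicators, flag qualified leads.
SCORING_RULES = {
    "job_title_is_decision_maker": 30,
    "company_size_over_50": 20,
    "downloaded_ebook": 10,
    "visited_pricing_page": 25,
    "opened_last_3_emails": 15,
}
QUALIFIED_THRESHOLD = 60  # illustrative cut-off

def score_lead(lead: dict) -> int:
    return sum(points for signal, points in SCORING_RULES.items() if lead.get(signal))

lead = {
    "job_title_is_decision_maker": True,
    "downloaded_ebook": True,
    "visited_pricing_page": True,
}
total = score_lead(lead)
print(total, "-> qualified" if total >= QUALIFIED_THRESHOLD else "-> keep nurturing")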

Content marketing and copywriting

Machine learning models can analyze data points beyond just numbers — including words on your website, landing page or PPC ads. Machine learning systems can find patterns in language and detect words that elicit the most clicks or engagement.

Is emotional copywriting on your landing page effective in your industry?

We used machine learning to help create the Unbounce Conversion Benchmark Report, which shares insights on how different aspects of page copy correspond to conversion rates across 10 industries.

But can a machine write persuasive copy? Maybe, actually.

A New York-based startup called Persado offers a “cognitive content platform” that uses math, data, natural language processing, emotional language data and machine learning systems to serve the best copy and images to spur prospects into action. It does this by analyzing all the language data each client has ever interacted with and serving future prospects with the best possible words or phrases. An A/B test could never achieve this at the same scale.

Think this is a joke? With over $65 million in venture capital and a reported average conversion rate uplift of 49.5% across 4,000 campaigns, Persado’s business model is no laughing matter.

Still, there is no replacement for a supremely personalized piece of content delivered straight to your client’s inbox — an honest call to action from one human to another.

Recently, Unbounce’s Director of Campaign Strategy, Corey Dilley, sent an email to our customers. It had no sales pitch, no call to action button. It was just Corey reaching out and saying, “Hey.”


Corey’s email had an open rate of 41.42%, and he received around 80 personal responses. Not bad for an email written by a human!

Sometimes it’s actions — like clicks and conversions — you want to elicit from customers. Other times the goal is to build rapport. In some cases we should let the machines do the work, but it’s up to the humans to keep the content, well, human.


Machine learning for churn prediction

In the SaaS industry, churn is a measure of the percentage of customers who cancel their recurring revenue subscriptions. According to Tommy, churn tells a story about “how your customers behave and feel. It’s giving a voice to the customers that we don’t have time or the ability to talk to.”

Self-reporting methods such as polls and surveys are another good way to give a voice to these customers. But they’re not always scalable — large data sets can be hard for humans to analyze and derive meaning from.

Self-reporting methods can also skew your results. Tommy explains:

The problem with things like surveys and popups is that they’re only going to tell you what you’ve asked about, and the type of people that answer surveys are already a biased set.

Machine learning systems, on the other hand, can digest a larger number of data points, and with far less bias. Ideally the data is going to reveal what marketing efforts are working, thus leading to reduced churn and helping to move customers down the funnel.

This is highly relevant for SaaS companies, whose customers often sign up for trials before purchasing the product. Once someone starts a trial, the marketing department will start sending them content in order to nurture them into adopting the service and becoming engaged.

Churn models can help a marketing team determine which pieces of content lead to negative or positive encounters — information that can inform and guide the optimization process.
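As a simplified illustration, a churn model can be as basic as a classifier trained on the behavior of past trial users, which then scores current trials by their probability of churning. The features and data below are invented, and scikit-learn is assumed:

# Bare-bones churn sketch: train on past trial users, score current ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [logins_in_trial, pages_published, support_tickets, onboarding_emails_opened]
X_history = np.array([
    [1, 0, 2, 0],
    [2, 0, 1, 1],
    [12, 3, 0, 4],
    [9, 2, 1, 3],
    [0, 0, 0, 0],
    [15, 5, 0, 5],
])
y_churned = np.array([1, 1, 0, 0, 1, 0])  # 1 = cancelled after the trial

model = LogisticRegression().fit(X_history, y_churned)

current_trials = np.array([[3, 1, 0, 1], [11, 4, 0, 5]])
churn_risk = model.predict_proba(current_trials)[:, 1]
print(churn_risk)  # higher values = candidates for extra nurturing content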

Ethical Implications of Machine Learning in Marketing

We hinted at the ethical implications of machine learning in marketing, but it deserves its own discussion (heck, it deserves its own book). The truth is, machine learning systems have the potential to cause legitimate harm.

According to Carl Schmidt, Co-Founder and Chief Technology Officer at Unbounce:

Where we are really going to run into ethical issues is with extreme personalization. We’re going to teach machines how to be the ultimate salespeople, and they’re not going to care about whether you have a compulsive personality… They’re just going to care about success.

This could mean targeting someone in rehab with alcohol ads, or someone with a gambling problem with a trip to Las Vegas. The machine learning system will make the correlation, based on the person’s internet activity, and it’s going to exploit that.

Another dilemma we run into is with marketing aimed at affecting people’s emotions. Sure, copywriters often tap into emotions in order to get a desired response, but there’s a fine line between making people feel things and emotional manipulation, as Facebook discovered in an infamous experiment.

If you aren’t familiar with the experiment, here’s the abridged version: Facebook researchers adapted word count software to manipulate the News Feeds of 689,003 users to determine whether their emotional state could be altered if they saw fewer positive posts or fewer negative posts in their feeds.

Posts were deemed either positive or negative if they contained at least one positive or negative word. Because researchers never saw the status updates (the machine learning system did the filtering), the experiment technically fell within Facebook’s Data Use Policy.

However, public reaction to the Facebook experiment was generally pretty scathing. While some came to the defense of Facebook, many criticized the company for breaching ethical guidelines for informed consent.

In the end, Facebook admitted they could have done better. And one good thing did come out of the experiment: It now serves as a benchmark for when machine learning goes too far, and as a reminder for marketers to continually gut-check themselves.

For Carl, it comes down to intent:

If I’m Facebook, I might be worried that if we don’t do anything about the pacing and style of content, and we’re inadvertently presenting content that could be reacted to negatively, especially to vulnerable people, then we would want to actively understand that mechanism and do something about it.

While we may not yet have a concrete code of conduct around machine learning, moving forward with good intentions and a commitment to do no harm is a good place to start.

The Human Side of Machine Learning

Ethical issues aside, the rise of machines often implies the fall of humans. But it doesn’t have to be one or the other.

“You want machines to do the mundane stuff and the humans to do the creative stuff,” Carl says. He continues:

Computers are still not creative. They can’t think on their own, and they generally can’t delight you very much. We are going to get to a point where you could probably generate highly personal onboarding content by a machine. But it [will have] no soul.

That’s where the human aspect comes in. With creativity and wordsmithing. With live customer support. Heck, it takes some pretty creative data people to come up with an algorithm that recognizes faces with 98% accuracy.

Imagine a world where rather than getting 15 spam emails a day, you get just one with exactly the content you would otherwise be searching for — content written by a human, but served to you by a machine learning system.

While pop culture may say otherwise, the future of marketing isn’t about humans (or rather, marketers) versus machines. It’s about marketers using machines to get amazing results — for their customers and their company.

Machine learning systems may have an edge when it comes to data sorting, but they’re missing many of the things that make exceptional marketing experiences: empathy, compassion and a true understanding of the human experience.

Editor’s note: This article originally appeared in The Split, a digital magazine by Unbounce.

See original article – 

Marketing Machines: Is Machine Learning Helping Marketers or Making Us Obsolete?

First CRO Certification Course in Italy – An Initiative Supported by VWO

Video: http://feedproxy.google.com/~r/ILoveSplitTesting/~5/7jRxHo7WIRI/madri_ok.mp4

How can you learn Conversion Rate Optimization in a way that you can apply it easily to any project? How can you turn a low-performing website into a highly remunerative one without redesigning it from scratch?

Those are just two of the questions that Luca Catania, Director of Madri Internet Marketing & Head of Marketing at Catchi, answered during the first CRO Certification Course in Italy, supported by VWO.

The course targeted a wide audience—from people with no experience in CRO to experts in the field. Attendees included C-suite executives: entrepreneurs, heads of marketing, managing directors and consultants from more than 20 different industries.

The objective of the training was to teach participants an innovative, step-by-step approach to CRO, guiding them through a system that they can apply to any business to increase conversion rates, leads and online sales.

Participants got the chance to learn how to optimize their websites in a real-time setup. Using the VWO platform live in the course allowed the participants to understand and experience how the software can help optimize websites and achieve better conversions.

Do you want to improve your CRO skills?

You can read interesting case studies and find the dates of upcoming courses in Europe/Australasia by following Luca Catania on LinkedIn.

The post First CRO Certification Course in Italy – An Initiative Supported by VWO appeared first on VWO Blog.

Jump to original:

First CRO Certification Course in Italy – An Initiative Supported by VWO

Prototype And Code: Creating A Custom Pull-To-Refresh Gesture Animation

Pull-to-refresh is one of the most popular gestures in mobile applications right now. It’s easy to use, natural and so intuitive that it is hard to imagine refreshing a page without it. In 2010, Loren Brichter created Tweetie, one of numerous Twitter applications. Diving into the pool of similar applications, you won’t see much difference among them; but Loren’s Tweetie stood out then.

Prototype And Code: Creating A Custom Pull-To-Refresh Gesture Animation

It was one simple animation that changed the game — pull-to-refresh, an absolute innovation for the time. No wonder Twitter didn’t hesitate to buy Tweetie and hire Loren Brichter. Wise choice! As time went on, more and more developers integrated this gesture into their applications, and finally, Apple itself brought pull-to-refresh to its system application Mail, to the joy of people who value usability.

The post Prototype And Code: Creating A Custom Pull-To-Refresh Gesture Animation appeared first on Smashing Magazine.

Original article:

Prototype And Code: Creating A Custom Pull-To-Refresh Gesture Animation

Mobile-First Is Just Not Good Enough: Meet Journey-Driven Design

In a recent sales meeting for a prospective healthcare client, our team at Mad*Pow found ourselves answering an all-too-familiar question. We had covered the fundamental approach of user-centered design, agreed on leading with research and strategy, and everything was going smoothly. Just as we were wrapping up, the head of their team suddenly asked, “Oh, you guys design mobile-first, right?”

Mobile First Is Just Not Good Enough: Meet Journey-Driven Design

Well, that’s a difficult question to answer. While the concept of mobile-first began as a philosophy to help prioritize content and ensure positive, device-agnostic experiences, budgetary and scheduling constraints often result in mobile-first meaning mobile-only.

The post Mobile-First Is Just Not Good Enough: Meet Journey-Driven Design appeared first on Smashing Magazine.

Visit site: 

Mobile-First Is Just Not Good Enough: Meet Journey-Driven Design

Running an A/A Test Before A/B Testing – Wise or Waste?

To A/A test or not is a question that invites conflicting opinions. Enterprises faced with the decision of implementing an A/B testing tool often don’t have enough context on whether they should A/A test. Knowing the benefits and loopholes of A/A testing can help organizations make better decisions.

In this blog post, we explore why some organizations practice A/A testing and what they need to keep in mind while A/A testing. We also discuss other methods that can help enterprises decide whether or not to invest in a certain A/B testing tool.

Why Some Organizations Practice A/A Testing

A/A testing is typically done when organizations are implementing a new A/B testing tool. Running an A/A test at that time can help them with:

  • Checking the accuracy of an A/B Testing tool
  • Setting a baseline conversion rate for future A/B tests
  • Deciding a minimum sample size

Checking the Accuracy of an A/B Testing Tool

Organizations that are about to purchase an A/B testing tool, or want to switch to a new testing software, may run an A/A test to ensure that the new software works fine and that it has been set up properly.

Tomasz Mazur, an eCommerce Conversion Rate Optimization expert, explains further: “A/A testing is a good way to run a sanity check before you run an A/B test. This should be done whenever you start using a new tool or go for a new implementation. A/A testing in these cases will help check if there is any discrepancy in data, let’s say, between the number of visitors you see in your testing tool and the web analytics tool. Further, this helps ensure that your hypotheses are verified.”

In an A/A test, a web page is A/B tested against an identical variation. When there is absolutely no difference between the control and the variation, it is expected that the result will be inconclusive. However, in cases where an A/A test provides a winner between two identical variations, there is a problem. The reasons could be the following:

  • The tool has not been set up properly.
  • The test hasn’t been conducted correctly.
  • The testing tool is inefficient.

Here’s what Corte Swearingen, Director, A/B Testing and Optimization at American Eagle, has to say about A/A testing: “I typically will run an A/A test when a client seems uncertain about their testing platform, or needs/wants additional proof that the platform is operating correctly. There really is no better way to do this than to take the exact same page and test it against itself with no changes whatsoever. We’re essentially tricking the platform and seeing if it catches us! The bottom line is that while I don’t run A/A tests very often, I will occasionally use it as a proof of concept for a client, and to help give them confidence that the split testing platform they are using is working as it should.”

Determining the Baseline Conversion Rate

Before running any A/B test, you need to know the conversion rate that you will be benchmarking the performance results against. This benchmark is your baseline conversion rate.

An A/A test can help you set the baseline conversion rate for your website. Let’s explain this with the help of an example. Suppose you are running an A/A test where the control gives 303 conversions out of 10,000 visitors and the identical variation B gives 307 conversions out of 10,000 visitors. The conversion rate for A is 3.03%, and that for B is 3.07%, even though there is no difference between the two variations. Therefore, the benchmark conversion rate range for future A/B tests can be set at 3.03–3.07%. If you run an A/B test later and get an uplift within this range, that result may not be significant.
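To make the arithmetic explicit, here is a small sketch that computes both conversion rates plus a 95% confidence interval around the pooled baseline; any later uplift that falls inside that band is hard to distinguish from noise. The numbers are the ones from the example above:

# Baseline conversion rate from an A/A test, with a simple 95% confidence interval.
import math

conv_a, n_a = 303, 10_000
conv_b, n_b = 307, 10_000

rate_a = conv_a / n_a
rate_b = conv_b / n_b
pooled = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(pooled * (1 - pooled) / (n_a + n_b))

print(f"A: {rate_a:.2%}, B: {rate_b:.2%}")                     # 3.03% and 3.07%
print(f"Baseline: {pooled:.2%} +/- {1.96 * se:.2%} (95% CI)")  # the range a real lift must beat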

Deciding a Minimum Sample Size

A/A testing can also help you get an idea about the minimum sample size from your website traffic. A small sample size would not include sufficient traffic from multiple segments. You might miss out on a few segments which can potentially impact your test results. With a larger sample size, you have a greater chance of taking into account all segments that impact the test.

Corte says, “A/A testing can be used to make a client understand the importance of getting enough people through a test before assuming that a variation is outperforming the original.” He explains this with an A/A testing case study done on Sales Training Program landing pages for one of his clients, Dale Carnegie. The A/A test, run on two identical landing pages, initially produced results indicating that a variation was delivering an 11.1% improvement over the control. The reason behind this was that the sample size being tested was too small.

[Image: initial A/A test results]

After having run the A/A test for a period of 19 days and with over 22,000 visitors, the conversion rates between the two identical versions were the same.

[Image: A/A test results after more data]
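If you want to estimate up front how many visitors a test actually needs, a standard power calculation gives a rough answer. The sketch below assumes the statsmodels package; the 3% baseline conversion rate and 10% minimum detectable lift are illustrative numbers, not figures from the case study:

# Rough minimum sample size per variation via a power calculation.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.03          # assumed current conversion rate
minimum_lift = 0.10      # smallest relative lift worth detecting (10%)
target = baseline * (1 + minimum_lift)

effect = proportion_effectsize(baseline, target)
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_variation:,.0f} visitors per variation")  # far more than a few hundred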

Michal Parizek, Senior eCommerce & Optimization Specialist at Avast, shares similar thoughts. He says, “At Avast, we did a comprehensive A/A test last year. And it gave us some valuable insights and was worth doing it!” According to him, “It is always good to check the statistics before final evaluation.”

At Avast, they ran an A/A test on  two main segments—customers using the free version of the product and customers using the paid version. They did so to get a comparison.

The A/A test had been live for 12 days, and they managed to get quite a lot of data. Altogether, the test involved more than 10 million users and more than 6,500 transactions.

In the “free” segment, they saw a 3% difference in the conversion rate and 4% difference in Average Order Value (AOV). In the “paid” segment, they saw a 2% difference in conversion and 1% difference in AOV.

“However, all uplifts were NOT statistically significant,” says Michal. He adds, “Particularly in the ‘free’ segment, the 7% difference in sales per user (combining the differences in the conversion rate and AOV) might look trustworthy enough to a lot of people. And that would be misleading. Given these results from the A/A test, we have decided to implement internal A/B testing guidelines/lift thresholds. For example, if the difference in the conversion rate or AOV is lower than 5%, be very suspicious that the potential lift is not driven by the difference in the design but by chance.”

Michal sums up his opinion by saying, “A/A testing helps discover how A/B testing could be misleading if they are not taken seriously. And it is also a great way to spot any bugs in the tracking and setup.”

Problems with A/A Testing

In a nutshell, the two main problems inherent in A/A testing are:

  • Ever-present element of randomness in any experimental setup
  • Requirement of a large sample size

We will consider these one by one:

Element of Randomness

As pointed out earlier in the post, checking the accuracy of a testing tool is the main reason for running an A/A test. However, what if you find a difference in conversions between the control and an identical variation? Do you always point to it as a bug in the A/B testing tool?

The problem (for the lack of a better word) with A/A testing is that there is always an element of randomness involved. In some cases, the experiment reaches statistical significance purely by chance, which means that the difference in conversion rate between A and its identical version is probabilistic and does not denote absolute certainty.

Tomasz Mazur explains randomness with a real-world example: “Suppose you set up two absolutely identical stores in the same vicinity. It is likely, purely by chance or randomness, that there is a difference in results reported by the two. And it doesn’t always mean that the A/B testing platform is inefficient.”
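You can see this randomness directly by simulating many A/A tests on identical conversion rates and counting how often a naive significance test declares a winner anyway. The sketch below assumes NumPy and SciPy; the 3% conversion rate and traffic figures are illustrative:

# Simulate A/A tests on identical 3% conversion rates and count false "winners".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_rate, visitors, runs = 0.03, 10_000, 2_000
false_winners = 0

for _ in range(runs):
    conv_a = rng.binomial(visitors, true_rate)
    conv_b = rng.binomial(visitors, true_rate)
    table = [[conv_a, visitors - conv_a], [conv_b, visitors - conv_b]]
    _, p_value, _, _ = stats.chi2_contingency(table, correction=False)
    if p_value < 0.05:
        false_winners += 1

print(f"{false_winners / runs:.1%} of identical A/A tests produced a 'winner'")  # roughly 5%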

Requirement of a Large Sample Size

Following the example/case study provided by Corte above, one problem with A/A testing is that it can be time-consuming. When testing identical versions, you need a large sample size to confirm that there is no meaningful difference between A and its identical version, and gathering that sample takes time.

As explained in one of ConversionXL’s posts, “The amount of sample and data you need to prove that there is no significant bias is huge by comparison with an A/B test. How many people would you need in a blind taste testing of Coca-Cola (against Coca-Cola) to conclude that people liked both equally? 500 people, 5000 people?” Experts at ConversionXL explain that the entire purpose of an optimization program is to reduce wastage of time, resources, and money. They believe that even though running an A/A test is not wrong, there are better ways to use your time when testing. In the post they mention, “The volume of tests you start is important but even more so is how many you *finish* every month and from how many of those you *learn* something useful from. Running A/A tests can eat into the ‘real’ testing time.”

VWO’s Bayesian Approach and A/A Testing

VWO uses a Bayesian-based statistical engine for A/B testing. This allows VWO to deliver smart decisions: it tells you which variation will minimize potential loss.

Chris Stucchio, Director of Data Science at VWO, shares his viewpoint on how A/A testing is different in VWO than typical frequentist A/B testing tools.

Most A/B testing tools are seeking truth. When running an A/A test in a frequentist tool, an erroneous “winner” should only be reported 5% of the time. In contrast, VWO’s SmartStats is attempting to make a smart business decision. We report a smart decision when we are confident that a particular variation is not worse than all the other variations, that is, we are saying “you’ll leave very little money on the table if you choose this variation now.” In an A/A test, this condition is always satisfied—you’ve got nothing to lose by stopping the test now.

The correct way to evaluate a Bayesian test is to check whether the credible interval for lift contains 0% (the true value).
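As a rough illustration of that check (not VWO’s SmartStats implementation), you can model each variation’s conversion rate with a Beta posterior, sample the relative lift, and see whether the 95% credible interval contains 0%:

# Minimal Bayesian sketch: Beta posteriors, Monte Carlo lift, credible interval check.
import numpy as np

rng = np.random.default_rng(0)
conv_a, n_a = 303, 10_000
conv_b, n_b = 307, 10_000

# Beta(1, 1) prior updated with the observed conversions
samples_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
samples_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

lift = (samples_b - samples_a) / samples_a
low, high = np.percentile(lift, [2.5, 97.5])
print(f"95% credible interval for lift: [{low:.1%}, {high:.1%}]")
print("Contains 0%?", low <= 0 <= high)  # for an A/A test this should almost always be True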

He also says that the simplest possible reason for A/A tests to provide a winner is random chance: “With a frequentist tool, 5% of A/A tests will return a winner due to bad luck. Similarly, 5% of A/A tests in a Bayesian tool will report erroneous lifts. Another possible reason is a configuration error; perhaps the JavaScript or HTML is incorrectly configured.”

Other Methods and Alternatives to A/A Testing

A few experts believe that A/A testing is inefficient, as it consumes a lot of time that could otherwise be used in running actual A/B tests. However, there are others who say that it is essential to run a health check on your A/B testing tool. That said, A/A testing alone is not sufficient to establish whether one testing tool should be preferred over another. When making a critical business decision such as buying a new tool/software application for A/B testing, there are a number of other things that should be considered.

Corte points out that though there is no replacement or alternative to A/A testing, there are other things that must be taken into account when a new tool is being implemented. These are listed as follows:

  1. Will the testing platform integrate with my web analytics program so that I can further slice and dice the test data for additional insight?
  2. Will the tool let me isolate specific audience segments that are important to my business and test just those audience segments?
  3. Will the tool allow me to immediately allocate 100% of my traffic to a winning variation? This feature can be an important one for more complicated radical redesign tests where standardizing on the variation may take some time. If your testing tool allows immediate 100% allocation to the winning variation, you can reap the benefits of the improvement while the page is built permanently in your CMS.
  4. Does the testing platform provide ways to collect both quantitative and qualitative information about site visitors that can be used for formulating additional test ideas? These would be tools like heatmaps, scrollmaps, visitor recordings, exit surveys, page-level surveys, and visual form funnels. If the testing platform does not have these integrated, does it allow integration with third-party tools for these services?
  5. Does the tool allow for personalization? If test results are segmented and it is discovered that one type of content works best for one segment and another type of content works better for a second segment, does the tool allow you to permanently serve these different experiences to different audience segments?

That said, some experts would still opt for alternatives such as triangulating data instead of running an A/A test. Using this procedure means you have two sets of performance data to cross-check against each other. Use one analytics platform as the base to compare all other outcomes against, to check if there is something wrong or something that needs fixing.

And then there is the argument—why just A/A test when you can get more meaningful insights by running an A/A/B test. Doing this, you can still compare two identical versions while also testing some changes in the B variant.

Conclusion

When businesses face the decision of implementing a new testing software application, they need to run a thorough check on the tool. A/A testing is one method that some organizations use for checking the efficiency of the tool. Along with personalization and segmentation capabilities and some other pointers mentioned in this post, this technique can help check if the software application is good for implementation.

Did you find the post insightful? Drop us a line in the comments section with your feedback.


The post Running an A/A Test Before A/B Testing – Wise or Waste? appeared first on VWO Blog.

Read original article:  

Running an A/A Test Before A/B Testing – Wise or Waste?