A recent update to Google AdWords is changing the way performance marketers understand their landing pages’ Quality Scores.
While Quality Score is a critical factor in your ad performance, it’s always been a bit of a mystery wrapped in an enigma. Marketers have never been able to natively view changes to Quality Score components directly in AdWords. That is — even though expected click-through rate, ad relevance and landing page experience scores are the elements contributing to your Quality Score, you haven’t been able to see these individual scores at scale (or for given timeframes) within your AdWords account, or export them into Excel.
Which is why, up until now, some especially savvy marketers have had to improvise workarounds, using third-party scripts to take daily snapshots of Quality Score to have some semblance of historical record — and a better-informed idea as to changes in performance.
Fortunately, an AdWords reporting improvement has brought new visibility into Quality Score components that could help you diagnose some real wins with your ads and corresponding landing pages.
What’s different now?
As you may have already noticed, there are now seven new columns in your menu of Quality Score metrics, including three optional status columns.
This is not new data per se (it’s been around in a different, less accessible form), but as of this month you can now see everything in one spot and understand when certain changes to Quality Score have occurred.
So how can you take advantage?
There are two main ways you can use this AdWords improvement to your advantage as a performance marketer:
1. Now you can see whether your landing page changes are positively influencing Quality Score
Now, after you make changes to a landing page — you can use AdWords’ newest reporting improvement to see if you have affected the landing page experience portion of your Quality Score over time.
This gives you a chance to prove certain things are true about the performance of your landing pages, whereas before you may have had to use gut instinct about whether a given change to a landing page was affecting overall Quality Score (or whether it was a change to the ad, for example).
As Blaize Bolton, Team Strategist at performance marketing agency Thrive Digital, told me:
As agency marketers, we don’t like to assume things based on the nature of our jobs. We can now pinpoint changes to Quality Score to a certain day, which is actual proof of improvement. To show this to a client is a big deal.
Overall, if your CPC drops, now you can better understand whether it may be because of changes made to a landing page.
2. You can identify which keywords can benefit most from an updated landing page
Prior to this AdWords update, ad relevance, expected click-through rate and landing page experience data existed, but you had to mouse over each keyword to see it, one keyword at a time. Because you couldn’t analyze the data at scale, you couldn’t prioritize your biggest opportunities for improvement.
However, now that you can export this data historically (for dates later than January 22, 2016), you can do a deep dive into your campaigns and identify where a better, more relevant landing page could really help.
You can now pull every keyword in your AdWords account — broken out by campaign — and identify any underperforming landing pages.
Now, an Excel deep dive into your AdWords campaigns can help you reveal landing page weaknesses.
Specifically, here’s what Thrive Digital’s Managing Director Ross McGowan recommends:
You can break down which of your landing pages are above average, or those that require tweaking. For example, you might index your campaigns by the status AdWords provides, assigning anything “Above Average” as 3, “Average” as 2 and “Below Average” as 1. You can then find a weighted average for each campaign or ad group and make a call on what to focus on from there.
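McGowan’s indexing idea is easy to sketch in code. Here’s a minimal, hypothetical Python example — the column and status names are assumptions based on a typical keyword export, not AdWords’ exact field names — that click-weights each keyword’s landing page status and averages per campaign:

```python
# A hypothetical sketch of the weighted-average scoring described above.
# Column and status names are assumptions, not AdWords' exact fields.
STATUS_SCORE = {"Above average": 3, "Average": 2, "Below average": 1}

def campaign_scores(keyword_rows):
    """Click-weighted landing page score per campaign.

    keyword_rows: dicts with 'campaign', 'lp_status' and 'clicks' keys,
    as you might get from an exported keyword report.
    """
    totals = {}  # campaign -> [weighted score sum, total clicks]
    for row in keyword_rows:
        score = STATUS_SCORE[row["lp_status"]]
        bucket = totals.setdefault(row["campaign"], [0, 0])
        bucket[0] += score * row["clicks"]
        bucket[1] += row["clicks"]
    return {c: ws / n for c, (ws, n) in totals.items() if n}

rows = [
    {"campaign": "Brand", "lp_status": "Above average", "clicks": 300},
    {"campaign": "Brand", "lp_status": "Average", "clicks": 100},
    {"campaign": "Generic", "lp_status": "Below average", "clicks": 200},
]
print(campaign_scores(rows))  # {'Brand': 2.75, 'Generic': 1.0}
```

Weighting by clicks keeps a single low-traffic keyword from dragging down a whole campaign’s average; you could just as easily weight by impressions or spend.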
What should you do when you notice a low landing page experience score?
As Google states, landing page experience score is an indication of how useful the search engine believes your landing page is to those who click on your ad. They recommend to, “make sure your landing page is clear and useful… and that it is related to your keyword and what customers are searching for.”
In short, it’s very important that your landing pages are highly relevant to your ad; sending traffic to generic pages on your website may not cut it. And if you notice low landing page experience scores, it’s time to optimize those pages with some quick wins.
In the words of Thrive’s Ross McGowan:
Figure out what a user wants, and do everything you can to tailor the on-page experience to them. Whether that be [using] Dynamic Text Replacement, A/B testing elements to get the best user experience, or spending less time on technical issues and more on writing great content.
Finally, for more on AdWords’ latest improvements, AdAlysis founder Brad Geddes has written a great article on Search Engine Land. His company had enough data on hand to attempt to reverse-engineer the formula for Quality Score and get a sense of how changes to one of the QS components would impact the overall score. His recommendation is much the same as Ross’: if a landing page’s score is particularly low, your best bet is to focus on increasing users’ interaction with the page.
There was a time when simply launching an A/B test was a big deal.
I remember my first test. It was a lead gen form. I completely redesigned it. I learned nothing. And it felt like I was on top of the world.
Today, things are different, especially if you’re a major e-commerce company doing high-volume conversion optimization in a team setting. The demands have shifted; the expectations are far greater. New tools are being created to solve new problems.
So what does it take to own enterprise e-commerce CRO in 2016 compared to before?
Make money during A/B tests
While “always be testing” is a great mantra, I have to ask: are you also “always be banking”?
Most of us have been running tests that inform us first and make money later. For example, you might run a test with a clear winner, but it’s one of five variations, so you’re only benefiting from it 20% of the time during the length of the experiment.
Furthermore, you may have four variations that are underperforming versus your Control, so you could even be losing money while you test. Imagine spending an entire year testing in that manner. You’d rarely be fully benefiting from your positive test results!
Of course, as part of a controlled experiment and in order to generate valid insights, it’s important to distribute traffic evenly and fairly between all variations (across multiple days of the week, etc).
But there also comes a time to be opportunistic.
Enter the multi-armed bandit (MAB) approach. MAB is an automated testing mechanism that diverts more traffic to better-performing variations. Thresholds can be set to control how much better a variation has to perform before the mechanism favors it.
Hold your horses: MAB sounds amazing, but it is not the solution to all of your problems. It’s best reserved for times when the potential revenue gains outweigh the potential insights to be gained or the test has little long-term value.
Say, for example, you’re running a pre-Labor Day promotion and you’ve got a site-wide banner. This banner’s only going to be around for 5-10 days before you switch to the next holiday. So really, you just want to make the most of the opportunity and not think about it again until next year.
A bandit algorithm applied to an A/B test of your banner will help you find the best performer during the period of the experiment, and help generate the most revenue during the testing period.
While you may not be able to infer too many insights from the experiment, you should be able to generate more revenue than had you either not tested at all or gone with a traditional, even split test.
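To make the mechanism concrete, here’s a toy epsilon-greedy bandit in Python — one of the simplest MAB strategies; commercial tools typically use more sophisticated variants. The conversion rates here are invented for the simulation:

```python
import random

def epsilon_greedy(true_rates, visitors=10000, epsilon=0.1, seed=42):
    """Simulate an epsilon-greedy bandit: with probability epsilon we
    explore a random variation; otherwise we exploit the variation with
    the best observed conversion rate so far."""
    rng = random.Random(seed)
    shown = [0] * len(true_rates)
    converted = [0] * len(true_rates)
    for _ in range(visitors):
        if rng.random() < epsilon or not any(shown):
            arm = rng.randrange(len(true_rates))  # explore
        else:
            arm = max(range(len(true_rates)),
                      key=lambda i: converted[i] / shown[i] if shown[i] else 0.0)
        shown[arm] += 1
        if rng.random() < true_rates[arm]:  # simulated conversion
            converted[arm] += 1
    return shown, converted

# Three variations with (invented) true conversion rates of 2%, 3% and 5%:
shown, converted = epsilon_greedy([0.02, 0.03, 0.05])
# The 5% variation ends up receiving the bulk of the traffic, so you
# earn from the winner while the experiment is still running.
```

The trade-off is exactly the one described above: the uneven split makes day-of-week and segment comparisons murkier, which is why MAB suits short-lived promotions better than insight-driven programs.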
BEFORE: Test, analyze results, decide, implement, make money later.
TODAY: Test and make money while you’re at it.
When to do it: Best used in cases where what you learn is not that useful for the future.
When not to do it: Not necessarily the most useful for long-term testing programs.
Track long-term revenue gains
If you’ve been testing over the course of many months and years, accurately tracking and reporting your cumulative gains can become a serious challenge.
You’re most likely testing across different zones of your website – homepage, category page, product detail page, site-wide, checkout, etc. Multiply those zones by the number of viewport ranges you’re specifically testing on.
What do you do, sum up each individual increase and project out over the course of a year? Do you create an equation to calculate the combined effect of all of your tests? Do you avoid trying to report at all?
There isn’t one good solution, but rather a few options that all have their strengths and weaknesses:
The first, and easiest, is using a formula to estimate combined results. You’ll want a strong mathematician to help you with this one. Personally, I always have a lingering doubt about whether anything reported this way is accurate, even with conservative estimates. And as time goes on, things only get less accurate.
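For what it’s worth, the most common simple formula compounds each test’s measured lift multiplicatively — which assumes the effects are independent, the very assumption that fuels that lingering doubt:

```python
def combined_lift(lifts):
    """Compound per-test relative gains (0.08 means +8%) into one
    overall figure, assuming the effects are independent."""
    total = 1.0
    for lift in lifts:
        total *= 1.0 + lift
    return total - 1.0

# Three wins of +8%, +5% and +12% compound to about +27%,
# not the +25% you'd get by naively summing them.
print(round(combined_lift([0.08, 0.05, 0.12]), 4))  # 0.2701
```

If your tests interact — say, two of them touch the same checkout step — the real combined lift can land well below (or above) this estimate, which is why the re-testing approaches below exist.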
The second is to periodically re-test your original Control from the moment at which you started testing. Say, every 6 months, test your best performing variation against the Control you had 6 months prior. If you’ve been testing across the funnel, test the entire funnel in one experiment.
Yes, it will be difficult. Yes, your developers will hate you. And yes, you will be able to prove the value of your work in a very confident manner.
It’s best to run these sorts of tests with a duplicate of each variation (2 “old” Controls vs 2 best performers) just to add an extra layer of certainty when you look at your results. It goes without saying that you should run these experiments for as long as reasonably possible.
Another option is to always be testing your “original” Control vs your most recent best performer in a side experiment. Take 10% of your total traffic and segment it to a constantly running experiment that pits the original control version of your site against your latest best performer.
It’s an experiment running in the background, not affected by what you are currently testing. It should serve as a constant benchmark to calculate the total effect of all your tests, combined.
Technically, this will be a challenge. You’ll be asking a lot of your developers and your analytics people, and at one point, you may ask yourself if it’s all worth it. But in the end, you will have some awesome reports to show, demonstrating the ridiculous revenue you’ve generated through CRO.
BEFORE: Individual test gains, cumulated.
TODAY: Taking into consideration interaction effects, re-running Control vs combined new variations OR using a model to predict combined effect of tests.
When to do it: When you want to better estimate the combined effect of multiple testing wins.
When not to do it: When your tests are highly seasonal and can’t be combined OR when it becomes impossible from a technical perspective (hence the importance of doing so in a reasonable time frame—don’t wait 2 years to do it).
Track and distribute cumulative insights
If you do this right, you will learn a ton about your customers and how to increase your revenue in the future. Ideally, you should have a goody-bag of insights to look through whenever you’re in need of inspiration.
So, how do you track insights over time and revalidate them in subsequent experiments? Also, does Jenny in branding know about your latest insights into the importance of your product imagery? How do you get her on board and keep her up to date on a consistent basis?
Both of these challenges deserve attention.
The simplest “system” for tracking insights is via spreadsheet, with columns that codify insights by type, device, and any other useful criteria for browsing and grouping. This proves unscalable when you’re testing at high velocity. That’s where a custom platform comes into play that does the job of tracking and sharing insights.
For example, the team at The Next Web created an internal tool for tracking tests and insights, then easily sharing ideas via Slack. There are other publicly available options, most of which integrate with Optimizely or VWO.
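If you’re not ready for a dedicated tool, even a minimal structured log beats a flat spreadsheet. Here’s a sketch in Python, with purely illustrative field names and example insights:

```python
# Field names and entries here are illustrative, not from any real tool.
insights = [
    {"insight": "Lifestyle product shots beat white-background shots",
     "tags": {"imagery", "pdp"}, "device": "mobile", "experiment": "EXP-041"},
    {"insight": "Shorter lead form raised completions",
     "tags": {"forms", "lead-gen"}, "device": "desktop", "experiment": "EXP-017"},
]

def find_insights(log, tag=None, device=None):
    """Filter the insight log by tag and/or device for quick browsing."""
    hits = []
    for item in log:
        if tag and tag not in item["tags"]:
            continue
        if device and item["device"] != device:
            continue
        hits.append(item)
    return hits

mobile_imagery = find_insights(insights, tag="imagery", device="mobile")
# Each hit links back to the experiment that generated it ("EXP-041" here),
# so Jenny in branding can trace the claim to its source test.
```

The tags and the experiment back-reference are the two things that make the log browsable later; everything else is negotiable.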
BEFORE: Excel sheets, Powerpoint presentations, word of mouth, or nothing at all.
TODAY: A shared, tagged database of insights that links back to the experiments that generated them and is updated on the fly. Tools such as Experiment Engine, Effective Experiments, Iridion and Liftmap are all solving some part of this puzzle.
When to do it: When you’re learning a lot of valuable things, but having trouble tracking or sharing what you learn. (BTW, if you’re not having this problem, you might be doing something wrong.)
When not to do it: When the future is of little importance.
Code implementation-ready variations
High velocity testing doesn’t just mean quickly getting tests out the door; it means being able to implement winners immediately and move on. To make this possible, your test code has to be ready to implement, meaning:
Code should be modularized. Your scripts should be separated into sections for functionality and design changes.
BEFORE: Messy jQuery.
TODAY: Modularized experiment code, with separated CSS that aligns with class names.
When to do it: When you wish to make the implementation process as painless as possible.
When not to do it: When you just don’t care.
Create FOOC-free variations
If your test variations “flicker” or “flash” as they load, you’re experiencing Flash of Original Content (FOOC). Left untreated, it will skew your results. Some of the best ways to prevent it are as follows:
Place your code snippets as high as possible on the page.
Improve site load time in general (regardless of your testing tool).
Briefly hide the body or div element being tested.
Test the fundamentals of your business
Some people think of A/B testing as a way to improve the look of their website, while others use it to test the fundamentals of their business. Take advantage of the tools at your disposal to get to the heart of what makes your business tick.
For example, we tested reducing the product range of one of our clients and discovered that they could save millions on manufacturing and marketing without losing revenue. What are the big lingering questions you could answer through A/B testing?
BEFORE: Most of us tested button colors at one point or another.
TODAY: Business decisions are being validated through A/B tests.
When to do it: When business decisions can be tested online, in a controlled manner.
When not to do it: When most factors cannot be controlled for online, during the length of an A/B test.
Use data science to test predictions, not ideas
It is highly likely that you are underutilizing the customer analytics that are available to you. Most of us don’t have the team in place or the time to dig through the data constantly. But this could be costing you dearly in missed opportunities.
If you have access to a data scientist, even on a project-basis, you can uncover insights that will vastly improve the quality of your A/B test hypotheses.
TODAY: Predictive analytics can uncover data-driven test hypotheses.
When to do it: When you’ve got lots of well-organized analytics data.
When not to do it: When you prefer the spaghetti method.
Optimize for volume of tests
There was a time when “always be testing” was enough. These days, it’s about “always be testing in 100 different places at once.” This creates new challenges:
How do you test in multiple parts of the same funnel simultaneously without worrying about cross-pollination?
How do you organize your human resources in a way to get all the work done?
This is the art of being a conversion optimization project manager: knowing how to juggle speed vs value of insights and considering resource availability. At WiderFunnel, we do a few things that help make sure we go as fast as possible without sacrificing insights:
We stagger “difficult” experiments with “easy” ones so that production can be completed on “difficult” ones while “easy” ones are running.
We integrate with testing tool APIs to quickly generate coding templates, meaning our developers don’t need to do any manual work before starting to code variations.
We use detailed briefs to keep everyone on the same page and reduce gaps in communication.
We schedule experiments based on “insight flow” so that earlier experiments help inform subsequent ones.
We use algorithms to control for cross-pollination so that multiple tests within the same funnel can be run while being able to segment any cross-pollinated visitors.
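The article doesn’t spell out WiderFunnel’s algorithm, but one common way to keep concurrent funnel tests from cross-pollinating is to hash each visitor ID into a single bucket deterministically, so a visitor is only ever eligible for one experiment in a given funnel. A sketch, assuming string visitor IDs:

```python
import hashlib

def assign_bucket(visitor_id, experiments):
    """Deterministically map a visitor ID to exactly one experiment,
    so concurrent tests in the same funnel stay mutually exclusive."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return experiments[int(digest, 16) % len(experiments)]

experiments = ["cart-test", "checkout-test", "upsell-test"]

# The same visitor lands in the same bucket on every page of the funnel:
assert assign_bucket("visitor-123", experiments) == assign_bucket("visitor-123", experiments)

# And traffic splits roughly evenly across the experiments:
counts = {name: 0 for name in experiments}
for i in range(3000):
    counts[assign_bucket(f"visitor-{i}", experiments)] += 1
```

Because the hash is stable, no coordination between page scripts is needed; the cost is that each experiment only sees a fraction of total traffic, which is why this approach needs the conversion volume mentioned above.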
BEFORE: Running one experiment at a time.
TODAY: Running experiments across devices, segments, and funnels.
When to do it: When you’ve got the traffic, conversions and the team to make it happen.
When not to do it: When there aren’t enough conversions to go around for all of your tests.
Don’t get stuck in the optimization ways of the past. The industry is moving quickly, and the only way to stay ahead of your competitors (who are also testing) is to always be improving your conversion optimization program.
Bring your testing strategies into the modern era by mastering the 8 tactics outlined above. You’re an optimizer, after all―it’s only fitting that you optimize your optimization.
Do you agree with this list? Are there other aspects of modern-era CRO not listed here? Share your thoughts in the comments!
Starting April 21, we will be expanding our use of mobile-friendliness as a ranking signal. This change will affect mobile searches in all languages worldwide and will have a significant impact in our search results.
With so many websites and landing pages not optimized for mobile, it was hard not to be skeptical about the potential impact of the update. But it has been a long time coming. After all, Google claimed that it was a “mobile first” company as far back as 2010.
Was the “Mobilegeddon” nickname overly dramatic? Perhaps. Did the update completely strip non-mobile sites from mobile search results? No. But it has definitely had an effect.
What do you need to know?
First off, you need to know that Google is committed to providing a frictionless experience to mobile users. Websites and landing pages that are not meeting the standards that Google has set are being pushed down in mobile search results.
The message is clear: ignore Google’s mobile standards at your own peril.
In this article, we’ll take a look at some data that shows how Google’s April 21st update has affected search results on desktop and on mobile. We’ll also give you a quick glance into the future of mobile at Google, and how they’re showing their commitment to ensuring the best possible experience for mobile users — and for mobile advertisers.
Non-mobile sites are slipping in mobile search results
When Google makes a big algorithm change, it generally takes a couple of weeks before we really start to see changes.
Within a month, Marketing Land found that, while some sites saw no change at all, others were losing up to 35% of their mobile search rankings in the top three positions. Interestingly, rankings had only dropped 10% on desktop search.
One study by Stone Temple Consulting found that nearly 50% of non-mobile friendly URLs had dropped in rank, but in many cases the top search results were replaced with new non-mobile friendly URLs.
The author of the analysis, Eric Enge, posited that this rather mystifying turn of events may be attributed to these three factors:
The Search Quality Update (an algorithm update from May that changed how Google assesses the quality of search results).
Other, smaller algorithm tweaks (Google is constantly updating its search algorithms).
General churn that takes place in Google’s search results.
Enge goes on to say:
This is likely just the start of what Google plans to do with this algorithm. It is typical for Google to test some things and see how they work. Once they have tuned it, and gain confidence on how the algo works on the entire web as a data set, they can turn up the volume and make the impact significantly higher.
So what the heck has Google been up to since that first month? It looks like Enge was right: they’ve been busy tweaking that algorithm.
Almost three months after the update, Moovweb posted the results of a study in which they had analyzed more than 1,000 ecommerce keywords over a variety of industries. The article states:
We found that 83% of the time, the top result is tagged as mobile-friendly by Google. 81% of the time the top 3 results are mobile-friendly. And when you consider all ten of the spots on Google’s first page, 77% of the search results are “mobile-friendly.”
Google is clearly making an effort to get mobile-friendly results to the top. The problem is that for far too many keywords, not enough websites are actually following along for Google to even give a complete set of mobile-friendly options on the first page of results.
Mobile-friendliness is affecting AdWords, too
In response to declining traffic caused by the algorithm update, many marketers have begun buying more mobile ads. That added demand has pushed total CPC (across the board) up 16% compared to this time last year, according to an Adobe Digital Index report.
Additionally, mobile-friendliness is now a factor in determining Quality Score. Marketers who have built mobile-friendly landing pages in response have been rewarded with winning more auctions and getting more clicks, according to the Wall Street Journal.
Any marketers who use landing pages (and that should be all of us) should be paying attention to this. Mobile usage has been increasing constantly over the last few years, with no signs of slowing down. Marketing campaigns that do not have a mobile dimension are going to be losing to the ones that do.
Putting the effort into creating mobile-friendly landing pages is going to be a major factor in both getting clicks to that page and keeping overall campaign costs down — in fact, it already is!
Google has more mobile innovations on the way for marketers
Google is clearly not finished with its mobile updates. They recently unveiled a few new features that should make digital marketers as giddy as a puppy in a mud puddle.
As we all know, conversions on mobile are notoriously low — according to Monetate, mobile users convert about half as often as their desktop counterparts.
Google is doing its level best to help raise low conversion rates with some rather clever methods of presenting products on mobile. First up is the “Buy Now” button that will be shown in mobile product search results.
When you select a product that appears in search results, you’ll be taken to a microsite within Google that has the look and feel of that particular retailer. From there, you can choose to buy the product, or search for another product from that retailer if the specific item you want isn’t shown.
So far about a dozen merchants are using it and apparently it’s been successful (no word as to the actual numbers). Expect them to launch more products with different merchants soon.
This is further testament to Google’s “mobile first” attitude. If Google can help make sales, they will. Similarly, marketers should be doing everything they can to help give their customers an enjoyable mobile experience.
More clicks + better mobile experience after the click + more sales = a winning equation for everyone involved.
Make sure your websites and landing pages are up to code
Google has never before handed us the recipe for success for a search algorithm update. But in this case, they’re so committed to the mobile experience that they have laid the groundwork so that anyone can be compliant and win traffic (and conversions).
Don’t forget: this isn’t just about making Google happy. The reason Google wants you on board is so that you’re providing a great experience for your mobile users. If you concentrate on making that experience one they find useful, informative and delightful, the rest will fall in place.