Back in October, we were the first to claim that 2019 will be the year of page speed. We’ve got our eyes on the market and lemme tell you: Google is sending serious signals that it’s crunch time to deal with your slow pages.
Faster pages are a strategic marketing priority.
And sure enough, Google has made yet another change to uphold that prediction. In early November, they quietly rolled out the most significant update to a core performance tool we’ve seen to date, announcing the latest version of PageSpeed Insights.
So what does this update mean for marketers and their bottom line?
If you’ve used PageSpeed Insights to test page performance, it’s time to retest! Because your old speed scores don’t matter anymore. The good news is that you’ll have new data at your fingertips to help you speed up in ways that actually matter to your prospects and potential conversions.
Let’s take a closer look at this update and explore why it should play a role in your page speed strategy in 2019.
“You can’t improve what you don’t measure.”
PageSpeed Insights is easily Google’s most popular tool for measuring web performance.
When you look at the screenshot below, you can see why. It provides an easy-to-interpret color-coded scoring system that you don’t need an engineering degree to understand—red is bad, green is good. Your page is either fast, average, or slow. The closer to a perfect 100 you can get, the better. The scores also come with recommendations of what you can do to improve. It’s almost too easy to understand.
PageSpeed Insights v.4 (October 2018)
Earlier versions of PageSpeed Insights had some issues with how they reported performance. Simple results could be misleading, and experts soon discovered that implementing Google’s suggested optimizations didn’t necessarily line up with a better user experience. You might’ve gotten great scores, sure, but your pages weren’t always any faster or your visitors more engaged. Don’t even get me started on your conversion rates.
As Benjamin Estes over at Moz explains, “there are smarter ways to assess and improve site speed. A perfect score doesn’t guarantee a fast site.” Many experts like Estes began turning to more reliable tools—like GTmetrix, Pingdom, or Google’s own Lighthouse—to run more accurate performance audits. And who could blame them?
The latest version of PageSpeed Insights (v.5) fixes these issues by putting the focus where it should be: on user experience. This is a huge leap forward for marketers because it means that the tool is directly relevant to conversion optimization. It can help you get faster in ways that translate into higher engagement and conversion rates.
Lighthouse is excellent because it gives you a more accurate picture of how your landing pages perform, pairing simulated lab data with field data from real Chrome users. The lab data means you get results ASAP, whether your page has seen any traffic yet or not. This gives you a way to test and improve your pages before you point your ads at them.
New lab data from Lighthouse provides a much better picture of what a user experiences.
The Lighthouse engine behind PageSpeed Insights also brings more user-centric performance metrics with it, two of which are very important to your landing pages:
First Meaningful Paint (FMP) is the time it takes for the first valuable piece of content to load—usually a hero shot or video above the fold. It’s the “is this useful?” moment when you catch—or lose—a visitor’s attention. Even if the rest of your page loads later, it’s paramount that these first page elements appear as quickly as possible.
Time to Interactive (TTI) is the time it takes for the page to become reliably responsive: the moment visitors can actually click your call to action, fill in a form, or scroll without lag. A page that looks loaded but won’t respond is every bit as frustrating as a blank one.
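If you’d rather pull these metrics programmatically, the PageSpeed Insights v5 API returns the full Lighthouse result as JSON. Here’s a minimal sketch of extracting the headline score plus the user-centric metrics; the response below is a hypothetical, heavily truncated sample, and the field names follow the v5 API’s `lighthouseResult` structure:

```python
import json

# Hypothetical, truncated sample of a PageSpeed Insights v5 API response.
# A real one comes from:
# https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=<your-page>
sample_response = json.loads("""
{
  "lighthouseResult": {
    "categories": {"performance": {"score": 0.96}},
    "audits": {
      "first-meaningful-paint": {"numericValue": 1200.0},
      "interactive": {"numericValue": 2700.0}
    }
  }
}
""")

def summarize(response):
    """Pull the performance score (0-100) and key metrics in seconds."""
    lh = response["lighthouseResult"]
    return {
        "score": round(lh["categories"]["performance"]["score"] * 100),
        "fmp_s": lh["audits"]["first-meaningful-paint"]["numericValue"] / 1000,
        "tti_s": lh["audits"]["interactive"]["numericValue"] / 1000,
    }

print(summarize(sample_response))
# {'score': 96, 'fmp_s': 1.2, 'tti_s': 2.7}
```

Handy if you want to track scores for a batch of landing pages over time instead of testing them one by one in the browser.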
2. PageSpeed Insights Gives You Better Opportunities and Diagnostics
You can bid adieu to the short checklist of optimizations that experts like Ben Estes called out. Google has replaced that (moderately useful) feature with new opportunities and diagnostic audits that will actually help you improve your visitor experience, each with specific suggestions and an estimate of the time you’d save.
Your priorities should be much clearer:
Opportunities and Diagnostics in PageSpeed Insights
How your Unbounce Pages Stack Up
Faster pages earn you more traffic and better engagement. As a result, page speed has a major impact on your conversion rates and can even help you win more ad impressions for less. That’s why we’ve made page speed our priority into 2019.
To show how Unbounce stacks up in the real world, we tested an actual page created by one of our customers, Webistry, a digital marketing agency: their “Tiny Homes of Maine” landing page.
We tested two versions of “Tiny Homes of Maine” using Google PageSpeed Insights v.5, running at least three tests per version and taking the median result. The results below focus on the mobile scores:
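Taking the median of several runs is a simple guard against run-to-run variance, since PageSpeed scores can fluctuate between tests of the same page. In Python it’s one call (the scores here are made up for illustration):

```python
from statistics import median

# Hypothetical scores from three runs of the same page: results vary a
# little from run to run, so the median smooths out the noise without
# letting one outlier run skew the number.
runs = [86, 91, 88]
print(median(runs))  # 88
```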
Tiny Homes of Maine with Speed Boost
Speed Boost + Auto Image Optimizer
Next, we retested the Tiny Homes of Maine page, adding our upcoming Auto Image Optimizer into the mix. This new tool automatically optimizes your images when your page is published. You can fine-tune your settings, but we used the defaults here. Check out the mobile results:
Tiny Homes of Maine with Speed Boost + Auto Image Optimizer
The score jumped from a respectable 88 to an incredible 96 and, more meaningfully, we saw time to interactive improve from 4.4 sec to 2.7 sec. That’s 12.6 seconds faster than the average mobile web page, and 0.3 seconds faster than Google’s ideal 3-second load time.
Here we’ve shared the time to interactive speeds from both tests, for desktop and mobile, measured against the average web page:
Overall, when we tested, we saw Speed Boost and Auto Image Optimizer make a dramatic difference in performance without sacrificing visual appeal or complexity. We took a compelling page that converts well and upped the ante by serving it at blazing speeds. Whether on mobile or desktop, the page loads in a way that significantly improves the visitor’s experience.
Speed Boost is already available to all our customers, and the Auto Image Optimizer is coming very soon. This means your own landing pages can start achieving speeds like the ones above right now. Read more about our page speed initiatives.
But hold up. What about AMP? You might already know about Accelerated Mobile Pages (AMP), which load almost instantly—like, less than half a second instantly. Not only do they drive crazy engagement, but they eliminate waiting on even slow network connections. This makes your content accessible to everyone, including the 70% of global users still on 3G connections—or anyone checking their phone while waiting at a crosswalk.
While AMP pages can be complicated to build, Unbounce’s drag-and-drop builder lets you create them the same way you create all your landing pages. If you’d like to try it out for yourself, you can sign up for the AMP beta, which opens in January 2019.
For the speed test above, we decided to leave AMP out of it, since AMP restricts some custom functionality and the page we used would’ve required a few design changes. It wouldn’t be apples to apples. But we’re pretty pumped to show you more of it in the coming months.
Page Speed & Your Bottom Line
Seconds are one thing, but dollars are another. Google recognizes the direct impact that fast load times have on your bottom line, which is why they released the Impact Calculator in February 2018. This tool sheds more light on why providing accurate measurements is so important.
Let’s revisit our Tiny Homes landing page above as an example. Imagine this landing page gets 1,000 visitors a month, at a conversion rate of 3.5% (just slightly higher than the average Real Estate industry landing page in our Conversion Benchmark Report). If the conversion rate from lead to sale is 5%, and each sale is worth an average of $54,000 (the price of the mid-range home on their landing page), then their average lead value is $2,700.
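That lead-value figure is simple funnel arithmetic, worth spelling out because the Impact Calculator builds on it. (The speed-to-revenue step itself relies on Google’s own bounce-rate model, which isn’t reproduced here; the numbers below are the hypothetical example figures from above, not real client data.)

```python
# Funnel arithmetic for the hypothetical Tiny Homes example above.
monthly_visitors = 1_000
visit_to_lead = 0.035      # landing page conversion rate (visitor -> lead)
lead_to_sale = 0.05        # close rate (lead -> sale)
avg_sale_value = 54_000    # mid-range home price on the page

# Each lead is worth the sale value discounted by the close rate.
avg_lead_value = avg_sale_value * lead_to_sale
monthly_leads = monthly_visitors * visit_to_lead
monthly_revenue = monthly_leads * avg_lead_value

print(f"${avg_lead_value:,.0f} per lead")      # $2,700 per lead
print(f"${monthly_revenue:,.0f} per month")    # $94,500 per month
```

From there, the Impact Calculator estimates how many of those visitors stop bouncing as load time drops, and converts that into recovered revenue.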
Tiny Homes of Maine in the Impact Calculator
When we input those numbers into the Impact Calculator and improve their mobile page speed from 4.4 seconds to 2.7 seconds, as shown in the test above, the impact to revenue for this one page could be $52,580.
Heck yes, speed matters.
And if we forecast the near-instant speeds promised by Accelerated Mobile Pages (AMP), that page could see a potential annual revenue impact of $179,202 USD if it were to load in 1 second.
And that’s one landing page!
If you’ve been struggling with how to improve your page loading times, this latest version of PageSpeed Insights now gives you a much more meaningful picture of how you’re doing—and how to get faster.
You may not have considered speed a strategic priority, but when seconds can equate to tens of thousands of dollars, you need to. Try the Impact Calculator yourself or contact our sales team if you’d like to see what kind of revenue impact Unbounce landing pages can get you.
B2B products and services can be difficult to fully capture on a landing page—we know from experience.
Whether it’s defining your conversion goal, ordering your page sections, or writing copy that resonates, it’s not always a walk in the park. Not to mention that B2B sales can involve many more decision makers you need to appeal to. Showcasing the value of something like software at scale can be trickier than explaining how your cutting-edge hoverboard might benefit a single person.
But, in our view, building a successful B2B page boils down to a few key things:
Creating an engaging experience that makes prospects acutely aware of the problem you solve
Promoting your offer clearly and simply, and
Cleverly leading visitors through consideration, towards conversion.
Persuasion sounds great in theory, sure, but what does it actually involve?
To help you better understand what makes an effective B2B landing page, we’ve analyzed six Unbounce-built pages doing a great job. Scroll through the examples to see what they do especially well, and how you can take their techniques to the next level.
1. PIM on Cloud
Image courtesy of PIM on Cloud. (Click image to see the full page.)
Best practice to steal: Where appropriate, bring prospects through several stages of the customer journey.
Sales cycles vary by industry, sure, but the process always starts with building interest and (ideally) ends with a purchase decision. Designed properly, a landing page can take readers through each of these stages as they scroll from top to bottom. PIM on Cloud’s long-form landing page does this really well.
This brand builds awareness with a description of their service in the first two page sections, guides prospects through consideration with a list of features and benefits, and then drives conversions by detailing available plans alongside clear calls to action (“Choose plan” and “Ask for pricing,” respectively).
Though some landing pages are designed to convert at the bottom of the funnel, providing a more holistic journey—like PIM on Cloud does—casts a wider net and gives prospects room to learn more. Because it’s so high-level, this page could even serve as the destination URL for many of PIM’s branded Google Ads.
Of course, some visitors will also know exactly what they’re looking for from the start, so PIM on Cloud includes anchor navigation on this page for a choose-your-adventure experience. Thanks to this, more qualified prospects can jump straight to the details most relevant to them. While landing pages shouldn’t have tons of links on them (your main site navigation would be a real no-no, for example), anchor navigation is recommended if you’re trying to cover a lot of info at once. It can make longer pages like this more digestible.
Bonus: PIM on Cloud’s landing page provides readers with an FAQ section and a contact form: further opportunities for prospects to evaluate their decision, and for the brand to collect valuable leads. When you make landing pages that cover a broad offer, consider whether an FAQ might ease potential friction, and leave a way for people to get in touch with you directly, just in case.
2. Resource Guru
Image courtesy of Resource Guru. (Click image to see the full page.)
Best practice to steal: Help prospects visualize a complex idea.
Many B2B products and services solve complex problems. As a result, landing pages need to be designed in such a way that they make it easy for potential customers to understand features and benefits. One way to do this is to incorporate visual elements like videos, images, and even animations—all of which can help drive conversions. According to Eyeview, using a video on your landing page can increase conversions by up to 80%.
Resource Guru’s landing page is effective because it greets viewers with a large play button as soon as they land. Pressing play is intuitive and launches a high-quality explainer video. They let this video do the talking, then quickly request an action from visitors.
Taking it to the next level:
Instead of a simple play button, this landing page could have benefited from a video thumbnail featuring people’s faces. Visually compelling thumbnails that align with your video’s content can actually increase play rate.
Additionally, it’s always a good idea to reiterate all the core points from your video script on your landing page in text. This ensures that even in the event you have a low play rate, prospects can still learn about your offer without having to click play. Whether they left their headphones at home that day or prefer text, it’s good to have a backup plan.
3. Blink
Image courtesy of Blink. (Click image to see the full page.)
Best practice to steal: Include the right kind of proof to build trust and credibility.
Blink’s landing page above relies heavily on testimonials and a list of select, high-profile clients, which are presented immediately below their contact form. Also, rather than diving into product features, Blink backs up their expertise by showcasing industry awards.
Taking it to the next level:
Although testimonials, logos, and other social proof are effective, it’s worth noting that Blink misses the opportunity to (immediately) explain what they actually do for customers at the start of this page.
4. MediaValet
The digital asset management company applies the rule of three when presenting their key benefits and testimonials. This clear, concise, and easy-to-consume structure is also key to the landing page’s successful layout: it introduces the product, backs up their claims with stats, and provides an easy way for prospects to request a demo. The easier visitors can consume and retain the content on your landing page, the better equipped they are to make a decision to purchase. They’re also more likely to keep scrolling instead of being overwhelmed by too much info.
Taking it to the next level:
Headline clarity is key, and you only have the first few words of anything to convince people to keep reading. In my opinion, MediaValet could have benefited from using a variation of their sub-headline (“Organize your assets, marketing content and media in one central location with digital asset management.”) as their primary headline to make their product offer that much more obvious.
5. Vivonet Kiosk
Image courtesy of Vivonet Kiosk. (Click image to see full page.)
Best practice to steal: A floating CTA button gives you a greater chance to convert.
A landing page has one goal—to convince visitors to take action. Whatever the intended next step, it’s your job to create a clear, strategically placed call to action that lets visitors know what to do next. Using multiple CTAs can be distracting to your audience, but a consistent CTA that follows visitors throughout their experience? That’s crystal clear.
Vivonet Kiosk uses a floating CTA button that follows visitors as they scroll down the page. No matter where they’re at, the “Talk to Us About Kiosks” button remains in the bottom right-hand corner of their screen.
6. Unbounce
Best practice to steal: Have a conversation with your prospects.
Alright, y’got me. I’m using an Unbounce example here, but I think you’ll agree it’s pretty good. This is a landing page we created to speak about a problem we solve, and drive signups.
In the screenshot you may notice that this page actually breaks one of the rules we established above: it includes the main site navigation. Think of this as a hybrid, and a great example of how flexible you can be. Our page is structured with the persuasive force of a landing page (and built using our builder), but it integrates neatly into the rest of our site, living on our domain and sharing the site’s nav. We do this fairly often when we want to quickly build a page for the site that would otherwise require a ton of dev work.
Since Unbounce markets to marketers, we also wanted to overcome the hardened shell of skepticism that so many of us develop when it comes to other people’s campaigns. So this landing page uses a conversational framework to build trust. It offers a straightforward rundown of both the problem—running ads has become increasingly pricey—and the solution before it ever pitches our platform. And the inclusion of a chatbot invites you to ask questions we don’t cover, keeping the conversation going.
Of course, a landing page with an educational tone risks losing the reader’s attention—the same way a boring teacher might. In addition to a friendlier tone, we use interactive elements, animations, and social proof in the form of quotes from digital marketers. All of these elements keep things lively and provide added detail.
Like the example from PIM on Cloud, we also anticipated less qualified prospects might visit the page, so we include tabs and collapsible page sections that provide more info or answer questions. If a reader happens to hit the page without a strong understanding of what we mean by “landing page,” for instance, they can click to learn the answer, without leaving. Like any good conversationalist, we listen as well as talk.
We’ve been hearing it for years, though any one of us would be forgiven for letting it slide.
There are other priorities, after all. Marketers have been busy ensuring content is GDPR compliant. We’ve installed SSL certificates, made sure that our pages are mobile-responsive, and conducted conversion optimization experiments.
Some of us have had kids to raise. (And others, dogs.)
But Google has been sending some serious signals lately that suggest sluggish loading is a problem you can no longer sleep on.
In fact, if we look at Google’s actions, it’s undeniable that 2019 will be the year of page speed, the year of the lightning bolt. It’s the year when the difference between fast and slow content becomes the difference between showing up in the search results (whether paid or organic) or disappearing completely.
If you’ve been putting off improvements to your landing page performance until now, chances are that slow content is already killing your conversions. But in 2019, slow content will kill your conversions… to death.
Not convinced? Let’s explore the evidence together.
Google has been saying speed matters since forever
One of the reasons marketers aren’t taking Google’s latest messaging about page speed as seriously as they should is that the company has been asking us to speed up for at least a decade.
Way back in June of 2009, Google launched its “Let’s make the web faster” initiative, which sought to realize co-founder Larry Page’s vision of “browsing the web as fast as turning the pages of a magazine.”
“Let’s make the web faster” video posted on June 22, 2009 (via YouTube)
As part of this initiative, Google made a number of commitments, but they stressed that better speed wasn’t something they could achieve alone. On the same day, a post called “Speed Matters” on the Google AI blog contained a similar message:
Because the cost of slower performance increases over time and persists, we encourage site designers to think twice about adding a feature that hurts performance if the benefit of the feature is unproven.
These weren’t just empty words. The publication of the “Let’s make the web faster” and “Speed Matters” posts signaled a burst of activity from Google. This included:
making speed a ranking factor for desktop searches (2010)
releasing PageSpeed tools for Firefox (2009) and Chrome (2011)
adding the capacity to preload the first search result to Chrome (2011)
But that was nearly ten years ago, and Google followed it with… almost nothing.
Digital marketers and web devs thought they were safe to focus on other things.
Then, in February of 2017, Google returned to the subject of speed in a big way, publishing an industry benchmark report that’s been widely shared ever since.
You may have seen some of the results:
Google’s benchmark revealed that as load times get longer, the probability of bounce increases significantly (via Think with Google).
The first version of the benchmark found that the average mobile landing page was taking 22 seconds to load. This average came down to 15.3 seconds in 2018, but it’s still a significant concern.
(If you’d like a visceral reminder of why a 15-second average wait is still a major problem, hold your breath for that long.)
While the core “speed matters” message was the same as in 2009, Google was now warning in the report that “consumers are more demanding than ever before. And marketers who are able to deliver fast, frictionless experiences will reap the benefits.”
The benchmark report sounded an alarm. And the 2018 update dialed up the volume: “Today it’s critical that marketers design fast web experiences across all industry sectors.”
Google and Page Speed: A Timeline
Much like “Let’s make the web faster,” the 2017 benchmark preceded a flurry of activity from Google, this time laser-focused on mobile page speeds. Here are a few of the more significant moments that should concern you:
May 2017: Google introduces AMP landing pages to AdWords
This update to AdWords (now Google Ads) makes it possible for advertisers to point their mobile search ads to Accelerated Mobile Pages (AMP), an ultra-light standard for web pages that is designed to load in less than a second on a mobile device. It’s the strongest indicator yet that Google wants you to get behind AMP in a big way.
June 2017 to February 2018: Google makes its tools more insistent
In this period, performance tools like PageSpeed Insights and “Test My Site” began making more forceful claims about speed improvements. In February, Google even announced two new tools. The Mobile Speed Scorecard lets you measure your domain’s load time against up to ten of your competitors. And the Impact Calculator produces an estimate of the revenue impact you’d see by speeding up your site. (They’re done with being subtle.)
July 2018: Google’s “Speed Update” drops
While speed has been a ranking factor in desktop search results since 2010, the “Speed Update” applies stronger standards to mobile searches. Alongside mobile-first indexing, this places renewed pressure on site creators to ensure their mobile landing page experiences are speedy and engaging.
July 2018: Mobile Speed Score is added to Google Ads
Though Mobile Speed Score doesn’t (yet) have a direct impact on your cost-per-click (CPC), loading times already factor into your Quality Score because they determine landing page experience. By isolating mobile load times, Google Ads now makes it “easier to diagnose and improve your mobile site speed.” Hint, hint.
Google is making mobile page speed mandatory…
It’s not a drip, it’s a monsoon. Looking at the full timeline of announcements, launches, and product updates reveals that Google has been more active than in 2009—and that this initiative is ongoing. Take a look:
Want a better view of this timeline? Click above to open a larger version.
Since 2009, one of the ongoing arguments that Google has been making—through releasing tools and metrics like PageSpeed Insights, Lighthouse, “Test My Site,” the Speed Scorecard, Impact Calculator, and Mobile Speed Score—is that speed matters.
Since 2017, though, that argument has gotten much louder. And while no single action or announcement on this timeline should send you into a tizzy just yet, it’s worth remembering that Google’s gentle reminders tend to become more or less mandatory.
The search engine’s previous drips about mobile responsiveness or, say, web security both manifested in concrete changes to their browser and search engine that forced marketers to prioritize.
In 2016, for instance, you could have safely put SSL certification on your “nice-to-have” list because all Google promised was a small boost to encrypted sites in the search rankings. Nice to have, but not critical. In 2018, Google Chrome began actively flagging non-HTTPS sites as “Not Secure.”
Unbounce wanted to know what, if anything, digital marketers are doing to meet Google’s new performance standards. So in the “Inside Unbounce” tent at this year’s Call to Action conference, we conducted an informal survey of attendees.
Participants could choose any landing page they wanted. (A majority of these participants weren’t Unbounce customers, but we were happy to measure pages created with our own builder as well.)
Together, we’d run the selected page through Google’s “Test My Site” tool and record the results.
An attendee uses “Test My Site” at CTAConf 2018. Unbounce wanted to know, how fast are you?
Our numbers beat the benchmark by a significant margin. That’s not shocking considering CTAConf is a digital marketing conference. The average load time was 10.27 seconds, five seconds faster than Google’s 2018 benchmark.
But it wasn’t all good news, and just how bad it got surprised us:
Only 1.6% of the 188 attendee landing pages we tested at CTAConf loaded in three seconds, and not a single one loaded faster than that.
This means even savvy marketers are not getting the opportunity to convert because a majority of prospects bounce before the content ever loads. Imagine stressing over the color of a button or the length of your headline copy only to discover most people who click on your ad will never even see the resulting landing page.
It’s no wonder, then, that Google is putting increased pressure on marketers to meet their standards in 2019. They can’t afford to be serving up a heaping spoonful of frustration with each search result. And neither can you.
Major players are already sprinting ahead
Even if Google weren’t forcing our hands, it’s hard to imagine a business that wouldn’t benefit from allocating resources to ensuring their website loads like lightning. Major web brands like Etsy and eBay have long been transparent about the importance of speed to their business, and many more companies are waking up to it.
TELUS, one of Canada’s largest telecommunication companies, committed to improving user experience across their web properties in a series of recent blog posts. According to the blog, this initiative to improve performance and speed is “aligned with what Google was really saying: Improving the customer experience is paramount.”
We reached out to Josh Arndt, Senior Technology Architect and Performance Program Lead at TELUS Digital, who explained why this move made a lot of sense:
Customers expect to be able to do what they want in a way that fits their life. While users come to our website for the content, speed – or lack of – may be the first point of friction in their digital journey. Our goal is to remove friction and make their experience effortless and rewarding. As such, performance and other web quality characteristics will always be on our roadmap.
TELUS recognizes that speed—or a lack of it—serves as the unofficial gatekeeper to their content. In this context, page speed is a natural priority, even if it’s one many of us have been collectively ignoring.
Our manifesto, or what page speed means to Unbounce
As the market leader in landing pages, Unbounce recognizes that being capable of extremely fast speeds represents a significant advantage for our clients. Turbo-charged landing pages result in more traffic and higher engagement, boosting conversions and helping PPC campaigns win increased ad impressions for less.
We’ve been happy to make it our priority into 2019. At the same time, though, we also want to remove some of the obstacles to building faster landing pages.
Over the past few months, our developers have been optimizing Unbounce pages for the recommendations made by Google’s PageSpeed Insights. This bundle of technical improvements (we call it Speed Boost) automatically takes care of many of the technical details that can be a hurdle to improving performance, especially if development hours are tight or (let’s be realistic here) non-existent.
Speed versus beauty
Another sticking point when it comes to speeding up is that few marketers feel comfortable sacrificing visuals for faster load times. Image file sizes have increased to match the larger display resolutions and higher pixel density of modern mobile devices, one reason the average page size has doubled in the past three years.
With the addition of support for ultra-light SVG images and the recent integration of the free Unsplash image galleries right within the Unbounce builder, we’re helping marketers keep things looking slick without weighing down the landing page.
And we’re working toward creating even more optimization opportunities in the near future, including the Auto Image Optimizer, which automatically compresses the images on your landing pages. (You can decide how much or little compression you want.)
The result will be cheetah speeds—no, scratch that, cheetah-with-a-rocket-strapped-to-its-back speeds—but without the need to sacrifice either visual allure or creative control.
We’ve taken the pressure off. Check out our plans and pricing for desktop and mobile landing pages that are always optimized with speed in mind. The payoff is a better user experience and less budget wasted on ads that don’t convert.
Unbounce + AMP Landing Pages
When it comes to improving page speeds on mobile devices, accelerated mobile pages (AMP) set the gold standard by offering load times that are typically much quicker on a 3G connection—and under a second on 4G.
AMP also raises the competitive stakes for everyone, as Facebook advertising expert Mari Smith points out:
If you wait too long to ensure speedy landing pages, your competitors will zoom right past you… It’s a total race right now. Specifically, with the pending issues around net neutrality, page speed could become far more important than it already is.
But AMP can also be hard. As Unbounce’s Larissa Hildebrandt put it in a recent post, “the reason the AMP framework creates a fast page is because it is so restrictive.”
If all this sounds like a killer headache in the making, you’re right.
While Unbounce has been greatly interested in supporting AMP, we wanted to make sure it’s fast and easy for our customers to implement. So when Unbounce launches support for AMP landing pages in early 2019, you’ll be able to use our drag-and-drop builder to create AMP landing pages in no time.
No marketer can afford to ignore page speed in 2019. Mobile speeds can have a dramatic effect on paid advertising spend and your conversion rates, and Google’s actions so far show that the search engine is cracking down on the slow-to-load across all devices.
What does the future hold? I don’t pretend to have a crystal ball, but here are a few educated guesses:
If mobile loading times don’t get much faster, then we can expect more pressure from Google. This could take the form of further changes to indexing or Google Ads, another round of benchmarks, or the addition of new features and tools.
There’s a growing sense of urgency among marketers, and the major players are already moving to improve their loading times. Even if you’re in the small business space, these things tend to have a trickle-down effect. If you don’t work to improve your performance, chances are your competitors will.
As development on AMP continues, the standard will gain new flexibility while maintaining optimal speeds. It’s already overcome early limitations, and it’s likely we’ll see adoption rates accelerate across all industries.
Since 2009, we’ve seen some remarkable developments in mobile technology, including widespread adoption of touchscreens, the rollout of 4G cellular capabilities, and voice-based search. But the web itself hasn’t always evolved to match—instead, it’s gotten slower and heavier. (Haven’t we all?)
In 2019, though, that will begin to change, for all of the reasons discussed above. The web will speed up and slim down, and those who don’t match the new paradigm will be left behind.
Thankfully, if 2019 is The Year of Page Speed, then you’ve still got opportunities to start speeding up in advance. Let us know your plans in the comments below.
Ever heard the saying “Cart before the horse”? Or “You have to crawl before you can walk”? Or “You can’t put lipstick on a landing page with 27 links”?
That last one may be exclusive to landing page software employees, but the sentiment is the same. Unless the foundation of your landing page is strong, any optimization beyond that will be a waste of your time—and ad spend. Because even the slickest, fanciest landing page will leak precious conversions if it lacks certain crucial elements.
For the sake of those ad dollars, let’s go back to basics.
In collaboration with our friends (and customers!) at Skillshare, we’ve created a free video crash course on the fundamentals of a high-converting landing page. Whether you’re building your first page or just want a refresher, you’ll get a checklist to set up each of your pages for success.
The full course, Creating Dedicated Landing Pages: How to Get Better ROI for Your Marketing Spend, is hosted by Unbounce VP of Product Marketing Ryan Engley and comprises 11 videos totalling a quick 31 minutes. Sign up for a free Skillshare account and dive right into binge mode, or keep scrolling for an overview of what every landing page you create should have.
Bonus: Skillshare is offering 2 free months and access to thousands of other marketing classes just for signing up through our course.
Who’s it for?
Anyone running marketing campaigns! But in particular, those who execute on them.
Whether you launch paid advertising campaigns, build and design landing pages yourself, or work with designers and copywriters to create them, this course will ensure you’ve covered every base to create a compelling, high-converting post-click experience.
In a nutshell: It’s for anyone who runs paid marketing campaigns and wants to get the most bang for their buck.
What will it teach me?
In 11 videos, Ryan will take you through the process of creating a persuasive marketing campaign, cover each step of building a successful landing page within it, and explain the “why” behind it all so you’re taught to fish instead of just being handed the fish.
A few tidbits to start
If you’re thinking, “What’s wrong with sending people to my homepage?” then Attention Ratio is a great place to start.
“Your website is a bit of a jack of all trades,” Ryan explains. “Usually it’ll have a ton of content for SEO purposes, maybe information about your team…but if you’re running a marketing campaign and you have a single call to action in mind, your website’s not going to do you any favours.”
The more links you have on your page, the more distractions there are from your campaign’s CTA. You don’t want people to explore—you want them to act. And an Attention Ratio of 1:1 is a powerful way of achieving that.
Somewhat self-explanatory, your Unique Selling Proposition describes the benefit you offer, how you solve for prospects’ needs, and what distinguishes you from the competition. This doesn’t all have to fit in one sentence; rather, it can reveal itself throughout the page. But if you’re going to focus on one place to do the “heavy lifting,” as Ryan calls it, it should be your headline and subhead.
Take Skillshare’s landing page for a content marketing course by Buzzfeed’s Matt Bellassai (if his name doesn’t ring a bell, Google him, grab some popcorn, and come back to us with a few laughter-induced tears streaming down your face). Without even looking at the rest of the page, you know exactly what you’ll get out of this course and how it will help you achieve a goal.
What’s more convincing than word of mouth? Since we don’t advise stalking and hiring people’s friends to tell prospects how great you are, the next best thing is to feature testimonials on your landing page. The key here is that you’re establishing trust and credibility by having someone else back you up.
Customer quotes, case studies, and product reviews are just a few of the many ways you can inject social proof into your landing page. Think of it as a “seal of approval” woven into your story that shows prospects you deliver on the promise of your Unique Selling Proposition.
Customer testimonials serve as the proof in your pudding.
Watch all 11 episodes of Creating Dedicated Landing Pages: How to Get Better ROI for Your Marketing Spend to set your landing pages up for success in less time than it takes to finish your lunch break. Beyond being 100% free, it’ll save you a lot of guesswork in building landing pages that convert and precious ad spend to boot. So settle in for a mini binge watch with a sandwich on the company tab—you earned it.
A Guide To Embracing Challenges And Excelling At Your UX Design Internship
This is the story of my UX design internship. I’m not saying that your internship is going to be anything like mine. In fact, if there’s one thing I can say to shape your expectations, it would be this: be ready to put them all aside. Above all else, remember to give yourself space and time to learn. I share my story as a reminder of how much I struggled, and how well everything went despite my difficulties, so that I’ll never stop trying and you won’t either.
It all started in May 2018, when I stepped off the plane in Granada, Spain, with luggage at my side, a laptop on my back, and some very rusty Spanish in my head. It was my first time in Europe, and I would be here for the next three months doing an internship in UX design at Badger Maps. I was still pretty green in UX, having been learning about it for barely a year at that point, but I felt ready and eager to gain experience in a professional setting.
Follow along as I describe how I learned to apply technical knowledge to complete the practical design tasks assigned to me:
Create a design system for our iOS app using Sketch;
Design a new feature that would display errors occurring in data imports;
Learn the basics of HTML, CSS, and Flexbox to implement my design;
Create animations with Adobe Illustrator and After Effects.
This article is intended for beginners like me. If you are new to UX design and looking to explore the field, read on to learn if a UX design internship is the right thing for you! For me, the work I ended up completing went well beyond my expectations. I learned how to create a design system, how to balance design against user needs, the challenges of implementing a new design, and how to create some “moments of delight.” Every day at the internship presented something new and unpredictable. At the conclusion of my internship, I realized I had created something real, something tangible, and it was like everything I had struggled with suddenly fell into place.
My first task was to create a design system for our existing iOS app. I had created design systems in the past for my own projects and applications, but I had never done them retrospectively and never for a design that wasn’t my own. To complete the assignment, I needed to reverse engineer the mockups in Sketch; I would first need to update and optimize the file in order to create the design system.
It was also at this opportune moment when I learned the Sketch program on my computer had been outdated for about a year and a half. I didn’t know about any of the symbols, overrides and other features in the newer versions. Lesson learned: keep your software updated.
Before worrying about the symbols page, I went through the mockups artboard by artboard, making sure they were updated and true to the current released version of the application. Once that was done, I began creating symbols and overrides for different elements. I started with the header and footer and moved on from there.
As a rule of thumb, if an element showed up in more than one page, I would make it a symbol. I added different icons to the design system as I went, building up the library. However, it quickly became clear that the design system was evolving and changing faster than I could try to organize it. Halfway through, I stopped trying to keep the symbols organized, opting instead to go back and reorganize them once I had finished recreating each page. When I stopped going back and forth between mockups and symbols and worrying about the organization for both, I could work more efficiently.
It was easy to come to appreciate the overrides and symbols in Sketch. The features made the program much more powerful than what I was used to and increased the workability of the file for future designs. The task of creating the design system itself challenged me to dive deep into the program as well as understand all the details of the design of our application. I began to notice small inconsistencies in spacing, icon size, or font sizes that I was able to correct as I worked.
The final step was to go back into the symbols page and organize everything. I weeded through all the symbols, deleting those not in use as well as any duplicates. Despite being a little tedious, this was a very valuable step in the process. Going through the symbols after working through the document gave me a chance to reevaluate how I had created the symbols for each page. Grouping them together forced me to consider how they were related throughout the app.
By going through this thought process, I realized how challenging it was to create a naming system. I needed to create a system broad enough to encompass enough elements, specific enough to avoid being vague, and that could easily be understood by another designer. It took me a few tries before I landed upon a workable system that I was happy with. Ultimately, I organized elements according to where they were used in the application, grouping pieces like lists together. It worked well for an application like Badger that had distinct designs for different features in the app. The final product was a more organized file that would be a lot easier to work with for any future design iterations.
As a capstone to this project, I experimented with modernizing the design. I redesigned the headers throughout the app, drawing on native Apple apps for inspiration. Happily, the team was excited about it as well and is considering implementing the changes in future updates to the app.
Overall, working with a Sketch file in such detail was an unexpectedly helpful experience. I left with a much greater fundamental understanding of things like font size, color, and spacing by virtue of redoing every page. The exercise of copying an existing design required a minute attention to detail that was very satisfying. It was like putting together a Lego model: I had all the pieces and knew what the end product needed to look like; I just needed to organize everything and put it together to create the finished product. This is one of the reasons why I enjoy doing UX design. It’s about problem solving and piecing together a puzzle to create something that everyone can appreciate.
Chapter 2: The Design
The next part of my internship allowed me to get into the weeds with some design work. The task: to design a new import page for the Badger web application.
The team was working on redesigning the Badger-to-CRM integration to create a system that allowed users to view any data syncs and manage their accounts themselves. The existing connection involved a lot of hands-on work from Badger CSAs and AEs to set up and maintain. By providing an interface for users to interact directly with the data imports, we wanted to improve the user experience of our CRM integration.
My goal was to design a page that would display errors occurring in any data imports and communicate to users how and where to make the necessary changes to their data. If an import had many associated errors, or if users wanted to view all errors at once, they should be able to download an Excel file of all that information.
Create an import page that informs the user on the status of an import in process;
Provide a historical record of account syncs between Badger and the CRM with detailed errors associated with each import;
Provide links to the CRM for each account that has an import error in Badger;
Allow users to download an Excel file of all outstanding errors.
Badger customer with a CRM account: As a customer with a CRM, I want to be able to connect my CRM to my Badger account and visualize all data syncs, so that I’m aware of all errors in the process and can make changes as necessary.
Badger: As Badger, I want users to be able to manage and view the status of their CRM integration, so that I can save the time and manual work of helping and troubleshooting users as they sync their Badger accounts to their CRMs.
Before I really delved into the design, we needed to go through some thinking to decide what information to show and how:
Bulk versus continuous imports: Depending on the type of user, there are two ways to import data into Badger. If done through spreadsheets, the imports would be batched, and we would be able to visualize them in groups. Users integrated with their CRMs, however, would need their Badger data updated constantly as they made changes within their CRM. The design needed to handle both use cases.
Import records: Because this was a new feature, we weren’t absolutely sure of the user behavior, so deciding how to organize the information was challenging. Should we allow users to infinitely scroll through their import history? How would they search for a specific import? Should they be able to? Should we show the activity day by day or month by month?
Ultimately, we were only able to make a best guess for each of these issues — knowing that we could make appropriate adjustments in the future once users began using the feature. After thinking these issues out, I moved into wireframing. I had the opportunity to design something completely different and this was both liberating and challenging. The final design was a culmination of individual elements from various designs that were created along the way.
The hardest part of this process was learning to start over. I eventually learned that forcing something into my design for solely aesthetic purposes was not ideal. Understanding this and letting my ideas go was key to arriving at a better design. I needed to learn how to start over again and again to explore different ideas.
1. Using white space
Right off the bat, I needed to explore what information we wanted to show on the page. There were many details we could include — and definitely the room to do it.
All the unnecessary information added way too much cognitive load and took away from what the user was actually concerned about. Instead of trying to get rid of all the white space, I needed to work with it. With this in mind, I eventually chucked out all the irrelevant information to show only what we expect our users to be most concerned about: the errors associated with data imports.
This was the final version:
The next challenge was deciding between a sidebar and a header for displaying information. The advantage of the sidebar was that the information would remain visible as the user scrolled. But we also had to ensure that the information contained in the sidebar was logically related to what was going on in the rest of the page.
The header offered the advantage of a clean, single column design. The downside was that it took up a lot of vertical real estate depending on how much information was contained in the header. It also visually prioritized the contents of the header over what was below it for the user.
Once I worked out what information to display where, the sidebar navigation became the more logical decision. We expect users to be primarily concerned with the errors associated with their imports and with a large header, too much of that information would fall below the fold. The sidebar could then be a container for an import and activity summary that would be visible as the user scrolled.
Sidebar design: After I decided on having a sidebar, it came down to deciding what information to include and how to display it.
I struggled to create a design that was visually interesting because there was little information to show. For this reason, I once again found myself adding unnecessary elements to fill up the space, even though I wanted to prioritize the user. I experimented with different content and color combinations, trying to find a compromise between design and usability. The more I worked with it, the more I was able to pare the design down to the bare bones. It became easier to differentiate useful information from filler. The final product is a streamlined design with just a few summary statistics, and it offers great flexibility to include more information in the future.
Import progress: The import progress page was created after the design for the import page was finalized. The biggest design challenge here was deciding how to display an in-progress import sync. I tried different solutions, from pop-ups to overlays, but ultimately settled on showing the progress in the sidebar. This way, users can still resolve any errors and see the historical record of their account data while an import is in progress. To prevent any interruptions to the import, the ‘Sync data’ and ‘Back to Badger’ buttons are disabled so users can’t leave the page.
With the designs done, I moved onto HTML and CSS.
Chapter 3: HTML/CSS
This project was my first experience with any type of coding. Although I had tried to learn HTML and CSS before, I had never reached any level of proficiency. And what better way to start than with a mockup of one’s own design?
Understanding the logic of organizing an HTML document reminded me of organizing the Sketch document with symbols and overrides. However, the similarities ended there. Coding felt like a very alien thing that I was consistently trying to wrap my head around. As my mentor would say, “You’re flexing very different muscles in programming than you are in design.” With the final product in hand now, I’m fully convinced that learning to code is the coolest thing I’ve learned to do since being potty trained.
The first challenge, after setting up a document and understanding the basics, was working with Flexbox. The design I had created involved two columns side by side. The right portion was meant to scroll while the left remained static. Flexbox seemed like a clean solution for this purpose, assuming I could get it to work.
Implementing Flexbox consisted of a lot of trial and error and blind copying of code while I scrambled through various websites, reading tutorials and inspecting code. With guidance from my mentor through this whole process, we eventually got it to work. I will never forget the moment when I finally understood that by using flex-direction: column I would get all of the elements into a single column, and flex-direction: row helped place them in one row.
It makes so much sense now, although my initial understanding of it was the exact opposite (I thought flex-direction: column would put elements in columns next to each other). Surprisingly, I didn’t even come to this realization until after the code was working. I was reviewing my code and realized I didn’t understand it at all. What tipped me off? In my CSS, I had coded flex-direction: row into the class I named column. This scenario was pretty indicative of how the rest of my first coding experience went. My mental model was rarely aligned with the logic of the code, and they often clashed and went separate ways. When this happened, I had to go back, find my misconceptions, and correct the code.
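For anyone who shares my initial confusion, here is the distinction in a minimal sketch (the class names are deliberately literal and are mine, not the project’s):

```css
/* flex-direction names the axis the items flow along,
   not the shape each item forms individually */
.column {
  display: flex;
  flex-direction: column; /* items stack vertically, one per row */
}

.row {
  display: flex;
  flex-direction: row; /* items sit side by side in a single row */
}
```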
After setting up Flexbox, I needed to figure out how to get the left column to stay fixed while the right portion scrolled. It turns out this couldn’t be achieved with a single line of code, as I had hoped. But working through it helped me understand the parent-child relationship, which aided me immensely with the rest of the process.
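One common way to get a static left column beside a scrolling right column is to constrain the flex container’s height and let only the right side scroll. A sketch with hypothetical class names (not the project’s actual code):

```css
.page {
  display: flex;
  height: 100vh; /* constrain the container to the viewport height */
}

.sidebar {
  flex: 0 0 300px; /* fixed-width left column that never scrolls away */
}

.content {
  flex: 1;          /* take up the remaining horizontal space */
  overflow-y: auto; /* scroll independently of the sidebar */
}
```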
Coding the vertical timeline and the dial was also a process. The timeline was simpler than I had originally anticipated: I created a thin rectangle, gave it an inner shadow and a gradient fill, and sized it to the width of each activity log.
The dial was tricky. I tried implementing it with pure CSS with very little success. There were a few times I considered changing the design for something simpler (like a progress bar) but I’m quite happy I stuck with it.
There was a moment in my coding process where I had thrown every line of code I’d ever written into every class to try to make things work. To make up for this lack of foresight, I needed to spend quite a while going through and inspecting all the elements to remove useless code. I felt like a landlord kicking out tenants who weren’t paying rent. It was most definitely a lesson in maintaining a level of housekeeping and being judicious and thoughtful with code.
The majority of the experience felt like blind traversing and retrospective learning. However, nothing was more satisfying than seeing the finished product. Going through the process made me interact with my work in a way I had never done before and gave me insight into how design is implemented. In all of my expectations for the internship, I never anticipated being able to code and create one of my own designs. Even after being told I would be able to do so on my first day, I didn’t believe it until after seeing this page completed.
Chapter 4: Working With Baby Badgers
As part of the process of integrating Badger users with their CRM accounts, we needed our users to sign into their CRM, which required us to redirect them out of Badger to the native CRM website. To prevent a sudden, jarring switch from one website to another, I needed to design intermediate loading pages.
I started out with your run-of-the-mill static redirection pages. They were simple and definitely fulfilled their purpose, but we weren’t quite happy with them.
The challenge was to create something simple and interesting that informed users they were leaving our website, all within the few seconds it was visible. The design would need to introduce itself, explain why it was there, and leave before anyone got tired of looking at it. It was essentially an exercise in speed dating. With that in mind, I decided to try animations, specifically of a cheeky little badger inspired by the existing logo.
Using the badger logo as a starting reference point, I created different badger characters in Adobe Illustrator. The original logo felt a little too severe for a loading animation, so I opted for something a little cuter. I kept the red chest and facial features from the original logo for consistency and worked away at creating a body and head around these elements. The head and stripes took a while to massage into shapes that I was happy with. The body took form a little more easily, but it took a little longer to find the right proportion between the size of the head and the body. Once I nailed that down, I was ready to move on to animating.
My first instinct was to try a stop-motion animation. I figured it was going to be great, à la Wallace and Gromit. But after the first attempt, and then the second, and all the ensuing ones, it became clear that watching that show as a child had not fully equipped me with the skills required to do a stop-motion animation.
I just wasn’t able to achieve the smoothness I wanted, and there were small inconsistencies that felt too jarring for a very short loading animation. Film typically runs at 24 frames per second, and my badger animation only had about 15. I considered adding more frames but, on a suggestion from my mentor, decided to try character animation instead.
This was the first time I had animated anything with more than five moving parts, and there was definitely a learning curve to understanding how to animate a two-dimensional character in a visually satisfying way. I needed to animate the individual elements to move by themselves, independent of the whole, in order to make the motion believable. As I worked on the animation, the layers I imported became increasingly granular. The head went from being one layer to five as I learned the behavior of the program and how to make the badger move.
I anchored each limb of the body and set each body part as a child of the parent body layer. I set the anchor points at the tops of the thighs and shoulders to make sure they moved appropriately and then, using rotations and easing, simulated the movement of the body parts. The head was a tad tricky and required some vertical movement independent of the body. To make the jump seem more realistic, I wanted the head to hang in space a little before being pushed up by the rest of the body, and to come down just slightly after the rest of him. I also adjusted the angle of the head: I tried to make him seem as if he were leading with his nose, pointing up during the jump and facing straight forward while he ran.
The animation featured on the page redirecting users back to Badger showed the baby badger running home with a knapsack full of information from the CRM.
And finally: the confused badger. This was for the last page I needed to create: an error page notifying the user of unexpected complications in the integration process. And what better way to do that than with a sympathetic, confused badger?
The tricky part here was combining the side profile of the existing cartoon badger and the logo to create a front-facing head shape. Before beginning this project, I had never once seen a real live badger. Needless to say, badgers have found their way into my Google image searches this month. I was surprised to see how flat the head of a badger actually is. In my first few designs I tried to mimic this, but I wasn’t satisfied with the result. I worked with the shape some more, adjusting the placement of the nose, the stripes, and the ears to achieve the final result:
This animation process has forced me to take my preexisting knowledge to a higher level. I needed to push myself beyond what I knew rather than limiting myself with what I thought I could do. I originally started with the stop-motion animation because I didn’t trust myself to do character animation. By giving myself the chance to try something new and different, I was able to achieve something that exceeded my own expectations.
The three months I spent at my internship were incredibly gratifying. Every single day was about learning and trying something new. There were challenges to everything I did — even with tasks I was more familiar with such as design. Every time I created something, I was very insecure and apprehensive about how it would be received. There was a lot of self-doubt and lots of discarded ideas.
For that reason, it was incredible to be part of a team and to have a mentor to lead me in the right direction. Being told to try something else was often the only encouragement I needed to achieve something bigger and better. I like to picture myself as a rodent in a whack-a-mole game, hit on the head over and over but always popping up again. Now that the struggles and challenges have come to an end, I only want to do it all over again.
I appreciate what I’ve learned and how I was pushed to go beyond what I thought I could do. It’s crazy to see how far I’ve come in a few months. My understanding of being a UX designer has grown immensely, from figuring out the features, to hammering out the design, and then writing front-end code to implement it. This internship has taught me how much more I have to learn and has motivated me to keep working. I’ve come to understand that what I can do should never be limited by what I know how to do.
If I have a set of items with variable lengths of content inside, and set their parent to display: flex, the items will display as a row and line up at the start of the main axis. In the example below, my three items have a small amount of content and are able to display the content of each item as an unbroken line. There is space at the end of the flex container which the items do not grow into, because the initial value of flex-grow is 0: do not grow.
If I add more text to these items, they eventually fill the container, and the text begins to wrap. The boxes are assigned a portion of the space in the container which corresponds to how much text is in each box — an item with a longer string of text is assigned more space. This means that we don’t end up with a tall skinny column with a lot of text when the next door item only contains a single word.
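The setup these examples describe needs nothing beyond the defaults. As a minimal sketch (the selector name is assumed):

```css
/* Nothing beyond display: flex is needed; each child gets
   the default of flex: 0 1 auto, so items line up at the
   start of the main axis and do not grow into spare space */
.container {
  display: flex;
}
```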
This behavior is likely to be familiar to you if you have ever used Flexbox, but perhaps you have wondered how the browser works that sizing out, since if you look in multiple modern browsers you will see that they all do the same thing. This is down to the fact that details such as this are worked out in the specification, making sure that anyone implementing Flexbox in a new browser or other user agent knows how the calculation is supposed to work. We can use the spec to find this information out for ourselves.
The CSS Intrinsic And Extrinsic Sizing Specification
You fairly quickly discover, when looking at anything about sizing in the Flexbox specification, that a lot of the information you need is in another spec: CSS Intrinsic and Extrinsic Sizing. This is because the sizing concepts we are using aren’t unique to Flexbox, in the same way that the alignment properties aren’t unique to Flexbox. However, for how these sizing constructs are used in Flexbox, you need to look in the Flexbox spec. It can feel a little like you are jumping back and forth, so I’ll round up a few key definitions here, which I’ll be using in the rest of the article.
The preferred size of a box is the size defined by a width or a height, or the logical aliases for these properties, inline-size and block-size. By using width: 500px, or the logical alias inline-size: 500px, you are stating that you want your box to be 500 pixels wide, or 500 pixels in the inline direction.
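As a quick illustration (the selector name is mine), the two declarations look like this:

```css
/* Physical property: 500px wide */
.box {
  width: 500px;
}

/* Logical alias: 500px in the inline direction,
   whatever the writing mode */
.box {
  inline-size: 500px;
}
```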
The min-content size is the smallest size that a box can be without causing overflow. If your box contains text then all possible soft-wrapping opportunities will be taken.
The max-content size is the largest size the box can be to contain the contents. If the box contains text with no formatting to break it up, then it will display as one long unbroken string.
Flex Item Main Size
The main size of a flex item is the size it has in the main dimension. If you are working in a row — in English — then the main size is the width. In a column in English, the main size is the height.
Items also have a minimum and maximum main size, as defined by their min-width or min-height and max-width or max-height in the main dimension.
Working Out The Size Of A Flex Item
Now that we have some terms defined, we can look at how our flex items are sized. The initial values of the flex properties are as follows:
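Written out in longhand, those initial values are:

```css
/* The defaults applied to every flex item */
.item {
  flex-grow: 0;     /* do not grow into spare space */
  flex-shrink: 1;   /* may shrink if the container is too small */
  flex-basis: auto; /* size from the main size property, or the content */
}
```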
The flex-basis is the thing that sizing is calculated from. If we set flex-basis to 0 and flex-grow to 1 then all of our boxes have no starting width, so the space in the flex container is shared out evenly, assigning the same amount of space to each item.
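That equal distribution can be written with the flex shorthand (the selector is illustrative):

```css
/* flex-grow: 1, flex-shrink: 1, flex-basis: 0.
   With no starting width, all space in the container is
   treated as spare and shared out evenly between the items */
.item {
  flex: 1 1 0;
}
```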
This shows us that figuring out what auto means is pretty important if we want to know how Flexbox works out the size of our boxes. The value of auto is going to be our starting point.
When auto is defined as a value for something in CSS, it will have a very specific meaning in that context, one that is worth taking a look at. The CSS Working Group spend a lot of time figuring out what auto means in any context, as this talk by spec editor fantasai explains.
We can find the information about what auto means when used as a flex-basis in the specification. The terms defined above should help us dissect this statement.
“When specified on a flex item, the auto keyword retrieves the value of the main size property as the used `flex-basis`. If that value is itself auto, then the used value is `content`.”
So if our flex-basis is auto, Flexbox has a look at the defined main size property. We would have a main size if we had given any of our flex items a width. In the below example, the items all have a width of 110px, and since the initial value of flex-basis is auto, that width is being used as the main size.
However, our initial example has items which have no width, this means that their main size is auto and so we need to move onto the next sentence, “If that value is itself auto, then the used value is content.”
We now need to look at what the spec says about the content keyword. This is another value that you can use (in supporting browsers) for your flex-basis, for example:
flex: 1 1 content;
The specification defines content as follows:
“Indicates an automatic size based on the flex item’s content. (It is typically equivalent to the max-content size, but with adjustments to handle aspect ratios, intrinsic sizing constraints, and orthogonal flows.)”
In our example, with flex items that contain text, we can ignore some of the more complicated adjustments and treat content as being the max-content size.
So this explains why, when we have a small amount of text in each item, the text doesn’t wrap. The flex items are auto-sized, so Flexbox is looking at their max-content size, the items fit in their container at that size, and the job is done!
The story doesn’t end here, as when we add more content the boxes don’t stay at max-content size. If they did they would break out of the flex container and cause overflow. Once they fill the container, the content begins to wrap and the items become different sizes based on the content inside them.
Resolving Flexible Lengths
It’s at this point that the specification starts to look reasonably complex; however, the steps that need to happen are as follows:
First, add up the main size of all the items and see if it is bigger or smaller than the available space in the container.
If the container size is bigger than the total, we are going to care about the flex-grow factor, as we have space to grow.
If the container size is smaller than the total then we are going to care about the flex-shrink factor as we need to shrink.
Freeze any inflexible items, which means that we can decide on a size for certain items already. If we are using flex-grow, this would include any items which have flex-grow: 0. This is the scenario we have when our flex items have space left in the container. The initial value of flex-grow is 0, so they get as big as their max-content size and then they don’t grow any more from their main size.
If we are using flex-shrink then this would include any items with flex-shrink: 0. We can see what happens in this step if we give our set of flex items a flex-shrink factor of 0. The items become frozen in their max-content state and so do not flex and arrange themselves to fit in the container.
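That frozen state can be sketched like so (class name is a placeholder):

```css
.item {
  /* flex-grow: 0, flex-shrink: 0, flex-basis: auto:
     the item freezes at its max-content size and will not shrink,
     even if the items together overflow the container */
  flex: 0 0 auto;
}
```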
In our case — with the initial values of flex items — our items can shrink. So the steps continue and the algorithm enters a loop in which it works out how much space to assign or take away. In our case we are using flex-shrink as the total size of our items is bigger than the container, so we need to take away space.
The flex-shrink factor is multiplied by the item’s inner base size, in our case that is the max-content size. This gives a value with which to reduce space. If items removed space only according to the flex-shrink factor then small items could essentially vanish, having had all of their space removed, while the larger item still has space to shrink.
There is an additional step in this loop to check for items which would become smaller or larger than their target main size, in which case the item stops growing or shrinking. Again, this is to avoid certain items becoming tiny, or massive in comparison to the rest of the items.
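The shrink loop described above can be modeled in a few lines of JavaScript. This is a simplified sketch that ignores the min/max clamping step and other edge cases, and `resolveFlexShrink` is a made-up name, not part of any API:

```javascript
// Simplified model of how Flexbox removes space when the items overflow
// the container: each item gives up space in proportion to its
// flex-shrink factor multiplied by its inner base size.
function resolveFlexShrink(containerSize, items) {
  const totalBase = items.reduce((sum, item) => sum + item.base, 0);
  const deficit = totalBase - containerSize;
  if (deficit <= 0) {
    // Nothing overflows; every item keeps its base size.
    return items.map((item) => item.base);
  }
  // Scaled shrink factor: flex-shrink multiplied by the base size,
  // so larger items give up proportionally more space.
  const totalScaled = items.reduce(
    (sum, item) => sum + item.shrink * item.base, 0
  );
  return items.map(
    (item) => item.base - deficit * ((item.shrink * item.base) / totalScaled)
  );
}

// Two items (200px and 100px at max-content) in a 240px container:
console.log(resolveFlexShrink(240, [
  { base: 200, shrink: 1 },
  { base: 100, shrink: 1 },
])); // → [160, 80]
```

Note how the larger item loses 40 pixels while the smaller loses only 20, which is exactly the “removed in proportion to the max-content size” behavior described below.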
All of that was simplified in terms of the spec, as I’ve not looked at some of the more edge-casey scenarios, and you can generally simplify further in your mind, assuming you are happy to let Flexbox do its thing and are not after pixel perfection. Remembering the following two facts will work in most cases.
If you are growing from auto then the flex-basis will either be treated as any width or height on the item or the max-content size. Space will then be assigned according to the flex-grow factor using that size as a starting point.
If you are shrinking from auto then the flex-basis will either be treated as any width or height on the item or the max-content size. Space will then be removed according to the flex-basis size multiplied by the flex-shrink factor, and therefore removed in proportion to the max-content size of the items.
Controlling Growing And Shrinking
I’ve spent most of this article describing what Flexbox does when left to its own devices. You can, of course, exercise greater control over your flex items by using the flex properties. They will hopefully seem more predictable with an understanding of what is happening behind the scenes.
By setting your own flex-basis, or giving the item itself a size which is then used as the flex-basis, you take back control from the algorithm, telling Flexbox that you want to grow or shrink from this particular size. You can turn off growing or shrinking altogether by setting flex-grow or flex-shrink to 0. On this point, however, it is worth treating the desire to tightly control flex items as a prompt to check whether you are using the right layout method. If you find yourself trying to line up flex items in two dimensions then you might be better choosing Grid Layout.
Debugging Size Related Issues
If your flex items are ending up an unexpected size, then this is usually because your flex-basis is auto and there is something giving that item a width, which is then being used as the flex-basis. Inspecting the item in DevTools may help identify where the size is coming from. You can also try setting a flex-basis of 0 which will force Flexbox to treat the item as having zero width. Even if this isn’t the outcome that you want, it will help to identify the flex-basis value in use as being the culprit for your sizing issues.
A much-requested feature of Flexbox is the ability to specify gaps or gutters between flex items in the same way that we can specify gaps in grid layout and multi-column layout. This feature is specified for Flexbox as part of Box Alignment, and the first browser implementation is on the way. Firefox expects to ship the gap properties for Flexbox in Firefox 63. The following example can be viewed in Firefox Nightly.
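A minimal version of such an example (the 10px value is illustrative):

```css
.container {
  display: flex;
  flex-wrap: wrap;
  /* At the time of writing, supported for Flexbox in Firefox 63+ */
  gap: 10px;
}
```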
As with grid layout, the length of the gap is taken into account before space is distributed to flex items.
In this article, I’ve tried to explain some of the finer points of how Flexbox works out how big the flex items are. It can seem a little academic, however, taking some time to understand the way this works can save you huge amounts of time when using Flexbox in your layouts. I find it really helpful to come back to the fact that, by default, Flexbox is trying to give you the most sensible layout of a bunch of items with varying sizes. If an item has more content, it is given more space. If you and your design don’t agree with what Flexbox thinks is best then you can take control back by setting your own flex-basis.
Earlier this year, a man drove his car into a lake after following directions from a smartphone app that helps drivers navigate by issuing turn-by-turn directions. Unfortunately, the app’s programming did not include instructions to avoid roads that turn into boat launches.
From the perspective of the app, it did exactly what it was programmed to do, i.e. to find the most optimal route from point A to point B given the information made available to it. From the perspective of the man, it failed him by not taking the real world into account.
The same principle applies for accessibility testing.
Automated Accessibility Testing
I am going to assume that you’re reading this article because you’re interested in learning how to test your websites and web apps to ensure they’re accessible. If you want to learn more about why accessibility is necessary, the topic has been covered extensively elsewhere.
Automated accessibility testing is a process where you use a series of scripts to test for the presence, or lack of certain conditions in code. These conditions are dictated by the Web Content Accessibility Guidelines (WCAG), a standard by the W3C that outlines how to make digital experiences accessible.
For example, an automated accessibility test might check to see if the tabindex attribute is present, and if its value is greater than 0. The pseudocode would be something like:
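A sketch of such a check in JavaScript (hypothetical; `checkTabindex` is a made-up name, and a real tool would walk the live DOM rather than a plain array of objects):

```javascript
// Flags elements whose tabindex attribute is present with a value
// greater than 0, which disrupts the page's natural focus order.
function checkTabindex(elements) {
  return elements.filter((el) => {
    if (el.tabindex === undefined) return false; // attribute not present
    const value = parseInt(el.tabindex, 10);
    return !Number.isNaN(value) && value > 0; // flag positive values only
  });
}

const failures = checkTabindex([
  { tag: "button", tabindex: "2" }, // flagged: positive tabindex
  { tag: "a", tabindex: "0" },      // fine: 0 keeps natural order
  { tag: "div" },                   // fine: no tabindex at all
]);
console.log(failures.length); // → 1
```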
Failures can then be collected and used to generate reports that disclose the number, and severity of accessibility issues. Certain automated accessibility products can also integrate as a Continuous Integration or Continuous Deployment (CI/CD) tool, presenting just-in-time warnings to developers when they attempt to add code to a central repository.
These automated programs are incredible resources. Modern websites and web apps are complicated things that involve hundreds of states, thousands of lines of code, and complicated multi-screen interactions. It’d be absurd to expect a human (or a team of humans) to mind all the code controlling every possible permutation of the site, to say nothing of things like regressions, software rot, and A/B tests.
Automation really shines here. It can repeatedly and tirelessly pore over these details with perfect memory, at a rate far faster than any human is capable of.
Automated accessibility tests aren’t a turnkey solution, nor are they a silver bullet. There are some limitations to keep in mind when using them.
Thinking To Think Of Things
One of both the best and worst aspects of the web is that there are many different ways to implement a solution to a problem. While this flexibility has kept the web robust and adaptable and ensured it outlived other competing technologies, it also means that you’ll sometimes see code that is, um, creatively implemented.
For example, the automated accessibility testing site Tenon.io wisely includes a rule that checks to see if a form element has both a label element and an aria-label associated with it, and if the text strings for both declarations differ. If they do, it will flag it as an issue, as the visible label may be different than what someone would hear if they were navigating using a screen reader.
If you’re not using a testing service that includes this rule, it won’t be reported. The code will still “pass”, but it’s passing by omission, not because it’s actually accessible.
Some automated accessibility tests cannot parse the various states of interactive content. Critical parts of the user interface are effectively invisible to automation unless the test is run when the content is in an active, selected, or disabled state.
By interactive content, I mean things that the user has yet to take action on, or aren’t present when the page loads. Unopened modals, collapsed accordions, hidden tab content and carousel slides are all examples.
It takes sophisticated software to automatically test the various states of every component within a single screen, let alone across an entire web app or website. While it is possible to augment testing software with automated accessibility checks, it is very resource-intensive, usually requiring a dedicated team of engineers to set up and maintain.
The mere presence of ARIA does not guarantee that it will automatically make something accessible. Unfortunately, and in spite of its first rule of use, ARIA is commonly misunderstood and, consequently, abused. A lot of off-the-shelf code has this problem, perpetuating the issue.
<!-- Never do this -->
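A typical instance of the kind of misuse warned about above (a hypothetical illustration, not the original snippet):

```html
<!-- A div given a button role but no tabindex or keyboard handling:
     it is announced as a button, yet keyboard and assistive technology
     users cannot focus or activate it -->
<div role="button" onclick="submitForm()">Submit</div>
```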
To further complicate the issue, support for ARIA is varied across browsers. While an attribute may be used appropriately, the browser may not communicate the declared role, property, or state to assistive technology.
There is also the scenario where ARIA can be applied to an element and be valid from a technical standpoint, yet be unusable from an assistive technology perspective. For example:
<h1 aria-hidden="true">Tired of unevenly cooked asparagus? Try this tip from the world’s oldest cookbook.</h1>
Headings — especially first-level headings — are vital in communicating the purpose of a page. If a person is using assistive technology to navigate, the aria-hidden declaration applied to the h1 element will make it difficult for them to quickly determine the page’s purpose. It will force them to navigate around the rest of the page to gain context, an annoying and labor-intensive process.
Some automated accessibility tests may scan the code and not report an error since the syntax itself is valid. The automation has no way of knowing the greater context of the declaration’s use.
This isn’t to say you should completely avoid using ARIA! When authored with care and deliberation, ARIA can fix the gaps in accessibility that sometimes plague complicated interactions; it provides some much-needed context to the people who rely on assistive technology.
As the soggy car demonstrates, computers are awful at understanding the overall situation of the outside world. It’s up to us humans to be the ultimate arbiters in determining if what the computer spits out is useful or not.
Before we discuss how to provide appropriate context, there are a few common misunderstandings about accessibility work that need to be addressed:
Second, accessibility is more than just screen readers. The rules outlined in the Web Content Accessibility Guidelines ensure that the largest number of people can read and operate technology, regardless of ability or circumstance.
Third, disabilities can be conditional and can be brought about by your environment. It can be a short-term thing, like rain on your glasses, sleep deprivation, or an allergies-induced migraine. It can also be longer-term, such as a debilitating illness, broken limb, or a depressive episode. Multiple, compounding conditions can (and do) affect individuals.
That all being said, many accessibility fixes that help screen readers work properly also benefit other assistive technologies.
Get Your Feet Wet
Knowing where to begin can be overwhelming. Consider Michiel Bijl’s great advice:
“Before you release a website, tab through it. If you cannot see where you are on the page after each tab; you're not finished yet. #a11y”
Tab through a few of the main user flows on your website or web app to determine if all interactive components’ focus states are visually apparent, and if they can be activated via keyboard input. If there’s something you can click or tap on that isn’t getting highlighted when receiving keyboard focus, take note of it. Also pay attention to the order interactive components are highlighted when focused — it should match the reading order of the site.
If you need a baseline to compare your testing to, Dave Rupert has an excellent project called A11Y Nutrition Cards, which outlines expected behavior for common interactive components. In addition, Scott O’Hara maintains a project called a11y Styled Form Controls. This project provides examples of components such as switches, checkboxes, and radio buttons that have well-tested and documented support for assistive technology. A clever reader might use one of these resources to help them try out the other!
The Fourth Myth
With that out of the way, I’m going to share a fourth myth with you: not every assistive technology user is a power user. Like with any other piece of software, there’s a learning curve involved.
In her post about Aaptiv’s redesign, Lisa Zhu discovers that their initial accessibility fix wasn’t intuitive. While their first implementation was “technically” correct, it didn’t line up with how people who rely on VoiceOver actually use their devices. A second solution simplified the interaction to better align with their expectations.
Don’t assume that just because something hypothetically functions that it’s actually usable. Trust your gut: if it feels especially awkward, cumbersome, or tedious to operate for you, chances are it’ll be for others.
Dive Right In
While not every accessibility issue is a screen reader issue, you should still get in the habit of testing your site with one. Not an emulator, simulator, or some other proxy solution.
If you find yourself struggling to operate a complicated interactive component using basic screen reader commands, it’s probably a sign that the component needs to be simplified. Chances are that the simplification will help non-assistive technology users as well. Good design benefits everyone!
The same goes for navigation. If it’s difficult to move around the website or web app, it’s probably a sign that you need to update your heading structure and landmark roles. Both of these features are used by assistive technology to quickly and efficiently navigate.
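As a rough sketch, the kind of structure that supports this navigation looks like the following (the headings and comments are placeholders):

```html
<header>
  <h1>Page title</h1>
  <nav aria-label="Primary"><!-- site navigation --></nav>
</header>
<main>
  <h2>Section heading</h2>
  <!-- page content -->
</main>
<footer><!-- site information --></footer>
```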
Another good thing to review is the text content used to describe your links. Hopping from link to link is another common assistive technology navigation technique; some screen readers can even generate a list of all link content on the page:
“Think before you link! Your "helpful" click here links look like this to a screen reader user. ALT = JAWS links list”
When navigating using an ordered list devoid of the surrounding non-link content, avoiding ambiguous terms like “click here” or “more info” can go a long way to ensuring a person can understand the overall meaning of the page. As a bonus, it’ll help alleviate cognitive concerns for everyone, as you are more accurately explaining what a user should expect after activating a link.
How To Test
Each screen reader has a different approach to how it announces content. This is intentional. It’s a balancing act between the product’s features, the operating system it is installed on, the form factor it is available in, and the types of input it can receive.
The Browser Wars taught us the folly of developing for only one browser; similarly, we should not cater to a single screen reader. Many people rely exclusively on a specific screen reader and browser combination (by circumstance, preference, or necessity), making this all the more important. However, there is a caveat: each screen reader works better when used with a specific browser, typically the one that allows it access to the greatest amount of accessibility API information.
All of these screen readers can be used for free, provided you have the hardware. You can also virtualize that hardware, either for free or on the cheap.
Automated accessibility tests should be your first line of defense. They will help you catch a great deal of nitpicky, easily-preventable errors before they get committed. Repeated errors may also signal problems in template logic, where one upstream tweak can fix multiple pages. Identifying and resolving these issues allows you to spend your valuable manual testing time much more wisely.
It may also be helpful to log accessibility issues in a place where people can collaborate, such as Google Sheets. Quantifying the frequency and severity of errors can lead to good things like updated documentation, opportunities for lunch and learn education, and other healthy changes to organizational workflow.
The two most popular screen readers on Windows are JAWS and NVDA.
JAWS (Job Access With Speech) is the most popular and feature-rich screen reader on the market. It works best with Firefox and Chrome, with concessions for supporting Internet Explorer. Although it is pay software, it can be operated in full in demo mode for 40 minutes at a time (this should be more than sufficient to perform basic testing).
Google recently folded TalkBack, their mobile screen reader, into a larger collection of accessibility services called the Android Accessibility Suite. It works best with Mobile Chrome. While many Android apps are notoriously inaccessible, it is still worth testing on this platform. Android’s growing presence in emerging markets, as well as increasing internet use amongst elderly and lower-income demographics, should give pause for consideration.
If you do not require the use of assistive technology on a frequent basis, then you do not fully understand how the people who do rely on it interact with the web.
Much like traditional user testing, being too close to the thing you created may cloud your judgment. Empathy exercises are a good way to become aware of the problem space, but you should not use yourself as a litmus test for whether the entire experience is truly accessible. You are not the expert.
If your product serves a huge population of users, if its core base of users trends towards having a higher probability of disability conditions (specialized product, elderly populations, foreign language speakers, etc.), and/or if it is required to be compliant by law, I would strongly encourage allocating a portion of your budget for testing by people with disabilities.
“At what point does your organisation stop supporting a browser in terms of % usage? 18% of the global pop. have an #Accessibility requirement, 2% people have a colour vision deficient. But you consider 2% IE usage support more important? Support everyone be inclusive.”
This isn’t to say you should completely delegate the responsibility to these testers. Much as how automated accessibility testing can detect smaller issues to remove, a first round of basic manual testing helps professional testers focus their efforts on the complicated interactions you need an expert’s opinion on. In addition to optimizing the value of their time, it helps to get you more comfortable triaging. It is also a professional courtesy, plain and simple.
There are a few companies that perform manual testing by people with disabilities:
We also need to acknowledge the other large barrier to accessible sites that can’t be automated away: poor user experience.
User experience can make or break a product. Your code can compile perfectly, your time to first paint can be lightning quick, and your Webpack setup can be beyond reproach. All this is irrelevant if the end result is unusable. User experience encompasses all users, including those who navigate with the aid of assistive technology.
If a person cannot operate your website or web app, they’ll abandon it and not think twice. If they are forced to use your site to get a service unavailable by other means, there’s a growing precedent for taking legal action (and rightly so).
As a discipline, user experience can be roughly divided into two parts: how something looks and how it behaves. They’re intrinsically interlinked concepts; work on either may affect both. While accessible design is a topic unto itself, there are some big-picture things we can keep in mind when approaching accessible user experiences from a testing perspective:
How It Looks
The WCAG does a great job covering a lot of the basics of good design. Color contrast, font size, user-facing state: a lot of these things can be targeted by automation. What you should pay attention to is all the atomic, difficult to quantify bits that compound to create your designs. Things like the words you choose, the fonts you use to display them, the spacing between things, affordances for interaction, the way you handle your breakpoints, etc.
“A good font should tell you: the difference between m and rn the difference between I and l the difference between O and 0.”
It’s one of those “an ounce of prevention is worth a pound of cure” situations. Smart, accessible defaults can save countless time and money down the line. Lean and mean startups all the way up to multinational conglomerates value efficient use of resources, and this is one of those places where you can really capitalize on that. Put your basic design patterns — say collected in something like a mood board or living style guide — in front of people early and often to see if your designed intent is clear.
How It Behaves
An enticing color palette and collection of thoughtfully-curated stock photography only go so far. Eventually, you’re going to have to synthesize all your design decisions to create something that addresses a need.
Behavior can be as small as a microinteraction, or as large as finding a product and purchasing it. What’s important here is to make sure that all the barriers to a person trying to accomplish the task at hand are removed.
If you’re using personas, don’t create a separate persona for a user with a disability. Instead, blend accessibility considerations into your existing ones. As a persona is an abstracted representation of the types of users you want to cater to, you want to make sure the kinds of conditions they may be experiencing are included. Disability conditions aren’t limited to just physical impairments, either. Things like a metered data plan, non-native language, or anxiety are all worth integrating.
“When looking at your site's analytics, remember that if you don't see many users on lower end phones or from more remote areas, it's not because they aren't a target for your product or service. It is because your mobile experience sucks. As a developer, it's your job to fix it.”
User testing, ideally simulating conditions as close to what a person would be doing in the real world (including their individual device preferences and presence of assistive technology), is also key. Verifying that people are actually able to make the logical leaps necessary to operate your interface addresses a lot of cognitive concerns, a difficult-to-quantify yet vital thing to accommodate.
We Shape Our Tools, Our Tools Shape Us
Our tool use corresponds to the kind of work we do: Carpenters drive nails with hammers, chefs cook using skillets, surgeons cut with scalpels. It’s a self-reinforcing phenomenon, and it tends to lead to over-categorization.
Sometimes this over-categorization gets in the way of us remembering to consider the real world. A surgeon might have a carpentry hobby; a chef might be a retired veterinarian. It’s important to understand that accessibility is everyone’s responsibility, and there are many paths to making our websites and web apps the best they can be for everyone. To paraphrase Mikey Ilagan, accessibility is a holistic practice, essential to some but useful to all.
Used with discretion, ARIA is a very good tool to have at our disposal. We shouldn’t shy away from using it, provided we understand the how and why behind it.
The same goes for automated accessibility tests, as well as GPS apps. They’re great tools to have, just get to know the terrain a little bit first.