Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.
Unfortunately, user research isn’t always as easy as it sounds.
Do any of the following sound familiar:
During your research sessions, your participants don’t understand what they have been asked to do?
The phrasing of your questions has given away the answer or has caused bias in your results?
During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided?
After conducting participant sessions, you spend more time analyzing the research design than the actual results?
If you’ve experienced any of these, don’t worry. You’re not alone.
Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.
Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.
Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project.
Just as proper experiment design is a necessity, investing time to critique, test, and iteratively improve your research design before the execution phase ensures that your user research runs smoothly and dramatically improves the outputs of your study.
And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.
Start with the process
At WiderFunnel, our research approach is unique for every project, but always follows a defined process:
Developing a defined research approach (Methodology, Tools, Participant Target Profile)
Pilot testing of research design
Recruiting qualified research participants
Execution of research
Analyzing the outputs
Reporting on research findings
Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.
Your research should always start with asking the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.
WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.
During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:
Our high-level goals for what we are aiming to learn
The tools we are going to use
The tasks participants will be asked to perform
The research participant sample size, and
The participant target profile
Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, is the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.
However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret; thus limiting the number of actionable insights generated.
To overcome this, WiderFunnel uses the following approach when creating research questions:
Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from our initial list and reworking those that remain.
Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.
This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.
Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.
Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
Testing of the tasks and questions: This involves testing the actual research workflow
Research timing: We evaluate the timing of each task and the overall experiment
Let’s look at an example.
Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.
With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.
Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.
We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.
In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.
Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.
At this point, our research and strategy team conducted an internal research design review. We examined the research tasks and flow along with the associated timing, and finalized the survey questions.
In this case, we used open-ended questions to avoid biasing the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that questions complemented each other, and stay focused on achieving the learning goal: exploring customer understanding of the new service.
To help with question clarity, we used Grammarly to test the structure of each question.
Following the internal design review, we began participant pilot testing.
Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option on the Sticky platform. To overcome this, we got creative and used some free tools to test the research design.
We chose a Keynote presentation (with timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the pilot sessions. Using these tools, we were able to conduct a quick and affordable pilot test.
The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately exposed flaws in the research design, including confusion about the test instructions and issues with the timing of each task.
In this case, our initial instructions did not give our participants enough context about what they were looking at, resulting in confusion about what they were actually supposed to do. Additionally, we had assumed that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content-rich, and 5 seconds did not give participants enough time to view everything on the page.
With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.
In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a distinct function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.
The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.
In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.
Still not convinced about the value of pilot testing? Here’s one final thought.
By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.
Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.
Do you agree that pilot testing is an essential part of all research projects?
Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!
Digital Marketing Agency RevUnit rocked the house for their client by turning a deceptively simple idea into a 400% lift in PPC conversions.
When I first met Seth Waite over a Google Hangout a few weeks ago, he mentioned that his agency, RevUnit, had done some “pretty fun things with Unbounce” for clients.
It took a little while for me to understand what Seth really meant by “fun;” he meant innovative, experimental digital marketing that actually moves the needle on results. I’ll admit, fun isn’t the first word I’d use to describe Seth’s story.
It’s also deceptively simple.
Based out of Las Vegas, Seth is the CMO at RevUnit, a full-scale digital agency that takes pride in their ability to “Build Small. Learn Fast. Iterate Often.”
This is the story of how Seth’s team at RevUnit used Unbounce to iterate on a PPC campaign — and it all started with a simple audit.
A little bit of background
RevUnit’s newest client, School of Rock, had a little bit of an AdWords addiction. Their PPC spending was on overdrive. But the ROI? Well, there was room for improvement.
School of Rock is a music school with more than 160 franchise locations worldwide. They came to RevUnit after experiencing poor-performing AdWords campaigns with a specialized PPC agency. Lead acquisition via PPC for new enrollments was slow and lagging.
School of Rock’s main goal was to drive new student enrollment to individual franchises. In other words, they needed to get more students signed up for music classes at one of the more than 160 locations worldwide.
The question was, how could they increase enrollments and lower the cost of acquisition at the same time?
It all started with a simple audit
Before digging in and building new campaigns from scratch, RevUnit performed a full audit of School of Rock’s AdWords account, concentrating on keywords, ads and landing pages.
The AdWords account consisted of 160+ campaigns, 800,000+ keywords and 160+ landing pages. It’s important to note that each campaign represents a franchise location (for instance, “School of Rock Scottsdale” is a single campaign) and each of those franchise locations had its own dedicated landing page.
During the audit Seth’s team found some pretty common mistakes, particularly with the landing pages associated with each campaign. Here’s what they were working with in the beginning:
Problems with the “before” landing pages:
Pages were very slow to load. Search engines like Google see this as a poor experience for users, and as a result, penalize pages with a lower quality score.
The lead forms embedded in each landing page were pretty long. Too many form fields can create friction for visitors, meaning they’re less likely to complete the form (and more likely to bounce).
There were some general design and copy issues, the biggest being that content was not designed for easy reading. While there was a lot of information on the pages, it did not tell a compelling story.
The pages did not mirror their upstream ads. Without a strong message match, visitors are more likely to bounce, again resulting in a lower quality score from Google.
Campaigns weren’t enabled with click-to-call tracking, so it was impossible to measure how many phone calls were generated from AdWords activities.
Seth’s team hypothesized that if they tackled each of the problems above, School of Rock would yield better results from their AdWords campaigns.
But (and this was a pretty big ‘but’), they couldn’t really afford to tackle 160 different landing pages without knowing for sure.
Here’s the good part
Instead of jumping in willy-nilly, Seth’s team decided to use Unbounce to create a template for just one of the franchise locations. Basically, he created a single landing page to test out his hypothesis. The idea was that if the template actually increased enrollment for one franchise location, it could be replicated for the others.
Sidnee Schaefer, RevUnit’s Senior Marketing Strategist, then went to the whiteboard with Seth and other members of the team to design the new strategic landing pages. After creating a mockup of the new page’s layout, Sidnee jumped into the Unbounce builder to implement the design.
The newly designed landing page template aimed to tell an easy-to-digest story in a clean, well-structured format. The page was built to create the shortest path to conversion without sacrificing need-to-know information.
According to Seth,
Every brand has a very different story and we knew how important it was to tell the story of how School of Rock is different than the average music school. We designed the page to reflect this brand positioning.
For the new School of Rock landing pages, content was strategically placed into sections covering who, what, where and why (including reviews). “We kept the copy clear and strong to avoid burdening people with too much information,” says Seth.
RevUnit also used Zapier to bridge a connection between Unbounce and School of Rock’s CRM system, so new leads go directly to franchises once submitted.
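The hand-off from landing page to CRM is essentially a field mapping. The sketch below shows the general shape of such an integration; the field names on both sides are hypothetical, since the post doesn’t describe School of Rock’s actual form fields or CRM schema:

```python
def lead_to_crm(payload: dict) -> dict:
    """Map a landing-page form submission to a CRM lead record.

    A minimal sketch of the Unbounce -> Zapier -> CRM hand-off described
    above. All field names here are invented for illustration; a real
    integration would follow the CRM's documented schema.
    """
    return {
        "contact_name": payload.get("name", "").strip(),
        "contact_email": payload.get("email", "").strip().lower(),
        # Route the lead to the right franchise location.
        "franchise_id": payload.get("location_id"),
        "source": "landing-page",
    }
```

The key design point is normalization (trimmed names, lowercased emails) happening before the record reaches sales, so franchises receive clean, immediately actionable leads.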
The result of RevUnit’s pilot was pretty convincing: a 75% increase in average weekly conversions and a 50% decrease in cost per conversion. And all these new leads were acquired using half the budget.
But that’s not all.
Seth didn’t stop with “good enough” – that’s just not his kind of fun.
Here’s the even *better* good part
The cherry on top of this masterminded plan is how RevUnit implemented Dynamic Text Replacement (DTR) to really match Google search queries with the landing page’s headline.
DTR is an Unbounce feature that lets you tailor the text on your landing page to match keyword parameters, pay-per-click (PPC) campaigns, and other sources, using external variables you can attach to the URL.
DTR automatically updates specified content on your page (like a word in your headline) based on a visitor’s search query. RevUnit used DTR on their client’s landing page to ensure each visitor was served up the most relevant copy possible.
We used dynamic content on the landing page which allowed us to show personalized content to different site visitors based on keywords and locations from the ads. This helped us match the perfect ad with the perfect landing page.
In other words, when a searcher types in “drum lessons, Scottsdale, AZ,” dynamic text replacement matches the landing page headline to the Google search query. As a result, when the visitor clicks through to the School of Rock landing page, the headline looks something like this: “Scottsdale Drum Lessons.”
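To make the mechanics concrete, here is a minimal sketch of the idea behind dynamic text replacement: the ad appends keyword and location parameters to the landing page URL, and the page substitutes them into the headline. The parameter names (`kw`, `loc`) and the fallback copy are invented for illustration and are not Unbounce’s actual API:

```python
from urllib.parse import urlparse, parse_qs

DEFAULT_HEADLINE = "Music Lessons for All Ages"  # fallback when no params arrive

def build_headline(url: str) -> str:
    """Build a landing-page headline from URL parameters.

    Sketch of the dynamic-text-replacement idea: the ad's URL carries
    the keyword and location, and the page substitutes them into the
    headline, falling back to default copy when a parameter is missing.
    """
    params = parse_qs(urlparse(url).query)
    keyword = params.get("kw", [None])[0]
    location = params.get("loc", [None])[0]
    if keyword and location:
        return f"{location} {keyword}"
    if keyword:
        return keyword
    return DEFAULT_HEADLINE
```

So a click on an ad pointing at `...?kw=Drum+Lessons&loc=Scottsdale` would render “Scottsdale Drum Lessons”, while untagged traffic sees the generic headline.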
A strong message match between the traffic source (PPC ad, social media, dedicated email or otherwise) and the landing page headline helps visitors understand that they are in the right place (and prompts thoughts like “yes, this is exactly what I was looking for!”).
According to Seth, here’s why DTR was a game changer for this campaign, “because our PPC keyword strategy was very focused on instrument lessons (guitar, piano, etc), we’d need five landing pages (a different landing page for each instrument type) for each franchise location.”
This would normally have been a painful and time-consuming undertaking but, as Seth put it, “Unbounce had a solution.”
Here’s how they used DTR:
We strategically designed the pages with DTR in mind, so that instrument keywords could be placed throughout the page. Instead of having to create 750+ landing pages, we only had to create one for each franchise location.
After the pilot’s stellar performance, Seth knew with confidence that it was time to roll it out to the rest of the 160+ School of Rock franchise locations.
Again, the results were incredible:
The number of monthly conversions improved by 250%, and the cost per conversion decreased by 82%. School of Rock has seen a huge improvement in their AdWords ROI, and their lead volume has stabilized.
What did this mean for School of Rock? Well, according to Seth, the “average value of improvements made based on customer lifetime value is potentially a 400% increase in yearly revenue based on new leads.”
The numbers are impressive but the best part of this story is that it’s easy for data-driven marketers to replicate. Start with a guess – a hunch, a hypothesis, an idea – and test it out. In other words, “Build Small. Learn Fast. Iterate Often.”
Not everything that glitters is gold. Only by testing can you know for sure if you’ve hit the jackpot.
So far, video backgrounds have been implemented fairly successfully on websites (they add a certain cool-factor, right?), but there is some debate over whether or not they should be used on landing pages. While video backgrounds may look beautiful, initial research reveals that they could prove too distracting for some landing pages, and could contribute to lower conversion rates.
As is the case with most new innovations in web design, it can be tempting to use this new technology without a clear understanding of how it affects conversion.
Nonetheless, marketers love video backgrounds: they are modern, appeal to the inner design ego in all of us and have already been hailed as one of the biggest design trends of 2016. Trendy marketers have made it clear that they definitely want to use them on landing pages.
In fact, when Unbounce released video backgrounds as a built-in feature, it became one of the most popular discussions in our community. Ever. And, when we opened it up for beta testing, we got some pretty enthusiastic responses.
Like Jon here…
And, of course, Gary…
So, video backgrounds on a website? Go for it. But video backgrounds on a landing page? Not so fast.
Here’s why: Video backgrounds can make pages load slower and distract visitors from your Call to Action (CTA). And since every great landing page has only one end goal (conversions), this raises the question: Should we nix the idea of using video backgrounds altogether?
Well, not entirely.
Like anything else you implement on a landing page, you’re going to want to test that puppy out thoroughly to see what effect (if any) it has on conversion rates.
Here at Unbounce, we’ve been testing out the use of video backgrounds on landing pages. Based on our results, we’ve come up with some guidelines outlining when to use a video background versus a static hero image, and best practices for applying a video background.
When should you use a video background on a landing page?
I looped in Unbounce’s senior conversion expert, Michael Aagaard, to explain how using a video background on landing pages has worked for us:
We’ve been experimenting with video backgrounds for a while now. What we see is a tendency for video backgrounds to work well on landing pages where the goal is to communicate a certain “vibe” or “feeling.”
In other words, video backgrounds could work well on landing pages that promote a unique atmosphere, like a conference, performing arts event or restaurant.
Video backgrounds can help demonstrate a hard-to-describe experience or atmosphere.
When shouldn’t you use a video background on a landing page?
Aagaard explains that video backgrounds could have an adverse effect on landing pages when there’s a complex sales offer at stake. When that’s the case, he recommends concentrating on the landing page copy to convince users to convert:
With more complex offers where you need to read a lot of copy in the first screenful, video backgrounds can be a bit distracting.
Copy has a direct and measurable effect on landing page conversions. If your offer requires a lot of explaining, use your words rather than running the risk of distracting visitors with video.
The Unbounce house rules for using video backgrounds
Landing pages are different from websites, and thus deserve their own set of laws for applying video backgrounds. Here’s our (not-yet-foolproof) list of ground rules for using video backgrounds on a landing page. Is this a comprehensive, complete, end-all, be-all list? Of course not! Join the dialogue and add your own rules and/or lessons learned in the comments below.
1. Avoid major distractions
Keep the conversion goal front and center. The video background content should always support the overall goal of the page. ConversionXL founder Peep Laja has a similar opinion:
Video that doesn’t add value works against the conversion goal.
Essentially, video backgrounds shouldn’t distract visitors from the primary goal of the page — rather, they should supplement or enhance the CTA.
The video background on this landing page enhances the CTA without distracting visitors.
2. Contrast is essential
In most cases, you’ll want to have some text layered on top of the video background — make sure it stays legible throughout the entire video loop. Generally, aim for a strong light/dark contrast between the video background and the copy.
One way to ensure full, legible contrast is by applying a solid, monochromatic filter on top of the video. Not only does this look super professional, but also the color contrast makes the text, form and CTA on the landing page really pop.
The monochromatic filter applied on top of this video background makes the text and CTA really pop. BTW, like this ^? Log into Unbounce to use this brand spankin’ new template.
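If you want to go beyond eyeballing it, the WCAG 2.x contrast-ratio formula gives a quick numeric check of copy legibility against your filter color or a representative video frame. A small sketch (in practice you’d sample the darkest and lightest frames of the loop, not just one color):

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG formula."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colors, from 1.0 to 21.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

WCAG recommends at least 4.5:1 for body text. White copy over a dark monochromatic filter clears that easily; white copy over an unfiltered bright frame often does not.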
3. Short loop
A 5-10 second video loop should be enough time to get the point across without sacrificing quick load time.
Keep in mind that a background video will be playing on a constant loop. If the video is too short, the loop will appear disjointed or incomplete. On the other hand, if the video is too long, the viewer may click away from the website, or onto another page before the video has had a chance to work its magic in eliciting the desired emotional response.
Look for (or produce) a simple looping background that is relevant to the content of your landing page. There are many libraries of stock video clips online (here’s a pretty good roundup). If you can’t produce your own footage, make sure to double-check the copyrights associated with any video before you use it.
4. Mute the sound
The general rule of thumb is that sound should always be muted (on all Unbounce pages, audio is turned off by default). If, for some reason, you need to add sound to your video background, don’t autoplay the video with sound — let viewers press play when they’re ready.
5. Remove visual controls
As long as the video content is relevant and the quality sufficient, there should be no reason for landing page visitors to press play or pause.
So, if you follow all of our House Rules, placing a video in the background of your landing page should increase conversion, right? Or, at the very least, it won’t actually hurt conversion… right?
Video backgrounds are still in their early days and, like any good data-driven marketer, you’re going to want to take them for a test drive before committing fully.
A/B testing is both an art and a science. It’s also very unpredictable. Most marketing departments, usability specialists, designers and management rely on a mixture of experience, gut instinct and personal opinion when it comes to deciding what makes a delightful marketing experience for their customers.
We recommend running an A/B test to compare how your page performs with a video background compared to a static image. Start by segmenting a small portion of traffic towards the page — just to be safe.
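Segmenting “a small portion of traffic” is typically done with deterministic bucketing, so each visitor always sees the same variant across visits. A minimal sketch; the 10% split and the variant names are just examples:

```python
import hashlib

def assign_variant(visitor_id: str, test_fraction: float = 0.1) -> str:
    """Deterministically assign a visitor to the video-background test.

    Hashing the visitor ID yields a stable bucket in [0, 1]; only the
    first `test_fraction` of buckets see the new variant, so a small,
    fixed share of traffic is exposed while the test runs.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "video-background" if bucket < test_fraction else "static-image"
```

Because the assignment is a pure function of the visitor ID, you can later widen `test_fraction` without reshuffling anyone who has already been bucketed into the test.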
At the end of the day, it’s your customers and your brand that will decide what converts best.
In the physical world, no one builds anything without detailed blueprints, because people’s lives are on the line. In the digital world, the stakes just aren’t as high.
It’s called “software” for a reason: because when it hits you in the face, it doesn’t hurt as much. No one is going to die if your website goes live with the header’s left margin 4 pixels out of alignment with the image below it.
But, while the users’ lives might not be on the line, design blueprints (also called design specifications, or specs) could mean the difference between a correctly implemented design that improves the user experience and satisfies customers and a confusing and inconsistent design that corrupts the user experience and displeases customers.
For those of us who create digital products, design specs could mean the difference between efficient collaboration and a wasteful back-and-forth process with costly implementation mistakes and delivery delays. It could also mean the difference between your business making money and losing money, in which case lives might actually be on the line.
In short, specs can help us to build the right product more quickly and more efficiently.
“A blueprint is a reproduction of a technical drawing, documenting an architecture or an engineering design, using a contact print process on light-sensitive sheets. Introduced in the 19th century, the process allowed rapid and accurate reproduction of documents used in construction and industry. The blue-print process was characterized by light colored lines on a blue background, a negative of the original.”
Architectural blueprints were the photocopier of the 19th century. They were the cheapest, most reliable technology available to copy technical drawings.
Blueprints were created by shining light through an ink drawing on transparent film. The light would pass through everywhere except the ink and hit a paper coated with a light-sensitive material, turning that paper blue. The result was a white copy of the engineering drawing on a dark-blue background.
These copies were then distributed to builders who were responsible for implementing the designs in those drawings.
Today, many graphic designers also distribute design specs to the front-end developers who are responsible for implementing the designs. Design specs are no longer made with paper and light, and they are no longer blue, but, as before, they ensure that the product gets built correctly.
From Bricks To Bits And Bytes
For a former real estate developer, working with graphic designs without specs was like getting a set of architectural blueprints with all of the drawings and none of the numbers. Without the necessary CSS “measurements,” I was forced to hunt through layers and sublayers of shapes and text elements to figure out the right HEX value for the border around the “Buy” button or the font family used in the “Forgot Password?” field. Such a workflow was very unproductive.
I was starving for specs when my friend Chen Blume approached me with the idea of Specctr, a tool that would bring the familiar benefits of architectural blueprints to the world of graphic design and front-end web development. I immediately recognized the value and potential of this idea, so we started working together right away, and soon after that, the first version of Specctr was released.
Initially, the Specctr plugin was for Adobe Fireworks users only, which at the time — 2012 — seemed to be the best tool for UI and web designers. Later, we expanded the range of supported apps, and today it includes Fireworks, Illustrator, Photoshop and InDesign.
A Picture (And Some Numbers) Are Worth More Than A Thousand Words
The phrase “A picture is worth a thousand words” means that a complex idea can be conveyed with just a single still image. It also characterizes well one of the main goals of visualization, which is to make it possible to absorb large amounts of data quickly. However, in the design and development business, a picture or a single PSD is not enough.
Developers need to know a design’s exact attributes to be able to write the HTML and CSS necessary to recreate the text and shape elements via code. If a PSD is not accompanied by detailed specs, then making approximate guesses or hunting through layers could lead either to errors or the loss of precious development time.
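To make this concrete, consider how many exact values even a single button requires. A spec for a hypothetical “Buy” button might translate into CSS like the sketch below (every value here is illustrative, not taken from any real design document):

```css
/* Hypothetical spec values for a "Buy" button.
   Each line is a value a developer would otherwise
   have to dig out of the PSD's layers by hand. */
.buy-button {
  width: 120px;
  height: 40px;
  font-family: "Helvetica Neue", Arial, sans-serif;
  font-size: 14px;
  font-weight: 600;
  color: #ffffff;
  background: linear-gradient(#4a90d9, #2a6db5);
  border: 1px solid #1f5a9e;
  border-radius: 3px;
  padding: 0 16px;
}
```

That is a dozen distinct values for one element. Without a spec, each one is a separate trip into the layers panel, an approximate guess, or a question for the designer.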
When developing something, I might need several minutes to load the necessary mental models in my head before I can be productive. Any interruption could bring a wrecking ball to the intricate imaginary machinery I’ve struggled to assemble inside my head.
This is why having to look up an RGB value or turn to a teammate to ask which typeface is being used could lead to big gaps in my productivity.
And if you’re a member of a distributed or remote team, then you don’t even have the luxury of immediately getting your questions answered by a colleague — you’re off to an asynchronous communication tool like Skype, Hipchat or, worse, email. As Chris Parnin puts it:
“The costs of interruptions have been studied in office environments. An interrupted task is estimated to take twice as long and contain twice as many errors as uninterrupted tasks. Workers have to work in a fragmented state as 57% of tasks are interrupted. For programmers, there is less evidence of the effects and prevalence of interruptions. Typically, the number that gets tossed around for getting back into the ‘zone’ is at least 15 minutes after an interruption. Interviews with programmers produce a similar number.”
A Carnival Of Errors: Developer Edition
Julia had been at her computer for eight straight hours and was late for dinner with her parents, but she had promised to have this CSS transition between the “product” overlay and “buy” overlay on the master branch by the end of the day. She was almost done, but the type on this “Submit” button didn’t look the same as the one that was live on the website now.
“It’s fine,” she thought. “I’ll change it tomorrow.”
Faced with short deadlines and the prospect of rummaging through Photoshop layers, some developers will take a stab in the dark at which typeface to use, negating hours of design research with one stress-fueled decision.
In the end, we’ll have to redo it anyway, but for now we’ll meet the deadline. It’s all about developer convenience.
No one in the history of forever put in extra effort to do the wrong thing. Mistakes are usually the result of following a tempting shortcut.
The record industry’s failed attempt to halt the digital distribution of music is a good example of this. Spotify’s whole business model is based on the fact that “people were willing to do the right thing but only if it was just as rewarding, and much less hassle, than doing the wrong thing.”
Give your front-end engineer a fully spec’ed design and then bask in the rays of gratitude emanating from their face. They’ll get all of your margins and padding exactly right; that subtle gradient will have the precise values you took so long to match; and it will all get done faster. Why would they do anything else? All of the information they need is right there in front of them!
The Triumph Of Tediousness: Designer Edition
Lauren took a second to appreciate her finished design. It was well-balanced and conveyed a sense of calmness, all while guiding attention towards the “Submit” button.
She was tired and ready to go home after a long day of work, but she had promised to deliver the finished design so that Julia could get a head start on developing it for tomorrow’s deadline. She sometimes created specs for the developers she worked with, but she just didn’t have it in her to type and draw out each individual annotation “by hand.”
“Julia will figure it out,” she thought to herself as she hit “Send.”
It’s all about designer convenience.
If design specs (i.e. blueprints) have so much to offer, then why aren’t they a part of every designer’s workflow? The reason I, as a developer, might skip looking up the type is the same reason many designers don’t create specs: It’s easier not to.
This is because designers are not using the right tools. They manually measure and draw each dimension, and they type each pixel value and RGB value “by hand,” using the same general-purpose drawing tools that they used to create the design.
Any time you ask an artist to stop creating and focus on process, you’re fighting an uphill battle. The hill becomes dramatically steeper when the process is slow and tedious.
With the right tools to automate the creation of specs, designers can reduce costs and enable their whole team to reap the benefits of creating and distributing design specs.
Let’s Create (And Use) Design Specs
The two examples above — with Julia and Lauren — are imaginary, but similar situations happen constantly in real life. Developers should not have to make guesses that lead to errors and lost time. On the other hand, creating detailed specs manually is tedious and takes a lot of the designer’s time.
Is there a better way? I believe there is.
We should start using tools that help us to create design specs with a minimum of hassle. Such tools would save time for both designers and developers and would lead to better designer-developer workflows.
Below are some excerpts from a design document annotated with Specctr. With the help of the Specctr plugin, a designer could quickly provide the color values of any design element, along with the exact width and height, gradient values, type attributes (including font family, weight, kerning, leading, etc.), margins, padding, border properties and more. This would greatly help the developer to implement the design because they would not need to hunt through layers and sublayers or make any guesses.
As a bonus side effect, using detailed design specs will help you to avoid errors and inconsistencies in the final version of the design when it’s implemented in real life. Below is an example of the “drift” that can occur when implementation details are not made explicit and are left up to the developer’s guesswork.
Note: Specctr is not the only tool that automatically generates detailed design specs. Plugins such as PNG Express (designed to work with Photoshop) do similar tasks, but I’ve been mentioning Specctr because I developed it myself and have the most experience with it. If you have tried other spec-generation tools, please share your experience in the comments below.
Components And Style Guides
Developers have long been familiar with the advantages of breaking a large system down into small components through object-oriented programming, which is currently the dominant programming paradigm, thanks to the adoption of languages such as Java. Breaking a complex project into self-contained parts that make up the whole allows a single part to be reused in multiple places in a project and allows for greater project organization and easier maintenance.
Designers are also finding that breaking down a design into its atomic components allows for greater efficiency because they’re able to combine them to reuse their code and styles. Seeing the components from which a project’s entire design is derived allows for the immediate communication of style choices made across that project. Examples of the components that would be shown are the grid, buttons, forms, tables and lists.
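As a sketch of the idea, a button component can be defined once and then reused across the project with small variant classes (the class names and values below are illustrative, not from any particular style guide):

```css
/* Base button component: spec'ed once, reused everywhere */
.btn {
  display: inline-block;
  padding: 8px 16px;
  font: 600 14px/1.4 sans-serif;
  border: none;
  border-radius: 3px;
  cursor: pointer;
}

/* Variants only override what differs from the base */
.btn--primary { background: #2a6db5; color: #ffffff; }
.btn--danger  { background: #c0392b; color: #ffffff; }
```

In the markup, a developer then composes the classes — for example, `<button class="btn btn--primary">Buy</button>` — instead of re-deriving the shared padding, type and radius values for every new button.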
Components combined with design specs make up a style guide. A style guide serves as a reference both to communicate a project’s design aesthetic and to provide details of its implementation to developers. Developers no longer have to rely on designers to spec individual documents, and can instead use this reference to find the information they need. In this way, a style guide is another great tool for more efficient collaboration between designers and developers.
I reached out to a few designers for comments about the process they follow to document designs. One of my favorite responses comes from Jason Csizmadi, senior visual designer at Cooper:
“Developers at all stages of projects expect and demand strong documentation.
Although documentation is never the most exciting aspect of design, it’s a critical step in ensuring smooth working relationships, timely delivery and a successful hand-off at the end. Ultimately, design documentation acts as a life-support system, ensuring that your vision is executed properly.”
Like any good business process, design specs should support the primary endeavor — in this case, to create beautiful websites and applications. Creating these products requires collaboration between designers and developers, and effective collaboration requires effective communication. Investing in workflows and tooling that make this communication easier and more efficient will pay off in the speed and effectiveness with which products are built and, ultimately, in the success of the businesses that depend on those products.