
Your frequently asked conversion optimization questions, answered!

Reading Time: 28 minutes

Got a question about conversion optimization?

Chances are, you’re not alone!

This Summer, WiderFunnel participated in several virtual events. And each one, from full-day summit to hour-long webinar, ended with a TON of great questions from all of you.

So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.

As you’ll notice, many experts and thought-leaders weighed in on your questions, including:

  • Chris Goward
  • Nick So
  • Hudson Arnold
  • André Morys
  • John Ekman
  • David Darmanin
  • Jamie Elgie

Now, without further introduction…

Your conversion optimization questions

Optimization Strategy

  1. What do you see as the most common mistake people make that has a negative effect on website conversion?
  2. What are the most important questions to ask in the Explore phase?
  3. Is there such a thing as too much testing and / or optimizing?

Personalization

  1. Do you get better results with personalization or A/B testing or any other methods you have in mind?
  2. Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
  3. With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
  4. How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

SEO versus CRO

  1. How do you avoid harming organic SEO when doing conversion optimization?

Getting Buy-in for Experimentation

  1. When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher ups or smaller wins?
  2. Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

CRO for Low Traffic Sites

  1. Do you have any suggestions for success with lower traffic websites?
  2. What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
  3. How far can I go with funnel optimization and testing when it comes to small local business?

Tips from an In-House Optimization Champion

  1. How do you get buy-in from major stakeholders, like your CEO, to go with a conversion optimization strategy?
  2. What has surprised you or stood out to you while doing CRO?

Optimization Across Industries

  1. Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?
  2. When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
  3. You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
  4. Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
  5. For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
  6. Do you find that testing strategies differ cross-culturally?

Experiment Design & Setup

  1. How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
  2. I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?
  3. When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
  4. Is there a way to conclusively tell why a test lost or was inconclusive?
  5. How many visits do you need to get to statistically relevant data from any individual test?
  6. We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
  7. When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Testing and Technology

  1. There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

Your questions, answered

Q: What do you see as the most common mistake people make that has a negative effect on website conversion?

Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.

I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.

And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.

It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.

Further reading:

Back to Top

Q: What are the most important questions to ask in the Explore phase?

Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason). In it, we are looking for insights about what questions to ask and validate. We are trying to identify…

  • What are the barriers to conversion?
  • What are the motivational triggers for your audience?
  • Why are people buying from you?

And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.

Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.

Further reading:

  • Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process, here.

Back to Top

Q: Is there such a thing as too much testing and / or optimizing?

Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website…they can’t improve forever. The question many marketers have is: how do I know how long to do this? Are there going to be diminishing returns? By putting in the same effort, will I get smaller and smaller results?

But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.

So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.

Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.

Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.

Back to Top

Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?

Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.

I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.

At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.

Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.

Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.

When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.

We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step by step process that makes sure we are only creating overhead where it’s necessary and will create value.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

Back to Top

Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?

Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.

But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. And it’s practically impossible for a marketing department to create infinite marketing messages.

Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.

What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.

Back to Top

Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?


André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.

Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.

John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common. Because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques will result.

Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.

One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.

The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.

David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most complex things, it is most valuable when you have a high ticket price or very high value, where that touch of personalization has a big impact.

With Hotjar, we’re much more volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. And keeping as many common threads as possible is important, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you avoid harming organic SEO when doing conversion optimization?

Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they support optimization fully. They do not penalize companies for running A/B tests, as long as the tests are set up properly and the company is using a proper tool.

On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.

If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.

Google looks at pogo-sticking, where people land on the SERP, find a result, and then return back to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.

Video Resource: Watch SEO Wizard Rand Fishkin’s talk from CTA Conf 2017, “Why We Can’t Do SEO without CRO”

Back to Top

Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups, or smaller wins?

Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under the radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.

This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.

A sample PIE prioritization analysis.

Ease is one of them: the three factors are Potential, Importance, and Ease. One of the important aspects within Ease is political ease. You want to look for areas with political ease, meaning there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, create momentum, and then you can start sharing that throughout the organization to build buy-in.
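
To make the framework concrete, here is a rough sketch of PIE scoring in Python. The pages, the 1-10 scores, and the simple unweighted averaging are all hypothetical assumptions for illustration, not WiderFunnel’s actual data or weighting:

```python
# Hypothetical PIE prioritization sketch: each candidate test area gets
# a 1-10 score for Potential, Importance, and Ease, and the three
# scores are averaged (an assumed, unweighted scheme).

def pie_score(potential, importance, ease):
    """Average the three PIE factors, each scored 1-10."""
    return round((potential + importance + ease) / 3, 1)

# Made-up example pages and scores. Note the homepage's low Ease score,
# reflecting its political sensitivity.
pages = {
    "Product page": (8, 9, 6),
    "Homepage": (7, 9, 3),
    "Checkout": (9, 10, 4),
    "Landing page A": (6, 5, 9),
}

for name, scores in sorted(pages.items(),
                           key=lambda kv: pie_score(*kv[1]),
                           reverse=True):
    print(f"{name}: {pie_score(*scores)}")
```

The highest-scoring areas (here, the product page and checkout) would be tested first, while the politically sensitive homepage waits until you have some wins to point to.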

Further Reading: Marketers from ASICS’ global e-commerce team weigh in on evangelizing optimization at a global organization in this post, “A day in the life of an optimization champion”

Back to Top

Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

Nick So: Besides the obvious senior leadership and key decision-makers as you mention, we find getting buy-in from related departments like branding, marketing, design, copywriters and content managers, etc., can be very helpful.

Having these teams on board can not only help with the overall approval process, but also helps ensure winning tests and strategies are aligned with your overall business and marketing strategy.

You should also consider involving more tangentially-related teams, like customer support. Not only does this make them part of the process and testing culture, but your customer-facing teams can also be a great source of business insights and test ideas!

Back to Top

Q: Do you have any suggestions for success with lower traffic websites?

Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.

We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where we can validate insights easily through multiple tests.)

Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.

If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or guide the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However, with this strategy, stakeholders have to be aware that increases in the secondary goal may not translate into increases in the primary goal at the same rate.
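
That caveat can be shown with some simple funnel arithmetic. All of the numbers below are hypothetical, purely to illustrate how an upstream win can fail to carry through:

```python
# Hypothetical funnel math: the secondary (upstream) goal improves
# while the primary goal declines.
visitors = 10_000

# Baseline: 6% add-to-cart rate, 50% of carts become transactions.
baseline_sales = visitors * 0.06 * 0.50

# Variation: add-to-cart jumps 20% (6% -> 7.2%), but the
# cart-to-purchase rate slips to 40%.
variation_sales = visitors * 0.072 * 0.40

print(round(baseline_sales), round(variation_sales))
```

The variation wins on the upstream metric yet produces fewer transactions (288 versus 300 in this made-up example), which is exactly why stakeholders need to keep watching the primary goal.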

Back to Top

Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?

Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.

But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand the opportunities and the barriers your visitors might be facing, and drilling into their perspective. Then you’ll have a more powerful test setup.

You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.

In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?

Back to Top


Q: How far can I go with funnel optimization and testing when it comes to small local business?


David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.

John Ekman:

If you can’t test to significance, you should turn to qualitative research.

That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.

André Morys: Testing with too small a sample size is really dangerous, because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming a 70% uplift when the numbers were 64 versus 27 conversions. And this is really dangerous, because that result is bull sh*t.
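
André’s warning can be made concrete with a standard sample-size estimate for a two-proportion test. This is a back-of-the-envelope sketch using the normal approximation; the baseline rate, target lift, and the usual alpha = 0.05 / 80%-power constants are assumptions for illustration, and a proper stats library should be used for real planning:

```python
# Rough visitors-per-variation estimate for detecting a relative lift
# in conversion rate (normal approximation, two-sided alpha = 0.05,
# 80% power). Illustrative only.
from math import sqrt

def visitors_per_arm(base_rate, rel_lift, z_alpha=1.96, z_power=0.8416):
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# Detecting a 10% relative lift on a 3% baseline conversion rate
# takes on the order of 50,000 visitors per variation:
print(round(visitors_per_arm(0.03, 0.10)))
```

Even a site with 10,000 to 20,000 orders per month can take weeks to reach that kind of sample in each arm, which is why small observed “uplifts” deserve deep skepticism.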

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

Back to Top

Q: How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?

Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.

For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.

It really sells itself though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as helping that overall strategy of making our display media spend, and all of our media spend more targeted.

Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.

Back to Top

Q: What has surprised you or stood out to you while doing CRO?

Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. Things that we are all convinced we should change on our website, or that we should change in our messaging in general, we’ll test them and actually find out.

We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.

One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.

If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.

It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.

Case Study: Read the details of how Jamie’s company, weBoost, saw a 100% lift in year-over-year conversion rate with an optimization program.

Back to Top

Q: Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?

Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to

  1. Track your leads and opportunities to the variation,
  2. Then, track them through to the sale,
  3. And then look at whether average order value changes between the variations, which implies the quality of the leads.

It’s easy to measure lead volume between variations. But if lead quality changes, that makes a big impact.

We actually have a case study about this with Magento. We asked the question, “Which of these calls-to-action is actually generating the most valuable leads?”, and ran an experiment to find out. We tracked the leads all the way through to sale. This helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?

Case Study: Discover how Magento increased lead volume and lead quality in the full case study.

Back to Top

Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?

Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.

However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.

If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.
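
That tradeoff calculation is just arithmetic once each goal has a monetary value attached. The goal values and baseline volumes below are made-up numbers for illustration only:

```python
# Hypothetical net-benefit check for a variation that lifts demo
# requests 50% but cannibalizes ebook downloads by 75%.
demo_value, ebook_value = 500.0, 20.0       # assumed value per conversion ($)
baseline_demos, baseline_ebooks = 100, 400  # assumed monthly baselines

gained = baseline_demos * 0.50 * demo_value     # extra demo-request value
lost = baseline_ebooks * 0.75 * ebook_value     # cannibalized ebook value
net = gained - lost

print(net)  # positive means the tradeoff is a net win
```

With these assumed values the variation is a clear net positive despite the ebook loss; with a much lower demo value, the same lifts could flip to a net negative, which is why the goal values matter.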

Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.

Back to Top

Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?

Nick So: When we help a client determine conversion metrics…

…we always suggest following the money.

Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.

For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?

Back to Top

Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?

Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.

We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.

What we’ve found is that people are people. Whether they’re shopping for a server, shopping for socks, or donating to third-world countries, they go through the same mental process in each case.

The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.

Most marketers spend most of their time thinking about their near-field competitors rather than looking at different industries, because it’s overwhelming to consider all of the other opportunities. But we are often able to look at an experience in a completely different way, because we can view it through the lens of a different industry. That is very powerful.


Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?

Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.

In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).

When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact and the internal company benefits in their decision, and negotiate that prioritization and scheduling between teams.

I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.

If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).



Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?

Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, UK, Canada, Japan, Korea, Spain, Italy and for the Olympics store, which is itself a global e-commerce experience in one site!

There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.


Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?

Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.

And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. The PIE Framework allows you to prioritize ideas and test areas based on potential, importance, and ease: the potential for improvement, the importance to the business, and the ease of implementation. Sometimes these are a little subjective, but the more data you have to back them up, the better your focus and effort will be in delivering results.
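As a rough sketch of how PIE scoring works in practice, here is a minimal example. The ideas and their 1–10 ratings are invented, and real prioritization involves more judgment than a simple average:

```javascript
// Minimal PIE sketch: rate each idea 1-10 on potential, importance and
// ease, then rank by the average score. All ratings here are invented.
function pieScore(idea) {
  return (idea.potential + idea.importance + idea.ease) / 3;
}

var ideas = [
  { name: 'Rewrite homepage headline', potential: 8, importance: 9, ease: 7 },
  { name: 'Redesign checkout flow',    potential: 9, importance: 9, ease: 3 },
  { name: 'Test button color',         potential: 3, importance: 4, ease: 9 }
];

// Sort highest PIE score first: that's the test to run next.
ideas.sort(function (a, b) { return pieScore(b) - pieScore(a); });
ideas.forEach(function (idea) {
  console.log(idea.name + ': ' + pieScore(idea).toFixed(1));
});
```

The averaging is deliberately crude; the value of the exercise is forcing each rating to be backed by data rather than opinion.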

Further Reading:


Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make the test you wanted to win seem like the winner?

Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.

There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. And if the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance, will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.

But you should be looking at one metric to tell you whether or not a test won.
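To make the distinction concrete, here is a small illustrative sketch. The traffic and revenue numbers are invented; the point is that the win/lose call rests on the single primary metric while the secondary metrics explain the behavior behind it:

```javascript
// Illustrative only: revenue per visitor (RPV) as the primary metric,
// conversion rate and average order value tracked as secondary metrics.
function metrics(v) {
  return {
    revenuePerVisitor: v.revenue / v.visitors, // primary: decides the winner
    conversionRate: v.orders / v.visitors,     // secondary: behavioral insight
    averageOrderValue: v.revenue / v.orders    // secondary: behavioral insight
  };
}

var control = metrics({ visitors: 10000, orders: 300, revenue: 27000 });
var variation = metrics({ visitors: 10000, orders: 270, revenue: 29700 });

// The variation converts fewer visitors (2.7% vs. 3.0%) but at a higher
// average order value ($110 vs. $90), so it wins on the primary metric:
// RPV of $2.97 vs. $2.70.
```

Had the decision been made on conversion rate alone, this variation would have looked like a loser; the secondary metrics reveal why it actually drives more revenue.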

Further Reading: Learn more about proper design of experiments in this blog post.



Q: When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.

Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.

So, run your tests a little bit longer and aim for statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. You should aim for that high level of statistical significance, and iron out that seasonality. And sometimes you’ll want to look at monthly seasonality as well, and retest questionable things within high and low urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.
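For the curious, here is roughly what a testing tool computes when it reports significance: a two-tailed, two-proportion z-test. This is a generic statistical sketch (using a standard polynomial approximation of the normal CDF), not any particular vendor’s implementation, and the sample counts are invented:

```javascript
// Two-proportion z-test, the statistic behind most A/B significance
// calculators. The erf approximation is the standard five-term
// polynomial (Abramowitz & Stegun 7.1.26).
function erf(x) {
  var sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  var t = 1 / (1 + 0.3275911 * x);
  var poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
              - 0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

function normCdf(z) {
  return 0.5 * (1 + erf(z / Math.SQRT2));
}

// Two-tailed p-value for the conversion counts of two variations.
function pValue(a, b) {
  var p1 = a.conversions / a.visitors;
  var p2 = b.conversions / b.visitors;
  var pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  var se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  var z = (p2 - p1) / se;
  return 2 * (1 - normCdf(Math.abs(z)));
}

var p = pValue(
  { visitors: 10000, conversions: 500 },  // control: 5.0%
  { visitors: 10000, conversions: 600 }   // variation: 6.0%
);
// p is well under a 0.05 threshold, so this result would be significant
```

Note that the same 2% relative lift on a small sample would not be significant, which is the statistical reason behind running tests longer.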

Further Reading: You can’t make business decisions based on misleading A/B test results. Learn how to avoid the top 3 mistakes that make your A/B test results invalid in this post.


Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?

Chris Goward: Developing powerful hypotheses is dependent on having workable theories. Seeking to determine the “why” behind the results is one of the most interesting parts of the work.

The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.

As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!


Q: How many visits do you need to get to statistically relevant data from any individual test?

Chris Goward: The number of visits is just one of the variables that determine statistical significance. The conversion rate of the Control and the conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e. sample size), enough conversions, and a great enough conversion rate delta.

Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!)

When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule-of-thumb, you should plan for smaller isolation tests to run longer, as the impact on conversion rate may be less. The test may require more conversions to potentially achieve confidence.
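The linked calculator can be approximated in a few lines. The sketch below uses a common rule of thumb (n ≈ 16·p·(1−p)/δ², roughly 80% power at 95% confidence); it is a planning estimate, not WiderFunnel’s calculator, and all of the inputs are invented:

```javascript
// Rule-of-thumb test duration estimate. The constant 16 approximates
// 2 * (z_alpha/2 + z_beta)^2 for 95% confidence and 80% power.
function visitorsPerVariation(baselineRate, relativeMde) {
  var delta = baselineRate * relativeMde; // smallest absolute lift to detect
  return Math.ceil(16 * baselineRate * (1 - baselineRate) / (delta * delta));
}

function estimatedDays(baselineRate, relativeMde, variations, dailyVisitors) {
  return Math.ceil(
    visitorsPerVariation(baselineRate, relativeMde) * variations / dailyVisitors
  );
}

// Example: 3% baseline conversion rate, hoping to detect a 20% relative
// lift, two variations (A/B), 2,000 visitors per day.
var n = visitorsPerVariation(0.03, 0.2);      // ~12,934 visitors per variation
var days = estimatedDays(0.03, 0.2, 2, 2000); // ~13 days
```

Halving the detectable lift roughly quadruples the required sample, which is why smaller, subtler isolations tend to need longer runs.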

Larger, more drastic cluster changes would typically run for a shorter period of time, as they have more potential for a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t hinder you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.

Often, people who are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and one variation is still not statistically significant over another, then your test may never yield a clear winner, and you should revise your test plan or reduce the number of variations being tested.

Further Reading: Do you have to wait for each test to reach statistical significance? Learn more in this blog post: “The more tests, the better!” and other A/B testing myths, debunked


Q: We are new to optimization (had a few quick wins with A/B testing and working toward a geo targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?

Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.

Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.

If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.

Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:

  • You have other tests ready to run with the same traffic sources
  • The test results are showing high volatility or have stabilized
  • The test insights will be important for the organization

There’s an opportunity cost to every test. You could always be testing something else, so you need to constantly ask whether this is the best test to run now versus the cost and potential benefit of the next test in your conversion strategy.



Q: There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?


John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look into the minds of whoever is in front of the screen, I think these methods are quite dangerous.

Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff. A tradeoff between understanding the why and just getting growth. If you want to understand the why infinitely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few have enough traffic to multivariate test everything.

But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.

There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works…if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things to ask yourself: Well, I wonder why that won, and start from there.

The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.

David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap, that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.

Creating a unique customer experience is going to become more and more important.

If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.

Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere, and does everything differently. That isn’t fully ‘optimized’ and is more human.

If optimization itself becomes too optimized, there is a danger. If we eliminate the human aspect, we’re kind of screwed.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.


What conversion optimization questions do you have?

Add your questions in the comments section below!

The post Your frequently asked conversion optimization questions, answered! appeared first on WiderFunnel Conversion Optimization.



Designing For The Elderly: Ways Older People Use Digital Technology Differently

If you work in the tech industry, it’s easy to forget that older people exist. Most tech workers are really young [1], so it’s easy to see why most technology is designed for young people. But consider this: by 2030, around 19% of people in the US will be over 65 [2]. Doesn’t sound like a lot? Well, it happens to be about the same number of people in the US who own an iPhone today. Which of these two groups do you think Silicon Valley spends more time thinking about?

This seems unfortunate when you consider all of the things technology has to offer older people. A great example is Speaking Exchange [3], an initiative that connects retirees in the US with kids who are learning English in Brazil. Check out the video below, but beware — it’s a tear-jerker.

CNA – Speaking Exchange (watch the video on YouTube [4])

While the ageing process is different for everyone, we all go through some fundamental changes. Not all of them are what you’d expect. For example, despite declining health, older people tend to be significantly happier [5] and better at appreciating what they have [6].

But ageing makes some things harder as well, and one of those things is using technology. If you’re designing technology for older people, below are seven key things you need to know.

(How old is old? It depends. While I’ve deliberately avoided trying to define such an amorphous group using chronological boundaries, it’s safe to assume that each of the following issues becomes increasingly significant after 65 years of age.)

Vision And Hearing

From the age of about 40, the lens of the eye begins to harden, causing a condition called “presbyopia.” This is a normal part of ageing that makes it increasingly difficult to read text that is small and close.

The font size a 75-year-old chooses on his Kindle [7]
Here’s a 75-year-old with his Kindle. Take a look at the font size he picks when he’s in control. Now compare it to the average font size on an iPhone. (Image: Navy Design [8]) (View large version [9])

Color vision also declines with age, and we become worse at distinguishing between similar colors. In particular, shades of blue appear to be faded or desaturated.

Hearing also declines in predictable ways, and a large proportion of people over 65 have some form of hearing loss [10]. While audio is seldom fundamental to interaction with a product, there are obvious implications for certain types of content.

Key lessons:

  • Avoid font sizes smaller than 16 pixels (depending of course on device, viewing distance, line height etc.).
  • Let people adjust text size themselves.
  • Pay particular attention to contrast ratios [11] with text.
  • Avoid blue for important interface elements.
  • Always test your product using screen readers [12].
  • Provide subtitles when video or audio content is fundamental to the user experience.

Motor Control

Our motor skills decline with age, which makes it harder to use computers in various ways. For example, during some user testing at a retirement village, we saw an 80-year-old who always uses the mouse with two hands. Like many older people, she had a lot of trouble hitting interface targets and moving from one thing to the next.

In the general population, a mouse is more accurate [13] than a finger. But in our user testing, we’ve seen older people perform better using touch interfaces. This is consistent with research that shows that finger tapping declines later [14] than some other motor skills.

Key lessons:

  • Reduce the distance between interface elements that are likely to be used in sequence (such as form fields), but make sure they’re at least 2 millimeters apart.
  • Buttons on touch interfaces should be at least 9.6 millimeters diagonally [15] (for example, 44 × 44 pixels on an iPad) for ages up to 70, and larger for older people.
  • Interface elements to be clicked with a mouse (such as forms and buttons) should be at least 11 millimeters diagonally.
  • Pay attention to sizing in human interface guidelines (Luke Wroblewski has a good roundup of guidelines [16] for different platforms).

Device Use

If you want to predict the future, just look at what middle-class American teens are doing. Right now, they’re using their mobile phones for everything.

– Dustin Curtis [17]

It’s safe to assume Dustin has never watched a 75-year-old use a mobile phone. Eventually, changes in vision and motor control make small screens impractical for everyone. Smartphones are a young person’s tool [18], and not even the coolest teenager can escape their biological destiny.

In our research, older people consistently described phones as “annoying” and “fiddly.” Those who own them seldom use them, often not touching them for days at a time. They often ignore SMS messages entirely.

Examples of technology used by the elderly [19]
Examples of technology used by the elderly (Image: Navy Design [20]) (View large version [21])

But older people aren’t afraid to try new technology when they see a clear benefit. For example, older people are the largest users of tablets [22]. This makes sense when you consider the defining difference between a tablet and a phone: screen size. The recent slump in tablet sales [23] also makes sense if you accept that older people have longer upgrade cycles than younger people.

Key lessons:

  • Avoid small-screen devices (i.e. phones).
  • Don’t rely on SMS to convey important information.

Relationships


Older people have different relationships than young people, at least partly because they’ve had more time to cultivate them. For example, we conducted some research into how older people interact with health care professionals. In many cases, they’ve seen the same doctors for decades, leading to a very high degree of trust.

I regard it like going to see old pals… I feel I could tell my GP almost anything.

– George, 73, on visiting his medical team

But due to health and mobility issues, the world available to the elderly is often smaller — both physically and socially. Digital technology has an obvious role to play here, by connecting people virtually when being in the same room is hard.

Key lessons:

  • Enable connection with a smaller, more important group of people (not a big, undifferentiated social network).
  • Don’t overemphasize security and privacy controls when trusted people are involved.
  • Be sensitive to issues of isolation.

Life Stage

During a user testing session, I sat with a 66-year-old as she signed up for an Apple ID. She was asked to complete a series of security questions. She read the first question out loud. “What was the model of your first car?” She laughed. “I have no idea! What car did I have in 1968? What a stupid question!”

It’s natural for a 30-year-old programmer to assume that this question has meaning for everyone, but it contains an implicit assumption about which life stage the user is at. Don’t make the same mistake in your design.

Key lessons:

  • Beware of content or functionality that implicitly assumes someone is young or at a certain stage in life.

Experience With Technology

I once sat with a man in his 80s as he used a library interface. “I know there are things down there that I want to read,” he said, gesturing to the bottom of the screen, “but I can’t figure out how to get to them.” After I taught him how to use a scrollbar, his experience changed completely. In another session, two of the older participants told me that they’d never used a search field before.

Generally when you’re designing interfaces, you’re working within a certain kind of scaffolding. And it’s easy to assume that everyone knows how that scaffolding works. But people who didn’t grow up with computers might have never used the interface elements we take for granted. Is a scrollbar a good design for moving content up and down? Is its function self-evident? These aren’t questions most designers often ask. But the success of your design might depend on a thousand parts of the interface that you can’t control and probably aren’t even aware of.

Key lessons:

  • Don’t make assumptions about prior knowledge.
  • Interrogate all parts of your design for usability, even the parts you didn’t create.

Cognition


The science of cognition is a huge topic, and ageing changes how we think in unpredictable ways. Some people are razor-sharp in their 80s, while others decline as early as in their 60s.

Despite this variability, three areas are particularly relevant to designing for the elderly: memory, attention and decision-making. (For a more comprehensive view of cognitive change with age, chapter 1 of Brain Aging: Models, Methods, and Mechanisms [24] is a great place to start.)

Memory


There are different kinds of memory, and they’re affected differently by the ageing process. For example, procedural memory (that is, remembering how to do things) is generally unaffected. People of all ages are able to learn new skills and reproduce them over time.

But other types of memory suffer as we age. Short-term memory and episodic memory are particularly vulnerable. And, although the causes are unclear, older people often have difficulty manipulating the contents of their working memory [25]. This means that they may have trouble understanding how to combine complex new concepts in a product or interface.

Prospective memory (remembering to do something in the future) also suffers [26]. This is particularly relevant for habitual tasks, like remembering to take medication at the right time every day.

How do people manage this decline? In our research, we’ve found that paper is king. Older people almost exclusively use calendars and diaries to supplement their memory. But well-designed technology has great potential to provide cues for these important actions.

For older people, paper is king. [27]
For older people, paper is king. (Image: Navy Design [28]) (View large version [29])

Key lessons:

  • Introduce product features gradually over time to prevent cognitive overload.
  • Avoid splitting tasks across multiple screens if they require memory of previous actions.
  • During longer tasks, give clear feedback on progress and reminders of goals.
  • Provide reminders and alerts as cues for habitual actions.

Attention


It’s easy to view ageing as a decline, but it’s not all bad news. In our research, we’ve observed one big advantage: elderly people consistently excel in attention span, persistence and thoroughness. Jakob Nielsen has observed similar things, finding that 95% of seniors are “methodical” [30] in their behaviors. This is significant in a world where the average person’s attention span has actually dropped below the level of a goldfish [31].

It can be a great feeling to watch an older user really take the time to explore your design during a testing session. And it means that older people often find things that younger people skip right over. I often find myself admiring this way of interacting with the world. But the obvious downside of a slower pace is increased time to complete tasks.

Older people are also less adept at dividing their attention [32] between multiple tasks. In a world obsessed with multitasking, this can seem like a handicap. But because multitasking is probably a bad idea [33] in the first place, designing products that help people focus on one thing at a time can have benefits for all age groups.

Key lessons:

  • Don’t be afraid of long-form text and deep content.
  • Allow for greater time intervals in interactions (for example, server timeouts, inactivity warnings).
  • Avoid dividing users’ attention between multiple tasks or parts of the screen.

Decision-Making


Young people tend to weigh a lot of options before settling on one. Older people make decisions a bit differently. They tend to emphasize prior knowledge [34] (perhaps because they’ve had more time to accumulate it). And they give more weight to the opinions of experts (for example, their doctor for medical decisions).

The exact reason for this is unclear, but it may be due to other cognitive limitations that make comparing new options more difficult.

Key lessons:

  • Prioritize shortcuts to previous choices ahead of new alternatives.
  • Information framed as expert opinion may be more persuasive (but don’t abuse this bias).


A lot of people in the tech industry talk about “changing the world” and “making people’s lives better.” But bad design is excluding whole sections of the population from the benefits of technology. If you’re a designer, you can help change that. By following some simple principles, you can create more inclusive products that work better for everyone, especially the people who need them the most.

Payment for this article was donated to Alzheimer’s Australia [35].



  1. http://bits.blogs.nytimes.com/2013/07/05/technology-workers-are-young-really-young/
  2. http://www.aoa.gov/Aging_Statistics/
  3. http://www.cna.com.br/speakingexchange/
  4. https://www.youtube.com/embed/-S-5EfwpFOk
  5. http://www.economist.com/node/17722567
  6. http://newoldage.blogs.nytimes.com/2014/02/11/what-makes-older-people-happy/
  7. http://www.smashingmagazine.com/wp-content/uploads/2015/01/01-kindle-text-size-opt.jpg
  8. http://www.navydesign.com.au
  9. http://www.smashingmagazine.com/wp-content/uploads/2015/01/01-kindle-text-size-opt.jpg
  10. http://www.nidcd.nih.gov/health/hearing/Pages/Age-Related-Hearing-Loss.aspx
  11. http://webaim.org/resources/contrastchecker/
  12. http://www.afb.org/prodBrowseCatResults.asp?CatID=49
  13. http://www.yorku.ca/mack/hfes2009.html
  14. http://www.medicaldaily.com/finger-tapping-test-shows-no-motor-skill-decline-until-after-middle-age-244927
  15. http://dl.acm.org/citation.cfm?id=1152260
  16. http://www.lukew.com/ff/entry.asp?1085
  17. http://dcurt.is/the-death-of-the-tablet
  18. http://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-tmt-2014prediction-smartphone.pdf
  19. http://www.smashingmagazine.com/wp-content/uploads/2015/01/02-examples-of-technology-opt.jpg
  20. http://www.navydesign.com.au
  21. http://www.smashingmagazine.com/wp-content/uploads/2015/01/02-examples-of-technology-opt.jpg
  22. http://dcurt.is/the-death-of-the-tablet
  23. http://recode.net/2014/08/26/in-defense-of-tablets/
  24. http://www.ncbi.nlm.nih.gov/books/NBK3885/
  25. http://www.psych.utoronto.ca/users/hasher/abstracts/hasher_zacks_88.htm
  26. http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195156744.001.0001/acprof-9780195156744-chapter-10
  27. http://www.smashingmagazine.com/wp-content/uploads/2015/01/03-paper-is-king-opt.jpg
  28. http://www.navydesign.com.au
  29. http://www.smashingmagazine.com/wp-content/uploads/2015/01/03-paper-is-king-opt.jpg
  30. http://www.nngroup.com/articles/usability-for-senior-citizens/
  31. http://www.statisticbrain.com/attention-span-statistics/
  32. http://www.era.lib.ed.ac.uk/handle/1842/8572
  33. http://news.stanford.edu/news/2009/august24/multitask-research-study-082409.html
  34. http://psycnet.apa.org/index.cfm?fa=search.displayRecord&uid=2000-07430-014
  35. https://fightdementia.org.au

The post Designing For The Elderly: Ways Older People Use Digital Technology Differently appeared first on Smashing Magazine.



Redefining Lazy Loading With Lazy Load XT

Lazy loading is a common software design pattern that defers the initialization of objects until they are needed. Lazy loading images started to become popular on the web back in 2007, when Mika Tuupola drew inspiration from the YUI ImageLoader utility and released a jQuery plugin [1]. Since then, it has become a popular technique to optimize page loading and the user experience. In this article I will discuss why we should and shouldn’t use lazy loading, and how to implement it.

Why Lazy Load?

Images make up over 60% of an average page’s size, according to HTTP Archive2. Without lazy loading, a browser downloads every image on the page as soon as it can, including images outside of the viewport that are not immediately needed, which wastes data and lengthens waiting times. The problem? Visitors are not patient at all. With lazy loading, images outside of the viewport are loaded only when they are about to become visible to the user, saving valuable data and time.

Lazy loading is not limited to images. It can be used on pages with complex JavaScript, iframes and third-party widgets, delaying the loading of these resources until the user actually needs them.

Why Not Lazy Load?

Lazy loading is not a silver bullet, and it has known drawbacks. For example, most lazy-loading implementations either omit the src attribute in the <img> tags (which is invalid syntax, according to the HTML5 standard) or point it to a blank image (hello, spacer.gif). This approach requires duplicate <img> tags wrapped in <noscript> tags for browsers with JavaScript disabled (or with the NoScript plugin installed):

<img data-src="path" attributes /><noscript><img src="path" attributes /></noscript>

Fortunately, this duplication doesn’t increase the page’s size significantly when Gzip compression is enabled. However, some search engines might not index your images correctly, because the content of the <noscript> tag is not indexed, and the <img> tag outside of <noscript> refers to a blank image. Currently, Google seems to eventually index lazy-loaded images, but other search engines are less likely to.

How Is Lazy Loading Implemented?

You might be overwhelmed by the number of lazy-load plugins out there. You might also think that implementing one is easy: Just monitor page scrolling (or resizing), and then set the src attribute when an image is visible. If only it were that easy. Many things come into play when building a solid solution that works on both desktop and mobile. So, how do you separate the signal from the noise?
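The naive core really is that simple: the visibility test is pure geometry on the rectangle returned by getBoundingClientRect, expanded by an optional threshold so that images start loading just before they scroll into view. A minimal sketch (the function and parameter names here are illustrative, not Lazy Load XT’s API):

```javascript
// True if a bounding box (top/bottom in viewport coordinates, as returned
// by getBoundingClientRect) overlaps the viewport expanded by `edge` pixels.
function isVisible(rect, viewportHeight, edge) {
  return rect.bottom >= -edge && rect.top <= viewportHeight + edge;
}

// In a plugin, this check would run for every pending image on scroll/resize:
// if (isVisible(img.getBoundingClientRect(), window.innerHeight, 200))
//   img.src = img.getAttribute('data-src');
```

The hard part is everything around that check — when to run it, how often, and on which elements.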

  • Throttling
    Checking the visibility of images after every interaction (even a tiny bit of scrolling) could compromise the page’s responsiveness. To ease that, implement some sort of throttling mechanism.
  • All your mobile are belong to us
    There is no scroll event in the Opera Mini browser and some old feature phones. If you receive traffic from those devices, you should monitor and load all images directly.
  • Lazy load or automatic pagination?
    Some implementations check only whether an image is above the fold. If the page is scrolled down to the very bottom via an anchor (or the scrollTo method in JavaScript), then all images below the fold will begin to download, instead of only the images within the viewport. This is more a matter of automatic pagination because users will have to wait for the remaining images to load after an interaction.
  • Dynamic image insertion
    Many websites use AJAX navigation nowadays. This requires a lazy-load plugin to support the dynamic insertion of images. To prevent a memory leak, any references to images that are not in the DOM (for example, ones that appear after an AJAX-based replacement of content) should also be removed automatically.

This list is certainly not comprehensive. We have many more issues to consider, such as the lack of getBoundingClientRect in old browsers, a change in orientation without an ensuing resize event on the iPhone, or the particular handling requirements of the jQuery Mobile framework.

Unfortunately, most plugins do not handle all of the above.
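Of the issues above, throttling is the most mechanical to solve: wrap the visibility check so that it runs at most once per interval, no matter how fast scroll events arrive. A generic leading-edge throttle (a sketch, not Lazy Load XT’s internal implementation):

```javascript
// Wrap `fn` so it executes at most once every `wait` milliseconds;
// calls arriving inside the window are dropped (leading-edge only).
function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

// Usage: window.addEventListener('scroll', throttle(checkImages, 100));
```

A production version would also schedule one trailing call, so that the final scroll position is never missed.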

Lazy Load XT

We’ve been optimizing web performance on numerous screens for almost a decade now. Our project Mobile Joomla3 has been applied to over a quarter billion web pages and is still one of the most popular ways to optimize Joomla websites for mobile. Thanks to this, we’ve been lucky to witness the evolution of the web from desktop to mobile and observe trends and changing needs.

With our latest project, RESS.io4, we’ve been working on an easy solution to automatically improve responsive design performance on all devices. Lazy loading became an integral part of the project, but we came to realize that current lazy-load implementations are insufficient for the growing needs of the modern web. After all, it’s not just about desktop, mobile and images anymore, but is more and more about other media as well, especially video (oh, and did I hear someone say “social media widgets”?).

We concluded that the modern web could use a mobile-oriented, fast, extensible and jQuery-based solution. That is why we developed one and called it Lazy Load XT5.

Here are its main principles, which consider both current and future applications:

  • It should support jQuery Mobile6 out of the box.
  • It should support the jQuery7, Zepto8 and DOMtastic9 libraries. Of course, writing the solution in native JavaScript is possible, but jQuery is a rather common JavaScript extension nowadays, and one of our aims was to simplify the transition from the original Lazy Load to Lazy Load XT. This makes jQuery an adequate choice. However, if you don’t want to use jQuery at all, read the “Requirements” section below for details on reducing the size of dependent libraries.
  • It must be easy to start. The default settings should work most of the time. Prepare the HTML, include the JavaScript, et voilà!


Lazy Load XT requires jQuery 1.7+, Zepto 1.0+ or DOMtastic 0.7.2+. Including the plugin is easy and as expected:

<script src="jquery.min.js"></script>
<script src="jquery.lazyloadxt.min.js"></script>

<script>$.lazyLoadXT.extend({edgeY: 200});</script>

<style>img.lazy { display: none; }</style>


By default, the plugin processes all images on the page and obtains an image’s actual source path from the data-src attribute. So, the recommended snippet to place an image on the page is this:

<img class="lazy" data-src="path" [attributes] /><noscript><img src="path" [attributes] /></noscript>

From this snippet, it is clear why we’ve set img.lazy above to display: none: Hiding the image is necessary in case JavaScript is disabled, or else both the original image and the placeholder would be displayed. If the src attribute of the <img> tag is not set, then the plugin will set it to a transparent GIF encoded as a data URI.

If you’re not worried about users who have disabled JavaScript (or about valid HTML5 code), then just load jquery.lazyloadxt.min.js and replace the src attribute in the images with data-src:

<script src="jquery.min.js"></script>
<script src="jquery.lazyloadxt.min.js"></script>
<img data-src="path" [attributes] />


Lazy Load XT is available in two versions: jquery.lazyloadxt.js and jquery.lazyloadxt.extra.js. The latter includes better support of video elements, both <video> tags and ones embedded in <iframe> (such as YouTube and Vimeo).

Markup changes are similar to the above: replacing the src attributes with data-src and poster with data-poster is sufficient if you’re using them in a <video> element.

<script src="jquery.lazyloadxt.extra.js"></script>
<iframe data-src="//www.youtube.com/embed/[videocode]?rel=0" width="320" height="240"></iframe>
<video data-poster="/path/to/poster.jpg" width="320" height="240" controls>
   <source data-src="/path/to/video.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
   <source data-src="/path/to/video.ogv" type='video/ogg; codecs="theora, vorbis"'>
</video>
<video data-src="/path/to/video2.mp4" width="320" height="240" controls></video>


The size of the jquery.lazyloadxt.min.js file is 2.3 KB (or 1.3 KB Gzip’ed), and the size of jquery.lazyloadxt.extra.min.js is 2.7 KB (or 1.4 KB Gzip’ed). That’s small enough, especially compared to jQuery and Zepto.


Even though Lazy Load XT requires jQuery, Zepto or DOMtastic, you don’t have to load the full version of any of them. For example, DOMtastic can be built with only a minimal set of modules (attr, class, data, event, selector, type), giving you a 7.9 KB file (or 2.7 KB Gzip’ed) and bringing the total size of both DOMtastic and Lazy Load XT to just 4 KB (Gzip’ed).


We’ve tested Lazy Load XT in the following browsers:

  • Internet Explorer 6 – 11
  • Chrome 1 – 37
  • Firefox 1.5 – 32.0
  • Safari 3 – 7
  • Opera 10.6 – 24.0
  • iOS 5 – 7 (stock browsers)
  • Android 2.3 – 4.4 (stock browsers)
  • Amazon Kindle Fire 2 and HD 8.9 (stock browsers)
  • Opera Mini 7


We have tested Lazy Load XT’s performance on a page with one thousand images and are happy with the results: Scrolling works well even on old Android 2.3 devices.

We also successfully tested various iterations of Lazy Load XT on over one thousand websites for several months in our jQuery Mobile-based Elegance and Flat templates10.


The plugin’s default settings may be modified with the $.lazyLoadXT object:

$.lazyLoadXT.edgeY = 200;
$.lazyLoadXT.srcAttr = 'data-src';

Note that you may change this object at any time: before loading the plugin, between loading and the document-ready event, and after the ready event. (The last option doesn’t affect images that have already been initialized.)

Lazy Load XT supports a lot of options and events, enabling you to integrate other plugins or implement new features. For the full list and details, see Lazy Load XT’s GitHub page11.

AJAX Support

If you use jQuery Mobile with built-in AJAX page loading, then the Lazy Load XT plugin will do all of the magic for you in the pageshow event. In general, you should run the code below to initialize images inside a container with AJAX-loaded content:

$('#container').lazyLoadXT();

Or run this to re-check all images on the page:

$(window).lazyLoadXT();
Extending Lazy Load XT

Lazy Load XT can be extended easily using the oninit, onshow, onload and onerror handlers or the related lazyinit, lazyshow, lazyload and lazyerror events. In this way, you can create amazing add-ons.

Some examples can be found on the GitHub page12, along with usage instructions13. We’ll highlight just a few of them here.

Loading Animation

Customizing the image-loading animation is easy. By default, Lazy Load XT includes spinner14 and fade-in15 animations, but you can use any effects from the Animate.css16 project or any other.

Responsive Images

Lazy Load XT has two add-ons for responsive images17. The first is “srcset,” a polyfill for the srcset attribute (which should be renamed to data-srcset):

<img data-srcset="image-hd.jpg 2x, image-phone.jpg 360w, image-phone-hd.jpg 360w 2x">
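Under the hood, a polyfill like this has to split the candidate string above into URL/descriptor pairs before it can pick the best match for the current screen. A simplified illustration of that parsing step (this is not the add-on’s actual code):

```javascript
// Parse a srcset-style string such as "a.jpg 2x, b.jpg 360w" into
// candidate objects: {url, w (max width, Infinity if absent), x (density)}.
function parseSrcset(srcset) {
  return srcset.split(',').map(function (candidate) {
    var parts = candidate.trim().split(/\s+/);
    var entry = { url: parts[0], w: Infinity, x: 1 };
    parts.slice(1).forEach(function (descriptor) {
      if (/w$/.test(descriptor)) entry.w = parseInt(descriptor, 10);
      if (/x$/.test(descriptor)) entry.x = parseFloat(descriptor);
    });
    return entry;
  });
}
```

A chooser can then compare w against the viewport width and x against window.devicePixelRatio to decide which URL ends up in data-src.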

The second is “picture,” a polyfill for the <picture> tag:

<picture width="640" height="480">
   <br data-src="small320.jpg">
   <br media="(min-width: 321px)" data-src="medium480.jpg">
   <br media="(min-width: 481px)" data-src="large640.jpg">
   <noscript><img src="large640.jpg"></noscript>
   <p>Image caption</p>
</picture>
Page Widgets

Lazy Load XT makes it possible to lazy-load page widgets18 (such as Facebook, Twitter or whatever widget you like). Insert any HTML code in the page using the “widget” add-on when an element becomes visible. Wrap the code in an HTML comment inside of a <div> with an ID attribute, and give the element a data-lazy-widget attribute with the value of that ID:

<!-- Google +1 Button -->
<div data-lazy-widget="gplus" class="g-plusone" data-annotation="inline" data-width="300"></div>
<div id="gplus"><!--
<script>
      var po = document.createElement('script'),
          s = document.getElementsByTagName('script')[0];
      po.type = 'text/javascript'; po.async = true;
      po.src = 'https://apis.google.com/js/platform.js';
      s.parentNode.insertBefore(po, s);
</script>
--></div>

If the data-lazy-widget attribute has an empty value, then the element itself will be used as a wrapper:

<div data-lazy-widget><!-- ... --></div>

Many other add-ons are available, too. They include infinite scrolling, support for background images, loading all images before displaying them (if the browser supports it), and deferring the autoloading of all images.

Is There A Silver Bullet?

Lazy loading images is not a standard browser feature today. Also, no third-party browser extensions exist for such functionality.

One might assume that the lazyload attribute in the “Resource Priorities19” draft specification by Microsoft and Google would do it. However, it has another purpose: to set the background priority for a corresponding resource element (image, video, script, etc.). Thus, if your aim is to load JavaScript or CSS before images, that’s your choice. There is another killer attribute, postpone, which prevents any resource from loading until you set the CSS display property to a value other than none. The good news is that support for the lazyload attribute is in Internet Explorer 11. The bad news is that the postpone attribute has not been implemented yet.

We do not know when or if the draft specification above will ever be fully supported by the major browsers. So, let’s look at the solutions we have now.

Some people have attempted to solve the duplication of the <img> tag in <noscript> tags by keeping only the <noscript> part and processing it with JavaScript. Unfortunately, <noscript> has no content in Internet Explorer, and it is not included in the DOM at all in Android’s stock browser (other browsers may behave similarly).

An alternative would be to use the <script> tag, instead of <noscript>, like so:

<script>function Z() { document.write('<br '); }</script>
<script>Z();</script><img src="path" attributes />

This way, the <img> markup becomes a set of attributes on the <br> tag, and the script transforms the <br> tags into <img data-src> at the document.ready event. But this method requires document.write and is not compatible with AJAX-based navigation. We have implemented this method in the script add-on for Lazy Load XT, but the standard way using data attributes seems clearer.

Finally, Mobify has an elegant Capturing API20 (see the recent review on Smashing Magazine21) that transforms HTML into plain text using the following code and then processes it with JavaScript:

document.write('<plaintext style="display:none">');

Unfortunately, this solution has drawbacks of its own: It is quite slow, because the page has to be reprocessed by a JavaScript-based HTML parser. Also, it is not clear how to combine it with AJAX navigation, and it is not guaranteed to work correctly in all browsers, because the <plaintext> tag was deprecated in HTML 2. It actually doesn’t work in W3C’s Amaya browser and on some feature phones (such as the Nokia E70). Nevertheless, these are edge cases, and you may use Mobify.js and Lazy Load XT simultaneously, although that is beyond the scope of this article.

Comparing Lazy Load Solutions

Lazy Load XT and the original Lazy Load are far from the only solutions around. Below, we compare most of the major existing ones:

| Feature | LazyLoad for jQuery22 | Lazy Load XT23 | Unveil24 | Lazy25 (by Eisbehr) | Responsive Lazy Loader26 | bLazy27 | Lazyload28 (by VVO) | Echo29 |
|---|---|---|---|---|---|---|---|---|
| Current version | 1.9.3 | 1.0.5 | 1.3.0 | 0.3.7 | 0.1.7 | 1.2.2 | 2.1.3 | 1.5.0 |
| Dependencies | jQuery | jQuery, Zepto or DOMtastic | jQuery or Zepto | jQuery | jQuery | none | none | none |
| Size (Gzip’ed) | 1.19 KB | 1.31 KB (or 1.45 KB with extras) | 338 B | 1.45 KB | 1.23 KB | 1.24 KB | 1.01 KB | 481 B |
| Skips images above the fold | yes | yes | yes | no | yes | yes | no | yes |
| Loading effects | yes | yes | yes (with custom code) | yes | yes (with custom code) | yes (with custom code) | no | no |
| Responsive images | no | yes (via plugin) | yes | no | yes | yes | yes (with custom code) | no |
| Supports scroll containers | yes | yes | no | yes | yes | no | yes | no |
| Supports horizontal scrolling | yes | yes | no | no | yes | yes | yes | yes |
| Throttling | no | yes | no | yes | no | yes | yes | yes |
| Lazy background images | yes | yes (via plugin) | no | yes | no | no | no | no |
| Lazy <video> tag | no | yes | no | no | no | no | no | no |
| Lazy iframes | no | yes | no | no | no | no | no | no |
| Supports Opera Mini | no | yes | no | no | no | no | no | no |


The total size of media elements on the average web page is increasing constantly. Yet, especially on mobile devices, performance bottlenecks remain, which stem from bandwidth issues, widely varying network latency, and limitations on memory and the CPU. We need solutions for better and faster browsing experiences that work across all devices and browsers.

While no single lazy-load standard exists so far, we welcome you to try Lazy Load XT, especially if lazy-loaded video or other media is an important part of your website’s functionality.

Download and Contribute

Bug reports, patches and feature requests are welcome.

(al, ml)


  1. http://www.appelsiini.net/projects/lazyload
  2. http://httparchive.org/interesting.php
  3. http://www.mobilejoomla.com/
  4. http://ress.io/
  5. https://github.com/ressio/lazy-load-xt
  6. http://jquerymobile.com/
  7. http://jquery.com/
  8. http://zeptojs.com/
  9. http://webpro.github.io/DOMtastic/
  10. http://www.mobilejoomla.com/templates.html
  11. https://github.com/ressio/lazy-load-xt#options
  12. http://ressio.github.io/lazy-load-xt
  13. https://github.com/ressio/lazy-load-xt/#extendability
  14. https://github.com/ressio/lazy-load-xt/#spinner
  15. https://github.com/ressio/lazy-load-xt/#fade-in-animation
  16. https://github.com/daneden/animate.css
  17. https://github.com/ressio/lazy-load-xt/#responsive-images
  18. https://github.com/ressio/lazy-load-xt/#widgets
  19. https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourcePriorities/Overview.html
  20. https://hacks.mozilla.org/2013/03/capturing-improving-performance-of-the-adaptive-web/
  21. http://www.smashingmagazine.com/2013/10/24/automate-your-responsive-images-with-mobify-js/
  22. http://www.appelsiini.net/projects/lazyload
  23. http://ressio.github.io/lazy-load-xt/
  24. http://luis-almeida.github.io/unveil/
  25. http://jquery.eisbehr.de/lazy/
  26. https://github.com/jetmartin/responsive-lazy-loader
  27. http://dinbror.dk/blazy/
  28. http://vvo.github.io/lazyload/
  29. http://toddmotto.com/echo-js-simple-javascript-image-lazy-loading/
  30. http://ressio.github.io/lazy-load-xt/
  31. https://github.com/ressio/lazy-load-xt
  32. https://raw.github.com/ressio/lazy-load-xt/master/dist/jquery.lazyloadxt.min.js
  33. https://raw.github.com/ressio/lazy-load-xt/master/dist/jquery.lazyloadxt.extra.min.js
  34. http://ressio.github.io/lazy-load-xt/demo/

The post Redefining Lazy Loading With Lazy Load XT appeared first on Smashing Magazine.



Design Accessibly, See Differently: Color Contrast Tips And Tools

When you browse your favorite website or check the latest version of your product on your device of choice, take a moment to look at it differently. Step back from the screen. Close your eyes slightly so that your vision is a bit clouded by your eyelashes.

  • Can you still see and use the website?
  • Are you able to read the labels, fields, buttons, navigation and small footer text?
  • Can you imagine how someone who sees differently would read and use it?

In this article, I’ll share one aspect of design accessibility: making sure that the look and feel (the visual design of the content) are sufficiently inclusive of differently sighted users.

Web page viewed with NoCoffee low-vision simulation.

I am a design consultant on PayPal’s accessibility team. I assess how our product designs measure up to the Web Content Accessibility Guidelines (WCAG) 2.0, and I review our company’s design patterns and best practices.

I created our “Designers’ Accessibility Checklist,” and I will cover one of the most impactful guidelines on the checklist in this article: making sure that there is sufficient color contrast for all content. I’ll share the strategies, tips and tools that I use to help our teams deliver designs that most people can see and use without having to customize the experiences.

Our goal is to make sure that all visual designs meet the minimum color-contrast ratio for normal and large text on a background, as described in the WCAG 2.0, Level AA, “Contrast (Minimum): Understanding Success Criterion 1.4.3.”

Who benefits from designs that have sufficient contrast? Quoting from the WCAG’s page:

The 4.5:1 ratio is used in this provision to account for the loss in contrast that results from moderately low visual acuity, congenital or acquired color deficiencies, or the loss of contrast sensitivity that typically accompanies aging.
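That 4.5:1 figure comes straight from the WCAG formula: the contrast ratio is (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker color. It is easy to compute yourself:

```javascript
// Relative luminance of an [r, g, b] color (channels 0-255), per WCAG 2.0.
function luminance(rgb) {
  var c = rgb.map(function (v) {
    v /= 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
}

// WCAG contrast ratio: (lighter luminance + 0.05) / (darker + 0.05).
// Level AA requires at least 4.5:1 for normal text and 3:1 for large text.
function contrastRatio(a, b) {
  var l1 = luminance(a), l2 = luminance(b);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}
```

Black on white yields the maximum ratio, 21:1; the gray #767676 on white comes out just above 4.5:1, which is why it is often cited as the lightest gray that passes Level AA for normal text on a white background.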

As an accessibility consultant, I’m often asked how many people with disabilities use our products. Website analytics do not reveal this information. Let’s estimate how many people could benefit from designs with sufficient color contrast by reviewing the statistics:

  • 15% of the world’s population have some form of disability4, which includes conditions that affect seeing, hearing, motor abilities and cognitive abilities.
  • About 4% of the population have low vision, whereas 0.6% are blind.
  • 7 to 12% of men have some form of color-vision deficiency (color blindness), and less than 1% of women do.
  • Low-vision conditions increase with age, and half of people over the age of 50 have some degree of low-vision condition.
  • Worldwide, the fastest-growing population is 60 years of age and older5.
  • Over the age of 40, almost everyone finds that they need reading glasses or bifocals to clearly see small objects or text, a condition caused by the natural aging process, called presbyopia6.

Let’s estimate that 10% of the world population would benefit from designs that are easier to see. Multiply that by the number of customers or potential customers who use your website or application. For example, out of 2 million online customers, 200,000 would benefit.

Some age-related low-vision conditions7 include the following:

  • Macular degeneration
    Up to 50% of people are affected by age-related vision loss.
  • Diabetic retinopathy
    In people with diabetes, leaking blood vessels in the eyes can cloud vision and cause blind spots.
  • Cataracts
    Cataracts cloud the lens of the eye and decrease visual acuity.
  • Retinitis pigmentosa
    This inherited condition gradually causes a loss of vision.

All of these conditions reduce sensitivity to contrast, and in some cases reduce the ability to distinguish colors.

Color-vision deficiencies, also called color-blindness, are mostly inherited and can be caused by side effects of medication and age-related low-vision conditions.

Here are the types of color-vision deficiencies8:

  • Deuteranopia
    This is the most common and entails a reduced sensitivity to green light.
  • Protanopia
    This is a reduced sensitivity to red light.
  • Tritanopia
    This is a reduced sensitivity to blue light, but not very common.
  • Achromatopsia
    People with this condition cannot see color at all, but it is not very common.

Reds and greens or colors that contain red or green can be difficult to distinguish for people with deuteranopia or protanopia.

Experience Seeing Differently

Creating a checklist and asking your designers to use it is easy, but in practice how do you make sure everyone follows the guidelines? We’ve found it important for designers not only to intellectually understand the why, but to experience for themselves what it is like to see differently. I’ve used a couple of strategies: immersing designers in interactive experiences through our Accessibility Showcase, and showing what designs look like using software simulations.

In mid-2013, we opened our PayPal Accessibility Showcase9 (video). Employees get a chance to experience first-hand what it is like for people with disabilities to use our products by interacting with web pages using goggles and/or assistive technology. We require that everyone who develops products participates in a tour. The user scenarios for designing with sufficient color contrast include wearing goggles that simulate conditions of low or partial vision and color deficiencies. Visitors try out these experiences on a PC, Mac or tablet. For mobile experiences, visitors wear the goggles and use their own mobile devices.

Fun fact: One wall in the showcase was painted with magnetic paint. The wall contains posters, messages and concepts that we want people to remember. At the end of the tour, I demonstrate vision simulators on our tablet. I view the message wall with the simulators to emphasize the importance of sufficient color contrast.

Showcase visitors wear goggles that simulate low-vision and color-blindness conditions.
Some of the goggles used in the Accessibility Showcase.

Software Simulators

Mobile Apps

Free mobile apps are available for iOS and Android devices:

  • Chromatic Vision Simulator
    Kazunori Asada’s app simulates three forms of color deficiencies: protanope (protanopia), deuteranope (deuteranopia) and tritanope (tritanopia). You can view and then save simulations using the camera feature, which takes a screenshot in the app. (Available for iOS and Android.)
  • VisionSim
    The Braille Institute’s app simulates a variety of low-vision conditions and provides a list of causes and symptoms for each condition. You can view and then save simulations using the camera feature, which takes a screenshot in the app. (Available for iOS and Android.)

Chromatic Vision Simulator

The following photos show orange and green buttons viewed through the Chromatic Vision Simulator:

Seen through Chromatic Vision Simulator, the green and orange buttons show normal (C), protanope (P), deuteranope (D) and tritanope (T).

This example highlights the importance of another design accessibility guideline: Do not use color alone to convey meaning. If these buttons were online icons representing a system’s status (such as up or down), some people would have difficulty understanding it because there is no visible text and the shapes are the same. In this scenario, include visible text (i.e. text labels), as shown in the following example:

The green and orange buttons are viewed in Photoshop with deuteranopia soft proof and normal (text labels added).

Mobile Device Simulations

Checking for sufficient color contrast becomes even more important on mobile devices. Viewing mobile applications through VisionSim or Chromatic Vision Simulator is easy if you have two mobile phones. View the mobile app that you want to test on the second phone running the simulator.

If you only have one mobile device, you can do the following:

  1. Take screenshots of the mobile app on the device using the built-in camera.
  2. Save the screenshots to a laptop or desktop.
  3. Open and view the screenshots on the laptop, and use the simulators on the mobile device to view and save the simulations.

How’s the Weather in Cupertino?

The following example highlights the challenges of using a photograph as a background while making essential information easy to see. Notice that the large text and bold text are easier to see than the small text and small icons.

The Weather mobile app, viewed with Chromatic Vision Simulator, shows normal, deuteranope, protanope and tritanope simulations.

Low-Vision Simulations

Using the VisionSim app, you can simulate macular degeneration, diabetic retinopathy, retinitis pigmentosa and cataracts.

The Weather mobile app is being viewed with the supported condition simulations.

Adobe Photoshop

PayPal’s teams use Adobe Photoshop to design the look and feel of our user experiences. To date, a color-contrast ratio checker or tester is not built into Photoshop. But designers can use a couple of helpful features in Photoshop to check their designs for sufficient color contrast:

  • Convert designs to grayscale by going to “Image” → “Adjustments” → “Grayscale.”
  • Simulate color-blindness conditions by going to “View” → “Proof Setup” → “Color Blindness” and choosing the protanopia or deuteranopia type. Adobe provides soft-proofs for color blindness.


If you’re designing with gradient backgrounds, verify that the color-contrast ratio passes for the text color and background color on both the lightest and darkest part of the gradient covered by the content or text.

In the following example of buttons, the first button has white text on a background with an orange gradient, which does not meet the minimum color-contrast ratio. A couple of suggested improvements are shown:

  • add a drop-shadow color that passes (center button),
  • change the text to a color that passes (third button).

Checking in Photoshop with the grayscale and deuteranopia proof, the modified versions with the drop shadow and dark text are easier to read than the white text.

If you design in sizes larger than actual production sizes, make sure to check how the design will appear in the actual web page or mobile device.

Button with gradients: normal view; view in grayscale; and as a proof, deuteranopia.

In the following example of a form, the body text and link text pass the minimum color-contrast ratio for both the white and the gray background. I advise teams to always check the color contrast of text and links against all background colors that are part of the experience.

Even though the “Sign Up” link passes, if we view the experience in grayscale or with proof deuteranopia, distinguishing that “Sign Up” is a link might be difficult. To improve the affordance of “Sign Up” as a link, underline the link or link the entire phrase, “New to PayPal? Sign Up.”

Form example: normal view; in Photoshop, a view in grayscale; and as a proof, deuteranopia.

Because red and green can be more difficult to distinguish for people with conditions such as deuteranopia and protanopia, should we avoid using them? Not necessarily. In the following example, a red minus sign (“-”) indicates purchasing or making a payment. Money received or refunded is indicated by a green plus sign (“+”). Viewing the design with proof, deuteranopia, the colors are not easy to distinguish, but the shapes are legible and unique. Next to the date, the description describes the type of payment. Both shape and content provide context for the information.

Also shown in this example, the rows for purchases and refunds alternate between white and light-gray backgrounds. If the same color text is used for both backgrounds, verify that all of the text colors pass for both white and gray backgrounds.

Normal view and as a proof, deuteranopia: Check the text against the alternating background colors.

In some applications, form fields and/or buttons may be disabled until information has been entered by the user. Our design guidance does not require disabled elements to pass, in accordance with the WCAG 2.0’s “Contrast (Minimum): Understanding Success Criterion 1.4.3”:

Incidental: Text or images of text that are part of an inactive user interface component,… have no contrast requirement.

In the following example of a mobile app’s form, the button is disabled until a phone number and PIN have been entered. The text labels for the fields are a very light gray over a white background, which does not pass the minimum color-contrast ratio.

If the customer interprets that form elements with low contrast are disabled, would they assume that the entire form is disabled?

Mobile app form showing disabled fields and button (left) and then enabled (right).

The same mobile app form is shown in a size closer to what I see on my phone in the following example. At a minimum, the text color needs to be changed or darkened to pass the minimum color-contrast ratio for normal body text and to improve readability.

To help distinguish between labels in fields and user-entered information, try to explore alternative visual treatments of form fields. Consider reversing foreground and background colors or using different font styles for labels and for user-entered information.

Mobile app form example: normal, grayscale and proof deuteranopia.

NoCoffee Vision Simulator for Chrome

NoCoffee Vision Simulator can be used to simulate color-vision deficiencies and low-vision conditions on any pages that are viewable in the Chrome browser. Using the “Color Deficiency” setting “achromatopsia,” you can view web pages in grayscale.

The following example shows the same photograph (featuring a call to action) viewed with some of the simulations available in NoCoffee. The message and call to action are separated from the background image by a practically opaque black container. This improves readability of the message and call to action. Testing the color contrast of the blue color in the headline against solid black passes for large text. Note that the link “Mobile” is not as easy to see because the blue does not pass the color-contrast standard for small body text. Possible improvements could be to change the link color to white and underline it, and/or make the entire phrase “Read more about Mobile” a link.

Simulating achromatopsia (no color), deuteranopia and protanopia using NoCoffee.
Simulating low visual acuity, diabetic retinopathy, macular degeneration and low visual acuity plus retinitis pigmentosa, using NoCoffee.

Using Simulators

Simulators are useful tools to visualize how a design might be viewed by people who are aging, have low-vision conditions or have color-vision deficiencies.

For design reviews, I use the simulators to mock up a design in grayscale, and I might use color-blindness filters to show designers possible problems with color contrast. Some of the questions I ask are:

  • Is anything difficult to read?
  • Is the call to action easy to find and read?
  • Are links distinguishable from other content?

After learning how to use simulators to build empathy and to see their designs differently, I ask designers to use tools to check color contrast to verify that all of their designs meet the minimum color-contrast ratio of the WCAG 2.0 AA. The checklist includes a couple of tools they can use to test their designs.

Color-Contrast Ratio Checkers

The tools we cite in the designers’ checklist are the WebAIM Color Contrast Checker and the TPG Colour Contrast Analyser.

There are many tools to check color contrast, including ones that check live products. I’ve kept the list short to make it easy for designers to know what to use and to allow for consistent test results.

Our goal is to meet the WCAG 2.0 AA color-contrast ratio, which is 4.5:1 for normal text and 3:1 for large text.
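The checkers discussed below implement the WCAG 2.0 formula for these ratios. As a minimal sketch (a hypothetical helper, not part of any tool named in this article), the calculation gamma-expands each sRGB channel, weights the channels into a relative luminance, and divides the lighter luminance by the darker one, with a 0.05 offset on both:

```java
public class ContrastCheck {

    // Gamma-expand one 8-bit sRGB channel, per the WCAG 2.0 definition.
    static double channel(int c8) {
        double c = c8 / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Relative luminance: weighted sum of the expanded R, G and B channels.
    static double luminance(int rgb) {
        double r = channel((rgb >> 16) & 0xFF);
        double g = channel((rgb >> 8) & 0xFF);
        double b = channel(rgb & 0xFF);
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    // Contrast ratio is (lighter + 0.05) / (darker + 0.05).
    static double contrastRatio(int fg, int bg) {
        double l1 = luminance(fg), l2 = luminance(bg);
        double lighter = Math.max(l1, l2), darker = Math.min(l1, l2);
        return (lighter + 0.05) / (darker + 0.05);
    }

    public static void main(String[] args) {
        System.out.printf("#333333 on white: %.2f%n", contrastRatio(0x333333, 0xFFFFFF)); // ~12.63, passes AA
        System.out.printf("#666666 on white: %.2f%n", contrastRatio(0x666666, 0xFFFFFF)); // ~5.74, passes AA
        System.out.printf("#999999 on white: %.2f%n", contrastRatio(0x999999, 0xFFFFFF)); // ~2.85, fails AA
    }
}
```

These are the same gray-on-white combinations tested with the WebAIM checker below, and the sketch reproduces its pass/fail results.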

What are the minimum sizes for normal text and large text? The guidance provides recommendations on size ratios in the WCAG’s Contrast (Minimum): Understanding Success Criterion 1.4.3, but not a rule for a minimum size for body text. As noted in the WCAG’s guidance, thin decorative fonts might need to be larger and/or bold.

Testing Color-Contrast Ratio

You should test:

  • early in the design process;
  • when creating a visual design specification for any product or service (this documents all of the color codes and the look and feel of the user experience);
  • all new designs that are not part of an existing visual design guideline.

Test Hexadecimal Color Codes for Web Designs

Let’s use the WebAIM Color Contrast Checker to test sample body-text colors on a white background (#FFFFFF):

  • dark-gray text (#333333).
  • medium-gray text (#666666).
  • light-gray text (#999999).

We want to make sure that normal body text passes the WCAG 2.0 AA. Note that light gray (#999999) does not pass on a white background (#FFFFFF).

Test dark-gray, medium-gray and light-gray using the WebAim Color Contrast Checker.

In the tool, you can modify the light gray (#999999) to find a color that does pass the AA. Select the “Darken” option to slightly change the color until it passes. By clicking the color field, you will have more options, and you can change color and luminosity, as shown in the second part of this example.

In the WebAim Color Contrast Checker, modify the light gray using the “Darken” option, or use the color palette to find a color that passes.

Tabular information may be designed with alternating white and gray backgrounds to improve readability. Let’s test medium-gray text (#666666) and light-gray text (#757575) on a gray background (#E6E6E6).

Note that with the same background, the medium gray passes, but the lighter gray passes only for large text. In this case, use medium gray for body text on white or gray backgrounds. Use the lighter gray only for large text, such as headings, on white and gray backgrounds.

Test light-gray and medium-gray text on a gray background.

Test RGB Color Codes

For mobile applications, designers might use RGB color codes to specify visual designs for engineering. You can use the TPG Colour Contrast Checker; you will need to install either the PC or Mac version and run it side by side with Photoshop.

Let’s use the Colour Contrast Checker to test medium-gray text (102 102 102 in RGB and #666666 in hexadecimal) and light-gray text (#757575 in hexadecimal) on a gray background (230 230 230 in RGB and #E6E6E6 in hexadecimal).

  1. Open the Colour Contrast Checker application.
  2. Select “Options” → “Displayed Color Values” → “RGB.”
  3. Under “Algorithm,” select “Luminosity.”
  4. Enter the foreground and background colors in RGB: 102 102 102 for foreground and 230 230 230 for background. Mouse click or tab past the fields to view the results. Note that this combination passes for both normal text and large text (AA).
  5. Select “Show details” to view the hexadecimal color values and information about both AA and AAA requirements.
Colour Contrast Analyser, and color wheel to modify colors.

In our example, light-gray text (117 117 117 in RGB) on a gray background (230 230 230 in RGB) does not meet the minimum AA contrast ratio for body text. To modify the colors, view the color wheels by clicking in the “Color” select box to modify the foreground or background. Or you can select “Options” → “Show Color Sliders,” as shown in the example.

Colour Contrast Analyser, with RGB codes. Show color sliders to modify any color that does not meet minimum AA guidelines.

In most cases, minor adjustments to colors will meet the minimum contrast ratio, and comparisons before and after will show how better contrast enables most people to see and read more easily.

Best Practices

Test for color-contrast ratio, and document the styles and color codes used for all design elements. Create a visual design specification that includes the following:

  • typography for all textual elements, including headings, text links, body text and formatted text;
  • icons and glyphs and text equivalents;
  • form elements, buttons, validation and system error messaging;
  • background color and container styles (making sure text on these backgrounds all pass);
  • the visual treatments for disabled links, form elements and buttons (which do not need to pass a minimum color-contrast ratio).

Documenting visual guidelines for developers brings several benefits:

  • Developers don’t have to guess what the designers want.
  • Designs can be verified against the visual design specification during quality testing cycles, by engineers and designers.
  • A reference point that meets design accessibility guidelines for color contrast can be shared and leveraged by other teams.


If you are a designer, try out the simulators and tools on your next design project. Take time to see differently. One of the stellar designers who reviewed my checklist told me a story about using Photoshop’s color-blindness proofs. On his own, he used the proofs to refine the colors used in a design for his company’s product. When the redesigned product was released, his CEO thanked him because it was the first time he was able to see the design. The CEO shared that he was color-blind.

In many cases, you may be unaware that your colleague, leader or customers have moderate low-vision or color-vision deficiencies. If meeting the minimum color-contrast ratio for a particular design element is difficult, take the challenge of thinking beyond color. Can you innovate so that most people can pick up and use your application without having to customize it?

If you are responsible for encouraging teams to build more accessible web or mobile experiences, be prepared to use multiple strategies:

  • Use immersive experiences to engage design teams and gain empathy for people who see differently.
  • Show designers how their designs might look using simulators.
  • Test designs that have low contrast, and show how slight modifications to colors can make a difference.
  • Encourage designers to test, and document visual specifications early and often.
  • Incorporate accessible design practices into reusable patterns and templates both in the code and the design.

Priorities and deadlines make it challenging for teams to deliver on all requests from multiple stakeholders. Be patient and persistent, and continue to engage with teams to find strategies to deliver user experiences that are easier to see and use by more people out of the box.


Low-Vision Goggles and Resources

(hp, al, il, ml)


  1. 1 http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevis-large.png
  2. 2 http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevis-large.png
  3. 3 http://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html
  4. 4 http://www.who.int/mediacentre/factsheets/fs352/en/
  5. 5 http://www.un.org/esa/population/publications/worldageing19502050/
  6. 6 http://www.mayoclinic.org/diseases-conditions/presbyopia/basics/causes/con-20032261
  7. 7 https://www.nei.nih.gov/healthyeyes/aging_eye.asp
  8. 8 http://webaim.org/articles/visual/colorblind
  9. 9 https://www.youtube.com/watch?feature=player_embedded&v=7MyHZofcNnk
  10. 10 https://itunes.apple.com/us/app/chromatic-vision-simulator/id389310222?mt=8
  11. 11 https://play.google.com/store/apps/details?id=asada0.android.cvsimulator&hl=en
  12. 12 https://itunes.apple.com/us/app/visionsim-by-braille-institute/id525114829?mt=8
  13. 13 https://play.google.com/store/apps/details?id=com.BrailleIns.VisionSim&hl=en
  14. 14 http://www.smashingmagazine.com/wp-content/uploads/2014/10/CVSbuttonsOG-large.jpg
  15. 15 http://www.smashingmagazine.com/wp-content/uploads/2014/10/CVSbuttonsOG-large.jpg
  16. 16 http://www.smashingmagazine.com/wp-content/uploads/2014/10/textonbuttons-large.png
  17. 17 http://www.smashingmagazine.com/wp-content/uploads/2014/10/textonbuttons-large.png
  18. 18 http://www.smashingmagazine.com/wp-content/uploads/2014/10/weatherCVS-large.png
  19. 19 http://www.smashingmagazine.com/wp-content/uploads/2014/10/weatherCVS-large.png
  20. 20 http://www.smashingmagazine.com/wp-content/uploads/2014/10/weathervisionsim-large.png
  21. 21 http://www.smashingmagazine.com/wp-content/uploads/2014/10/weathervisionsim-large.png
  22. 22 http://help.adobe.com/en_US/creativesuite/cs/using/WS3F71DA01-0962-4b2e-B7FD-C956F8659BB3.html#WS473A333A-7F61-4aba-8F67-5553208E349C
  23. 23 http://www.smashingmagazine.com/wp-content/uploads/2014/10/buttongradients-large.png
  24. 24 http://www.smashingmagazine.com/wp-content/uploads/2014/10/buttongradients-large.png
  25. 25 http://www.smashingmagazine.com/wp-content/uploads/2014/10/logindev-large.png
  26. 26 http://www.smashingmagazine.com/wp-content/uploads/2014/10/logindev-large.png
  27. 27 http://www.smashingmagazine.com/wp-content/uploads/2014/10/rowsandicons-large.png
  28. 28 http://www.smashingmagazine.com/wp-content/uploads/2014/10/rowsandicons-large.png
  29. 29 http://www.w3.org/TR/2014/NOTE-UNDERSTANDING-WCAG20-20140311/visual-audio-contrast-contrast.html
  30. 30 http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields-large.png
  31. 31 http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields-large.png
  32. 32 http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields_bwcc-large.png
  33. 33 http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields_bwcc-large.png
  34. 34 https://chrome.google.com/webstore/search/NoCoffee%20Vision%20Simulator?hl=en&gl=US
  35. 35 http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeecolorsim-large.png
  36. 36 http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeecolorsim-large.png
  37. 37 http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevisionsims-large.png
  38. 38 http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevisionsims-large.png
  39. 39 http://webaim.org/resources/contrastchecker
  40. 40 http://paciellogroup.com/resources/contrastAnalyser
  41. 41 http://www.w3.org/TR/2014/NOTE-UNDERSTANDING-WCAG20-20140311/visual-audio-contrast-contrast.html
  42. 42 http://webaim.org/resources/contrastchecker
  43. 43 http://www.smashingmagazine.com/wp-content/uploads/2014/10/colorcontrastgrays-large.png
  44. 44 http://www.smashingmagazine.com/wp-content/uploads/2014/10/colorcontrastgrays-large.png
  45. 45 http://www.smashingmagazine.com/wp-content/uploads/2014/10/modifylightgray-large.png
  46. 46 http://www.smashingmagazine.com/wp-content/uploads/2014/10/modifylightgray-large.png
  47. 47 http://www.smashingmagazine.com/wp-content/uploads/2014/10/gray_graybackground-large.png
  48. 48 http://www.smashingmagazine.com/wp-content/uploads/2014/10/gray_graybackground-large.png
  49. 49 http://paciellogroup.com/resources/contrastAnalyser
  50. 50 http://www.smashingmagazine.com/wp-content/uploads/2014/10/ccanalysercolorwheel-large.png
  51. 51 http://www.smashingmagazine.com/wp-content/uploads/2014/10/ccanalysercolorwheel-large.png
  52. 52 http://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html
  53. 53 http://www.w3.org/TR/2014/NOTE-UNDERSTANDING-WCAG20-20140311/visual-audio-contrast-contrast.html
  54. 54 https://www.paypal-engineering.com/2014/03/13/get-a-sneak-peek-into-paypal-accessibility-showcase/
  55. 55 http://www.adobe.com/accessibility/products/photoshop.html
  56. 56 http://help.adobe.com/en_US/creativesuite/cs/using/WS3F71DA01-0962-4b2e-B7FD-C956F8659BB3.html#WS473A333A-7F61-4aba-8F67-5553208E349C
  57. 57 http://webaim.org
  58. 58 http://webaim.org/resources/contrastchecker/
  59. 59 http://wave.webaim.org
  60. 60 http://webaim.org/articles/visual/colorblind
  61. 61 http://www.paciellogroup.com/resources/contrastAnalyser/
  62. 62 https://itunes.apple.com/us/app/chromatic-vision-simulator/id389310222?mt=8
  63. 63 https://play.google.com/store/apps/details?id=asada0.android.cvsimulator&hl=en
  64. 64 https://itunes.apple.com/us/app/visionsim-by-braille-institute/id525114829?mt=8
  65. 65 https://play.google.com/store/apps/details?id=com.BrailleIns.VisionSim&hl=en
  66. 66 https://chrome.google.com/webstore/search/NoCoffee%20Vision%20Simulator?hl=en&gl=US
  67. 67 http://accessgarage.wordpress.com/2013/02/09/458/
  68. 68 https://www.nei.nih.gov/healthyeyes/aging_eye.asp
  69. 69 http://www.who.int/mediacentre/factsheets/fs352/en/
  70. 70 http://www.mayoclinic.org/diseases-conditions/presbyopia/basics/causes/con-20032261
  71. 71 http://www.un.org/esa/population/publications/worldageing19502050/
  72. 72 http://www.lowvisionsimulationkit.com
  73. 73 http://www.lowvisionsimulators.com/find-the-right-low-vision-simulator

The post Design Accessibly, See Differently: Color Contrast Tips And Tools appeared first on Smashing Magazine.



Mobile Design Pattern: Inventory-Based Discrete Slider

Sliders are cool. When they’re done well, customers love to interact with them. When they’re not done well, they can cause a lot of frustration (not to mention lost sales) by standing between your customers and what they want. And getting them wrong is surprisingly easy.

In this article, we will present a solution, including the design and code, for a new type of Android slider to address common problems, along with a downloadable Android mini-app for you to try out. It’s a deep dive into sliders based on a chapter in Android Design Patterns. The experimental inventory-based slider we will look at would be at home in any application that asks for a price, a size, or any other faceted input within a widely distributed range.

Why Sliders?

Sliders are intuitive. They provide affordance, a quality that makes a control right for a particular task. They just feel right for dialing a value within a range. Sliders translate well from the physical world to touchscreens, where they look great and are easy to manipulate, without taking up a lot of space. Dual sliders in particular are great for limiting search filters and form values to a set range.

In the physical world, sliders serve a function similar to twist knobs. However, knobs are hard to “turn” on touchscreens, and they usually take up more space than sliders. For touchscreens, sliders are better.

Types Of Sliders

Sliders come in two types: single and double. Single sliders are best for entering one value. Dual sliders are great for searching within a range of values.

There are also two kinds of adjustments: continuous and discrete. Continuous adjustments are for indeterminate values in a range, such as a price or temperature. Discrete adjustments are for predefined values (such as clothing sizes). Both single sliders and dual sliders can take either kind of adjustment. Let’s look at some examples.

Zillow has customers use two single sliders with continuous adjustment to set a price range.

The Zillow app uses two single sliders, both with continuous adjustment.

The real-estate app Trulia uses two dual sliders with continuous adjustments:

The Trulia app uses two dual sliders with continuous adjustments.

Continuous adjustments for prices make sense. Why? Because price ranges are continuous. But they do allow for more precision than most shoppers care about. (A price difference of one cent is unlikely to make a customer reconsider a purchase.)

Discrete adjustments are different. They let you choose values, but only within predefined (i.e. discrete) increments. Facets like shoe size consist of discrete values; in the US and Western Europe, shoes are typically incremented in half-sizes: 6, 6.5, 7 and so on. You can’t buy shoes in a size of 6.25, so providing a control for this level of precision would not make sense.
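The half-size constraint can be enforced directly in a slider's value mapping. The following sketch (hypothetical names; an assumed US range of size 6 to 12) converts a raw thumb position into the nearest sellable half-size, so a user can never land on something like 6.25:

```java
public class ShoeSizeSnap {
    // Assumed range for this sketch: US sizes 6 through 12.
    static final double MIN = 6.0, MAX = 12.0;

    // t is the raw thumb position in [0, 1]; the result is snapped
    // to the nearest half-size stop (6, 6.5, 7, ...).
    static double snap(double t) {
        double raw = MIN + t * (MAX - MIN);
        return Math.round(raw * 2) / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(snap(0.0));  // 6.0
        System.out.println(snap(0.04)); // 6.0 (raw 6.24 snaps to the nearest half-size)
        System.out.println(snap(1.0));  // 12.0
    }
}
```

Doubling, rounding and halving is a common trick for snapping to half-unit increments; for other facets, replace the factor of 2 with the number of stops per unit.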

One way to understand the difference is that single sliders with a low count of discrete values are similar to stepper controls: You can dial the value you want, but only from a predefined set.

This is an example of a dual slider with discrete adjustment stops.

Experimental Slider Patterns

Sliders with histograms and sliders based on inventory counts are two great experimental patterns that are variations on the standard slider. It’s unfortunate that they are not more common because they solve many of the problems that sliders can cause for users. We explain the problems with regular sliders in the “Caution” section below and detail the experimental solution in the “Super-Sliders” section (right after “Caution”).

So, at this point, you might be thinking, “Sliders sound great. What’s the downside?” Glad you asked.


Even the best patterns can go bad. Like Harvey Dent, the once loyal ally of Batman and Gotham City’s impeccably ethical district attorney, most things have a dark side. There’s a slippery slope between delight and dismay, and much like the Two-Face character who Dent becomes, sliders can be helpful or hateful. It all depends on how they’re implemented.

Here’s how to sidestep slider problems and keep your customers happy.

Make Sure Reasonable Values Can Be Entered Easily

Kayak has a continuous dual slider for filtering hotel prices (see screenshots below). To get a hotel in Los Angeles that you can afford on a humble mobile UX design consultant’s salary, you must place the pegs right on top of one another. This adjustment is anything but precise. For wide ranges, consider using a slider based on inventory counts, as explained in the “Super-Sliders” section coming up.

The continuous price slider fails to dial a reasonable hotel price in Los Angeles on Kayak’s app.

Show the Range

Speaking of range, showing the actual range of prices available in an entire collection is a great idea, as shown in the Kayak screenshots above ($38 to $587), instead of using arbitrary numbers such as $0 and max. Neither Zillow nor Trulia shows the true maximum and minimum associated with their local home inventory.

Imagine how useful these sliders would be if they stated from the beginning that they ranged between $476,000 and $3,234,700. Showing the range also helps to avoid dead zones, such as when you’re looking for a home in San Francisco priced below $476,000, which would yield zero results. Be aware of how filtering affects the inventory; setting the range for the overall collection without applying the filters is best.

Don’t Cover the Numbers

As the customer adjusts the slider, the values should appear above the pegs, where the user’s fingers would not cover them. Placing the numbers below or to the side of the slider is not as useful. Kayak’s slider (shown above) is good in this regard: The range is covered while the customer adjusts the slider, but the filter’s actual value is not, which is about the best you can do on a mobile device.

Opt for a Slider With Discrete Positions

Continuous sliders are sexy in principle, because you can dial an exact number and get just the inventory you want. But the reality is that sliders are hard to adjust precisely — both in the physical world and on touch devices. That’s why you almost never see a slider for volume adjustment on a stereo. Ironically, the larger the device, the harder it seems to be to adjust the slider precisely. This is Fitts’ law in action: The time required for an action depends on the distance and size of the target. In other words, adjusting a tiny peg in the middle of a large tablet is difficult.

Regardless of the screen’s size, adjusting a continuous slider precisely while being bumped around on a train is hard. (You have permission to refer to this hereafter as Nudelman’s law if you wish.)

Continuous dual sliders also make it easy to over-constrain the range. For example, creating a continuous slider that enables the customer to dial a price of $45.50 to $46.10 might yield zero results and would not serve the customer well. On the other hand, sliders with discrete positions (i.e. stops) are much easier to adjust. The chance of dialing a range that is too small is also less.

Super-Sliders Save The Day

How can you implement a dual slider so that the user is able to input a price range without running into the dreaded problem of zero results mentioned in the “Caution” section above? Here’s where the experimental patterns discussed earlier come in. These are like regular sliders slightly souped up — super-sliders, if you will. Let us explain.

Regular Slider

A regular slider uses discrete values arranged in a linear pattern, which means that a certain distance of movement on the slider’s axis represents an equal absolute change in value. For example, in a five-position slider, the price would go from $0 to $100 in $20 increments:

Each mark on the axis represents an equal absolute change in value on a linear price slider.

Although this is intuitive, the design makes it easy for customers to come up empty-handed, especially if the range is wide and the inventory is not equally distributed.

As explained in the “Caution” section, a customer shopping for superhero capes might select a range for which the inventory is zero — say, $40 to $60 — not knowing that a whole closetful of capes are available in the $62 to $65 range — literally, For a Few Dollars More. (Apologies to Clint Eastwood and Westerns lovers everywhere.)

Super-Slider (With Zero-Results-Fighting Histogram)

This is where a slider with a histogram (as shown below) is helpful. The idea behind this experimental pattern is simple: The 50 to 100 pixels above the fixed-position slider is a histogram that represents the inventory in a particular section of the linear price range. A high bar represents a large number of items, and a proportionally short bar represents a smaller number of items. That’s it.

A linear price slider with histogram provides more information.

When using a slider with a histogram, you can still dial the part of the range with low inventory; but making that mistake accidentally is difficult because the inventory counts are clearly shown in the histogram. You can use a slider with a histogram where a standard discrete-position slider would be used; it would take up only a little more vertical space in exchange for a more satisfying customer experience.
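The data behind the histogram is just a per-bucket item count over the linear range. A minimal sketch (hypothetical names and sample prices; your counts would come from your inventory database):

```java
import java.util.Arrays;

public class PriceHistogram {
    // Count how many items fall into each of a fixed number of
    // equal-width price buckets between min and max.
    static int[] bucketCounts(double[] prices, double min, double max, int buckets) {
        int[] counts = new int[buckets];
        double width = (max - min) / buckets;
        for (double p : prices) {
            int i = (int) ((p - min) / width);
            if (i == buckets) i--; // an item priced exactly at max lands in the last bucket
            counts[i]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical cape prices: nothing between $40 and $60, a cluster at $62-$65.
        double[] prices = {10, 15, 35, 62, 63, 64, 65, 80, 95, 100};
        System.out.println(Arrays.toString(bucketCounts(prices, 0, 100, 5)));
        // [2, 1, 0, 4, 3] -- the empty $40-$60 bucket shows up as a zero-height bar
    }
}
```

Each count maps directly to a bar height above the corresponding slider segment, which is what makes the empty range visible before the customer dials into it.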

No Room for a Histogram?

Another way to implement a slider without using histograms is to arrange the slider’s intervals based on the inventory counts. To do this, divide your entire inventory — say, 100 capes — into five intervals, and you’ll get 20 capes per interval. Now, scan the price range to figure out the price (rounded to the nearest dollar) that corresponds to the approximate inventory count of 20. Suppose the first 19 capes cost between $0 and $60 (remember that we’re assuming no inventory in the $40 to $60 range), the second 21 capes fall in the $61 to $65 range, and so on. Here is what such a slider might look like:

The alternative price slider is based on the inventory counts.
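The scan described above amounts to taking quantiles of the sorted price list: each stop is the price at an equal share of the inventory, so every interval holds roughly the same number of items. A minimal sketch, with hypothetical names and evenly spread sample prices:

```java
import java.util.Arrays;

public class InventoryStops {
    // Given prices sorted ascending, return intervals+1 stop values
    // such that each interval contains roughly the same item count.
    static int[] stops(int[] sortedPrices, int intervals) {
        int n = sortedPrices.length;
        int[] stops = new int[intervals + 1];
        stops[0] = sortedPrices[0];
        for (int i = 1; i < intervals; i++) {
            stops[i] = sortedPrices[i * n / intervals];
        }
        stops[intervals] = sortedPrices[n - 1];
        return stops;
    }

    public static void main(String[] args) {
        // 100 hypothetical cape prices: $1 through $100.
        int[] prices = new int[100];
        for (int i = 0; i < 100; i++) prices[i] = i + 1;
        System.out.println(Arrays.toString(stops(prices, 5))); // [1, 21, 41, 61, 81, 100]
    }
}
```

With the uneven distribution from the cape example, the stops would bunch up around the $61 to $65 cluster instead, which is exactly the point: no interval can ever be empty.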

Which implementation should you choose? It depends on the task. Most people don’t mind paying a few dollars outside of their budget, but they absolutely hate getting zero results. An inventory of fewer than 20 items in a given interval is not a satisfying result for most tasks, so use one of the other approaches to provide a better experience. Both a slider with a histogram and a slider based on inventory counts are far superior to the traditional slider. Breaking down the interval according to price is the more flexible approach because it shows the distribution clearly, while never yielding zero results. If the customer’s price range is larger than that of a single 20-item interval, then they can simply select a larger interval using the dual slider.

Both of the experimental sliders out-performed the regular slider in a study we did for a large retailer. Try it yourself. Create a quick prototype and do some “hallway usability.” Ask users to find some items around $70, and compare how they do with the histogram version, the inventory-based version and the regular version.

Tablet Apps

Sliders perform well in tablet apps. Make sure you heed the warnings in the “Caution” section; in particular, opt for a slider with discrete values to ensure accuracy, instead of a continuous slider (adjusting a continuous slider accurately on a large device is harder). Consider device ergonomics and avoid placing sliders in the middle of the screen. Instead, place sliders near the top of the screen, next to the right or left margin, optimized for one-handed operation with the thumb while the fingers hold on to the back of the tablet.

Depending on the design and purpose of your app, experiment by having two sets of sliders on the left and right sides of the screen, to be adjusted by the left and right hands, respectively. This would be especially interesting in apps such as music synthesizers. Finally, experiment with placing sliders vertically along the edge of the tablet (top to bottom), rather than horizontally from left to right, which is the easiest direction to adjust precisely with the thumb, while the fingers hold the back of the tablet.

Try It Out

To see how a slider app feels, a completed slider mini-app is available for you to download and try out. If you’re a developer, you can use it in your own project free of charge (see the “Code” section coming up). To install it, consider using an app installer such as the one made by FunTrigger, which you can get free on the Play market. Here’s how it works:

  1. Connect your Android device to your computer. You should see the Android file-transfer window open automatically. If not, you might need to install software on your computer such as Android File Transfer (Mac users, take note).
  2. Download the APK source file, and place it in the “App Installer” directory (you might have to create the directory).

Place the APK file in Android’s file-transfer window.

Now, you will be able to launch the app installer on your device. Navigate to the right directory, and tap the icon for the APK file that you want to install.

Use the app launcher to install the app.

After a few customary Android disclaimers, the app will be installed in the normal app area on your device, and you can launch it from there.


We’re providing you with the Java code and a demo of a simple dual slider with discrete stops.

This demo has five intervals between the minimum and maximum values, which we’ve arbitrarily set to $47 and $302. It’s arranged in a linear pattern, which means that a certain distance of movement on the slider’s axis represents an equal absolute change in value, making the increment value $51. In a real app, the values would most likely be derived from a database.

private static final int RANGE_MIN_VALUE = 47;
private static final int RANGE_MAX_VALUE = 302;
private static final int[] RANGE_STEPS = new int[] {
    47, 98, 149, 200, 251, 302
};

While five is a good number in principle, you might want to experiment with intervals of seven or nine, depending on the size of the screen.
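If you do vary the interval count, deriving the step array from the minimum, maximum and interval count beats hard-coding it. A minimal sketch (hypothetical class name; same values as the demo):

```java
public class LinearSteps {
    // Build intervals+1 evenly spaced stop values from min to max.
    // Assumes (max - min) divides evenly by intervals, as in the demo.
    static int[] steps(int min, int max, int intervals) {
        int[] steps = new int[intervals + 1];
        int increment = (max - min) / intervals;
        for (int i = 0; i <= intervals; i++) {
            steps[i] = min + i * increment;
        }
        return steps;
    }

    public static void main(String[] args) {
        // $47 to $302 in five intervals of $51, matching RANGE_STEPS above.
        System.out.println(java.util.Arrays.toString(steps(47, 302, 5)));
        // [47, 98, 149, 200, 251, 302]
    }
}
```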

We recommend that you use the MOD function to determine how many capes need to be in each interval. Then, walk the interval to determine the price breakdown within each range. Finally, if MOD yields a remainder, you can add it to the last interval, or you could get fancier and loop through it to add one or more “excess” capes to each of the intervals. For example, if you have 103 capes, the intervals would be 21, 21, 21, 20, 20. This would more evenly distribute the inventory.
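The remainder handling described above can be sketched as follows (hypothetical names; the "MOD" step is Java's `%` operator):

```java
import java.util.Arrays;

public class EvenIntervals {
    // Split `items` across `intervals` as evenly as possible,
    // handing one leftover item to each interval from the front.
    static int[] distribute(int items, int intervals) {
        int base = items / intervals;
        int remainder = items % intervals; // the "MOD" step
        int[] sizes = new int[intervals];
        for (int i = 0; i < intervals; i++) {
            sizes[i] = base + (i < remainder ? 1 : 0);
        }
        return sizes;
    }

    public static void main(String[] args) {
        // 103 capes over five intervals, as in the example above.
        System.out.println(Arrays.toString(distribute(103, 5))); // [21, 21, 21, 20, 20]
    }
}
```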

You could use the app as is for your own projects or as a starting point for something fancier. May we suggest a slider with a histogram or an inventory count?

If you do use the code, we’d love to see what you’ve done with it.

This code is provided free of charge and distributed under the GNU General Public License v3. See the README_LICENSE file for details.


  • Say it with us, “Done right, sliders delight.”
    Sliders turn your customers into empowered explorers and instant item locators. Don’t let a good pattern go bad: Remember these rules, and sidestep slider problems.
  • Make sure that reasonable values can be entered easily, and don’t cover the numbers.
    Say no to fat-fingered fumbling with small increments in large ranges. Speaking of ranges…
  • Show the range.
    Stamp out unhelpful labels like “$0” and “No limit.” Instead, show the actual minimum and maximum values that the customer can search within.
  • Be discrete.
Continuous range sliders aren’t always the best choice. Discrete stops are better for small sets of predefined values, such as shoe sizes (and cape sizes). And finally…
  • Zap zero results.
    Fight the frustrating fruitless search. Want to give your customers ninja navigational powers? Add a histogram, or use smart intervals based on your inventory.

That’s all there is to it. Working with sliders is no great mystery. You know the patterns. You’ve nabbed the code. Now there’s nothing to stop you from trying a slider.

Want more patterns? Android Design Patterns: Interaction Design Solutions for Developers21 has over 70, including a free design mini-course.

(al, ml)


  1. 1 http://www.smashingmagazine.com/wp-content/uploads/2014/09/01-zillo-app-slider-opt.png
  2. 2 http://www.smashingmagazine.com/wp-content/uploads/2014/09/01-zillo-app-slider-opt.png
  3. 3 http://www.smashingmagazine.com/wp-content/uploads/2014/09/02-trulia-app-slider-opt.png
  4. 4 http://www.smashingmagazine.com/wp-content/uploads/2014/09/02-trulia-app-slider-opt.png
  5. 5 http://www.smashingmagazine.com/wp-content/uploads/2014/09/03-dual-slider-opt.png
  6. 6 http://www.smashingmagazine.com/wp-content/uploads/2014/09/03-dual-slider-opt.png
  7. 7 http://www.smashingmagazine.com/wp-content/uploads/2014/09/04-kayak-app-slider-opt.png
  8. 8 http://www.smashingmagazine.com/wp-content/uploads/2014/09/04-kayak-app-slider-opt.png
  9. 9 http://www.smashingmagazine.com/wp-content/uploads/2014/09/05-linear-price-slider-opt.png
  10. 10 http://www.smashingmagazine.com/wp-content/uploads/2014/09/05-linear-price-slider-opt.png
  11. 11 http://www.smashingmagazine.com/wp-content/uploads/2014/09/06-linear-histogram-slider-opt.png
  12. 12 http://www.smashingmagazine.com/wp-content/uploads/2014/09/06-linear-histogram-slider-opt.png
  13. 13 http://www.smashingmagazine.com/wp-content/uploads/2014/09/07-alternative-price-slider-opt.png
  14. 14 http://www.smashingmagazine.com/wp-content/uploads/2014/09/07-alternative-price-slider-opt.png
  15. 15 https://play.google.com/store/apps/details?id=com.funtrigger.appinstaller
  16. 16 http://provide.smashingmagazine.com/slider-demo-app.zip
  17. 17 http://www.smashingmagazine.com/wp-content/uploads/2014/09/08-apk-file-opt.png
  18. 18 http://www.smashingmagazine.com/wp-content/uploads/2014/09/08-apk-file-opt.png
  19. 19 http://provide.smashingmagazine.com/slider-demo-app.zip
  20. 20 http://provide.smashingmagazine.com/slider-demo-app.zip
  21. 21 http://www.androiddesignbook.com/

The post Mobile Design Pattern: Inventory-Based Discrete Slider appeared first on Smashing Magazine.

Original article: 

Mobile Design Pattern: Inventory-Based Discrete Slider


An Introduction To Unit Testing In AngularJS Applications

AngularJS1 has grown to become one of the most popular single-page application frameworks. Developed by a dedicated team at Google, it is a substantial framework, widely used in both community and industry projects.

One of the reasons for AngularJS’ success is its outstanding ability to be tested. It’s strongly supported by Karma112 (the spectacular test runner written by Vojta Jína) and its multiple plugins. Karma, combined with its fellows Mocha173, Chai184 and Sinon205, offers a complete toolset to produce quality code that is easy to maintain, bug-free and well documented.

“Well, I’ll just launch the app and see if everything works. We’ve never had any problem doing that – No one ever.”

The main factor that made me switch from “Well, I just launch the app and see if everything works” to “I’ve got unit tests!” was that, for the first time, I could focus on what matters and on what I enjoy in programming: creating smart algorithms and nice UIs.

I remember a component that was supposed to manage the right-click menu in an application. Trust me, it was a complex component. Depending on dozens of mixed conditions, it could show or hide buttons, submenus, etc. One day, we updated the application in production. I can remember how I felt when I launched the app, opened something, right-clicked and saw no contextual menu — just an empty ugly box that was definitive proof that something had gone really wrong. After having fixed it, re-updated the application and apologized to customer service, I decided to entirely rewrite this component in test-driven development style. The test file ended up being twice as long as the component file. It has been improved a lot since, especially its poor performance, but it never failed again in production. Rock-solid code.

A Word About Unit Testing

Unit testing has become a standard in most software companies. Customer expectations have reached a new high, and no one accepts getting two free regressions for the price of one update anymore.

If you are familiar with unit testing, then you’ll already know how confident a developer feels when refactoring tested code. If you are not familiar, then imagine getting rid of deployment stress, a “code-and-pray” coding style and never-ending feature development. The best part? It’s automatic.

Unit testing improves code’s orthogonality. Fundamentally, code is called “orthogonal” when it’s easy to change. Fixing a bug or adding a feature entails nothing but changing the code’s behavior, as explained in The Pragmatic Programmer: From Journeyman to Master6. Unit tests greatly improve code’s orthogonality by forcing you to write modular logic units, instead of large code chunks.

Unit testing also provides you with documentation that is always up to date and that informs you about the code’s intentions and functional behavior. Even if a method has a cryptic name — which is bad, but we won’t get into that here — you’ll instantly know what it does by reading its test.

Unit testing has another major advantage. It forces you to actually use your code and detect design flaws and bad smells. Take functions. What better way to make sure that functions are uncoupled from the rest of your code than by being able to test them without any boilerplate code?
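As a trivial illustration, a function with no hidden dependencies can be exercised directly, with no setup at all (the function below is a made-up example, not taken from the article’s code):

```javascript
// A self-contained function: everything it needs comes in as arguments,
// and its entire effect is its return value.
function applyDiscount(price, rate) {
  if (rate < 0 || rate > 1) {
    throw new Error('rate must be between 0 and 1');
  }
  return Math.round(price * (1 - rate) * 100) / 100;
}

// Testing it is nothing more than a call and a comparison:
// applyDiscount(100, 0.2) should give 80.
```

If a function can only be tested after wiring up half the application, that friction is itself a design smell worth fixing.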

Furthermore, unit testing opens the door to test-driven development. While it’s not this article’s topic, I can’t stress enough that test-driven development is a wonderful and productive way to write code.

What and What Not to Test

Tests must define the code’s API. This is the one principle that will guide us through this journey. An AngularJS application is, by definition, composed of modules. The elementary bricks are materialized by different concepts related to the granularity at which you look at them. At the application level, these bricks are AngularJS’ modules. At the module level, they are directives, controllers, services, filters and factories. Each one of them is able to communicate with another through its external interface.

Everything is bricks, regardless of the level you are at. (View large version8)

All of these bricks share a common attribute. They behave as black boxes, which means that they have an inner behavior and an outer interface materialized by inputs and outputs. This is precisely what unit tests are for: to test bricks’ outer interfaces.

Black box model (well, this one is gray, but you get the idea) (View large version10)

Ignoring the internals as much as possible is considered good practice. Unit testing — and testing in general — is a mix of stimuli and reactions.

Bootstrapping A Test Environment For AngularJS

To set up a decent testing environment for your AngularJS application, you will need several npm modules. Let’s take a quick glance at them.

Karma: The Spectacular Test Runner

Karma112 is an engine that runs tests against code. Although it has been written for AngularJS, it’s not specifically tied to it and can be used for any JavaScript application. It’s highly configurable through a JSON file and the use of various plugins.

If you don’t see this at some point in the process, then you might have missed something. (View large version13)

All of the examples in this article can be found in the dedicated GitHub project14, along with the following configuration file for Karma.

// Karma configuration
// Generated on Mon Jul 21 2014 11:48:34 GMT+0200 (CEST)
module.exports = function(config) {
  config.set({

    // base path used to resolve all patterns (e.g. files, exclude)
    basePath: '',

    // frameworks to use
    frameworks: ['mocha', 'sinon-chai'],

    // list of files / patterns to load in the browser
    // (paths are illustrative; adjust them to your project layout)
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'src/**/*.js',
      'test/**/*.js'
    ],

    // list of files to exclude
    exclude: [],

    // preprocess matching files before serving them to the browser
    preprocessors: {
      'src/*.js': ['coverage']
    },

    coverageReporter: {
      type: 'text-summary',
      dir: 'coverage/'
    },

    // test results reporter to use
    reporters: ['progress', 'coverage'],

    // web server port
    port: 9876,

    // enable / disable colors in the output (reporters and logs)
    colors: true,

    // level of logging
    logLevel: config.LOG_INFO,

    // enable / disable watching file and executing tests on file changes
    autoWatch: true,

    // start these browsers
    browsers: ['PhantomJS'],

    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: false
  });
};

This file can be automagically generated by typing karma init in a terminal window. The available keys are described in Karma’s documentation15.

Notice how sources and test files are declared. There is also a newcomer: ngMock16 (i.e. angular-mocks.js). ngMock is an AngularJS module that provides several testing utilities (more on that at the end of this article).

Mocha
Mocha173 is a testing framework for JavaScript. It handles test suites and test cases, and it offers nice reporting features. It uses a declarative syntax to nest expectations into cases and suites. Let’s look at the following example (shamelessly stolen from Mocha’s home page):

describe('Array', function() {
  describe('#indexOf()', function() {
    it('should return -1 when the value is not present', function() {
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

You can see that the whole test is contained in a describe call. What is interesting about nesting function calls in this way is that the tests follow the code’s structure. Here, the Array suite is composed of only one subsuite, #indexOf. Others could be added, of course. This subsuite is composed of one case, which itself contains two assertions and expectations. Organizing test suites into a coherent whole is essential. It ensures that test errors will be reported with meaningful messages, thus easing the debugging process.

Chai
We have seen how Mocha provides test-suite and test-case capabilities for JavaScript. Chai184, for its part, offers various ways of checking things in test cases. These checks are performed through what are called “assertions” and basically mark a test case as failed or passed. Chai’s documentation has more19 on the different assertions styles.
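To see why the `expect` style reads so naturally, here is a toy, hand-rolled version of the idea. This is only an illustration of the fluent-chain pattern, not Chai’s actual implementation:

```javascript
// A miniature expect(): each link in the chain exposes the next allowed
// word, ending in a method that performs the actual assertion.
function expect(actual) {
  return {
    to: {
      equal: function(expected) {
        if (actual !== expected) {
          throw new Error('expected ' + actual + ' to equal ' + expected);
        }
        return true;
      }
    }
  };
}

// Reads almost like prose:
// expect(1 + 1).to.equal(2);
```

Chai provides this chain (and the `should` and `assert` styles) fully fleshed out, with deep-equality checks, type checks and much more.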

Sinon
Sinon205 describes itself as “standalone test spies, stubs and mocks for JavaScript.” Spies, stubs and mocks all answer the same question: How do you efficiently replace one thing with another when running a test? Suppose you have a function that takes another one in a parameter and calls it. Sinon provides a smart and concise way to monitor whether the function is called and much more (with which arguments, how many times, etc.).
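To demystify what a spy is, here is a stripped-down sketch of the concept. Sinon’s real spies record far more (return values, `this` bindings, call order), but the core idea fits in a few lines:

```javascript
// A minimal spy: a function that simply records every call it receives.
function makeSpy() {
  function spy() {
    spy.calls.push(Array.prototype.slice.call(arguments));
  }
  spy.calls = []; // one entry per call, holding that call's arguments
  return spy;
}

// Pass the spy wherever a callback is expected, then inspect spy.calls
// to learn how many times it was called and with which arguments.
```

Stubs and mocks build on the same recording idea, adding canned behavior and built-in expectations respectively.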

Unit Testing At The Application Level

At the application level, a module’s external interface is simply its ability to be injected into another module: it exists and has a valid definition.

beforeEach(module('myAwesomeModule'));
This is enough and will throw an error if myAwesomeModule is nowhere to be found.

Unit Testing At The Module Level

An AngularJS module can declare several types of objects. Some are services, while others are more specialized. We will go over each of them to see how they can be bootstrapped in a controlled environment and then tested.

Filters, Services and Factories: A Story of Dependency Injection

Filters, services and factories (we will refer to these as services in general) can be compared to static objects or singletons in a traditional object-oriented framework. They are easy to test because they need very few things to be ready, and these things are usually other services.

AngularJS links services to other services or objects using a very expressive dependency-injection model, which basically means asking for something in a method’s arguments.

What is great about AngularJS’ way of injecting dependencies is that mocking a piece of code’s dependencies and injecting things into test cases are super-easy. In fact, I am not even sure it could be any simpler. Let’s consider this quite useful factory:

angular.module('factories', [])

.factory('chimp', ['$log', function($log) {
  return {
    ook: function() {
      $log.warn('Ook.');
    }
  };
}]);

See how $log is injected, instead of the standard console.warn? While AngularJS will not print $log statements in Karma’s console, avoid side effects in unit tests as much as possible. I once reduced by half the duration of an application’s unit tests by mocking the tracking HTTP requests — which were all silently failing in a local environment, obviously.

describe('factories', function() {

  beforeEach(module('factories'));

  var chimp;
  var $log;

  beforeEach(inject(function(_chimp_, _$log_) {
    chimp = _chimp_;
    $log = _$log_;
    sinon.stub($log, 'warn', function() {});
  }));

  describe('when invoked', function() {

    beforeEach(function() {
      chimp.ook();
    });

    it('should say Ook', function() {
      expect($log.warn).to.have.been.calledOnce;
      expect($log.warn).to.have.been.calledWith('Ook.');
    });
  });
});

The pattern for testing filters, services or other injectables is the same. Controllers can be a bit trickier to test, though, as we will see now.

Controllers
Testing a controller could lead to some confusion. What do we test? Let’s focus on what a controller is supposed to do. You should be used to considering any tested element as a black box by now. Remember that AngularJS is a model-view-whatever (MVW) framework, which is kind of ironic because one of the few ways to define something in an AngularJS application is to use the keyword controller. Still, any kind of decent controller usually acts as a proxy between the model and the view, through objects in one way and callbacks in the other.

The controller usually configures the view using some state objects, such as the following (for a hypothetical text-editing application):

angular.module('textEditor', [])

.controller('EditionCtrl', ['$scope', function($scope) {
  $scope.state = {toolbarVisible: true, documentSaved: true};
  $scope.document = {text: 'Some text'};

  $scope.$watch('document.text', function(value) {
    $scope.state.documentSaved = false;
  }, true);

  $scope.saveDocument = function() {
    $scope.sendHTTP($scope.document.text);
    $scope.state.documentSaved = true;
  };

  $scope.sendHTTP = function(content) {
    // payload creation, HTTP request, etc.
  };
}]);

Chances are that the state will be modified by both the view and the controller. The toolbarVisible attribute will be toggled by, say, a button and a keyboard shortcut. Unit tests are not supposed to test interactions between the view and the rest of the universe; that is what end-to-end tests are for.

The documentSaved value will be mostly handled by the controller, though. Let’s test it.

describe('saving a document', function() {

  var scope;
  var ctrl;

  beforeEach(module('textEditor'));

  beforeEach(inject(function($rootScope, $controller) {
    scope = $rootScope.$new();
    ctrl = $controller('EditionCtrl', {$scope: scope});
  }));

  it('should have an initial documentSaved state', function() {
    expect(scope.state.documentSaved).to.equal(true);
  });

  describe('documentSaved property', function() {
    beforeEach(function() {
      // We don't want extra HTTP requests to be sent
      // and that's not what we're testing here.
      sinon.stub(scope, 'sendHTTP', function() {});

      // A call to $apply() must be performed, otherwise the
      // scope's watchers won't be run through.
      scope.$apply(function () {
        scope.document.text += ' And some more text';
      });
    });

    it('should watch for document.text changes', function() {
      expect(scope.state.documentSaved).to.equal(false);
    });

    describe('when calling the saveDocument function', function() {
      beforeEach(function() {
        scope.saveDocument();
      });

      it('should be set to true again', function() {
        expect(scope.state.documentSaved).to.equal(true);
        expect(scope.sendHTTP).to.have.been.calledWith(scope.document.text);
      });
    });
  });
});

An interesting side effect of this code chunk is that it not only tests changes on the documentSaved property, but also checks that the sendHTTP method actually gets called and with the proper arguments (we will see later how to test HTTP requests). This is why it’s a separated method published on the controller’s scope. Decoupling and avoiding pseudo-global states (i.e. passing the text to the method, instead of letting it read the text on the scope) always eases the process of writing tests.

Directives
A directive is AngularJS’ way of teaching HTML new tricks and of encapsulating the logic behind those tricks. This encapsulation has several contact points with the outside that are defined in the returned object’s scope attribute. The main difference with unit testing a controller is that directives usually have an isolated scope, but they both act as a black box and, therefore, will be tested in roughly the same manner. The test’s configuration is a bit different, though.

Let’s imagine a directive that displays a div with some string inside of it and a button next to it. It could be implemented as follows:

angular.module('myDirectives', [])

.directive('superButton', function() {
  return {
    scope: {label: '=', callback: '&onClick'},
    replace: true,
    restrict: 'E',
    link: function(scope, element, attrs) {},
    template: '<div>' +
      '<div>{{label}}</div>' +
      '<button ng-click="callback()">Click me!</button>' +
      '</div>'
  };
});

We want to test two things here. The first thing to test is that the label gets properly passed to the first div’s content, and the second is that something happens when the button gets clicked. It’s worth saying that the actual rendering of the directive belongs slightly more to end-to-end and functional testing, but we want to include it as much as possible in our unit tests simply for the sake of failing fast. Besides, working with test-driven development is easier with unit tests than with higher-level tests, such as functional, integration and end-to-end tests.

describe('directives', function() {

  beforeEach(module('myDirectives'));

  var element;
  var outerScope;
  var innerScope;

  beforeEach(inject(function($rootScope, $compile) {
    element = angular.element('<super-button label="myLabel" on-click="myCallback()"></super-button>');

    outerScope = $rootScope;
    $compile(element)(outerScope);

    innerScope = element.isolateScope();

    outerScope.$digest();
  }));

  describe('label', function() {
    beforeEach(function() {
      outerScope.$apply(function() {
        outerScope.myLabel = "Hello world.";
      });
    });

    it('should be rendered', function() {
      expect(element[0].children[0].innerHTML).to.equal('Hello world.');
    });
  });

  describe('click callback', function() {
    var mySpy;

    beforeEach(function() {
      mySpy = sinon.spy();
      outerScope.$apply(function() {
        outerScope.myCallback = mySpy;
      });
    });

    describe('when the directive is clicked', function() {
      beforeEach(function() {
        var event = document.createEvent("MouseEvent");
        event.initMouseEvent("click", true, true);
        element.find('button')[0].dispatchEvent(event);
      });

      it('should be called', function() {
        expect(mySpy).to.have.been.calledOnce;
      });
    });
  });
});

This example has something important. We saw that unit tests make refactoring easy as pie, but we didn’t see how exactly. Here, we are testing that when a click happens on the button, the function passed as the on-click attribute is called. If we take a closer look at the directive’s code, we will see that this function gets locally renamed to callback. It’s published under this name on the directive’s isolated scope. We could write the following test, then:

describe('click callback', function() {
  var mySpy;

  beforeEach(function() {
    mySpy = sinon.spy();
    innerScope.callback = mySpy;
  });

  describe('when the directive is clicked', function() {
    beforeEach(function() {
      var event = document.createEvent("MouseEvent");
      event.initMouseEvent("click", true, true);
      element.find('button')[0].dispatchEvent(event);
    });

    it('should be called', function() {
      expect(mySpy).to.have.been.calledOnce;
    });
  });
});

And it would work, too. But then we wouldn’t be testing the external aspect of our directive. If we were to forget to add the proper key to the directive’s scope definition, then no test would stop us. Besides, we actually don’t care whether the directive renames the callback or calls it through another method (and if we do, then it will have to be tested elsewhere anyway).

Providers
This is the toughest of our little series. What is a provider exactly? It’s AngularJS’ own way of wiring things together before the application starts. A provider also has a factory facet — in fact, you probably know the $routeProvider and its little brother, the $route factory. Let’s write our own provider and its factory and then test them!

angular.module('myProviders', [])

.provider('coffeeMaker', function() {
  var useFrenchPress = false;
  this.useFrenchPress = function(value) {
    if (value !== undefined) {
      useFrenchPress = !!value;
    }

    return useFrenchPress;
  };

  this.$get = function() {
    return {
      brew: function() {
        return useFrenchPress ? 'Le café.' : 'A coffee.';
      }
    };
  };
});
There’s nothing fancy in this super-useful provider, which defines a flag and its accessor method. We can see the config part and the factory part (which is returned by the $get method). I won’t go over the provider’s whole implementation and use cases, but I encourage you to look at AngularJS’ official documentation about providers21.

To test this provider, we could test the config part on the one hand and the factory part on the other. This wouldn’t be representative of the way a provider is generally used, though. Let’s think about the way that we use providers. First, we do some configuration; then, we use the provider’s factory in some other objects or services. We can see in our coffeeMaker that its behavior depends on the useFrenchPress flag. This is how we will proceed. First, we will set this flag, and then we’ll play with the factory to see whether it behaves accordingly.

describe('coffee maker provider', function() {
  var coffeeProvider = undefined;

  beforeEach(function() {
    // Here we create a fake module just to intercept and store the provider
    // when it's injected, i.e. during the config phase.
    angular.module('dummyModule', [])
      .config(['coffeeMakerProvider', function(coffeeMakerProvider) {
        coffeeProvider = coffeeMakerProvider;
      }]);

    module('myProviders', 'dummyModule');

    // This actually triggers the injection into dummyModule
    inject(function() {});
  });

  describe('with french press', function() {
    beforeEach(function() {
      coffeeProvider.useFrenchPress(true);
    });

    it('should remember the value', function() {
      expect(coffeeProvider.useFrenchPress()).to.equal(true);
    });

    it('should make some coffee', inject(function(coffeeMaker) {
      expect(coffeeMaker.brew()).to.equal('Le café.');
    }));
  });

  describe('without french press', function() {
    beforeEach(function() {
      coffeeProvider.useFrenchPress(false);
    });

    it('should remember the value', function() {
      expect(coffeeProvider.useFrenchPress()).to.equal(false);
    });

    it('should make some coffee', inject(function(coffeeMaker) {
      expect(coffeeMaker.brew()).to.equal('A coffee.');
    }));
  });
});

HTTP Requests

HTTP requests are not exactly on the same level as providers or controllers. They are still an essential part of unit testing, though. If you do not have a single HTTP request in your entire app, then you can skip this section, you lucky fellow.

Roughly, HTTP requests act like inputs and outputs at any of your application’s level. In a RESTfully designed system, GET requests give data to the app, and PUT, POST and DELETE methods take some. That is what we want to test, and luckily AngularJS makes that easy.

Let’s take our factory example and add a POST request to it:

angular.module('factories_2', [])

.factory('chimp', ['$http', function($http) {
  return {
    sendMessage: function() {
      $http.post('http://chimps.org/messages', {message: 'Ook.'});
    }
  };
}]);

We obviously do not want to test this on the actual server, nor do we want to monkey-patch the XMLHttpRequest constructor. That is where $httpBackend enters the game.

describe('http', function() {

  beforeEach(module('factories_2'));

  var chimp;
  var $httpBackend;

  beforeEach(inject(function(_chimp_, _$httpBackend_) {
    chimp = _chimp_;
    $httpBackend = _$httpBackend_;
  }));

  describe('when sending a message', function() {
    beforeEach(function() {
      $httpBackend.expectPOST('http://chimps.org/messages', {message: 'Ook.'})
        .respond(200, {message: 'Ook.', id: 0});

      chimp.sendMessage();
      $httpBackend.flush();
    });

    it('should send an HTTP POST request', function() {
      $httpBackend.verifyNoOutstandingExpectation();
      $httpBackend.verifyNoOutstandingRequest();
    });
  });
});
You can see that we’ve defined which calls should be issued to the fake server and how to respond to them before doing anything else. This is useful and enables us to test our app’s response to different requests’ responses (for example, how does the application behave when the login request returns a 404?). This particular example simulates a standard POST response.

The two other lines of the beforeEach block are the function call and a newcomer, $httpBackend.flush(). The fake server does not immediately answer each request; instead, it lets you check any intermediary state that you may have configured. It waits for you to explicitly tell it to respond to any pending request it might have received.

The test itself has two methods calls on the fake server (verifyNoOutstandingExpectation and verifyNoOutstandingRequest). AngularJS’ $httpBackend does not enforce strict equality between what it expects and what it actually receives unless you’ve told it to do so. You can regard these lines as two expectations, one of the number of pending requests and the other of the number of pending expectations.

ngMock Module

The ngMock module22 contains various utilities to help you smooth over JavaScript and AngularJS’ specifics.

$timeout, $log and the Others

Using AngularJS’ injectable dependencies is better than accessing global objects such as console or window. Let’s consider console calls. They are outputs just like HTTP requests and might actually matter if you are implementing an API for which some errors must be logged. To test them, you can either monkey-patch a global object — yikes! — or use AngularJS’ nice injectable.

The $timeout dependency also provides a very convenient flush() method, just like $httpBackend. If we create a factory that provides a way to briefly set a flag to true and then restore it to its original value, then the proper way to test it is to use $timeout.

angular.module('timeouts', [])

.factory('waiter', ['$timeout', function($timeout) {
  return {
    brieflySetSomethingToTrue: function(target, property) {
      var oldValue = target[property];

      target[property] = true;

      $timeout(function() {
        target[property] = oldValue;
      }, 100);
    }
  };
}]);

And the test will look like this:

describe('timeouts', function() {

  beforeEach(module('timeouts'));

  var waiter;
  var $timeout;

  beforeEach(inject(function(_waiter_, _$timeout_) {
    waiter = _waiter_;
    $timeout = _$timeout_;
  }));

  describe('brieflySetSomethingToTrue method', function() {
    var anyObject;

    beforeEach(function() {
      anyObject = {foo: 42};
      waiter.brieflySetSomethingToTrue(anyObject, 'foo');
    });

    it('should briefly set something to true', function() {
      expect(anyObject.foo).to.equal(true);
      $timeout.flush();
      expect(anyObject.foo).to.equal(42);
    });
  });
});
Notice how we’re checking the intermediary state and then flush()’ing the timeout.
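The flush() mechanism itself is easy to picture: the mock does not run a callback when it is scheduled; it queues the callback until asked. Here is a framework-free sketch of that idea (our own toy, not ngMock’s actual implementation):

```javascript
// A toy mock timer: schedule() only queues callbacks, flush() runs them.
function MockTimer() {
  var queue = [];
  return {
    schedule: function(fn) { queue.push(fn); },
    flush: function() {
      while (queue.length) {
        queue.shift()();
      }
    }
  };
}

// Between schedule() and flush(), any intermediary state is observable,
// which is exactly what the $timeout test above relies on.
```

This is why the test can assert `foo` is true before the flush and back to 42 after it: the mock puts the clock under the test’s control.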

module() and inject()

The module()23 and inject()24 functions help to retrieve modules and dependencies during tests. The former enables you to retrieve a module, while the latter creates an instance of $injector, which will resolve references.

it('should say Ook.', inject(function($log) {
  sinon.stub($log, 'warn', function() {});

  chimp.ook();

  expect($log.warn).to.have.been.calledOnce;
}));
In this test case, we’re wrapping our test case function in an inject call. This call will create an $injector instance and resolve any dependencies declared in the test case function’s arguments.

Dependency Injection Made Easy

One last trick is to ask for dependencies using underscores around the name of what we are asking for. The point of this is to assign a local variable that has the same name as the dependencies. Indeed, the $injector used in our tests will remove surrounding underscores if any are found. StackOverflow has a comment25 on this.
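The unwrapping rule itself is simple: the injector strips one surrounding pair of underscores before resolving the name. A one-line approximation of that rule (ours, for illustration):

```javascript
// Strip a single surrounding pair of underscores, if present, so that
// `_chimp_` resolves to the `chimp` dependency while the plain local
// variable name `chimp` stays free for the test to use.
function unwrapName(name) {
  return name.replace(/^_(.+)_$/, '$1');
}
```

So `_chimp_` and `_$log_` resolve to `chimp` and `$log`, while names without surrounding underscores pass through untouched.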

Conclusion
Unit testing in AngularJS applications follows a fractal design. It tests units of code. It freezes a unit’s behavior by providing a way to automatically check its response to a given input. Note that unit tests do not replace good coding. AngularJS’ documentation is pretty clear on this point: “Angular is written with testability in mind, but it still requires that you do the right thing.”

Getting started with writing unit tests — and coding in test-driven development — is hard. However, the benefits will soon show up if you’re willing to fully test your application, especially during refactoring operations.

Tests also work well with agile methods. User stories are almost tests; they’re just not actual code (although some approaches, such as “design by contract26,” minimize this difference).

Further Resources

(al, ml)


  1. 1 https://angularjs.org
  2. 2 http://karma-runner.github.io
  3. 3 http://visionmedia.github.io/mocha/
  4. 4 http://chaijs.com
  5. 5 http://sinonjs.org
  6. 6 https://pragprog.com/the-pragmatic-programmer
  7. 7 http://www.smashingmagazine.com/wp-content/uploads/2014/09/01-bricks-opt.png
  8. 8 http://www.smashingmagazine.com/wp-content/uploads/2014/09/01-bricks-opt.png
  9. 9 http://www.smashingmagazine.com/wp-content/uploads/2014/09/02-blackbox-opt.png
  10. 10 http://www.smashingmagazine.com/wp-content/uploads/2014/09/02-blackbox-opt.png
  11. 11 http://karma-runner.github.io
  12. 12 http://www.smashingmagazine.com/wp-content/uploads/2014/09/03-karma-success-opt.png
  13. 13 http://www.smashingmagazine.com/wp-content/uploads/2014/09/03-karma-success-opt.png
  14. 14 https://github.com/lorem–ipsum/smashing-article
  15. 15 http://karma-runner.github.io/0.8/config/configuration-file.html
  16. 16 https://docs.angularjs.org/api/ngMock
  17. 17 http://visionmedia.github.io/mocha/
  18. 18 http://chaijs.com
  19. 19 http://chaijs.com/guide/styles/
  20. 20 http://sinonjs.org
  21. 21 https://docs.angularjs.org/guide/providers
  22. 22 https://docs.angularjs.org/api/ngMock
  23. 23 https://docs.angularjs.org/api/ngMock/function/angular.mock.module
  24. 24 https://docs.angularjs.org/api/ngMock/function/angular.mock.inject
  25. 25 http://stackoverflow.com/a/15318137/863119
  26. 26 http://en.wikipedia.org/wiki/Design_by_contract
  27. 27 https://pragprog.com/the-pragmatic-programmer
  28. 28 https://docs.angularjs.org/guide/unit-testing
  29. 29 https://github.com/lorem–ipsum/smashing-article

The post An Introduction To Unit Testing In AngularJS Applications appeared first on Smashing Magazine.

Link to original: 

An Introduction To Unit Testing In AngularJS Applications


Efficiently Simplifying Navigation, Part 3: Interaction Design

Having addressed the information architecture1 and the various systems2 of navigation in the first two articles of this series, the last step is to efficiently simplify the navigation experience — specifically, by carefully designing interaction with the navigation menu.

When designing interaction with any type of navigation menu, we have to consider the following six aspects:

  • symbols,
  • target areas,
  • interaction event,
  • layout,
  • levels,
  • functional context.

It is possible to design these aspects in different ways. Designers often experiment with new techniques3 to create a more exciting navigation experience. And looking for new, more engaging solutions is a very good thing. However, most users just want to get to the content with as little fuss as possible. For those users, designing the aforementioned aspects to be as simple, predictable and comfortable as possible is important.


Symbols

Users often rely on small visual clues, such as icons and symbols, to guide them through a website’s interface. Creating a system of symbolic communication throughout the website that is unambiguous and consistent is important.

The first principle in designing a drop-down navigation menu is to make users aware that it exists in the first place.

The Triangle Symbol

A downward triangle next to the corresponding menu label is the most familiar way to indicate a drop-down menu and distinguish it from regular links.

A downward triangle next to the menu label is the most reliable way to indicate a drop-down. (Source: CBS[5]) (View large version[6])

If a menu flies out, rather than drops down, then the triangle or arrow should point in the right direction. The website below is exemplary because it also takes into account the available margin and adjusts the direction in which the menu unfolds accordingly.

A triangle or arrow pointing in the right direction is the most reliable way to indicate a fly-out menu. (Source: Currys[8]) (View large version[9])
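The behavior praised above, unfolding the menu toward whichever side has room, comes down to one measurement. The sketch below is a hypothetical helper (not taken from any site mentioned in the article) that decides the unfold direction from the trigger's right edge, the fly-out's width and the viewport width:

```javascript
// Hypothetical sketch: decide which way a fly-out submenu should open so
// that it stays within the viewport. All parameter names are illustrative.
// triggerRight:  right edge of the menu item, in px from the viewport's left
// submenuWidth:  width of the fly-out panel, in px
// viewportWidth: available width, in px
function flyoutDirection(triggerRight, submenuWidth, viewportWidth) {
  // Open to the right by default; flip to the left when the panel
  // would overflow the right margin.
  return triggerRight + submenuWidth > viewportWidth ? 'left' : 'right';
}
```

In a real menu, the returned value would toggle a CSS class and flip the arrow glyph, so that the arrow keeps pointing in the direction the menu actually unfolds.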

The Plus Symbol

Another symbol that is used for opening menus is the plus symbol (“+”). Notice that the website below mixes symbols: an arrow for the top navigation menu and a “+” for the dynamic navigation menu to the left (although an arrow is used to further expand the dynamic menu — for example, to show “More sports”).

Some websites use a “+” to drop down or fly out menus. (Source: Nike[11]) (View large version[12])

Mixing symbols can be problematic, as we’ll see below. So, if you ever add functionality that enables users to add something (such as an image, a cart or a playlist), then “+” would not be ideal for dropping down or flying out a menu because it typically represents adding something.

The Three-Line Symbol

A third symbol often used to indicate a navigation menu, especially on responsive websites, is three horizontal lines.

The three-line symbol is frequently used for responsive navigation menus. (Source: Nokia[14]) (View large version[15])

Note a couple of things. First, three lines, like the grid[16] and bullet-list[17] icons, communicate a certain type of layout — specifically, a vertical stack of entries. The menu’s layout should be consistent with the layout that the icon implies. The website below, for example, lists items horizontally, thus contradicting the layout indicated by the menu symbol.

Three lines do not work well if the menu items are not stacked vertically. (Source: dConstruct 2012[19]) (View large version[20])

The advantage of the more inclusive triangle symbol and the label “Menu” is that they suit any layout, allowing you to change the layout without having to change the icon.

Secondly, even though three lines are becoming more common, the symbol is still relatively new, and it is more ambiguous, possibly representing more than just a navigation menu. Therefore, a label would clarify its purpose for many users.

An accompanying label would clarify the purpose of the three lines. (Source: Kiwibank[22]) (View large version[23])

Consistent Use Of Symbols

While finding symbols that accurately represent an element or task is important, their usage must also be planned carefully throughout the website to create a consistent appearance and avoid confusion.

Notice the inconsistent use of symbols in the screenshot below. The three lines in the upper-right corner drop down the navigation menu. The three lines in the center indicate “View nutrition info.” The “Location” selector uses a downward triangle, while the “Drinks” and “Food” menus, which drop down as well, use a “+” symbol.

Inconsistent symbols lead to confusion. (Source: Starbucks[25]) (View large version[26])

While using multiple symbols for a drop-down menu is inconsistent, using arrows for anything other than a drop-down menu causes problems, too. As seen below, all options load a new page, rather than fly out or drop down a menu.

Using a triangle or arrow for anything other than a drop-down or fly-out menu can cause confusion. (Source: Barista Prima[28]) (View large version[29])

This leads to a couple of problems. First, using arrows for regular links — whether to create the illusion of space[30] or for other reasons — puts pressure on you to consistently do the same for all links. Otherwise, users could be surprised, not knowing when to expect a link to load a simple menu or a new page altogether. Secondly, a single-level item, such as “Products”, could conceivably be expanded with subcategories in the future. A triangle could then be added to indicate this and distinguish it from single-level entries, such as the “About” item.

Users generally interpret an arrow to indicate a drop-down or fly-out menu. And they don’t have any problem following a link with no arrow, as long as it looks clickable. It is best not to mix these two concepts.
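One way to enforce this rule, one symbol per interaction and never the same symbol for two interactions, is to keep a single source of truth for the mapping. The sketch below is illustrative only; the glyphs and interaction names are assumptions, not something prescribed by the article:

```javascript
// Illustrative sketch: centralize the symbol-to-interaction mapping so that
// a "+" used for adding items can never double as a menu opener elsewhere.
const NAV_SYMBOLS = {
  dropdown: '\u25BE', // ▾ opens a menu below the trigger
  flyout: '\u25B8',   // ▸ opens a menu beside the trigger
  add: '+',           // adds an item (cart, playlist); never opens a menu
  menu: '\u2630',     // ☰ toggles a vertically stacked navigation list
};

// Look up the one glyph registered for an interaction type; fail loudly
// if a template asks for an interaction that has no agreed-upon symbol.
function symbolFor(interaction) {
  if (!(interaction in NAV_SYMBOLS)) {
    throw new Error('No symbol registered for interaction: ' + interaction);
  }
  return NAV_SYMBOLS[interaction];
}
```

Templates that always call `symbolFor('dropdown')` instead of hard-coding a glyph cannot drift into the Starbucks-style inconsistency shown earlier, because changing the symbol in one place changes it everywhere.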


  1. http://www.smashingmagazine.com/2013/12/03/efficiently-simplifying-navigation-information-architecture/
  2. http://www.smashingmagazine.com/2014/05/09/efficiently-simplifying-navigation-systems/
  3. http://www.smashingmagazine.com/2013/07/11/innovative-navigation-designs/
  4. http://www.smashingmagazine.com/wp-content/uploads/2014/07/1-large-opt.jpg
  5. http://www.cbs.com/shows/bad-teacher/
  6. http://www.smashingmagazine.com/wp-content/uploads/2014/07/1-large-opt.jpg
  7. http://www.smashingmagazine.com/wp-content/uploads/2014/07/2-large-opt.jpg
  8. http://www.currys.co.uk/gbuk/index.html
  9. http://www.smashingmagazine.com/wp-content/uploads/2014/07/2-large-opt.jpg
  10. http://www.smashingmagazine.com/wp-content/uploads/2014/07/3-large-opt.jpg
  11. http://www.nike.com/us/en_us/
  12. http://www.smashingmagazine.com/wp-content/uploads/2014/07/3-large-opt.jpg
  13. http://www.smashingmagazine.com/wp-content/uploads/2014/07/4-large-opt.png
  14. http://nokia.com
  15. http://www.smashingmagazine.com/wp-content/uploads/2014/07/4-large-opt.png
  16. http://www.smashingmagazine.com/wp-content/uploads/2013/08/grid.jpg
  17. http://www.smashingmagazine.com/wp-content/uploads/2013/08/bullet_list.jpg
  18. http://www.smashingmagazine.com/wp-content/uploads/2014/07/5-large-opt.jpg
  19. http://2012.dconstruct.org
  20. http://www.smashingmagazine.com/wp-content/uploads/2014/07/5-large-opt.jpg
  21. http://www.smashingmagazine.com/wp-content/uploads/2014/07/6-large-opt.jpg
  22. http://kiwibank.co.nz/
  23. http://www.smashingmagazine.com/wp-content/uploads/2014/07/6-large-opt.jpg
  24. http://www.smashingmagazine.com/wp-content/uploads/2014/08/7-large-opt.png
  25. http://www.starbucks.com/menu/catalog/product?drink=bottled-drinks#view_control=product
  26. http://www.smashingmagazine.com/wp-content/uploads/2014/08/7-large-opt.png
  27. http://www.smashingmagazine.com/wp-content/uploads/2014/08/8-large-opt.png
  28. http://www.baristaprima.ca/
  29. http://www.smashingmagazine.com/wp-content/uploads/2014/08/8-large-opt.png
  30. http://baymard.com/blog/ux-illusion-of-space

The post Efficiently Simplifying Navigation, Part 3: Interaction Design appeared first on Smashing Magazine.
