Don’t Use The Placeholder Attribute

Eric Bailey



Introduced as part of the HTML5 specification, the placeholder attribute “represents a short hint (a word or short phrase) intended to aid the user with data entry when the control has no value. A hint could be a sample value or a brief description of the expected format.”

This seemingly straightforward attribute contains a surprising number of issues that prevent it from delivering on what it promises. Hopefully, I can convince you to stop using it.

Technically Correct

Inputs are the gates through which nearly all e-commerce has to pass. Regardless of your feelings on the place of empathy in design, unusable inputs leave money on the table.

The presence of a placeholder attribute won’t be flagged by automated accessibility checking software. However, this doesn’t necessarily mean it’s usable. Ultimately, accessibility is about people, not standards, so it is important to think about your interface in terms beyond running through a checklist.

Call it remediation, inclusive design, universal access, whatever. The spirit of all these philosophies boils down to making things that people—all people—can use. Viewed through this lens, placeholder simply doesn’t hold up.

The Problems

Translation

Browsers with auto-translation features such as Chrome skip over attributes when a request to translate the current page is initiated. For many attributes, this is desired behavior, as an updated value may break underlying page logic or structure.

One of the attributes skipped over by browsers is placeholder. Because of this, placeholder content won’t be translated and will remain as the originally authored language.

If a person is requesting a page to be translated, the expectation is that all visible page content will be updated. Placeholders are frequently used to provide important input formatting instructions or are used in place of a more appropriate label element (more on that in a bit). If this content is not updated along with the rest of the translated page, there is a high possibility that a person unfamiliar with the language will not be able to successfully understand and operate the input.

This should be reason enough to not use the attribute.

While we’re on the subject of translation, it’s also worth pointing out that location isn’t the same as language preference. Many people set their devices to use a language that isn’t the official language of the country their IP address reports (to say nothing of VPNs), and we should respect that. Make sure to keep your content semantically described—your neighbors will thank you!

Interoperability

Interoperability is the practice of making different systems exchange and understand information. It is a foundational part of both the Internet and assistive technology.

Semantically describing your content makes it interoperable. An interoperable input is created by programmatically associating a label element with it. Labels describe the purpose of an input field, providing the person filling out the form with a prompt that they can take action on. One way to associate a label with an input is to use the for attribute with a value that matches the input’s id.

Without this for/id pairing, assistive technology will be unable to determine what the input is for. The programmatic association provides an API hook that software such as screen readers or voice recognition can utilize. Without it, people who rely on this specialized software will not be able to read or operate inputs.
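As a minimal sketch of that pairing (the field name and id here are made up for illustration), the markup can be as small as:

<label for="email">Email address</label>
<input id="email" name="email" type="email" />

With this association in place, assistive technology can announce the label text as the input’s accessible name.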


A diagram demonstrating how code gets converted into a rendered input, and how the code’s computed properties get read by assistive technology. The code is a text input with a label that reads Your Name. The listed computed properties are the accessible name, which is Your Name, and a role of textbox.


How semantic markup is used for both visual presentation and accessible content. (Large preview)

The reason I am mentioning this is that placeholder is oftentimes used in place of a label element. Although I’m personally baffled by the practice, it seems to have gained traction in the design community. My best guess for its popularity is the geometrically precise grid effect it creates when placed next to other label-less input fields acts like designer catnip.


Facebook’s signup form. A heading reads, “Sign Up. It’s free and always will be.” Placeholders are being used as labels, asking for your first name, last name, mobile number or email, and to create a new password for your account. Screenshot.


An example of input grid fetishization from a certain infamous blue website. (Large preview)

The floating label effect, a close cousin to this phenomenon, oftentimes utilizes the placeholder attribute in place of a label, as well.

A neat thing worth pointing out is that if a label is programmatically associated with an input, clicking or tapping on the label text will place focus on the input. This little trick provides an extra area for interacting with the input, which can be beneficial to people with motor control issues. Placeholders acting as labels, as well as floating labels, cannot do that.

Cognition

The 2016 United States Census lists nearly 15 million people who report having cognitive difficulty — and that’s only counting individuals who choose to self-report. Extrapolating from this, we can assume that cognitive accessibility concerns affect a significant portion of the world’s population.

Self-reporting is worth calling out, in that a person may not know, or may not feel comfortable sharing, that they have a cognitive accessibility condition. Unfortunately, there is still a lot of stigma attached to disclosing this kind of information, as it oftentimes affects things like job and housing prospects.

Cognition can be inhibited situationally, meaning it can very well happen to you. It can be affected by things like multitasking, sleep deprivation, stress, substance abuse, and depression. I might be a bit jaded here, but that sounds a lot like conditions you’ll find at most office jobs.

Recall

The umbrella of cognitive concerns covers conditions such as short-term memory loss, traumatic brain injury, and Attention Deficit Hyperactivity Disorder. They can all affect a person’s ability to recall information.

When a person enters information into an input, its placeholder content will disappear. The only way to restore it is to remove the information entered. This creates an experience where guiding language is removed as soon as the person attempting to fill out the input interacts with it. Not great!

An input called “Your Birthdate” being filled out. The placeholder reads, “MM/DD/YYY” and the animation depicts the person filling it out getting to the year portion and having to delete the text to be able to go back and review what the proper formatting is.
Did they want MM/DD/YY, or MM/DD/YYYY? (Large preview)

When your ability to recall information is inhibited, it makes following these disappearing rules annoying. For inputs with complicated requirements to satisfy—say creating a new password—it transcends annoyance and becomes a difficult barrier to overcome.

An input called “Create a Password” being filled out. The placeholder reads, “8-15 characters, including at least 3 numbers and 1 symbol.” and the animation depicts the person filling it out having to delete the text to be able to go back and review what the password requirements are.
Wait—what’s the minimum length? How many numbers do they want again? (Large preview)

While more technologically-sophisticated people may have learned clever tricks such as cutting entered information, reviewing the placeholder content to refresh their memory, then re-pasting it back in to edit, people who are less technologically literate may not understand why the help content is disappearing or how to bring it back.

Digital Literacy

Considering that more and more of the world’s population is coming online, the onus falls on us as responsible designers and developers to make these people feel welcomed. Your little corner of the Internet (or intranet!) could very well be one of their first experiences online — assuming that the end user “will just know” is simple arrogance.

For US-based readers, a gentle reminder that new may not mean foreign. Access is on the rise for older Americans. While digital literacy will become more commonplace among older populations as time marches on, accessibility issues will as well.

For someone who has never encountered it before, placeholder text may look like entered content, causing them to skip over the input. If it’s a required field, form submission will create a frustrating experience where they may not understand what the error is, or how to fix it. If it’s not a required field, your form still runs the unnecessary risk of failing to collect potentially valuable secondary information.

Utility

Placeholder help content is limited to just a string of static text, and that may not always be sufficient to communicate the message. It may need to have additional styling applied to it, or contain descriptive markup, attributes, images, and iconography.

This is especially handy in mature design systems. The additional styling options created by moving the string of text out of the input element mean it can take advantage of the system’s design tokens, and all the benefits that come with using them.

Placeholder text’s length is also limited to the width of the input it is contained in. In our responsive, mobile-first world, there stands a very good chance that important information could be truncated:


An input called Your YAMA Code, with a truncated placeholder that reads, You can find this code on the ba-


I guess I’ll never know where that code is. (Large preview)

Vision

Color Contrast

The major browsers’ default styles for placeholder content use a light gray color to visually communicate that it is a suggestion. Many custom input designs follow this convention by taking the color of input content and lightening it.

Unfortunately, this technique is likely to run afoul of color contrast issues. Color contrast is a ratio determined by comparing the luminosity of the text and background color values; in this case, it’s the color of the placeholder text over the input’s background.

See the Pen Default browser placeholder contrast ratios by Eric Bailey (@ericwbailey) on CodePen.

If the placeholder content has a contrast ratio that is too low to be perceived, it means that information critical to filling out a form successfully may not be able to be seen by people experiencing low vision conditions. For the font sizes most inputs use, the minimum required ratio is 4.5:1.
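For reference, here is roughly what nudging the default placeholder color toward that ratio looks like in CSS. The selector is standard, but the exact color is only an illustration — #767676 on a white background is about the lightest gray that still clears 4.5:1:

input::placeholder {
  color: #767676; /* roughly 4.5:1 against a white background */
  opacity: 1; /* Firefox reduces placeholder opacity by default */
}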

Like all accessibility concerns, low vision conditions can be permanent or temporary, biological or environmental, or a combination. Biological disabilities include conditions like farsightedness, color blindness, dilated pupils, and cataracts. Environmental conditions include circumstances such as the glare of the mid-day sun, a battery-saving low brightness setting, privacy screens, grease and makeup left on your screen by your last phone call, and so on.

This ratio isn’t some personal aesthetic preference that I’m trying to force onto others arbitrarily. It’s part of a set of painstakingly-developed rules that help ensure that the largest possible swath of people can operate digital technology, regardless of their ability or circumstance. Consciously ignoring these rules is to be complicit in practicing exclusion.

And here’s the rub: In trying to make placeholder attributes inclusive, the updated higher contrast placeholder content color may become dark enough to be interpreted as entered input, even by more digitally literate people. This swings the issue back into cognitive concerns land.


The email address field on GoFundMe’s password reset page has a placeholder that reads email@address.com and is set to a dark black color that makes it look like entered input. Screenshot.


The placeholder text color on GoFundMe’s password reset page makes it appear like entered input. Additionally, the checkmark icon on the Request New Password button makes it seem like the request has already been processed. (Large preview)

High Contrast Mode

The Windows operating system contains a feature called High Contrast Mode. When activated, it assigns new colors to interface elements from a special high contrast palette that uses a limited number of color options. Here’s an example of what it may look like:


An input field with a label that reads “Donation amount” and a placeholder that reads “$25.00.” The screenshot is taken with Windows High Contrast mode active, so the placeholder element looks like entered text content. Screenshot.


Windows 10 set to use the High Contrast Mode 1 theme running Internet Explorer 11. (Large preview)

In High Contrast Mode, placeholder content is assigned one of those high contrast colors, making it look like pre-filled information. As discussed earlier, this could prevent people from understanding that the input may need information entered into it.

You may be wondering if it’s possible to update the styling in High Contrast Mode to make a placeholder more understandable. While it is possible to target High Contrast Mode in a media query, I implore you not to do so. Front-end developer Hugo Giraudel said it best:

“High contrast mode is not about design anymore but strict usability. You should aim for highest readability, not color aesthetics.”

The people that rely on High Contrast Mode use it because of how predictable it is. Unduly altering how it presents content may interfere with the only way they can reliably use a computer. In the case of lightening the color of placeholder content to make it appear like its non-High Contrast Mode treatment, you run a very real risk of making it impossible for them to perceive.

A Solution

To recap, the placeholder attribute:

  • Can’t be automatically translated;
  • Is oftentimes used in place of a label, locking out assistive technology;
  • Can hide important information when content is entered;
  • Can be too light-colored to be legible;
  • Has limited styling options;
  • May look like pre-filled information and be skipped over.

Eesh. That’s not great. So what can we do about it?

Design

Move the placeholder content above the input, but below the label:


An input with a label that reads, Your employee ID number, and help content below the label that reads, Can be found on your employee intranet profile. Example: a1234567-89. The example ID has been styled using a monospaced font.


Large preview

This approach:

  • Communicates a visual and structural hierarchy:
    • What this input is for,
    • Things you need to know to use the input successfully, and
    • The input itself.
  • Can be translated.
  • Won’t look like pre-filled information.
  • Can be seen in low vision circumstances.
  • Won’t disappear when content is entered into the input.
  • Can include semantic markup and be styled via CSS.

Additionally, the help content will be kept in view when the input is activated on a device with a software keyboard. If placed below the input, the content may be obscured when an on-screen keyboard appears at the bottom of the device viewport:


iOS’ on-screen keyboard is obscuring information about password requirements on a “Set a password” input. Screenshot.


Content hidden by an on-screen keyboard. 3rd party keyboards with larger heights may have a greater risk of blocking important content. (Large preview)

Development

Here’s how to translate our designed example to code:

<div class="input-wrapper">
  <label for="employee-id">
    Your employee ID number
  </label>
  <p
    id="employee-id-hint"
    class="input-hint">
    Can be found on your employee intranet profile. Example: <samp>a1234567-89</samp>.
  </p>
  <input
    id="employee-id"
    aria-describedby="employee-id-hint"
    name="id-number"
    type="text" />
</div>

This isn’t too much of a departure from a traditional accessible for/id attribute pairing: The label element is programmatically associated with the input via its for attribute, which matches the input’s id of “employee-id”. The p element placed between the label and input elements acts as a replacement for a placeholder attribute.

“So,” you may be wondering. “Why don’t we just put all that placeholder replacement content in the label element? It seems like it’d be a lot less work!” The answer is that developer convenience shouldn’t take priority over user experience.

By using aria-describedby to programmatically associate the input with the p element, we are creating a priority of information for screen readers that has parity with what a person browsing without a screen reader would experience. aria-describedby ensures that the p content will be described last, after the label’s content and the kind of input it is associated with.

In other words, it’s what content the input is asking for, what type of input it is, then additional help if you need it — exactly what someone would experience when looking at the form input.

User experience encompasses all users, including those who navigate with the aid of screen readers. The help content is self-contained and easy to navigate to and from, should the person using a screen reader need to re-reference it. As it is a self-contained node, it can also be silenced (typically with the Control key) without risking muting other important information.

Including the help content as part of the label makes it unnecessarily verbose. Labels should be meaningful, but also concise. Adding too much information to a label may have the opposite of the desired effect, making it too long to recall or simply too frustrating to listen to all the way through. In fact, the Web Content Accessibility Guidelines have rules that specifically address this: Success Criteria 2.4.6 and 3.3.2.

Example

Here is the solution implemented in live code:

See the Pen Don’t use the placeholder attribute by Eric Bailey (@ericwbailey) on CodePen.

And here’s a video demonstrating how popular screen readers handle it:

A Better Solution

“The less an interface requires of its users, the more accessible it is.”

Alice Boxhall

A final thought: Do you even need that additional placeholder information?

Good front-end solutions take advantage of special input attributes and accommodating validation practices to prevent offloading the extra work onto the person who simply wants to use your site or app with as little complication as possible.
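As a hedged example of what those attributes can look like (the field itself is hypothetical), letting the browser do the heavy lifting often removes the need for hint text entirely:

<label for="phone">Phone number</label>
<input
  id="phone"
  name="phone"
  type="tel"
  autocomplete="tel"
  required />

The type and autocomplete attributes give autofill and software keyboards enough information to help the person along, and required lets the browser handle basic validation.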

Good copywriting creates labels that clearly and succinctly describe the input’s purpose. Do a good enough job here and the label cuts through the ambiguity, especially if you test it beforehand.

Good user experience is all about creating intelligent flows that preempt people’s needs, wants, and desires by capitalizing on existing information to remove as many unnecessary questions as possible.

Accommodating the people who use your website or web app means casting a critical eye on what you take for granted when you browse the Internet. By not making assumptions about other people’s circumstances — including the technology they use — you can do your part to help prevent exclusion.

Take some time to review your design and code and see what doesn’t stand up to scrutiny — checking to see if you use the placeholder attribute might be a good place to start.

Standing on the shoulders of giants. Thanks to Roger Johansson, Adam Silver, Scott O’Hara, and Katie Sherwin for their writing on the subject.

UX Your Life: Applying The User-Centered Process To Your Life (And Stuff)

JD Jordan



Everything is designed, whether we make time for it or not. Our smartphones and TVs, our cars and houses, even our pets and our kids are the products of purposeful creativity.

So why not our lives?

A great many of us are, currently, in a position where we might look at our jobs — or even our relationships — and wonder, “Why have I stayed here so long? Is this really where I want or even need to be? Am I in a position where I can do something about it?”

The simple — and sometimes harsh — answer is that we don’t often make intentional decisions about our lives and our careers like we do in our work for clients and bosses. Instead, having once made the decision to accept a position or enter a relationship, inertia takes over. We become reactive rather than active participants in our own lives and, like legacy products, are gradually less and less in touch with the choices and the opportunities that put us there in the first place.

Or, in UX terms: We stop doing user research, we stop iterating, and we stop meeting our own needs. And our lives and careers become less usable and enjoyable as a result of this negligence.

Thankfully, all the research, design, and testing tools we need to intentionally design our lives are easily acquired and learned. And you don’t need special training or a trust fund to do it. All you need is the willingness to ask yourself difficult questions and risk change.

You might just end up doing the work you want, having the life-work balance you need, and both of those with the time you need for what’s most important to you.

I’d be remiss if I didn’t admit that the idea of applying UX tools to my life didn’t come quickly. UX design principles are applicable to a much wider range of projects than the discipline typically concerns itself with, but it was only through some dramatic personal trials that I was finally compelled to test these methods against my own life and those of my family. That is to say, though, I’m not just an evangelist for these methods; I also use them.


Palo Duro on a weekday


What does your office look like? This was a Tuesday — a workday! — after my wife and I redesigned our lives and careers and became business partners. (Large preview)

So how do you UX your life?

Below, I’m going to introduce you to four tools and techniques you can use to get started:

  1. Your Life In Weeks
    A current state audit of your past.
  2. Eisenhower Charts
    A usability assessment for your present and your priorities.
  3. Affinity Mapping
    A qualitative method for identifying — and later retrospecting on — your success metrics (KPIs).
  4. Prototyping Life
    Because you’ve got to try it before you live it.

But first…

Business As Usual: The User-Centered Design Process

Design thinking and its deliberate creative and experimental process provides an excellent blueprint for how to perform user research on yourself, create the life you need, and test the results.

This user-centered design process is nothing new. In many ways, people have been practicing this iterative process since our ancestors first talked to each other and sketched on cave walls. Call it design thinking, UX, or simply problem-solving — it’s much the same from agency to agency, department to department, regardless of the proprietary frame.

Design process
Look familiar? The design process in its simplest form. Credit: Christopher Holm-Hansen, thenounproject.com. (Large preview)

The user-centered design process is, most simply:

  1. Phase 1: Research
    The first step to finding any design solution is to talk to users and stakeholders and validate the problem (and not just respond to the reported symptoms). This research is also used to align user and business needs with what’s technically and economically feasible. This first step in the process is tremendously freeing — you don’t need to toil in isolation. Your user knows what they need, and this research will help you infer it.
  2. Phase 2: Design
    Don’t just make things beautiful — though beauty is joyful! Focus on creating solutions for the specific needs, pain-points, and opportunities your research phase identified. And remember, design is both a noun and a verb. Yes, you deliver designs for your clients, but design is — first and foremost — a process of insight, trial, and error. And once you have a solution in mind…
  3. Phase 3: Testing
    Test early and test often. When your solutions are still low-fi (before they go to development) and absolutely before they go to market, put them in front of real users to make sure you’re solving the right problems. Become an expert in making mistakes and iterating on the lessons those mistakes teach you. It’s key to producing the best solutions.
  4. Repeat

Most design-thinking literature illustrates how the design process is applied to products, software, apps, or web design. At our agency, J+E Creative, we also apply this process to graphic design, content creation, education, and filmmaking. And it’s for that reason we don’t call it the UX design process. We drop the abbreviated adjective because, in our experience, the process works just as well for presentations and parenting as it does for enterprise software.

The process is about problem-solving. We just have to turn the process on ourselves.

Expanding The Scope: User-Centered Parenting

As creatives and as the parents of five elementary-aged kiddos, one of the first places we tried to apply the design process to our lives was to the problems of parenting.


A rare picture of a shark stepping on a Lego


Talk about a pain point. Using UX basics to solve a parenting problem opened the door to a wider application of the process and — mercifully — saved our tender feet. (Large preview)

In our case, the kids didn’t clean up their Legos. Like, ever. And stepping on a Lego might just be the most painful thing that can happen to you in your own home. They’re all right angles, unshatterable plastic, and invariably in places where you otherwise feel safe, like the kitchen or the bathroom.

But how can you research, design, and test a parenting issue — such as getting kids to pick up their Legos — using the user-centered design process?

Research

We’re far from the first parents to struggle with the painful reality of stepping on little plastic knives. And like most parents, we’d learned threats and consequences were inadequate to the task of changing our kids’ behavior.

So we started with a current-state contextual analysis: The kids’ Legos were kept in square canvas boxes in square Ikea bookcases in a room with a carpeted floor. Typically, the kids would pour the Legos out on the carpet — better for sorting through the small pieces, but incurring the pain point that Legos are notoriously hard to clean up off carpet.


Lego slippers


For reals. If your product requires me to protect myself against it in my own home, the problem might be the product. Credit: BRAND STATION/LEGO/Piwee. (Large preview)

We also did a competitive analysis and were surprised to learn that, back in 2015, Lego appeared to acknowledge this problem and teamed up with Brand Station to create some Lego-safe slippers. But, sadly, this was both a limited run and an impractical solution.


Five kids, five users


All users, great and small. It’s tempting to think users are paying customers or website visitors. But once you widen your perspective, users are everywhere. Even in your own home. (Large preview)

Lastly, we conducted user interviews. We knew the stakeholder perspective: We wanted the Legos to stay in their bins or — failing that — for the kids to pick them up after they were played with. But we didn’t assume we knew what the users wanted. So we talked to each of them in turn (no focus groups!) and what we found was eye opening. Of course, the kids didn’t want to pick up their Legos. It was inconvenient for play and difficult because of the carpet. But we were surprised to learn that the kids had also considered the Lego problem — they didn’t like discipline, after all — and they already had a solution in mind. If anything, like good users, they were frustrated we hadn’t asked sooner.

Design

Remember when I said, your user knows what they need?

One of our users asked us, “What about the train table with the big flat top and the large flat drawer underneath?”

Eureka.


Ikea boxes and train tables


Repurposing affordances. What works for one interaction often works for another. And with a little creativity and flexibility, some solutions present themselves. (Large preview)

By swapping the contents of the Lego bins with the train table, we solved nearly all stakeholder and user pain points in one change of platform:

  • Legos of all sizes were easy to find in the broad flat drawer.
  • The large flat surface of the train table was a better surface for assembling and cleaning up Legos than was the carpet.
  • Clean up was easy — just roll the drawer closed!
  • Opportunity bonus: It painlessly let us retire the train toys the kids had already outgrown.

Testing

No solution is ever perfect, and this was no exception. Despite its simplicity, iteration was quickly necessary. For instance, each kid claimed the entire surface of the top deck. And the lower drawer was rarely pushed in without a reminder.

But you know what? We haven’t stepped on a Lego in years. #TrustTheProcess.

The Ultimate Experience: User-Centered Living

Knowing how to apply the design process to our professional work, and emboldened from UXing our kids, we began to apply the process to something bigger — perhaps the biggest something of all.

Our lives.


Don’t do yoga on a mountaintop


This is not a plan. This is bullsh*t. (Large preview)

The Internet is full of advice on this topic. And it’s easy to mistake its ubiquitous inspirational messages for a path to self-improvement and a mindful life. But I’d argue such messages — effective, perhaps, for short-term encouragement — are damaging. Why?

They feature:

  • Vague phrases or platitudes.
  • Disingenuous speakers, often without examples.
  • The implication of attainable or achieved perfection.
  • Calls for sudden, uninformed optimism.

But most damning, these messages are often too-high-level, include privileged and entitled narratives masquerading as lessons, or present life as a zero-sum pursuit reminiscent of Cortés burning his ships.

In short, they’re bullsh!t.

What we need are practical tools we can learn from and apply to our own experiences. People don’t want to find the thing they’re most passionate about, then do it on nights and weekends for the rest of their lives. They want an intentional life they’re in control of. Full time. And still make rent.

So let’s take deliberate control of our lives using the same tools and techniques we use for client work or for getting the kids to pick up their damn legos.

Content Auditing Your Past: Your Life In Weeks

The best way I’ve found to get started designing your life is to take a look back at how you’ve lived your life so far. It’s the ultimate content audit, and it’s one of the most eye-opening acts of introspection you can do.

Tim Urban introduced the concept of looking at your life in weeks on his occasional blog, Wait But Why. It’s a reflective audit of your past reduced to a graph featuring 52 boxes per row, with each box representing a week and each row, a year. And combined with a Social Security Administration death estimate, it presents a total look at the life you’ve lived and the time you have left.

You can get started right now by downloading a Your Life In Weeks template and by following along with my historical audit.


Life in weeks


My life, circa Spring Break. Grey is unstructured time, green is education, and blue is my career (each color in tints to represent changes in schools or employers). White dots represent positive events, black dots represent negative ones. Orange dots are opportunities I can predict. Empty dots are weeks not yet lived. (Large preview)

Your Life In Weeks maps the high points and low points in your life. How it’s been spent so far and what lies ahead.

  • What were the big events in your life?
  • How have you spent your time so far?
  • What events can you forecast?
  • How do you want to spend your time left?

This audit is an analog for quantifiable user and usability research techniques such as website analytics, conversion rates, or behavior surveys. The result is a snapshot of one user’s unique life and career. Yours.

Start by looking back…
  • Where and when did you go to school?
  • When did you turn 18, 21, 40?
  • When did you get your first job? When did your career begin?
  • When and where were your favorite trips?
  • When and where did you move?
  • When were your major career changes or professional events?
  • What about relationships, weddings, or breakups?
  • When were your kids born?
  • And don’t forget major personal events: health issues, traumas, success, or other impactful life changes.

Life in weeks, education


Youth is wasted on the young. I spent the first few years of my life with mostly unstructured time (grey) before attending a variety of schools (shades of green) in North Carolina, Georgia, Virginia, France, and Scotland. I also moved a few times (white circles). Annotations are in the margins. (Large preview)


Life in weeks, career


Adulting is hard. My first summer and salaried jobs led to founding my first company and the inevitable quarter-life crisis. After graduate school, life got more complicated: I closed my company, got divorced, and dealt with a few health crises (black dots) but also had kids, got remarried, and published my first novel (white dots). (Large preview)

What can you look forward to…
  • Where do you want your career to go and by when?
  • What are your personal goals?
  • Got kids? When is your last Spring Break with them? When do they move out?
  • When might you retire?
  • When might you die?

Life in weeks, forecast


Maximize the future. Looking forward, I can forecast four remaining Spring Breaks with all my kids (as a divorcee, they’re with me every other year). I also know when the last summer vacation with all of them is and when they’ll start moving out to college. (Large preview)


Life in weeks, death


How full is your progress bar? Social Security Administration helps forecast your death date. But don’t worry. The older you already are, the longer you’ll make it. (Large preview)

The perspective this audit reveals can be humbling but it’s better than keeping your head in the sand. Or in the cubicle. Realizing your 40th really is your midlife might be the incentive you need for real change, knowing your kids will move out in a few years might help you re-prioritize, or seeing how much time you spent working on someone else’s dream might give you the motivation to start working for your own.

When I audited myself, I was shocked by how much time I’d spent at jobs that were poor fits for me. And at how little time I had left to do something else. I was also shocked to see how little time I had left with my kids at home, even as young as they are. Suddenly, the pain of sitting in traffic or spending an evening away at work took on new meaning. I didn’t resent my past — what’s done is done and there’s no way to change it — but I did let it color how I saw my present and my future.

Usability Testing The Present: Eisenhower Charts

Once you’ve looked back at your past, it’s time to look at how you’re spending your present.

An Eisenhower chart — cleverly named for the US president and general that saved the world — is a simple quadrant graph that juxtaposes urgency (typically, the Y-axis) with importance (typically the X-axis). It helps to identify your priorities to help you focus on using your time well, not just filling it.

Put simply, this tool helps you:

  1. Figure out what’s important to you.
  2. Prioritize it.

Most of us struggle every day (or in even smaller units of time) to figure out the most important thing we need to do right now. We take inventories of what people expect from us, of what we’ve promised to do for others, or of what feels like needs tackling right away. Then we prioritize our schedules around these needs.


Eisenhower chart


What’s important to you? It’s easy to get caught up in urgency — or perceived urgency — and disregard what’s important. But I often find that the most important things aren’t particularly urgent and, therefore, must be consciously prioritized. (Large preview)

Like a feature prioritization exercise for a piece of software, this analytical tool helps separate the must-haves and should-haves from the could- and would-haves. It does this by challenging inertia and assumption — by making us validate the activities that eat up the only commodity we’ll never get more of — time.

You can download a blank Eisenhower matrix and start sorting your present as I take you through my own.

Start by listing everything you do — and everything you wish you were doing — on Post-Its and honestly measure how urgent and important those activities are to you right now. Then take a moment. Look at it. This might be the first time you’ve let yourself acknowledge the fruitless things that keep you busy or the priorities unfulfilled inside you.

What’s important and urgent?
  • Deadlines
  • Health crises
  • Taxes (at the end of each quarter or around April 15)
  • Rent (at least once a month)
What’s important but not urgent?
  • Something you’re passionate about but which doesn’t have a deadline
  • A long-term project — can you delegate parts of it?
  • Telling your loved ones that you love them
  • Family time
  • Planning
  • Self-care
What’s urgent but not important?
  • Phone calls
  • Texts and Slacks
  • Most emails
  • Unscheduled favors
Neither important nor urgent
  • TV (yes, even Netflix)
  • Social media
  • Video games

Eisenhower chart, sorted


Do it once. Do it often. We regularly include Eisenhower charts in our weekly business and family planning. The busier you are, the more valuable it becomes. (Large preview)

The goal is to identify what’s important, not just what’s urgent. To identify your priorities. And as you repeat this activity over the course of weeks or even years, it makes you conscious of how you spend your time and can have a tremendous impact on how well that time is spent. Because the humbling fact is, no one else is going to prioritize what’s important to you. Your loving partner, your supportive family, your boss and your clients — they all have their own priorities. They each have something that’s most important to them. And those priorities don’t necessarily align with yours.

Because the things that are important to each of us — not necessarily urgent — need time in our schedules if they’re going to provide us with genuine and lasting self-actualization. These are our priorities. And you know what you’re supposed to do with priorities.

Prioritize them.


Schedule your priorities


Get sh!t done. “The key is not to prioritize your schedule but to schedule your priorities.” — Stephen Covey, Seven Habits of Highly Effective People. (Large preview)

Identifying what your priorities are is critical to getting them into your schedule. Because, if you want to paint or travel or spend time with the kids or start a business, no one else is going to put that first. You have to. It is up to you to identify what’s important and then find time for it. And if time isn’t found for your priorities, you only have one person to blame.

We do these charts regularly, both for family and business planning. And one of the things I often take away from this exercise is the reminder to schedule blocks of time for the kids. And to schedule time for the thing I’m most passionate about — writing. I am a designer who writes but I aspire to become a writer who designs. And I’ll only get there if I prioritize it.

Success Metrics For The Future: Affinity Mapping

If you’ve ever seen a police procedural, you’ve seen an affinity map.

Affinity maps are a simple way to find patterns in qualitative data. UXers often use them to make sense of user interviews and survey data, to find patterns that inform personae or user requirements, and to tease out that most elusive gap.

In regards to designing your life, an affinity map is a powerful technique for individuals, partners, and teams to determine what they want and need out of their lives, to synthesize that information into actionable and measurable requirements, and to create a vision of what their life might look like in the future.


Affinity map


Great minds think alike. Team affinity mapping can help you and your family, or you and your business partners, align your priorities. My wife and I did this activity when we started our business to make sure we were on the same page. And we’ve looked back at it, regularly, to measure if we’re staying on target. (Large preview)

You don’t need a template to get started affinity mapping. Just a lot of Post-It notes and a nice big wall, window, or table.

How to affinity map your life (alone or with your life/business partners)
  • Write down any important goal you want to achieve on its own Post-it.
  • Write down important values or activities you want to prioritize on its own Post-it.
  • Categorize the insights under “I” statements to keep the analysis from the user’s (your!) point of view.
  • Organize that data by the insights it suggests. For instance, notes reading “I want to spend more time with my kids” and “I don’t want to commute for an hour each way” might fall under the heading “I want to work close to home.”
  • Timebox the exercise. You can easily spend all day on this one. Set a timer to make sure you don’t spend it overthinking (technical term: navel gazing).

This is a shockingly quick and easy technique to synthesize the insights from Your Life In Weeks and your Eisenhower chart. And by framing the results in “I” statements, your aggregate research begins speaking back to you — as a pseudo personae of yourself or of your partnership with others.

Insights such as “I want to work close to home” and “I want to work with important causes” become your life’s requirements and the success metrics (KPIs). They’ll form the basis for testing and retrospectives.

Speaking of testing…

Prototype Or Dive Right In

Now that you’ve audited, validated, and created a vision for the life you want to live, what do you do with this information?

Design a solution!

Maybe you only need to change one thing. Maybe you need to change everything! Maybe you need to save up some runway money if the change impacts your income or your expenses. Maybe you need to dramatically cut your expenses. No change is without consequence, and your life’s requirements are different from anyone else’s.

When my wife and I sat down and did these activities, we determined we wanted to:

  • Work together
  • Work from home, so we don’t have to commute
  • Start our work day early, so we’re done by the time the kids come home from school
  • Not check email or slack after hours or on weekends
  • Make time for our priorities and our passion projects.

J+E services


All about the pies. Aligning our priorities helped define the services our business offers and the delicious return on investment our clients can expect. (Large preview)

Central to this vision of the life we wanted was a new business — one that met the functional and reliability needs of income, insurance, and career while also satisfying the usability and joy requirements of interest, collaboration, and self-actualization. And, in the process, these activities also helped us identify what services that business would offer. Design, content, education, and friendship became the verticals we wanted to give our time to and take fulfillment from.

But we didn’t just jump in, heedless of the impact a shift in employment and income might have on our family. Instead, we prototyped what this new business might look like before committing it to the market.


Prototyping life and business


Prototyping is serious business. We took advantage of a local hackathon to test working together and with a team before quitting our day jobs. (Large preview)

Using after-hours freelance client work and hackathons, we tested various workstyles, teams, and tools while also assessing more abstract but critical business and lifestyle concerns like hourly rates, remote collaboration, and shifted office hours. And with each successive prototype, we:

  1. Observed (research)
  2. Iterated (design)
  3. Retrospected (testing).

Some of the solutions that emerged from this were:

  • A remote-work team model based on analog synchronous communication and digital statuses (e.g., phone calls and Slack stand-ups).
  • No dedicated task management system — everyone has their preferred accountability method. My wife and I, for instance, prefer pen and paper lists and talking to each other instead of process automation tools (we learned we really hate Trello!).
  • Our URL — importantshit.co — is a screener to filter clients for personality and humor compatibility.
  • Google Friday-style passion project time, built into our schedules to help us prioritize what’s important to each of us.

And some of the problems we identified:

  • We both hate bookkeeping — there’s a lot to learn.
  • Scaling a remote team requires much more deliberate management.
  • New business development is hard — we might need to hire someone to help with that.

So when we finally launched J+E Creative full time, we already had a sense of what worked for us and what challenges required further learning and iteration. And because we prototyped, first, we had the confidence and a few clients in place so that we didn’t have to save too much money before making the change.

The ROI For Designing Your Life

Superficially, we designed a new business for ourselves. More deeply, though, we took control of variables and circumstances that let us meet our self-identified lifestyle goals: spending more time with the kids, prioritizing our marriage and our family above work, giving ourselves time to practice and grow our passions, and better control our financial futures.

The return on investment for designing your life is about as straightforward as design solutions get. As Bill Burnett and Dave Evans put it, “A well-designed life is a life that is generative — it is constantly creative, productive, changing, evolving, and there is always the possibility of surprise. You get out of it more than you put in.”

Hopefully you’ll see how a Your Life In Weeks audit can help you learn from your past, how an Eisenhower chart can help you prioritize the present, and how a simple affinity mapping exercise for your wants and needs can help you see beyond money-based decisions and assess whether you’re making the right decisions regarding family, clients, and projects.


Life-Career balance


Live and work, by design. Mindfully designing our lives and our careers allowed us to pursue our own business (J+E Creative) and our separate passions (elliedecker.com and o-jd.com) (Large preview)

It’s always a give and a take. We frequently have to go back to our affinity map results to make sure we’re still on target. Or re-prioritize with an Eisenhower chart — especially in a challenging week. And, sometimes, the urgent trumps the important. It’s life, after all. But always with the understanding that we are each on the hook when our lives aren’t working out the way we want. And that we have the tools and the insights necessary to fix it.

So schedule a kickoff and set a deadline. You’ve got a new project.

Down For More?

Ready to start designing a more mindful life and career? Here are a couple links to help you get started:

A Reference Guide For Typography In Mobile Web Design

Suzanna Scacca



With mobile taking a front seat in search, it’s important that websites are designed in a way that prioritizes the best experience possible for their users. While Google has brought attention to elements like pop-ups that might disrupt the mobile experience, what about something as seemingly simple as choice of typography?

The answer to the typography question might seem simple enough: what works on desktop should work on mobile so long as it scales well. Right?

While that would definitely make it a lot easier on web designers, that’s not necessarily the case. The problem in making that statement a decisive one is that there haven’t been a lot of studies done on the subject of mobile typography in recent years. So, what I intend to do today is give a brief summary of what it is we know about typography in web design, and then see what UX experts and tests have been able to reveal about using typography for mobile.

Understanding The Basics Of Typography In Modern Web Design

Look, I know typography isn’t the most glamorous of subjects. And, as a web designer, you might not spend much time thinking about it, especially if clients bring their own style guides to you prior to beginning a project.

That said, with mobile-first now here, typography requires additional consideration.

Typography Terminology

Let’s start with the basics: terminology you’ll need to know before digging into mobile typography best practices.

Typography: This term refers to the technique used in styling, formatting, and arranging “printed” (as opposed to handwritten) text.

Typeface: This is the classification system used to label a family of characters. So, this would be something like Arial, Times New Roman, Calibri, Comic Sans, etc.

Typefaces in Office 365


A typical offering of typefaces in word processing applications. (Source: Google Docs) (Large preview)

Font: This drills down further into a website’s typeface. The font details the typeface family, point size, and any special stylizations applied. For instance, 11-point Arial in bold.

3 essential elements to define a font


An example of the three elements that define a font. (Source: Google Docs) (Large preview)

Size: There are two ways in which to refer to the size (or height) of a font: the word processing size in points or the web design size in pixels. For the purposes of talking about mobile web design, we use pixels.

Here is a line-by-line comparison of various font sizes:

An example of font sizes


An example of how the same string of text appears at different sizes. (Source: Google Docs) (Large preview)

As you can see in WordPress, font sizes are important when it comes to establishing hierarchy in header text:

An example of font size choices in WordPress


Header size defaults available with a WordPress theme. (Source: WordPress) (Large preview)

Weight: This is the other part of defining a typeface as a font. Weight refers to any special styles applied to the face to make it appear heavier or lighter. In web design, weight comes into play in header fonts that complement the typically regular-weight body text.

Here is an example of options you could choose from in the WordPress theme customizer:

An example of font weight choices


Sample font weights available with a WordPress theme. (Source: WordPress) (Large preview)

Kerning: This pertains to the space between two letters. It can be adjusted in order to create a more aesthetically pleasing result while also enhancing readability. You will need design software like Photoshop to make these types of adjustments.

Tracking: Tracking, or letter-spacing, is often confused with kerning as it too relates to adding space in between letters. However, whereas kerning adjusts spacing between two letters in order to improve appearances, tracking is used to adjust spacing across a line. This is used more for the purposes of fixing density issues while reading.

To give you a sense for how this differs, here’s an example from Mozilla on how to use tracking to change letter-spacing:

Normal tracking example


This is what normal tracking looks like. (Source: Mozilla) (Large preview)

-1px tracking example


This is what (tighter) -1px tracking looks like. (Source: Mozilla) (Large preview)

1px tracking example


This is what (looser) 1px tracking looks like. (Source: Mozilla) (Large preview)

Leading: Leading, or line spacing, is the distance between the baselines of text (the baseline being the line upon which characters rest). Like tracking, this can be adjusted to fix density issues.

If you’ve been using word processing software for a while, you’re already familiar with leading. Single-spaced text. Double-spaced text. Even 1.5-spaced text. That’s leading.
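In CSS terms, tracking and leading map to the letter-spacing and line-height properties. A small sketch, with values chosen only for illustration:

p {
  letter-spacing: 0.01em; /* tracking: spacing applied across the whole line */
  line-height: 1.5;       /* leading: distance between baselines */
}

(Kerning between individual letter pairs, by contrast, is something CSS largely defers to the font’s own kerning data via font-kerning.)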

The Role Of Typography In Modern Web Design

As for why we care about typography and each of the defining characteristics of it in modern web design, there’s a good reason for it. While it would be great if a well-written blog post or super convincing sales jargon on a landing page were enough to keep visitors happy, that’s not always the case. The choices you make in terms of typography can have major ramifications on whether or not people even give your site’s copy a read.

These are some of the ways in which typography affects your end users:

Reinforce Branding
Typography is another way in which you create a specific style for your web design. If images all contain clean lines and serious faces, you would want to use an equally buttoned-up typeface.

Set the Mood
It helps establish a mood or emotion. For instance, a more frivolous and light-bodied typeface would signal to users that the brand is fun, young and doesn’t take itself seriously.

Give It a Voice
It conveys a sense of personality and voice. While the actual message in the copy will be able to dictate this well, using a font that reinforces the tone would be a powerful choice.

Encourage Reading
As you can see, there are a number of ways in which you can adjust how type appears on a screen. If you can give it the right sense of speed and ease, you can encourage more users to read through it all.

Allow for Scanning
Scanning or glancing (which I’ll talk about shortly) is becoming more and more common as people engage with the web on their smart devices. Because of this, we need ways to format text to improve scannability and this usually involves lots of headers, pull quotes and in-line lists (bulleted, numbered, etc.).

Improve Accessibility
There is a lot to be done in order to design for accessibility. Your choice of font plays a big part in that, especially as the mobile experience has to rely less on big, bold designs and swatches of color and more on how quickly and well you can get visitors to your message.

Because typography has such a diverse role in the user experience, it’s a matter that needs to be taken seriously when strategizing new designs. So, let’s look at what the experts and tests have to say about handling it for mobile.

Typography For Mobile Web Design: What You Need To Know

Too small, too light, too fancy, too close together… You can run into a lot of problems if you don’t strike the perfect balance with your choice of typography in design. On mobile, however, it’s a bit of a different story.

I don’t want to say that playing it safe and using the system default from Google or Apple is the way to go. After all, you work so hard to develop unique, creative and eye-catching designs for your users. Why would you throw in the towel at this point and just slap Roboto all over your mobile website?

We know what the key elements are in defining and shaping a typeface and we also know how powerful fonts are within the context of a website. So, let’s drill down and see what exactly you need to do to make your typography play well with mobile.

1. Size

In general, the rule of thumb is that font size needs to be 16 pixels for mobile websites. Anything smaller than that could compromise readability for visually impaired readers. Anything too much larger could also make reading more difficult. You want to find that perfect Goldilocks formula and, time and time again, it comes back to 16 pixels.

In general, that rule is a safe one to play by when it comes to the main body text of your mobile website. However, what exactly are you allowed to do for header text? After all, you need to be able to distinguish your main headlines from the rest of the text. Not just for the sake of calling attention to bigger messages, but also for the purposes of increasing scannability of a mobile web page.

The Nielsen Norman Group reported on a study from MIT that covered this exact question. What can you do about text that users only have to glance at? In other words, what sort of sizing can you use for short strings of header text?

Here is what they found:

Short, glanceable strings of text lead to faster reading and greater comprehension when:

  • They are larger in size (specifically, 4mm as opposed to 3mm).
  • They are in all caps.
  • Lettering width is regular (and not condensed).

In sum:

Lowercase lettering required 26% more time for accurate reading than uppercase, and condensed text required 11.2% more time than regular. There were also significant interaction effects between case and size, suggesting that the negative effects of lowercase letters are exacerbated with small font sizes.

I’d be interested to see how the NerdWallet website does, in that case. While I do love the look of this, they have violated a number of these sizing and styling suggestions:

The NerdWallet home page


NerdWallet’s use of all-caps and smaller font sizes on mobile. (Source: NerdWallet) (Large preview)

Having looked at this a few times now, I do think the choice of a smaller-sized font for the all-caps header is an odd choice. My eyes are instantly drawn to the larger, bolder text beneath the main header. So, I think there is something to MIT’s research.

Flywheel Sports, on the other hand, does a great job of exemplifying this point.

The Flywheel Sports home page


Flywheel Sports’ smart font choices for mobile. (Source: Flywheel Sports) (Large preview)

There’s absolutely no doubt where the visitors’ attention needs to go: to the eye-catching header. It’s in all caps, it’s larger than all the other text on the page, and, although the font is incredibly basic, its infusion with a custom handwritten-style type looks really freaking cool here. I think the only thing I would fix here is the contrast between the white and yellow fonts and the blue background.

Just remember: this only applies to the sizing (and styling) of header text. If you want to keep large bodies of text readable, stick to the aforementioned sizing best practices.
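To make the takeaway concrete, here is a minimal CSS sketch of these sizing rules; the exact values are illustrative assumptions rather than a prescription:

/* Body copy: the 16px rule of thumb, expressed in relative units */
html {
  font-size: 100%; /* 16px in most browsers by default */
}

body {
  font-size: 1rem; /* 16px body text */
}

/* Short, glanceable headlines: larger, uppercase, not condensed */
h1 {
  font-size: 1.75rem;
  text-transform: uppercase;
  font-stretch: normal; /* avoid condensed faces for glanceable strings */
}

Using relative units like rem also keeps this hierarchy intact if a reader has bumped up their default font size.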

2. Color and Contrast

Color, in general, is an important element in web design. There’s a lot you can convey to visitors by choosing the right color palette for designs, images and, yes, your text. But it’s not just the base color of the font that matters, it’s also the contrast between it and the background on which it sits (as evidenced by my note above about Flywheel Sports).

For some users, a white font on top of a busy photo or a lighter background may not pose too much of an issue. But “too much” isn’t really acceptable in web design. Users should encounter no issues when reading text on a website, especially on the already compromised view they get on mobile.

Which is why color and contrast are top considerations you have to make when styling typography for mobile.

The Web Content Accessibility Guidelines (WCAG) have clear recommendations regarding how to address color contrast in section 1.4.3. At a minimum, the WCAG suggests that a contrast of 4.5 to 1 should be established between the text and background for optimal readability. There are a few exceptions to the rule:

  • Text sized at 18 points (or bold text at 14 points) or larger only needs a contrast of 3 to 1.
  • Text that doesn’t appear in an active part of the web page doesn’t need to abide by this rule.
  • The contrast of text within a logo can be set at the designer’s discretion.

If you’re unsure of how to establish that ratio between your font’s color and the background upon which it sits, use a color contrast checking tool like WebAIM.

WebAIM color contrast checker


An example of how to use the WebAIM color contrast checker tool. (Source: WebAIM) (Large preview)
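If you would rather script the check than paste values into the tool, the WCAG ratio can be computed from the relative luminance of the two colors. Below is a minimal JavaScript sketch of that formula; the function names are my own, not part of any library:

// Relative luminance of an sRGB color, per the WCAG definition
function luminance([r, g, b]) {
  const channel = (c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colors: 1 (identical) up to 21 (black on white)
function contrastRatio(a, b) {
  const [lighter, darker] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (lighter + 0.05) / (darker + 0.05);
}

// White text on a #9a9a9a grey background
console.log(contrastRatio([255, 255, 255], [154, 154, 154]).toFixed(2)); // ≈ 2.81

White text over the #9a9a9a grey used further down the page comes out at roughly 2.81 to 1, well under the 4.5 to 1 minimum for normal-size text.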

The one thing I would ask you to be mindful of, however, is using opacity or other color settings that may compromise the color you’ve chosen. While the HEX color code will check out just fine in the tool, it may not be an accurate representation of how the color actually displays on a mobile device (or any screen, really).

To solve this problem and ensure you have a high enough contrast for your fonts, use a color eyedropper tool built into your browser like the ones for Firefox or Chrome. Simply hover the eyedropper over the color of the background (or font) on your web page, and let it tell you what the actual color code is now.

Here is an example of this in action: Dollar Shave Club.

This website has a rotation of images in the top banner of the home page. The font always remains white, but the background rotates.

Dollar Shave Club grey banner


Dollar Shave Club’s home page banner with a grey background. (Source: Dollar Shave Club) (Large preview)

Dollar Shave Club beige banner


Dollar Shave Club’s home page banner with a beige/taupe background. (Source: Dollar Shave Club) (Large preview)

Dollar Shave Club purple banner


Dollar Shave Club’s home page banner with a purple background. (Source: Dollar Shave Club) (Large preview)

Based on what we know now, the purple is probably the only one that will pass with flying colors. However, for the purposes of showing you how to work through this exercise, here is what the eyedropper tool says about the HEX color codes for each of the backgrounds:

  • Grey: #9a9a9a
  • Beige/taupe: #ffd0a8
  • Purple: #4c2c59

Here is the contrast between these colors and the white font:

  • Grey: 2.81 to 1
  • Beige/taupe: 1.42 to 1
  • Purple: 11.59 to 1

Clearly, the grey and beige backgrounds are going to lend themselves to a very poor experience for mobile visitors.

Also, if I had to guess, I’d say that “Try a risk-free Starter Set now.” is only a 10-point font (which is only about 13 pixels). So, the size of the font is also working against the readability factor, not to mention the poor choice of colors used with the lighter backgrounds.

The lesson here is that you should really make some time to think about how color and contrast of typography will work for the benefit of your readers. Without these additional steps, you may unintentionally be preventing visitors from moving forward on your site.

3. Tracking

Plain and simple: tracking in mobile web design needs to be used to control density. The standard recommendation is to keep between 30 and 40 characters on a line. Anything more or less could adversely affect readability.
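In CSS terms, the character count maps more to line length than to letter-spacing, and the ch unit (the width of the font’s “0” glyph) is a rough way to cap it; tracking itself is controlled with letter-spacing. A small sketch with illustrative values:

p {
  max-width: 38ch;        /* roughly 30 to 40 characters per line, depending on the face */
  letter-spacing: normal; /* keep tracking even rather than squeezing characters together */
}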

While it does appear that Dove is pushing the boundaries of that 40-character limit, I think this is nicely done.

The Dove home page


Dove’s use of even tracking and (mostly) staying within the 40-character limit. (Source: Dove) (Large preview)

The font is so simple and clean, and the tracking is evenly spaced. You can see that keeping the number of words on a line within the recommended limits gives this segment of the page the appearance that it will be easy to read. And that’s exactly what you want your typography choices to do: to welcome visitors to stop for a brief moment, read the non-threatening amount of text, and then go on their way (which, hopefully, is to conversion).

4. Leading

According to the NNG, content that appears above the fold on a 30-inch desktop monitor equates to five swipes on a 4-inch mobile device. Granted, this data is a bit old as most smartphones are now between five and six inches:

Average smartphone screen sizes


Average smartphone screen sizes from 2015 to 2021. (Source: TechCrunch) (Large preview)

Even so, let’s say that equates to three or four good swipes of the smartphone screen to get to the tip of the fold on desktop. That’s a lot of work your mobile visitors have to do to get to the good stuff. It also means that their patience will already be wearing thin by the time they get there. As the NNG pointed out, a mobile session lasts, on average, only about 72 seconds. Compare that to desktop at 150 seconds and you can see why this is a big deal.

This means two things for you:

  1. You absolutely need to cut out the excess on mobile. If this means creating a completely separate and shorter set of content for mobile, do it.
  2. Be very careful with leading.

You’ve already taken care to optimize your font size and width, which is good. However, with too much leading you could unintentionally be asking users to scroll even more than they might have to. And with every scroll comes the possibility of fatigue, boredom, frustration, or distraction getting in the way.

So, you need to strike a good balance here between using line spacing to enhance readability while also reining in how much work they need to do to get to the bottom of the page.

The Hill Holliday website isn’t just awesome inspiration on how to get a little “crazy” with mobile typography, but it also has done a fantastic job in using leading to make larger bodies of text easier to read:

The Hill Holliday home page


Hill Holliday uses the perfect ratio of leading between lines and paragraphs. (Source: Hill Holliday) (Large preview)

Different resources will give you different guidelines on how to create spacing for mobile devices. I’ve seen suggestions for anywhere from 120% to 150% of the font’s point size. Since you also need to consider accessibility when designing for mobile, I’m going to suggest you follow WCAG’s guidelines:

  • Spacing between lines needs to be 1.5 (or 150%, whichever ratio works for you).
  • Spacing between paragraphs then needs to be 2.5 (or 250%).
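Translated into CSS, those two guidelines might look something like the following sketch (treating paragraph spacing as an em-based margin, which scales with the font size):

body {
  line-height: 1.5;     /* spacing between lines: 150% of the font size */
}

p {
  margin-bottom: 2.5em; /* spacing between paragraphs: 250% of the font size */
}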

At the end of the day, this is about making smart decisions with the space you’re given to work with. If you only have a minute to hook them, don’t waste it with too much vertical space. And don’t turn them off with too little.

5. Acceptable Fonts

Before I break down what makes for an acceptable font, I want to first look at what Android’s and Apple’s typeface defaults are. I think there’s a lot we can learn just by looking at these choices:

Android
Google uses two typefaces for its platforms (both desktop and mobile): Roboto and Noto. Roboto is the primary default. If a user visits a website in a language that Roboto doesn’t support, then Noto is the secondary backup.

This is Roboto:

The Roboto character set


A snapshot of the Roboto character set. (Source: Roboto) (Large preview)

It’s also important to note that Roboto has a number of font families to choose from:

The Roboto families


Other options of Roboto fonts to choose from. (Source: Roboto) (Large preview)

As you can see, there are versions of Roboto with condensed letterforms, a heavier serifed face as well as a looser, serif-like option. Overall, though, this is just a really clean and simply stylized typeface. You’re not likely to stir up any real emotions when using this on a website, and it may not convey much of a personality, but it’s a safe, smart choice.

Apple
Apple has its own set of typography guidelines for iOS along with its own system typeface: San Francisco.

The San Francisco font


The San Francisco font for Apple devices. (Source: San Francisco) (Large preview)

For the most part, what you see is what you get with San Francisco. It’s just a basic sans serif font. If you look at Apple’s recommendations on default settings for the font, you’ll also find it doesn’t even recommend using bold stylization or outlandish sizing, leading or tracking rules:

San Francisco default settings


Default settings and suggestions for the San Francisco typeface. (Source: San Francisco) (Large preview)

Like with pretty much everything else Apple does, the typography formula is very basic. And, you know what? It really works. Here it is in action on the Apple website:

The Apple home page


Apple makes use of its own typography best practices. (Source: Apple) (Large preview)

Much like Google’s system typeface, Apple has gone with a simple and classic typeface. While it may not help your site stand out from the competition, it will never do anything to impair the legibility or readability of your text. It also would be a good choice if you want your visuals to leave a greater impact.

My Recommendations

And, so, this now brings me to my own recommendations on what you should use in terms of type for mobile websites. Here’s the verdict:

  1. Don’t be afraid to start with a system default font. They’re going to be your safest choices until you get a handle on how far you can push the limits of mobile typography.
  2. Use only a sans serif or serif font. If your desktop website uses a decorative or handwritten font, ditch it for something more traditional on mobile.

    That said, you don’t have to ignore decorative typefaces altogether. In the examples from Hill Holliday or Flywheel Sports (as shown above), you can see how small touches of custom, non-traditional type can add a little flavor.

  3. Never use more than two typefaces on mobile. There just isn’t enough room for visitors to handle that many options visually.

    Make sure your two typefaces complement one another. Specifically, look for faces that utilize a similar character width. The design of each face may be unique and contrast well with the other, but there should still be some uniformity in what you present to mobile visitors’ eyes.

  4. Avoid typefaces that don’t have a distinct set of characters. For instance, compare how the uppercase “i”, lowercase “l” and the number “1” appear beside one another. Here’s an example of the Myriad Pro typeface from the Typekit website:

    Myriad Pro characters


    Myriad Pro’s typeface in action. (Source: Typekit) (Large preview)

    While the number “1” isn’t too problematic, the uppercase “i” (the first letter in this sequence) and the lowercase “l” (the second) are just too similar. This can create some unwanted slowdowns in reading on mobile.

    Also, be sure to review how your font handles the conjunction of “r” and “n” beside one another. Can you differentiate each letter or do they smoosh together as one indistinguishable unit? Mobile visitors don’t have time to stop and figure out what those characters are, so make sure you use a typeface that gives each character its own space.

  5. Use fonts that are compatible across as many devices as possible (see the font-stack sketch after this list). Your best bets will be: Arial, Courier New, Georgia, Tahoma, Times New Roman, Trebuchet MS and Verdana.

    Default typefaces on mobile


    A list of system default typefaces for various mobile devices. (Source: tinytype) (Large preview)

    Android-supported typefaces


    Another view of the table that includes some Android-supported typefaces. (Source: tinytype) (Large preview)

    I think the Typeform website is a good example of one that uses a “safe” typeface choice, but that doesn’t prevent it from wowing visitors with its message or design.

    The Typeform home page


    Typeform’s striking typeface has nothing to do with the actual font. (Source: Typeform) (Large preview)

    It’s short, to the point, perfectly sized, well-positioned, and overall a solid choice if they’re trying to demonstrate stability and professionalism (which I think they are).

  6. When you’re feeling comfortable with mobile typography and want to branch out a little more, take a look at this list of the best web-safe typefaces from WebsiteSetup. You’ll find here that most of the choices are your basic serif and sans serif types. It’s definitely nothing exciting or earth-shattering, but it will give you some variation to play with if you want to add a little more flavor to your mobile type.
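Putting points 1 and 5 together, a font stack for mobile might start with the platform defaults and fall back to the web-safe faces listed above. This is only a sketch; the exact stack is a matter of taste:

body {
  /* System faces first (San Francisco, Segoe UI, Roboto), then web-safe fallbacks */
  font-family: -apple-system, "Segoe UI", Roboto, Arial, sans-serif;
}

code, pre {
  /* A web-safe monospace stack for any snippets of code */
  font-family: "Courier New", Courier, monospace;
}

The -apple-system keyword resolves to San Francisco on Apple devices, and Roboto covers most Android devices, so the web-safe faces only come into play elsewhere.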

Wrapping Up

I know, I know. Mobile typography is no fun. But web design isn’t always about creating something exciting and cutting edge. Sometimes sticking to practical and safe choices is what will guarantee you the best user experience in the end. And that’s what we’re seeing when it comes to mobile typography.

The reduced amount of real estate and the shorter times-on-site just don’t lend themselves well to the experimental typography choices (or design choices, in general) you can use on desktop. So, moving forward, your approach will have to be more about learning how to rein it in while still creating a strong and consistent look for your website.


Thumbnail

Creating The Feature Queries Manager DevTools Extension




Creating The Feature Queries Manager DevTools Extension

Ire Aderinokun



Within the past couple of years, several game-changing CSS features have been rolled out to the major browsers. CSS Grid Layout, for example, went from 0 to 80% global support within the span of a few months, making it an incredibly useful and reliable tool in our arsenal. Even though current support for a feature like CSS Grid Layout is relatively good, not every browser in use supports it. This means it’s very likely that you and I will be developing for a browser in which it is not supported.

The modern solution to developing for both modern and legacy browsers is feature queries. They allow us to write CSS that is conditional on browser support for a particular feature. Although working with feature queries is almost magical, testing them can be a pain. Unlike media queries, we can’t easily simulate the different states by just resizing the browser. That’s where the Feature Queries Manager comes in, an extension to DevTools to help you easily toggle your feature query conditions. In this article, I will cover how I built this extension, as well as give an introduction to how developer tools extensions are built.

Working With Unsupported CSS

If a property-value pair (e.g. display: grid) is not supported by the browser the page is viewed in, not much happens. Unlike other programming languages, if something is broken or unsupported in CSS, it only affects the broken or unsupported rule, leaving everything else around it intact.

Let’s take, for example, this simple layout:

The layout in a supporting browser


Large preview

We have a header spanning across the top of the page, a main section directly below that to the left, a sidebar to the right, and a footer spanning across the bottom of the page.

Here’s how we could create this layout using CSS Grid:

See the Pen layout-grid by Ire Aderinokun (@ire) on CodePen.

In a supporting browser like Chrome, this works just as we want. But if we were to view this same page in a browser that doesn’t support CSS Grid Layout, this is what we would get:

The layout in an unsupporting browser


Large preview

It is essentially the same as if we had not applied any of the grid-related styles in the first place. This behavior of CSS was always intentional. In the CSS specification, it says:

In some cases, user agents must ignore part of an illegal style sheet, [which means to act] as if it had not been there

Historically, the best way to handle this has been to make use of the cascading nature of CSS. According to the specification, “the last declaration in document order wins.” This means that if the same property is declared multiple times within a single declaration block, the last one prevails.

For example, if we have the following declarations:

body {
  display: flex;
  display: grid;
}

Assuming both Flexbox and Grid are supported in the browser, the latter — display: grid — will prevail. But if Grid is not supported by the browser, then that rule is ignored, and any previous valid and supported rules, in this case display: flex, are used instead.

body {
  display: flex;  /* used if the browser does not support Grid */
  display: grid;  /* wins in browsers that do support Grid */
}

Cascading Problems

Using the cascade as a method for progressive enhancement is and has always been incredibly useful. Even today, there is no simpler or better way to handle simple one-liner fallbacks, such as this one for applying a solid colour where the rgba() syntax is not supported.

div {
    background-color: rgb(0,0,0);
    background-color: rgba(0,0,0,0.5);
}

Using the cascade, however, has one major limitation, which comes into play when we have multiple, dependent CSS rules. Let’s again take the layout example. If we were to attempt to use this cascade technique to create a fallback, we would end up with competing CSS rules.

See the Pen layout-both by Ire Aderinokun (@ire) on CodePen.

In the fallback solution, we need to use certain properties, such as margins and widths, that aren’t needed by the “enhanced” Grid version and in fact interfere with it. This makes it difficult to rely on the cascade for more complex progressive enhancement.

Feature Queries To The Rescue!

Feature queries solve the problem of needing to apply groups of styles that are dependent on the support of a CSS feature. Feature queries are a “nested at-rule” which, like the media queries we are used to, allow us to create a subset of CSS declarations that are applied based on a condition. Unlike media queries, whose condition is dependent on device and screen specs, feature query conditions are instead based on if the browser supports a given property-value pair.

A feature query is made up of three parts:

  1. The @supports keyword
  2. The condition, e.g. display: flex
  3. The nested CSS declarations.

Here is how it looks:

@supports (display: grid) {
    body { display: grid; }
}

If the browser supports display: grid, then the nested styles will apply. If the browser does not support display: grid, then the block is skipped over entirely.

The above is an example of a positive condition within a feature query, but there are four flavors of feature queries:

  1. Positive condition, e.g. @supports (display: grid)

  2. Negative condition, e.g. @supports not (display: grid)

  3. Conjunction, e.g. @supports (display:flex) and (display: grid)

  4. Disjunction, e.g. @supports (display:-ms-grid) or (display: grid)

Feature queries solve the problem of having separate fallback and enhancement groups of styles. Let’s see how we can apply this to our example layout:

See the Pen Run bunny run by Ire Aderinokun (@ire) on CodePen.
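The embedded Pen isn’t reproduced here, but the general shape of the solution is: keep the float-based fallback at the top level, then wrap the Grid styles, plus any resets of the fallback properties, in a feature query. A simplified sketch rather than the exact demo:

/* Fallback layout for browsers without Grid support */
main   { float: left; width: 68%; }
aside  { float: right; width: 30%; }
footer { clear: both; }

/* Enhanced layout, applied only where Grid is supported */
@supports (display: grid) {
  body {
    display: grid;
    grid-template-columns: 7fr 3fr;
    grid-template-areas:
      "header header"
      "main   sidebar"
      "footer footer";
  }

  /* Undo the fallback properties that would interfere with the grid */
  main, aside { float: none; width: auto; }

  header { grid-area: header; }
  main   { grid-area: main; }
  aside  { grid-area: sidebar; }
  footer { grid-area: footer; }
}

The reset inside the feature query is exactly what the cascade-only approach above couldn’t express cleanly.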

Introducing The Feature Queries Manager

When we write media queries, we test them by resizing our browser so that the styles at each breakpoint apply. So how do we test feature queries?

Since feature queries are dependent on whether a browser supports a feature, there is no easy way to simulate the alternative state. Currently, the only way to do this would be to edit your code to invalidate/reverse the feature query.

For example, if we wanted to simulate a state in which CSS Grid is not supported, we would have to do something like this:

/* fallback styles here */

@supports (display: grrrrrrrrid) {
    /* enhancement styles here */
}

This is where the Feature Queries Manager comes in. It is a way to reverse your feature queries without ever having to manually edit your code.


It works by simply negating the feature query as it is written. So the following feature query:

@supports (display: grid) {
    body { display: grid; }
}

Will become the following:

@supports not (display: grid) {
    body { display: grid; }
}

Fun fact, this method works for negative feature queries as well. For example, the following negative feature query:

@supports not (display: grid) {
    body { display: block; }
}

Will become the following:

@supports not (not (display: grid)) {
    body { display: block; }
}

Which is essentially the same as removing the “not” from the feature query.

@supports (display: grid) {
    body { display: block; }
}

Building The Feature Queries Manager

FQM is an extension to your browser’s Developer Tools. It works by registering all the CSS on a page, filtering out the CSS that is nested within a feature query, and giving us the ability to toggle the normal or “inverted” version of that feature query.

Creating A DevTools Panel

Before I go on to how I specifically built the FQM, let’s cover how to create a new DevTools panel in the first place. Like any other browser extension, we register a DevTools extension with the manifest file.


  "manifest_version": 2,
  "name": "Feature Queries Manager",
  "short_name": "FQM",
  "description": "Manage and toggle CSS on a page behind a @supports Feature Query.",
  "version": "0.1",
  "permissions": [
    "tabs",
    "activeTab",
    "<all_urls>"
  ],
  "icons": 
    "128": "images/icon@128.png",
    "64": "images/icon@64.png",
    "16": "images/icon@16.png",
    "48": "images/icon@48.png"
  
}

To create a new panel in DevTools, we need two files — a devtools_page, which is an HTML page with an attached script that registers the second file, panel.html, which controls the actual panel in DevTools.

The devtools script creates the panel page


Large preview

First, we add the devtools_page to our manifest file:


  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "devtools_page": "devtools.html",

Then, in our devtools.html file, we create a new panel in DevTools:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"></head>
<body>
<!-- Note: I’m using the browser-polyfill to be able to use the Promise-based WebExtension API in Chrome -->
<script src="../browser-polyfill.js"></script>

<!-- Create FQM panel -->
<script>
browser.devtools.panels.create("FQM", "images/icon@64.png", "panel.html");
</script>
</body>
</html>

Finally, we create our panel HTML page:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"></head>
<body>
  <h1>Hello, world!</h1>
</body>
</html>

If we open up our browser’s DevTools, we will see a new panel called “FQM” which loads the panel.html page.

A new panel in browser DevTools showing the “Hello, World” text


Large preview

Reading CSS From The Inspected Page

In the FQM, we need to access all the CSS referenced in the inspected document in order to know which are within feature queries. However, our DevTools panel doesn’t have direct access to anything on the page. If we want access to the inspected document, we need a content script.

The content script reads CSS from the HTML document


Large preview

A content script is a JavaScript file that has the same access to the HTML page as any other piece of JavaScript embedded within it. To register a content script, we just add it to our manifest file:


      "manifest_version": 2,
      "name": "Feature Queries Manager",
      ...
      "content_scripts": [
        "matches": [""],
        "js": ["browser-polyfill.js", "content.js"]
      ],
    }

In our content script, we can then read all the stylesheets and the CSS within them by accessing document.styleSheets:

Array.from(document.styleSheets).forEach((stylesheet) => {
  let cssRules;

  try {
    cssRules = Array.from(stylesheet.cssRules);
  } catch (err) {
    return console.warn(`[FQM] Can't read cssRules from stylesheet: ${stylesheet.href}`);
  }

  cssRules.forEach((rule, i) => {

    /* Check if CSS rule is a Feature Query */
    if (rule instanceof CSSSupportsRule) {
      /* do something with the CSS rule */
    }

  });
});

Connecting The Panel And The Content Scripts

Once we have the rules from the content script, we want to send them over to the panel so they can be visible there. Ideally, we would want something like this:

The content script passes information to the panel and the panel sends instructions to modify CSS back to the content


Large preview

However, we can’t exactly do this, because the panel and content files can’t actually talk directly to each other. To pass information between these two files, we need a middleman — a background script. The resulting connection looks something like this:

The content and panel scripts communicate via a background script


Large preview

As always, to register a background script, we need to add it to our manifest file:


  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "background": 
    "scripts": ["browser-polyfill.js", "background.js"]
  ,
}

The background file will need to open up a connection to the panel script and listen for messages coming from there. When the background file receives a message from the panel, it passes it on to the content script, which is listening for messages from the background. The background script waits for a response from the content script and relays that message back to the panel.

Here’s a basic example of how that works:

// panel.js

// Open up a connection to the background script
const portToBackgroundScript = browser.runtime.connect();

// Send message to content (via background)
portToBackgroundScript.postMessage("Hello from panel!");

// Listen for messages from content (via background)
portToBackgroundScript.onMessage.addListener((msg) => {
  console.log(msg);
  // => "Hello from content!"
});

// background.js

// Open up a connection to the panel script
browser.runtime.onConnect.addListener((port) => {

  // Listen for messages from panel
  port.onMessage.addListener((request) => {

    // Send message from panel.js -> content.js
    // and return response from content.js -> panel.js
    browser.tabs.sendMessage(request.tabId, request)
      .then((res) => port.postMessage(res));
  });
});

// content.js

// Listen for messages from background
browser.runtime.onMessage.addListener((msg) => {

  console.log(msg);
  // => "Hello from panel!"

  // Send message to panel
  return Promise.resolve("Hello from content!");
});

Managing Feature Queries

Lastly, we can get to the core of what the extension does, which is to “toggle” on/off the CSS related to a feature query.

If you recall, in the content script, we looped through all the CSS within feature queries. When we do this, we also need to save certain information about the CSS rule:

  1. The rule itself
  2. The stylesheet it belongs to
  3. The index of the rule within the stylesheet
  4. An “inverted” version of the rule.

This is what that looks like:

cssRules.forEach((rule, i) => {

  const cssRule = rule.cssText.substring(rule.cssText.indexOf("{"));
  const invertedCSSText = `@supports not (${rule.conditionText}) ${cssRule}`;

  FEATURE_QUERY_DECLARATIONS.push({
    rule: rule,
    stylesheet: stylesheet,
    index: i,
    invertedCSSText: invertedCSSText
  });

});

When the content script receives a message from the panel to invert all declarations relating to the feature query condition, we can easily replace the current rule with the inverted one (or vice versa).

function toggleCondition(condition, toggleOn) {
  FEATURE_QUERY_DECLARATIONS.forEach((declaration) => {
    if (declaration.rule.conditionText === condition) {

      // Remove current rule
      declaration.stylesheet.deleteRule(declaration.index);

      // Replace at index with either original or inverted declaration
      const rule = toggleOn ? declaration.rule.cssText : declaration.invertedCSSText;
      declaration.stylesheet.insertRule(rule, declaration.index);
    }
  });
}

And that is essentially it! The Feature Queries Manager extension is currently available for Chrome and Firefox.

Limitations Of The FQM

The Feature Queries Manager works by “inverting” your feature queries, so that the opposite condition applies. This means that it cannot be used in every scenario.

Fallbacks

If your “enhancement” CSS is not written within a feature query, then the extension cannot be used as it is dependent on finding a CSS supports rule.

Unsupported Features

You need to take note of whether the browser you are using the FQM in supports the feature in question. This is particularly important if your original feature query is a negative condition, as inverting it will turn it into a positive condition. For example, if you wrote the following CSS:

div { background-color: blue; }

@supports not (display: grid) {
  div { background-color: pink; }
}

If you use the FQM to invert this condition, it will become the following:

div { background-color: blue; }

@supports (display: grid) {
  div { background-color: pink; }
}

For you to be able to actually see the difference, you would need to be using a browser which does in fact support display: grid.

I built the Feature Queries Manager as a way to more easily test the different CSS as I develop, but it isn’t a replacement for testing layout in the actual browsers and devices. Developer tools only go so far; nothing beats real device testing.


Thumbnail

Contributing To MDN Web Docs




Contributing To MDN Web Docs

Rachel Andrew



MDN Web Docs has been documenting the web platform for over twelve years and is now a cross-platform effort with contributions and an Advisory Board with members from Google, Microsoft and Samsung as well as those representing Firefox. Something that is fundamental to MDN is that it is a huge community effort, with the web community helping to create and maintain the documentation. In this article, I’m going to give you some pointers as to the places where you can help contribute to MDN and exactly how to do so.

If you haven’t contributed to an open source project before, MDN is a brilliant place to start. Skills needed range from copyediting, translating from English to other languages, HTML and CSS skills for creating Interactive Examples, or an interest in browser compatibility for updating Browser Compatibility data. What you don’t need to do is to write a whole lot of code to contribute. It’s very straightforward, and an excellent way to give back to the community if you have ever found these docs useful.

Contributing To The Documentation Pages

The first place you might want to contribute is to the MDN docs themselves. MDN is a wiki, so you can log in and start to help by correcting or adding to any of the documentation for CSS, HTML, JavaScript or any of the other parts of the web platform covered by MDN.

To start editing, you need to log in using GitHub. As is usual with a wiki, any editors of a page are listed, and this section will use your GitHub username. If you look at any of the pages on MDN, contributors are listed at the bottom of the page; the image below shows the current contributors to the page on CSS Grid Layout.


A list showing names of people who contributed to this page


The contributors to the CSS Grid Layout page. (Large preview)

What Might You Edit?

Things that you might consider as an editor are fixing obvious typos and grammatical errors. If you are a good proofreader and copyeditor, then you may well be able to improve the readability of the docs by fixing any spelling or other errors that you spot.

You might also spot a technical error, or somewhere the specs have changed and where an update or clarification would be useful. With the huge range of web platform features covered by MDN and the rate of change, it is very easy for things to get out of date, if you spot something – fix it!

You may be able to use some specific knowledge you have to add additional information. For example, Eric Bailey has been adding Accessibility Concerns sections to many pages. This is a brilliant effort to highlight the things we should be thinking about when using a certain thing.


A screenshot of the Accessibility Concerns section


This section highlights the things we should be aware of when using background-color. (Large preview)

Another place you could add to a page is in adding “See also” links. These could be links to other parts of MDN, or to external resources. When adding external resources, these should be highly relevant to the property, element or technique being described by that document. A good candidate would be a tutorial which demonstrates how to use that feature, something which would give a reader searching for information a valuable next step.

How To Edit A Document?

Once you are logged in, you will see an Edit link on pages in MDN; clicking this will take you into a WYSIWYG editor for editing content. Your first few edits are likely to be small changes, in which case you should be able to follow your nose and edit the text. If you are making extensive edits, then it would be worth taking a look at the style guide first. There is also a guide to using the WYSIWYG Editor.

After making your edit, you can Preview and then Publish. Before publishing it is a good idea to explain what you added and why using the Revision Comment field.


Screenshot of this field in the edit form


Add a comment using the Revision Comment field. (Large preview)

Language Translations

Those of us with English as a first language are incredibly fortunate when it comes to information on the web, being able to get pretty much all of the information that we could ever want in our own language. If you are able to translate English language pages into other languages, then you can help to translate MDN Web Docs, making all of this information available to more people.


A screenshot showing the drop-down translations list


Translations available for the background-color page. (Large preview)

If you click on the language icon on any page, you can see which languages that information has been translated into, and you can add your own translations following the information on the page Translating MDN Pages.

Interactive Examples

The Interactive Examples on MDN are the examples that you will see at the top of many pages of MDN, such as this one for the grid-area property.


Screenshot of an Interactive Example


The Interactive Example for the grid-area property. (Large preview)

These examples allow visitors to MDN to try out various values for CSS properties or try out a JavaScript function, right there on MDN without needing to head into a development environment to do so. The project to add these examples has been in progress for around a year; you can read about the project and its progress to date in the post Bringing Interactive Examples to MDN.

The content for these Interactive Examples is held in the Interactive Examples GitHub repository. For example, if you wanted to locate the example for grid-area, you would find it in that repo under live-examples/css-examples/grid. Under that folder, you will find two files for grid-area, an HTML and a CSS file.

grid-area.html


<section id="example-choice-list" class="example-choice-list large" data-property="grid-area">
    <div class="example-choice" initial-choice="true">
        <pre><code class="language-css">grid-area: a;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    
    <div class="example-choice">
        <pre><code class="language-css">grid-area: b;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    
    <div class="example-choice">
        <pre><code class="language-css">grid-area: c;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div> 
    
    <div class="example-choice">
        <pre><code class="language-css">grid-area: 2 / 1 / 2 / 4;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div> 
</section>
    
<div id="output" class="output large hidden">
    <section id="default-example" class="default-example">
        <div class="example-container">
            <div id="example-element" class="transition-all">Example</div>
        </div>
    </section>
</div>

grid-area.css


.example-container {
    background-color: #eee;
    border: .75em solid;
    padding: .75em;
    display: grid;
    grid-template-columns: 1fr 1fr 1fr;
    grid-template-rows: repeat(3, minmax(40px, auto));
    grid-template-areas:
    "a a a"
    "b c c"
    "b c c";
    grid-gap: 10px;
    width: 200px;
}

.example-container > div {
    background-color: rgba(0, 0, 255, 0.2);
    border: 3px solid blue;
}

#example-element {
    background-color: rgba(255, 0, 200, 0.2);
    border: 3px solid rebeccapurple;
}

An Interactive Example is just a small demo, which uses some standard classes and IDs so that the framework can pick up the example and make it interactive, letting a visitor to the page change the values and quickly see how the feature works. To add or edit an Interactive Example, first fork the Interactive Examples repo, clone it to your machine and follow the instructions on the Contributing page to install the required packages from npm and be able to build and test examples locally.

Then create a branch and edit or create your new example. Once you are happy with it, send a Pull Request to the Interactive Examples repo to ask for your example to be reviewed. In order to keep the examples consistent, reviews are fairly nitpicky but should point out the changes you need to make in a clear way, so you can update your example and have it approved, merged and added to an MDN page.


Screenshot of a tweet asking for help with HTML examples


MDN looking for help with HTML Interactive Examples. (Large preview)

With pretty much all of CSS now covered (in addition to the JavaScript examples), MDN is now looking for help to build the HTML examples. There are instructions as to how to get started in a post on the MDN Discourse Forum. Check out that post as it gives links to a Google doc that you can use to indicate that you are working on a particular example, as well as some other useful information.

The Interactive Examples are incredibly useful for people exploring the web platform, so adding to the project is an excellent way to contribute. Contributing to CSS or HTML examples requires knowledge of CSS and HTML, plus the ability to think up a clear demonstration. This last point is often the hardest part; I’ve created a lot of CSS Interactive Examples and spent more time thinking up the best example for each property than I did actually writing the code.

Browser Compat Data

Fairly recently, the browser compatibility data listed on MDN pages has begun to be updated through the Browser Compatibility Project. This project is developing browser compat data in JSON format, which is used to display the compatibility tables on MDN but can also be useful for other purposes.


An example screenshot of the old tables on MDN


The Old Browser Compat Tables on MDN. (Large preview)


An example screenshot of the new tables on MDN


The New Browser Compat Tables on MDN. (Large preview)

The Browser Compatibility Data is on GitHub, and if you find a page that has incorrect information or is still using the old tables, you can submit a Pull Request. The repository contains contribution information; however, the simplest way to start is to edit an existing example. I recently updated the information for the CSS shape-outside property. The property already had some data in the new format, but it was incomplete and incorrect.

To edit this data, I first forked the Browser Compat Data so that I had my own fork. I then cloned that to my machine and created a new branch to make my changes in.

Once I had my new branch, I found the JSON file for shape-outside and was able to make my edits. I already had a good idea about browser support for the property; I also used the live example on the shape-outside MDN page to test to see support when I wasn’t sure. Therefore making the edits was a case of working through the file, checking the version numbers listed for support of the property and updating those which were incorrect.

As the file is in JSON format, it is pretty straightforward to edit in any text editor. The .editorconfig file explains the simple formatting rules for these documents. There are also some helpful tips in this checklist.
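To give a feel for what you are editing, here is a heavily simplified sketch of the shape of an entry; the keys follow the project’s schema, but the version numbers shown are illustrative, so always check the real file:

{
  "css": {
    "properties": {
      "shape-outside": {
        "__compat": {
          "mdn_url": "https://developer.mozilla.org/docs/Web/CSS/shape-outside",
          "support": {
            "chrome": { "version_added": "37" },
            "firefox": { "version_added": "62" },
            "safari": { "version_added": "10.1" }
          },
          "status": {
            "experimental": false,
            "standard_track": true,
            "deprecated": false
          }
        }
      }
    }
  }
}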

Once you have made your edits, you can commit your changes, push your branch to your fork and then make a Pull Request to the Browser Compat Data repository. It’s likely that, as with the live examples, the reviewer will have some changes for you to make. In my PR for the Shapes data I had a few errors in how I had flagged the data and needed to make some changes to links. These were simple to make, and then my PR was merged.

Get Started

You can get started simply by picking something to add to and starting work on it in many cases. If you have any questions or need some help with any of this, then the MDN Discourse forum is a good place to post. MDN is the place I go to look up information, the place I send new developers and experienced developers alike, and its strength is the fact that we can all work to make it better.

If you have never made a Pull Request on a project before, it is a very friendly place to make that first PR and, as I hope I have shown, there are ways to contribute that don’t require writing any code at all. A very valuable skill for any documentation project is that of writing, editing and translating as these skills can help to make technical documentation easier to read and accessible to more people around the world.


Thumbnail

Measuring Websites With Mobile-First Optimization Tools




Measuring Websites With Mobile-First Optimization Tools

Jon Raasch



Performance on mobile can be particularly challenging: underpowered devices, slow networks, and poor connections all work against you. With more and more users migrating to mobile, the rewards for mobile optimization are great. Most workflows have already adopted mobile-first design and development strategies, and it’s time to apply a similar mindset to performance.

In this article, we’ll take a look at studies linking page speed to real-world metrics, and discuss the specific ways mobile performance impacts your site. Then we’ll explore benchmarking tools you can use to measure your website’s mobile performance. Finally, we’ll work with tools to help identify and remove the code debt that bloats and weighs down your site.


Why Performance Matters

The benefits of performance optimization are well-documented. In short, performance matters because users prefer faster websites. But it’s more than a qualitative assumption about user experience. There are a variety of studies that directly link reduced load times to increased conversion and revenue, such as the now decade-old Amazon study that showed each 100ms of latency led to a 1% drop in sales.

Page Speed, Bounce Rate & Conversion

In the data world, poor performance leads to an increased bounce rate. And in the mobile world that bounce rate may occur sooner than you think. A recent study shows that 53% of mobile users abandon a site that takes more than 3 seconds to load.

That means if your site loads in 3.5 seconds, over half of your potential users are leaving (and most likely visiting a competitor). That may be tough to swallow, but it is as much a problem as it is an opportunity. If you can get your site to load more quickly, you are potentially doubling your conversion. And if your conversion is even indirectly linked to profits, you’re doubling your revenue.

SEO And Social Media

Beyond reduced conversion, slow load times create secondary effects that diminish your inbound traffic. Search engines already use page speed in their ranking algorithms, bubbling faster sites to the top. Additionally, Google is specifically factoring mobile speed for mobile searches as of July 2018.

Social media outlets have begun factoring page speed in their algorithms as well. In August 2017, Facebook announced that it would roll out specific changes to the newsfeed algorithm for mobile devices. These changes include page speed as a factor, which means that slow websites will see a decline in Facebook impressions, and in turn a decline in visitors from that source.

Search engines and social media companies aren’t punishing slow websites on a whim, they’ve made a calculated decision to improve the experience for their users. If two websites have effectively the same content, wouldn’t you rather visit one that loads faster?

Many websites depend on search engines and social media for a large portion of their traffic. The slowest of these will have an exacerbated problem, with a reduced number of visitors coming to their site, and over half of those visitors subsequently abandoning.

If the prognosis sounds alarming, that’s because it is! But the good news is that there are a few concrete steps you can take to improve your page speeds. Even the slowest sites can get “sub three seconds” with a good strategy and some work.

Profiling And Benchmarking Tools

Before you begin optimizing, it’s a good idea to take a snapshot of your website’s performance. With profiling, you can determine how much progress you will need to make. Later, you can compare against this benchmark to quantify the speed improvements you make.

There are a number of tools that assess a website’s performance. But before you get started, it’s important to understand that no tool provides a perfect measurement of client-side performance. Devices, connection speeds, and web browsers all impact performance, and it is impossible to analyze all combinations. Additionally, any tool that runs on your personal device can only approximate the experience on a different device or connection.

In one sense, whichever tool you use can provide meaningful insights. As long as you use the same tool before and after, the comparison of each should provide a decent snapshot of performance changes. But certain tools are better than others.

In this section, we’ll walk through two tools that provide a profile of how well your website performs in a mobile environment.

Note: It can be difficult to benchmark an entire site, so I recommend that you choose one or two of your most important pages for benchmarking.

Lighthouse

Lighthouse audit tab


Lighthouse in Google’s Web Developer Tools. (Large preview)

One of the more useful tools for profiling mobile performance is Google’s Lighthouse. It’s a nice starting point for optimization since it not only analyzes page performance but also provides insights into specific performance issues. Additionally, Lighthouse provides high-level suggestions for speed improvements.

Lighthouse is available in the Audits tab of the Chrome Developer Tools. To get started, open the page you want to optimize in Chrome Dev Tools and perform an audit. I typically perform all the audits, but for our purposes, you only need to check the ‘Performance’ checkbox:

Lighthouse audit selection


All the audits are useful, but we’ll only need the Performance audit. (Large preview)

Lighthouse focuses on mobile, so when you run the audit, Lighthouse will pop your page into the inspector’s responsive viewer and throttle the connection to simulate a mobile experience.

Lighthouse Reports

When the audit finishes, you’ll see an overall performance score, a timeline view of how the page rendered over time, as well as a variety of metrics:

Lighthouse performance audit results


In the performance audit, pay attention to the first meaningful paint. (Large preview)

It’s a lot of information, but one report to emphasize is the first meaningful paint, since that directly influences user bounce rates. You may notice that the tool doesn’t even list the total load time, and that’s because it rarely matters for user experience.

Mobile users expect a first view of the page very quickly, and it may be some time before they scroll to the lower content. In the timeline above, the first paint occurs quickly at 1.3s, then a full above-the-fold content paint occurs at 3.9s. The user can now engage with the above-the-fold content, and anything below-the-fold can take a few seconds longer to load.

Lighthouse’s first meaningful paint is a great metric for benchmarking, but also take a look at the opportunities section. This list helps to identify the key problem areas of your site. Keep these recommendations on your radar, since they may provide your biggest improvements.

Lighthouse Caveats

While Lighthouse provides great insights, it is important to bear in mind that it only simulates a mobile experience. The device is simulated in Chrome, and a mobile connection is simulated with throttling. Actual experiences will vary.

Additionally, you may notice that if you run the audit multiple times, you will get different reports. That’s again because it is simulating the experience, and variances in your device, connection, and the server will impact the results. That said, you can still use Lighthouse for benchmarking, but it is important that you run it several times. It is more relevant as a range of values than a single report.

WebPageTest

In order to get an idea of how quickly your page loads in an actual mobile device, use WebPageTest. One of the nice things about WebPageTest is that it tests on a variety of real devices. Additionally, it will perform the test a number of times and take the average to provide a more accurate benchmark.

To get started, navigate to WebPageTest.org, enter the URL for the page you want to test and then select the mobile device you’d like to use for testing. Also, open up the advanced settings and change the connection speed. I like testing at Fast 3G, because even when users are connected to LTE the connection speed is rarely LTE (#sad):

WebPageTest advanced settings form


WebPageTest provides actual mobile devices for profiling. (Large preview)

After submitting the test (and waiting for any queue), you’ll get a report on the speed of the page:

WebPageTest profiling results


In WebPageTest’s results, pay attention to the start render and first byte. (Large preview)

The summary view consists of a short list of metrics and links to timelines. Again, the value of the start render is more important than the load time. The first byte is useful for analyzing the server response speed. You can also dig into the more in-depth reports for additional insights.

Benchmarking

Now that you’ve profiled your page in Lighthouse and WebPageTest, it’s time to record the values. These benchmarks will provide a useful comparison as you optimize your page. If the metrics improve, your changes are worthwhile. If they stay static (or worse decline), you’ll need to rethink your strategy.

Lighthouse results are simulated, which makes them less useful for benchmarking and more useful for in-depth reports and optimization suggestions. However, Lighthouse’s performance score and first meaningful paint are nice benchmarks, so run it a few times and take the median of each.

WebPageTest’s values are more reliable for benchmarking since it tests on real devices, so these will be your primary benchmarks. Record the values for first byte, start render, and overall load time.

Bloat Reduction

Now that you’ve assessed the performance of your site, let’s take a look at a tool that can help reduce the size of your files. This tool can identify extra, unnecessary pieces of code that bloat your files and cause resources to be larger than they should.

In a perfect world, users would only download the code that they actually need. But the production and maintenance process can lead to unused artifacts in the codebase. Even the most diligent developers can forget to remove pieces of old CSS and JavaScript while making changes. Over time these bits of dead code accumulate and become unnecessary bloat.

Additionally, certain resources are intended to be cached and then used throughout multiple pages, such as a site-wide stylesheet. Site-wide resources often make sense, but how can you tell when a stylesheet is mostly underused?

The Coverage Tab

Fortunately, Chrome Developer Tools has a tool that helps assess the bloat in files: The Coverage tab. The Coverage tab analyzes code coverage as you navigate your site. It provides an interface that shows how much code in a given CSS or JS file is actually being used.

To access the Coverage tab, open up Chrome Developer Tools, and click on the three dots in the top right. Navigate to More Tools > Coverage.

Access the Coverage tab by clicking on More tools and then Coverage


The Coverage tab is a bit hidden in the web developer tools console. (Large preview)

Next, start instrumenting coverage by clicking the reload button on the right. That will reload the page and begin the code coverage analysis. It brings up a report similar to this:

The Coverage report identifies unused code


An example of a Coverage report. (Large preview)

Here, pay attention to the unused bytes:

The coverage report UI shows a breakdown of used and unused bytes


The unused bytes are represented by red lines. (Large preview)

This UI shows the amount of code that is currently unused, colored red. In this particular page, the first file shown is 73% bloat. You may see significant bloat at first, but it only represents the current render. Change your screen size and you should see the CSS coverage go up as media queries get applied. Open any interactive elements like modals and toggles, and it should go up further.

Once you’ve activated every view, you will have an idea of how much code you are actually using. Next, you can dig into the report further to find out exactly which pieces of code are unused. Simply click on one of the resources and look in the main window:

Detail view of a file in the Coverage report, showing which pieces of code aren’t being used


Click on a file in the Coverage report to see the specific portions of unused code. (Large preview)

In this CSS file, look at the highlights to the left of each ruleset; green indicates used code and red indicates bloat. If you are building a single page app or using specialized resources for this particular page, you may be inclined to go in and remove this garbage. But don’t be too hasty. You should definitely remove dead code, but be careful to make sure that you haven’t missed a breakpoint or interactive element.

Next Steps

In this article, we’ve shown the quantitative benefits of optimizing page speed. I hope you’re convinced, and that you have the tools you need to convince others. We’ve also set a minimum goal for mobile page speed: under three seconds.

To hit this goal, it’s important that you prioritize the highest impact optimizations first. There are a lot of resources online that can help define this roadmap, such as this checklist. Lighthouse can also be a great tool for identifying specific issues in your codebase, so I encourage you to tackle those bottlenecks first. Sometimes the smallest optimizations can have the biggest impact.

Smashing Editorial
(da, lf, ra, yk, il)




Redesigning A Digital Interior Design Shop (A Case Study)




Redesigning A Digital Interior Design Shop (A Case Study)

Boyan Kostov



Good products are the result of a continual effort in research and design. And, as it usually turns out, our designs don’t solve the problems they were meant to right away. It’s always about constant improvement and iteration.

I have a client called Design Cafe (let’s call it DC). It’s an innovative interior design shop founded by a couple of very talented architects. They produce bespoke designs for the Indian market and sell them online.

DC approached me two years ago to design a few visual mockups for their website. My scope then was limited to visuals, but I didn’t have the proper foundation upon which to base those visuals, and since I didn’t have an ongoing collaboration with the development team, the final website design did not accurately capture the original design intent and did not meet all of the key user needs.

A year and a half passed and DC decided to come back to me. Their website wasn’t providing the anticipated stream of leads. They came back because my process was good, but they wanted to expand the scope to give it space to scale. This time, I was hired to do the research, planning, visual design and prototyping. This would be a makeover of the old design based on user input and data, and prototyping would allow for easy communication with the development team. I assembled a small team of two: me and a fellow designer, Miroslav Kirov, to help run proper research. In less than two weeks, we were ready to start.

Kick-Off

Useful tip: I always kick off a project by talking to the stakeholders. For smaller projects with one or two stakeholders, you can blend the kick-off and the interview into one. Just make sure it’s no longer than an hour.

Stakeholder Interviews

Our two stakeholders are both domain experts. They have a brick-and-mortar store in the center of Bangalore that attracts a lot of people. Once in there, people are delighted by the way the designs look and feel. Our clients wanted to have a website that conveys the same feeling online and that would make its visitors want to go to the store.

Their main pain points:

  • The website wasn’t responsive.

  • There wasn’t a clear distinction between new, returning and potential clients.

  • DC’s selling points weren’t clearly communicated.

They had future plans for transforming the website into a hub for interior design ideas. And, last but not least, DC wanted to attract fresh design talent.

Defining the Goals

We shortlisted all of our goals for the project. Our main goal was to explain in a clear and appealing manner what DC does for existing and potential clients in a way that engages them to contact DC and go to the store. Some secondary goals were:

  • lower the drop-off rate,

  • capture some customer data,

  • clarify the brand’s message,

  • make the website responsive,

  • explain budgets better,

  • provide decision-making assistance and become an information influencer.

Key Metrics

Our number-one key metric was the conversion of users into leads who visit the store, which measures the main goal. We needed to improve that by at least 5% initially — a realistic number we decided on with our stakeholders. In order to do that, we needed to:

  • shorten the conversion time (time needed for a user to get in touch with DC),

  • increase the form application rate,

  • increase the overall satisfaction users get from the website.

We would track these metrics by setting up Google Analytics Events once the website is online and by talking with leads who come into the store through the website.

Useful tip: Don’t focus on too many metrics. A handful of your most important ones are enough. Measuring too many things will dilute the results.

Discovery

In order for us to gain the best possible insights, our user interviews had to target both previous and potential clients, but we had to go minimal, so we picked two potential and three existing clients. They were mostly from the IT sector — DC’s main target group. Given our pretty tight schedule, we started with desk research while we waited for all five user interviews to be scheduled.

Useful tip: You need to know who you are designing for and what research has been done before. Stakeholders tell you their story, but you need to compare it to data and to users’ opinions, expectations and needs.

Data

We could reference some Google Analytics data from the website:

  • Most users went to the kitchen, then to the bedroom, then to the living room.

  • The high bounce rate of 80%+ was probably due to a misunderstanding of the brand message and unclear flows and calls to action (CTAs).

  • Traffic was mostly mobile.

  • Most users landed on the home page, 70% of them from ads and 16% directly (mostly returning customers), and the rest were equally divided between Facebook and Google Search.

  • 90% of social media traffic came from Facebook. Expanding brand awareness to Instagram and Twitter could be beneficial.

Competitors

There’s a lot of local competition in the sector. Here were some repeating patterns:

  • video spots and elaborate galleries showing the completed designs with clients discussing their services;

  • attractive design presentations with high-quality photos;

  • messages targeted at the appropriate groups;

  • quizzes for picking styles;

  • big bold typography, less text and more visuals.



Users

DC’s customers are mostly aged between 28 and 40, with a secondary set in the higher bracket of 38 to 55 who come for their second home. They are IT or business professionals with mid to high budgets. They value good customer experience but are price-conscious and very practical. Because they are mostly families, very often the wife is the hidden dominant decision-maker.

We talked with five users (three existing and two potential customers) and sent out a survey to 20 more (mixing existing and potential customers; see Design Cafe Questionnaire).

User Interviews

Useful tip: Be sure to schedule all of your interviews ahead of time, and plan for more people than you need. Include extreme users along with the mainstreams. Chances are that if something works for an extreme user, it will work for the rest as well. Extremes will also give you insight about edge cases that mainstreams just don’t care about.

All users were confused about the main goal of the website. Some of their opinions:

  • “It lacks a proper flow.”

  • “I need more clarity in the process, especially in terms of timelines.”

  • “I need more educational information about interior design.”

Everyone was pretty well informed about the competition. They had tried other companies before DC. All found out about DC by either a reference, Google, ads or by physically passing by the store. And, boy, did they love the store! They treated it like an Apple Store for interior design. Turns out that DC really did a great job with that.

Useful tip: Negative feedback helps us find opportunities for improvement. But positive feedback is also pretty useful because it helps you identify which parts of the product are worth retaining and building upon.

Personal touch, customer service, prices and quality of materials were their main motivations for choosing DC. People insisted on being able to see the price of every element on a page at any time (the previous design didn’t have prices on the accessories).

We made an interesting but somehow expected discovery about device usage. Mobile devices were used mostly for consumption and browsing, but when it came to ordering, most people opened their laptops.

Surveys

The survey results mostly overlapped with the interviews:

  • Users found DC through different channels, but mainly through referrals.

  • They didn’t quite understand the current state of the website. Most of them had searched for or used other services before DC.

  • All of the surveyed users ordered kitchen designs. Almost all had difficulty choosing the right design style.

  • Most users found the process of designing their own interior hard and were interested in features that could make their choice easier.

Useful tip: Writing good survey questions takes time. Work with a researcher to write them, and schedule double the time you think you’ll need.



Planning

User Journeys Overview

Talking with customers helped us gain useful insight about which scenarios would be most important to them. We made an affinity diagram with everything we collected and started prioritizing and combining items in chunks.

Useful tip: Use a white board to download all of your team’s knowledge, and saturate the board with it. Group everything until you spot patterns. These patterns will help you establish themes and find out the most important pain points.

The result was seven point-of-view problem statements that we decided to design for:

  1. A new customer needs more information about DC because they need proof of credibility.
  2. A returning customer needs quick access to the designs because they don’t want to waste time.
  3. All customers need to be able to browse the designs at any time.
  4. All customers want to browse designs relevant to their tastes, because that will shorten their search time.
  5. Potential leads need a way to get in touch with DC in order to purchase a design.
  6. All customers, once they’ve ordered, need to stay up to date with their order status, because they need to know what they are paying for and when they will be getting it.
  7. All customers want to read case studies about successful projects, because that will reassure them that DC knows its stuff.

Using this list, we came up with design solutions for every journey.



Onboarding

The previous home page of Design Cafe was confusing: it needed to present more information about the business, and the lack of it left people unsure about what DC was about. We divided the home page into several sections and designed it so that every section could satisfy the needs of one of our target groups:

  1. For new visitors (the purple flow), we included a short trip through the main unique selling points (USPs) of the service, the way it works, some success stories and an option to start the style quiz.

  2. For returning visitors (the blue flow), who will most likely skip the home page or use it as a waypoint, the hero section and the navigation pointed a way out to browsing designs.

  3. We left a small part at the end of the page (the orange flow) for potential employees, describing what there is to love about DC and a CTA that goes to the careers page.



The whole point of the onboarding process was to capture the customer’s attention so that they could continue forward, either directly to the design catalog or through a feature we called the style quiz.

Browsing designs

We made the style quiz to help users narrow down their results.

DC previously had a feature called a 3D builder that we decided to remove. It allowed you to set your room size and then drag-and-drop furniture, windows and doors into the mix. In theory, this sounds good, but in reality people treated it much like a game and expected it to function like a minified version of The Sims’ Build Mode.


The Sims’ Build Mode, by Electronic Arts. (Large preview)

Everything made with the 3D builder was ending up completely modified by the designers. The tool was giving people a lot of design power and too many choices. On top of that, supporting it was a huge technical endeavor because it was a whole product on its own.

Compared to it, the style quiz was a relatively simple feature:

  1. It starts out by asking about colors, textures and designs you like.

  2. It continues to ask about room type.

  3. Eventually, it displays a curated list of designs based on your answers.



The whole quiz wizard extends to only four steps and takes less than a minute to complete. But it makes people invest a tad bit of their time, thus creating engagement. The result: We’re improving conversion time and overall satisfaction.

Alternatively, users can skip the style quiz and go directly to the design catalog, then use the filters to fine-tune the results. The page automatically shows kitchen designs, which is what most people are looking for. And for the price-conscious, we made a small feature that allows them to input their room’s size, and all prices are recalculated.



If people don’t like anything from the catalog, chances are they are not DC’s target customer and there’s not much we can do to keep them on the website. But if they do like a design, they could decide to go forward and get in touch with DC, which brings us to the next step in the process.

Getting in Touch

Contacting DC needed to be as simple as possible. We implemented three ways to do that:

  • through the chat, shown on every page — the quickest way;

  • by opening the contact page and filling out the form or by just calling DC on the phone;

  • by clicking “Book a consultation” in the header, which asks for basic information and requests an appointment (upon submission, the next steps are shown to let users know what exactly is going to happen).



The rest of this journey continues offline: Potential customers meet a DC designer and, after some discussions and planning, place an order. DC notifies them of any progress via email and sends them a link to the progress tracker.

Order Status

The progress tracker is in a user menu in the top-right corner of the design. Its goal is to show a timeline of the order. Upon an update, an “unread” notification pops out. Most users, however, will usually find out about order updates through email, so the entry point for the whole flow will be external.



Once the interior design order is installed and ready, users will have the completed order on the website for future reference. Their project could be featured on the home page and become part of the case studies.

Case Studies

One of DC’s long-term goals is for its website to become an influencer hub for interior design, filled with case studies, advice and tips. It’s part of a commitment to providing quality content. But DC doesn’t have that content yet. So, we decided to start that section with minimal effort and introduce it as a blog. The client would gradually fill it up with content and detailed process walkthroughs. These would be later expanded and featured on the home page. Case studies are a feature that could significantly increase brand awareness, though they would take time.



Preparing for Visual Design

With the critical user journeys all figured out and wireframed, we were ready to delve into visual design.

Data showed that most people open the website on their phones, but interviews proved that most of them were more willing to buy on a computer rather than on a mobile device. Also, desktop and laptop users were more engaged and loyal. So, we decided to design desktop-first and work down to the smaller (mobile) resolutions in code.

Visual Design

We started collecting visual ideas, words and images. Initially, we had a simple word sequence based on our conversations with the client and a mood board with relevant designs and ideas. The main visual features we were after were simplicity, bold typography, nice photos and clean icons.

Useful tip: Don’t follow a certain trend just because everybody else is doing it. Create a thorough mood board of relevant reference designs that approximate the look and feel you’re going after. This look should be in line with your goals and target audience.

Simple, elegant, easy, modern, hip, edgy, brave, quality, understanding, fresh, experience, classy.


Mood board. (Large preview)

Our client had already started working on a photo shoot, and the results were great. Stock photography would have ruined everything personal about this website. The resulting photos blended with the big type pretty well and helped with that simple language we were after.

Typography

Initially, we went with a combination of Raleway and Roboto for the typography. Raleway is a great font but a bit overused. The second iteration was Abril Fatface for the titles and Raleway for the copy. Abril Fatface resembles the splendor of Didot and made the whole page a lot heavier and more pretentious. It was an interesting direction to explore, but it didn’t resonate with the modern, techy feel of DC. The last iteration, Nexa for the titles paired with Lato for the copy, turned out to be the best choice due to its modern and edgy feel; the two are a great fit.

Useful tip: Play around with type variations. List them side by side to see how they compare. Go to Typewolf, MyFonts or a similar website to get inspired. Look for typefaces that make sense for your product. Consider readability and accessibility. Don’t go overboard with your type scale; keep it as minimal as possible. Check out Butterick’s summary of key rules if in doubt.



Colors

DC already had a color scheme, but they gave us the freedom to experiment. The main colors were tints of cyan, golden and plum (or, rather, a strange kind of bordeaux), but the original hues were too faded and didn’t blend with each other well enough.

Useful tip: If the brand already has colors, test slight variations to see how they fit the overall design. Or remove some of the colors and use only one or two. Try designing your layout in monochrome and then test different color combinations on an already mocked-up design. Check out some other great tips by Wojciech Zieliński in his article “How to Use Colors in UI Design: Practical Tips and Tools”.

Here’s what we decided on in the end:



The way we presented all of those type variants and colors was through iterations on the home page.

Initial Mockups

We focused the first visual iteration on getting the main information clearly visible and squeezing the most out of the testimonials and style quiz sections. After some discussion, we figured it was too plain and needed improvement. We made changes to the fonts and icons and modified some sections, shown in iterations 2 and 3 in the image below.

We didn’t have the time to design custom icons, but the Noun Project came to the rescue. With the SVG file format, it’s very simple to change whatever you need and mix it with something else. This sped up our work immensely, and with visual iteration number 4, we signed off on the design of the home page. This allowed us to focus on components and use them as LEGO blocks to build the templates.



Components System

I listed most components (see PDF) in a Sketch artboard to keep them accessible. Whenever the design needed a new pattern, we’d come back to this page and look for ways to reuse elements. Having a visual system in place, even for a small project like this, kept things consistent and simple.

Useful tip: Components, atoms, blocks — no matter what you call them, they are all part of systematic thinking about your design. Design systems help you gain a deeper understanding of your product by urging you to focus on patterns, design principles and design language. If you’re new to this approach, check out Brad Frost’s Atomic Design or Alla Kholmatova’s Design Systems.


Part of the pattern library. (Large preview)

Prototyping With Code

Useful tip: Work on a prototype first. You can make a prototype using basic HTML, CSS and JavaScript. Or you can use InVision, Marvel, Adobe XD or even the Sketch app, or your favorite prototyping tool. It doesn’t really matter. The important thing is to realize that only when you prototype will you see how your design will function.

For our prototype, we decided to use code and set up a simple build process to speed up our work.

Picking tools and processes

Gulp automated everything. If you haven’t heard of it, check out Callum Macrae’s awesome guide. Gulp enabled us to handle all of the styles, scripts and templates, and it outputs a ready-to-use minified production version of the code.

Some of the more important Gulp plugins we used were the following (a rough sketch of how they fit together in a gulpfile follows the list):

  • gulp-postcss
    This allows you to use PostCSS. You can bundle it with plugins like cssnext to get a pretty robust and versatile setup.
  • browser-sync
    This sets up a server and automatically updates the view on every change. You can set it to fire up upon starting “gulp watch”, and everything will be synced up on hitting “Save”.
  • gulp-compile-handlebars
    This is a Handlebars implementation for Gulp. It’s a quick way to create templates and reuse them. Imagine you have a button that stays the same throughout the whole design. It would be a symbol in Sketch. It’s basically the same concept but wrapped in HTML. Whenever you want to use that button, you just include the button template. If you change something in the master template, it propagates the changes to every other button in the design. You do that for everything in the design system, and thus you’re using the same paradigm for both visual design and code. No more static page mockups!
Components and templates

We had to mix atomic CSS with module-based CSS to get the most out of both worlds. Atomic CSS handled all of the general styles, while the CSS modules handled edge cases.

In atomic CSS, atoms are immutable CSS classes that do just one thing. We used Tachyons, an atomic toolkit. In Tachyons, every class you apply is a single CSS property. For instance, .b stands for font-weight: bold, and .ttu stands for text-transform: uppercase. A paragraph with bold uppercase text would look like this:

<p class="b ttu">Paragraph</p>

Useful tip: Once you get familiar with atomic CSS, it becomes a blazingly fast way to prototype stuff — and a very systematic one, because it urges you to constantly think about reusability and optimization.

A major benefit of prototyping with code is that you can demo complex interactions. We coded most of our critical journeys this way.

Designing micro-interactions in the browser

Our prototype was so high-fidelity that it became the front-end basis for the actual product — DC used our code and integrated it in their workflow. You can check out the prototype on http://beta.boyankostov.com/2017/designcafe/html (or live on http://designcafe.com).

Useful tip: With HTML prototypes, you will have to decide the level of fidelity you want to achieve. That might get pretty time-consuming if you go too deep. But you can’t really go wrong with that either because as you go deeper and deeper into the code and fine-tune every possible detail, at some point you’ll start delivering the actual product.

Sign-off

Clients, especially small B2C companies, love when you deliver a design solution that they can use immediately. We shipped just that.

Unfortunately, you can’t always predict a project’s pace, and it took several months for our code to be integrated in DC’s workflow. In its current state, this code is ready for testing, and what’s better is that it’s pretty easy to modify. So, if DC decides to conduct some user tests in the future, any changes will be easy to make.

Takeaways

  • Collaborate with other designers whenever possible. When two people are thinking about the same problem, they will deliver better ideas. Take turns in taking notes during interviews, and brainstorm goals, ideas and visuals together.

  • Having a developer on the team is beneficial because everyone gets to do what they are best at. A good developer will spend as little as a few minutes on a JavaScript issue that I would probably need hours to resolve.

  • We shipped a working version of the website, and the client was able to use it right away. If you aren’t able to sign off on the code, try to get as close to the final product as possible, and communicate that visually to your client’s team. Document your design — it’s a deliverable that will be used and abused by everyone, from developers to marketers to in-house designers. Set aside some time to make sure all of your ideas are properly understood by everyone.

  • Scheduling interviews and writing good surveys can be time-consuming. You have to plan ahead and recruit more people than you think you will need. Hire an experienced researcher to work with you on these tasks, and spend some time with your team to identify your goals. Be careful when sourcing participants. Your client can help you find the right people, but you’ll need to stick to participants who meet the right demographics.

  • Schedule enough time for planning. Project goals, processes, and responsibilities should be clear to everyone on your team. You need time to allow for multiple iterations on prototypes, because prototypes improve products quickly. If you don’t want to mess with code, there are various ways to prototype. But even if you do, you don’t need to write flawless code — just write designer’s code. Or, as Alan Cooper once said, “Sometimes the best way for a designer to communicate their vision is to code something up so that their colleagues can interact with the proposed behavior, rather than just see still images. The goal of such code is not the same as the goal of the code that coders write. The code isn’t for deployment, but for design [and] its purpose is different.”

  • Don’t focus on a unique design per se, unless that’s the main feature of your product. Better to spend time on things that matter more. Use frameworks, icons and visual assets where possible, or outsource them to another designer and focus on your core product goals and metrics.

Smashing Editorial
(mb, ra, al,yk, il)




How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial




How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Oleh Mryhlod



React Native is a young technology that is already gaining popularity among developers. It is a great option for smooth, fast and efficient mobile app development. High performance in mobile environments, code reuse and a strong community: these are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let’s get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two options to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let’s look at the project’s structure:



  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it’s important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user’s permission for audio recording, which entails access to the microphone. Let’s use Expo.AppLoading and ask for recording permission by using Expo.Permissions (see src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);
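
For context, the root component might wrap the app in Expo.AppLoading roughly like the sketch below. This is an illustration with assumed component and file names, not the app’s actual src/index.js, and it asks for the camera permission at the same time since the app will need it later for video:

import React from 'react';
import { AppLoading, Permissions } from 'expo';
import App from './src';

export default class Root extends React.Component {
  state = { isReady: false };

  askPermissions = async () => {
    // Ask for everything the app needs up front: microphone and camera.
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
    await Permissions.askAsync(Permissions.CAMERA);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this.askPermissions}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }

    return <App />;
  }
}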

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:



I can save the audio in the Redux store in the following format:

{
  audioItemsIds: ['id1', 'id2'],
  audioItems: {
    'id1': {
      id: string,
      title: string,
      recordDate: date string,
      duration: number,
      audioUrl: string,
    },
  },
}
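
A plain Redux module keeping items in this shape could look roughly like the sketch below. This is an illustration only, not the app’s actual src/modules/audio code, and the action type name is an assumption:

const ADD_AUDIO = 'audio/ADD_AUDIO';

export const addAudio = audioItem => ({ type: ADD_AUDIO, payload: audioItem });

const initialState = { audioItemsIds: [], audioItems: {} };

export default function audioReducer(state = initialState, action) {
  switch (action.type) {
    case ADD_AUDIO:
      return {
        // Keep the ids in order and the items in a lookup table, as in the shape above.
        audioItemsIds: [...state.audioItemsIds, action.payload.id],
        audioItems: { ...state.audioItems, [action.payload.id]: action.payload },
      };
    default:
      return state;
  }
}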

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is the dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

  • interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX (our recording interrupts audio from other apps on iOS);

  • playsInSilentModeIOS: true;

  • shouldDuckAndroid: true;

  • interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX (our recording interrupts audio from other apps on Android);

  • allowsRecordingIOS changes to true before the audio recording starts and back to false after it completes.

To implement this, let’s write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
    recording: null,
    isRecording: false,
    durationMillis: 0,
    isDoneRecording: false,
    fileUrl: null,
    audioName: '',
  }, {
    setState: () => obj => obj,
    setAudioName: () => audioName => ({ audioName }),
    recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
      ({ durationMillis, isRecording, isDoneRecording }),
  }),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updates smooth, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
      try {
        if (props.recording) {
          props.recording.setOnRecordingStatusUpdate(null);
          props.setState({ recording: null });
        }

        await props.setAudioMode({ allowsRecordingIOS: true });

        const recording = new Audio.Recording();
        recording.setOnRecordingStatusUpdate(props.recordingCallback);
        recording.setProgressUpdateInterval(200);

        props.setState({ fileUrl: null });

        await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
        await recording.startAsync();

        props.setState({ recording });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }
    },

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
      try {
        await props.recording.stopAndUnloadAsync();
        await props.setAudioMode({ allowsRecordingIOS: false });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }

      if (props.recording) {
        const fileUrl = props.recording.getURI();
        props.recording.setOnRecordingStatusUpdate(null);
        props.setState({ recording: null, fileUrl });
      }
    },

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
      if (props.audioName && props.fileUrl) {
        const audioItem = {
          id: uuid(),
          recordDate: moment().format(),
          title: props.audioName,
          audioUrl: props.fileUrl,
          duration: props.durationMillis,
        };

        props.addAudio(audioItem);
        props.setState({
          audioName: '',
          isDoneRecording: false,
        });

        props.navigation.navigate(screens.LibraryTab);
      }
    },

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:



The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let’s write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let’s customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
    async componentDidMount() {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: false,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    },
  }),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for smoother UI updates.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
    playbackInstance: null,
    isSeeking: false,
    shouldPlayAtEndOfSeek: false,
    playingAudio: null,
  }, 'setClassVariable'),
  withStateHandlers({
    position: null,
    duration: null,
    shouldPlay: false,
    isLoading: true,
    isPlaying: false,
    isBuffering: false,
    showPlayer: false,
  }, {
    setState: () => obj => obj,
  }),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
    soundCallback: props => (status) => {
      if (status.didJustFinish) {
        props.playbackInstance().stopAsync();
      } else if (status.isLoaded) {
        const position = props.isSeeking()
          ? props.position
          : status.positionMillis;
        const isPlaying = props.isSeeking()
          ? props.isPlaying
          : status.isPlaying;
        props.setState({
          position,
          duration: status.durationMillis,
          isPlaying,
          isBuffering: status.isBuffering,
        });
      }
    },
  }),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
      props.setState({ isLoading: true });

      if (props.playbackInstance() !== null) {
        await props.playbackInstance().unloadAsync();
        props.playbackInstance().setOnPlaybackStatusUpdate(null);
        props.setClassVariable({ playbackInstance: null });
      }

      const { sound } = await Audio.Sound.create(
        { uri: props.playingAudio().audioUrl },
        { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
        props.soundCallback,
      );

      props.setClassVariable({ playbackInstance: sound });

      props.setState({ isLoading: false });
    },

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.
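
The wiring for that click can be a tiny handler along these lines (the name onPlayAudio is an assumption; the setters are the ones defined above):

onPlayAudio: props => async (audioItem) => {
  // Remember which item was tapped, show the player and load + autoplay the sound.
  props.setClassVariable({ playingAudio: audioItem });
  props.setShowPlayer(true);
  await props.loadPlaybackInstance(true);
},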

Let’s add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
      if (props.playbackInstance() !== null) {
        if (props.isPlaying) {
          props.playbackInstance().pauseAsync();
        } else {
          props.playbackInstance().playAsync();
        }
      }
    },

When you click on the playing item, it should stop. If you want to stop audio playback and reset it to the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
      if (props.playbackInstance() !== null) {
        props.playbackInstance().stopAsync();

        props.setShowPlayer(false);
        props.setClassVariable({ playingAudio: null });
      }
    },

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
      if (props.playbackInstance() !== null) {
        if (props.shouldPlayAtEndOfSeek) {
          await props.playbackInstance().playFromPositionAsync(value);
        } else {
          await props.playbackInstance().setPositionAsync(value);
        }
        props.setClassVariable({ isSeeking: false });
      }
    },

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let’s proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let’s add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:



Let’s write the video recording logic by using Recompose on the container screen

src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of all props of the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera.

  • type: The camera type is set (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won’t be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the aspect ratio is 4:3.
  • ref: This is used for direct access to the camera component.

Let’s add a variable for saving the camera type and a handler for changing it.

cameraType: Camera.Constants.Type.back,
toggleCameraType: state => () => ({
      cameraType: state.cameraType === Camera.Constants.Type.front
        ? Camera.Constants.Type.back
        : Camera.Constants.Type.front,
    }),

Let’s add a variable for saving the camera-ready state and a callback for onCameraReady.

isCameraReady: false,

setCameraReady: () => () => ({ isCameraReady: true }),

Let’s add a variable for saving the camera component reference and a setter for it.

cameraRef: null,

setCameraRef: () => cameraRef => ({ cameraRef }),

Let’s pass these variables and handlers to the camera component.

<Camera
  type={cameraType}
  onCameraReady={setCameraReady}
  style={s.camera}
  ref={setCameraRef}
/>

Now, calling toggleCameraType after clicking the button will switch the camera between the front and the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specify the quality of the recorded video. Usage: Camera.Constants.VideoQuality['<value>']. The possible values for 16:9 are 2160p, 1080p, 720p and 480p (Android only); for 4:3, the size is 640×480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s URI property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the camera component’s sides are set to a 4:3 ratio, let’s set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
      if (props.isCameraReady) {
        props.setState({ isRecording: true, fileUrl: null });
        props.setVideoDuration();
        props.cameraRef.recordAsync({ quality: '4:3' })
          .then((file) => {
            props.setState({ fileUrl: file.uri });
          });
      }
    },

During the video recording, we can’t receive the recording status as we have done for audio. That’s why I have created a function to set video duration.
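
The duration counter can be as simple as the sketch below. It is an illustration with assumed names (videoDuration, setVideoDuration); the important part is keeping the interval id around so that stopRecording can clear it via props.interval:

setVideoDuration: props => () => {
  const startTime = Date.now();
  // Expo.Camera gives us no recording status callback, so count the seconds ourselves.
  const interval = setInterval(() => {
    props.setState({ videoDuration: Math.floor((Date.now() - startTime) / 1000) });
  }, 1000);
  props.setState({ interval });
},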

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
      if (props.isRecording) {
        props.cameraRef.stopRecording();
        props.setState({ isRecording: false });
        clearInterval(props.interval);
      }
    },

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:



To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
        source={{ uri: videoUrl }}
        style={s.video}
        shouldPlay={isPlaying}
        resizeMode="contain"
        useNativeControls={isPlaying}
        onLoad={onLoad}
        onError={onError}
      />

  • source

    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.

  • resizeMode

    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.

  • shouldPlay

    This boolean describes whether the media is supposed to play.

  • useNativeControls

    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.

  • onLoad

    This function is called once the video has been loaded.

  • onError

    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, a play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That’s why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.

Smashing Editorial
(da, lf, ra, yk, al, il)



Designing For Accessibility And Inclusion




Designing For Accessibility And Inclusion

Steven Lambert



“Accessibility is solved at the design stage.” This is a phrase that Daniel Na and his team heard over and over again while attending a conference. To design for accessibility means to be inclusive to the needs of your users. This includes your target users, users outside of your target demographic, users with disabilities, and even users from different cultures and countries. Understanding those needs is the key to crafting better and more accessible experiences for them.

One of the most common problems when designing for accessibility is knowing which needs you should design for. It’s not that we intentionally design to exclude users; it’s just that “we don’t know what we don’t know.” So, when it comes to accessibility, there’s a lot to know.

How do we go about understanding the myriad of users and their needs? How can we ensure that their needs are met in our design? To answer these questions, I have found that it is helpful to apply a critical analysis technique of viewing a design through different lenses.

“Good [accessible] design happens when you view your [design] from many different perspectives, or lenses.”

The Art of Game Design: A Book of Lenses

A lens is “a narrowed filter through which a topic can be considered or examined.” Often used to examine works of art, literature, or film, lenses ask us to leave behind our worldview and instead view the world through a different context.

For example, viewing art through a lens of history asks us to understand the “social, political, economic, cultural, and/or intellectual climate of the time.” This allows us to better understand what world influences affected the artist and how that shaped the artwork and its message.

Accessibility lenses are a filter that we can use to understand how different aspects of the design affect the needs of the users. Each lens presents a set of questions to ask yourself throughout the design process. By using these lenses, you will become more inclusive to the needs of your users, allowing you to design a more accessible user experience for all.

The Lenses of Accessibility are:

  • Lens of Animation and Effects
  • Lens of Audio and Video
  • Lens of Color
  • Lens of Controls
  • Lens of Font
  • Lens of Images and Icons
  • Lens of Keyboard
  • Lens of Layout
  • Lens of Material Honesty
  • Lens of Readability
  • Lens of Structure
  • Lens of Time

You should know that not every lens will apply to every design. While some can apply to every design, others are more situational. What works best in one design may not work for another.

The questions provided by each lens are merely a tool to help you understand what problems may arise. As always, you should test your design with users to ensure it’s usable and accessible to them.

Lens Of Animation And Effects

Effective animations can help bring a page and brand to life, guide the user’s focus, and help orient a user. But animations are a double-edged sword. Not only can misusing animations cause confusion or be distracting, but they can also be potentially deadly for some users.

Fast flashing effects (defined as flashing more than three times a second) or high-intensity effects and patterns can trigger seizures in users with photosensitive epilepsy. Photosensitivity can also cause headaches, nausea, and dizziness. Users with photosensitive epilepsy have to be very careful when using the web as they never know when something might cause a seizure.

Other effects, such as parallax or motion effects, can cause some users to feel dizzy or experience vertigo due to vestibular sensitivity. The vestibular system controls a person’s balance and sense of motion. When this system doesn’t function as it should, it causes dizziness and nausea.

“Imagine a world where your internal gyroscope is not working properly. Very similar to being intoxicated, things seem to move of their own accord, your feet never quite seem to be stable underneath you, and your senses are moving faster or slower than your body.”

A Primer To Vestibular Disorders

Constant animations or motion can also be distracting to users, especially to users who have difficulty concentrating. GIFs are notably problematic as our eyes are drawn towards movement, making it easy to be distracted by anything that updates or moves constantly.

This isn’t to say that animation is bad and you shouldn’t use it. Instead, you should understand why you’re using the animation and how to design safer animations. Generally speaking, you should try to design animations that cover small distances, match the direction and speed of other moving objects (including scroll), and are small relative to the screen size.

You should also provide controls or options to tailor the experience to the user. For example, Slack lets you hide animated images or emojis both as a global setting and on a per-image basis.
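
On the web, one way to honor such a preference automatically is the prefers-reduced-motion media query, which reports the user’s operating-system motion setting. A minimal sketch, with an illustrative class name and transition values that are not from the article:

<style>
  /* Animate by default, but tone the animation down when the user has asked for reduced motion. */
  .card {
    transition: transform 300ms ease-out;
  }

  @media (prefers-reduced-motion: reduce) {
    .card {
      transition: none;
    }
  }
</style>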

To use the Lens of Animation and Effects, ask yourself these questions:

  • Are there any effects that could cause a seizure?
  • Are there any animations or effects that could cause dizziness or vertigo through use of motion?
  • Are there any animations that could be distracting by constantly moving, blinking, or auto-updating?
  • Is it possible to provide controls or options to stop, pause, hide, or change the frequency of any animations or effects?

Lens Of Audio And Video

Autoplaying videos and audio can be pretty annoying. Not only do they break a user’s concentration, but they also force the user to hunt down the offending media and mute or stop it. As a general rule, don’t autoplay media.

“Use autoplay sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.”

Google Autoplay guidelines

You’re now probably asking, “But what if I autoplay the video in the background but keep it muted?” While using videos as backgrounds may be a growing trend in today’s web design, background videos suffer from the same problems as GIFs and constantly moving animations: they can be distracting. As such, you should provide controls or options to pause or disable the video.

Along with controls, videos should have transcripts and/or subtitles so users can consume the content in a way that works best for them. Users who are visually impaired or who would rather read instead of watch the video need a transcript, while users who aren’t able to or don’t want to listen to the video need subtitles.
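
As a rough sketch of what this can look like in markup, a muted background-style video can still expose controls, captions, and a transcript link. File names and text below are placeholders:

<video autoplay muted loop controls preload="metadata">
  <source src="background.mp4" type="video/mp4">
  <!-- Captions for users who can't or don't want to listen -->
  <track kind="captions" src="background-captions.vtt" srclang="en" label="English">
</video>
<!-- A transcript for users who would rather read -->
<p><a href="background-transcript.html">Read the transcript</a></p>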

To use the Lens of Audio and Video, ask yourself these questions:

  • Is there any audio or video that autoplays and could be annoying?
  • Is it possible to provide controls to stop, pause, or hide any audio or videos that autoplay?
  • Do videos have transcripts and/or subtitles?

Lens Of Color

Color plays an important part in a design. Colors evoke emotions, feelings, and ideas. Colors can also help strengthen a brand’s message and perception. Yet the power of colors is lost when a user can’t see them or perceives them differently.

Color blindness affects roughly 1 in 12 men and 1 in 200 women. Deuteranopia (red-green color blindness) is the most common form of color blindness, affecting about 6% of men. Users with red-green color blindness typically perceive reds, greens, and oranges as yellowish.


Color Blindness Reference Chart for Deuteranopia, Protanopia, and Tritanopia


Deuteranopia (green color blindness) is common and causes reds to appear brown/yellow and greens to appear beige. Protanopia (red color blindness) is rare and causes reds to appear dark/black and orange/greens to appear yellow. Tritanopia (blue-yellow color blindness) is very rare and causes blues to appear more green/teal and yellows to appear violet/grey. (Source) (Large preview)

Color meaning is also problematic for international users. Colors mean different things in different countries and cultures. In Western cultures, red is typically used to represent negative trends and green positive trends, but the opposite is true in Eastern and Asian cultures.

Because colors and their meanings can be lost either through cultural differences or color blindness, you should always add a non-color identifier. Identifiers such as icons or text descriptions can help bridge cultural differences while patterns work well to distinguish between colors.
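
A small, hypothetical sketch of pairing color with a non-color identifier, in this case an icon plus a text label, so the meaning survives even when the color itself cannot be perceived:

<span class="trend trend--negative">
  <!-- Decorative icon; the text label carries the meaning -->
  <svg aria-hidden="true" focusable="false" width="12" height="12">
    <!-- downward-arrow shape omitted -->
  </svg>
  Decreasing
</span>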


Six colored labels. Five use a pattern while the sixth doesn’t


Trello’s color blind friendly labels use different patterns to distinguish between the colors. (Large preview)

Oversaturated colors, high-contrast colors, and even just the color yellow can be uncomfortable and unsettling for some users, particularly those on the autism spectrum. It’s best to avoid high concentrations of these types of colors to help users remain comfortable.

Poor contrast between foreground and background colors makes content harder to see for users with low vision, users on low-end monitors, or anyone who happens to be in direct sunlight. All text, icons, and any focus indicators used by keyboard users should meet a minimum contrast ratio of 4.5:1 against the background color.

You should also ensure your design and colors work well in different settings of Windows High Contrast mode. A common pitfall is that text becomes invisible on certain high contrast mode backgrounds.

To use the Lens of Color, ask yourself these questions:

  • If the color was removed from the design, what meaning would be lost?
  • How could I provide meaning without using color?
  • Are any colors oversaturated or so high in contrast that they could overstimulate or make users uncomfortable?
  • Do the foreground and background colors of all text, icons, and focus indicators meet the 4.5:1 contrast ratio guideline?

Lens Of Controls

Controls, also called ‘interactive content,’ are any UI elements that the user can interact with, be they buttons, links, inputs, or any HTML element with an event listener. Controls that are too small or too close together can cause lots of problems for users.

Small controls are hard to click on for users who are unable to be accurate with a pointer, such as those with tremors, or those who suffer from reduced dexterity due to age. The default size of checkboxes and radio buttons, for example, can pose problems for older users. Even when a label is provided that could be clicked on instead, not all users know they can do so.

Controls that are too close together can cause problems for touch screen users. Fingers are big and difficult to be precise with. Accidentally touching the wrong control can cause frustration, especially if that control navigates you away or makes you lose your context.


Tweet that says Software being Done is like lawn being Mowed. Jim Benson


When touching a single line tweet, it’s very easy to accidentally click the person’s name or handle instead of opening the tweet because there’s not enough space between them. (Source) (Large preview)

Controls that are nested inside another control can also contribute to touch errors. Not only is it not allowed in the HTML spec, it also makes it easy to accidentally select the parent control instead of the one you wanted.

To give users enough room to accurately select a control, the recommended minimum size for a control is 34 by 34 device-independent pixels; Google recommends at least 48 by 48 pixels, while the WCAG spec recommends at least 44 by 44 pixels. This size includes any padding the control has, so a control could visually be 24 by 24 pixels, with an additional 10 pixels of padding on all sides bringing it up to 44 by 44 pixels.

It’s also recommended that controls be placed far enough apart to reduce touch errors. Microsoft recommends at least 8 pixels of spacing while Google recommends controls be spaced at least 32 pixels apart.
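
A minimal CSS sketch of how padding can grow a small visual control into a comfortable hit area, using the sizes mentioned above. Class names are illustrative, and the arithmetic assumes the default content-box sizing:

<style>
  .icon-button {
    width: 24px;    /* visual size of the icon */
    height: 24px;
    padding: 10px;  /* 24 + 10 + 10 = 44 pixels of touchable area in each dimension */
  }

  .toolbar > * + * {
    margin-left: 8px; /* keep neighbouring controls from crowding each other */
  }
</style>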

Controls should also have a visible text label. Not only do screen readers require the text label to know what the control does, but it’s been shown that text labels help all users better understand a control’s purpose. This is especially important for form inputs and icons.

To use the Lens of Controls, ask yourself these questions:

  • Are any controls too small for someone to touch accurately?
  • Are any controls so close together that it would be easy to touch the wrong one?
  • Are there any controls inside another control or clickable region?
  • Do all controls have a visible text label?

Lens Of Font

In the early days of the web, we designed web pages with a font size between 9 and 14 pixels. This worked out just fine back then as monitors had a relatively known screen size. We designed thinking that the browser window was a constant, something that couldn’t be changed.

Technology today is very different than it was 20 years ago. Today, browsers can be used on any device of any size, from a small watch to a huge 4K screen. We can no longer use fixed font sizes to design our sites. Font sizes must be as responsive as the design itself.

Not only should the font sizes be responsive, but the design should be flexible enough to allow users to customize the font size, line height, or letter spacing to a comfortable reading level. Many users make use of custom CSS that helps them have a better reading experience.
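
One common way to keep type flexible is to size it in relative units so that browser settings and user stylesheets can scale it. A minimal sketch with illustrative values:

<style>
  body {
    font-size: 1rem;   /* follows the user's preferred base size instead of a fixed pixel value */
    line-height: 1.5;  /* generous line height aids readability and can still be overridden */
  }

  h1 {
    font-size: 2rem;   /* scales with the base size rather than being locked to pixels */
  }
</style>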

The font itself should be easy to read. You may be wondering if one font is more readable than another. The truth of the matter is that the font doesn’t really make a difference to readability. Instead, it’s the font style that plays an important role in a font’s readability.

Decorative or cursive font styles are harder to read for many users, but especially problematic for users with dyslexia. Small font sizes, italicized text, and all uppercase text are also difficult for users. Overall, larger text, shorter line lengths, taller line heights, and increased letter spacing can help all users have a better reading experience.

To use the Lens of Font, ask yourself these questions:

  • Is the design flexible enough that the font could be modified to a comfortable reading level by the user?
  • Is the font style easy to read?

Lens Of Images And Icons

They say, “A picture is worth a thousand words.” Still, a picture you can’t see is speechless, right?

Images can be used in a design to convey a specific meaning or feeling. Other times they can be used to simplify complex ideas. Whichever the case for the image, a user who uses a screen reader needs to be told what the meaning of the image is.

As the designer, you understand best the meaning or information the image conveys. As such, you should annotate the design with this information so it’s not left out or misinterpreted later. This will be used to create the alt text for the image.

How you describe an image depends entirely on context, or how much textual information is already available that describes the information. It also depends on if the image is just for decoration, conveys meaning, or contains text.

“You almost never describe what the picture looks like, instead you explain the information the picture contains.”

Five Golden Rules for Compliant Alt Text

Since knowing how to describe an image can be difficult, there’s a handy decision tree to help when deciding. Generally speaking, if the image is decorative or there’s surrounding text that already describes the image’s information, no further information is needed. Otherwise, you should describe the information of the image. If the image contains text, repeat the text in the description as well.

Descriptions should be succinct. It’s recommended to use no more than two sentences, but aim for one concise sentence when possible. This allows users to quickly understand the image without having to listen to a lengthy description.

As an example, if you were to describe this image for a screen reader, what would you say?


Vincent van Gogh’s The Starry Night


Source (Large preview)

Since we describe the information of the image and not the image itself, the description could be Vincent van Gogh’s The Starry Night since there is no other surrounding context that describes it. What you shouldn’t put is a description of the style of the painting or what the picture looks like.

If the information of the image would require a lengthy description, such as a complex chart, you shouldn’t put that description in the alt text. Instead, you should still use a short description for the alt text and then provide the long description as either a caption or link to a different page.

This way, users can still get the most important information quickly but have the ability to dig in further if they wish. If the image is of a chart, you should repeat the data of the chart just like you would for text in the image.
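
The patterns described above translate to markup roughly like this; file names and text are placeholders:

<!-- Decorative image: empty alt text so screen readers skip it -->
<img src="divider.png" alt="">

<!-- Informative image: describe the information, not the appearance -->
<img src="starry-night.jpg" alt="Vincent van Gogh's The Starry Night">

<!-- Complex chart: short alt text plus a longer description as a caption or link -->
<figure>
  <img src="quarterly-results.png" alt="Chart of quarterly results">
  <figcaption>
    <!-- Repeat the chart's data here in text, or link to a page that does -->
  </figcaption>
</figure>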

If the platform you are designing for allows users to upload images, you should provide a way for the user to enter the alt text along with the image. For example, Twitter allows its users to write alt text when they upload an image to a tweet.

To use the Lens of Images and Icons, ask yourself these questions:

  • Does any image contain information that would be lost if it was not viewable?
  • How could I provide the information in a non-visual way?
  • If the image is controlled by the user, is it possible to provide a way for them to enter the alt text description?

Lens Of Keyboard

Keyboard accessibility is among the most important aspects of an accessible design, yet it is also among the most overlooked.

There are many reasons why a user would use a keyboard instead of a mouse. Users who use a screen reader use the keyboard to read the page. A user with tremors may use a keyboard because it provides better accuracy than a mouse. Even power users will use a keyboard because it’s faster and more efficient.

A user using a keyboard typically uses the tab key to navigate to each control in sequence. A logical tab order greatly helps users know where the next key press will take them. In Western cultures, this usually means from left to right, top to bottom. Unexpected tab orders result in users becoming lost and having to scan frantically for where the focus went.

Sequential tab order also means that they must tab through all controls that are before the one that they want. If that control is tens or hundreds of keystrokes away, it can be a real pain point for the user.

By making the most important user flows nearer to the top of the tab order, we can help enable our users to be more efficient and effective. However, this isn’t always possible nor practical to do. In these cases, providing a way to quickly jump to a particular flow or content can still allow them to be efficient. This is why “skip to content” links are helpful.
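
A common “skip to content” pattern keeps the link visually hidden until it receives keyboard focus. A minimal sketch, with an illustrative class name and hiding technique:

<style>
  .skip-link {
    position: absolute;
    left: -9999px;  /* visually hidden by default */
  }
  .skip-link:focus {
    left: 0;        /* revealed when a keyboard user tabs to it */
  }
</style>

<a class="skip-link" href="#main">Skip to main content</a>
<!-- ...header and navigation... -->
<main id="main">
  <!-- primary content -->
</main>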

A good example of this is Facebook, which provides a keyboard navigation menu that allows users to jump to specific sections of the site. This greatly speeds up a user’s ability to interact with the page and get to the content they want.


facebook


Facebook provides a way for all keyboard users to jump to specific sections of the page, or other pages within Facebook, as well as an Accessibility Help menu. (Large preview)

When tabbing through a design, focus styles should always be visible or a user can easily become lost. Just like an unexpected tab order, not having good focus indicators results in users not knowing what is currently focused and having to scan the page.

Changing the look of the default focus indicator can sometimes improve the experience for users. A good focus indicator doesn’t rely on color alone to indicate focus (Lens of Color) and should be distinct enough for the user to find it easily. For example, a blue focus ring around a similarly colored blue button may not be visually distinct enough to show that the button is focused.
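
A minimal sketch of a more distinct focus indicator: a thick outline offset from the control, so it reads as a shape change rather than relying on color alone. Values are illustrative:

<style>
  a:focus,
  button:focus {
    outline: 3px solid #1a73e8;
    outline-offset: 2px; /* a visible gap between the control and the ring makes it easier to spot */
  }
</style>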

Although this lens focuses on keyboard accessibility, it’s important to note that it applies to any way a user could interact with a website without a mouse. Devices such as mouth sticks, switch access buttons, sip and puff buttons, and eye tracking software all require the page to be keyboard accessible.

By improving keyboard accessibility, you allow a wide range of users better access to your site.

To use the Lens of Keyboard, ask yourself these questions:

  • What keyboard navigation order makes the most sense for the design?
  • How could a keyboard user get to what they want in the quickest way possible?
  • Is the focus indicator always visible and visually distinct?

Lens Of Layout

Layout contributes a great deal to the usability of a site. Having a layout that is easy to follow with easy to find content makes all the difference to your users. A layout should have a meaningful and logical sequence for the user.

With the advent of CSS Grid, being able to change the layout to be more meaningful based on the available space is easier than ever. However, changing the visual layout creates problems for users who rely on the structural layout of the page.

The structural layout is what is used by screen readers and users using a keyboard. When the visual layout changes but not the underlying structural layout, these users can become confused as their tab order is no longer logical. If you must change the visual layout, you should do so by changing the structural layout so users using a keyboard maintain a sequential and logical tab order.

The layout should be resizable and flexible to a minimum of 320 pixels with no horizontal scroll bars so that it can be viewed comfortably on a phone. The layout should also be flexible enough to be zoomed in to 400% (also with no horizontal scroll bars) for users who need to increase the font size for a better reading experience.

Users using a screen magnifier benefit when related content is in close proximity to one another. A screen magnifier only provides the user with a small view of the entire layout, so content that is related but far away, or that changes far away from where the interaction occurred, is hard to find and can go unnoticed.

GIF of CodePen showing that clicking on a button does not update the interface
When performing a search on CodePen, the search button is in the top right corner of the page. Clicking the button reveals a large search input on the opposite side of the screen. A user using a screen magnifier would be hard pressed to notice the change and would think the button doesn’t work. (Large preview)

To use the Lens of Layout, ask yourself these questions:

  • Does the layout have a meaningful and logical sequence?
  • What should happen to the layout when it’s viewed on a small screen or zoomed in to 400%?
  • Is content that is related or changes due to user interaction in close proximity to one another?

Lens Of Material Honesty

Material honesty is an architectural design value that states that a material should be honest to itself and not be used as a substitute for another material. It means that concrete should look like concrete and not be painted or sculpted to look like bricks.

Material honesty values and celebrates the unique properties and characteristics of each material. An architect who follows material honesty knows when each material should be used and how to use it without tarnishing itself.

Material honesty is not a hard and fast rule though. It lies on a continuum. Like all values, you are allowed to break them when you understand them. As the saying goes, they are “more what you’d call ‘guidelines’ than actual rules.”

When applied to web design, material honesty means that one element or component shouldn’t look, behave, or function as if it were another element or component. Doing so would cheat the user and could lead to confusion. A common example of this is buttons that look like links, or links that look like buttons.

Links and buttons have different behaviors and affordances. A link is activated with the enter key, typically takes you to a different page, and has a special context menu on right click. Buttons are activated with the space key, used primarily to trigger interactions on the current page, and have no such context menu.
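
In markup, honoring those affordances simply means reaching for the element that matches the behavior, however the two end up being styled:

<!-- Navigates to another page: use a link -->
<a href="/pricing">View pricing</a>

<!-- Triggers an action on the current page: use a button -->
<button type="button">Save draft</button>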

When a link is styled to look like a button or vice versa, a user could become confused as it does not behave and function the way it looks. If the “button” navigates the user away unexpectedly, they might become frustrated if they lose data in the process.

“At first glance everything looks fine, but it won’t stand up to scrutiny. As soon as such a website is stress‐tested by actual usage across a range of browsers, the façade crumbles.”

Resilient Web Design

Where this becomes the most problematic is when a link and button are styled the same and are placed next to one another. As there is nothing to differentiate between the two, a user can accidentally navigate when they thought they wouldn’t.


Three links and/or buttons shown inline with text


Can you tell which one of these will navigate you away from the page and which won’t? (Large preview)

When a component behaves differently than expected, it can easily lead to problems for users using a keyboard or screen reader. An autocomplete menu that is more than an autocomplete menu is one such example.

Autocomplete is used to suggest or predict the rest of a word a user is typing. An autocomplete menu allows a user to select from a large list of options when not all options can be shown.

An autocomplete menu is typically attached to an input field and is navigated with the up and down arrow keys, keeping the focus inside the input field. When a user selects an option from the list, that option will override the text in the input field. Autocomplete menus are meant to be lists of just text.

The problem arises when an autocomplete menu starts to gain more behaviors. Not only can you select an option from the list, but you can edit it, delete it, or even expand or collapse sections. The autocomplete menu is no longer just a simple list of selectable text.




With the addition of edit, delete, and profile buttons, this autocomplete menu is materially dishonest. (Large preview)

The added behaviors no longer mean you can just use the up and down arrows to select an option. Each option now has more than one action, so a user needs to be able to traverse two dimensions instead of just one. This means that a user using a keyboard could become confused on how to operate the component.

Screen readers suffer the most from this change of behavior, as there is no easy way to help them understand it. A lot of work will be required to make the menu accessible to a screen reader by non-standard means. As such, it might result in a sub-par or inaccessible experience for them.

To avoid these issues, it’s best to be honest to the user and the design. Instead of combining two distinct behaviors (an autocomplete menu and edit and delete functionality), leave them as two separate behaviors. Use an autocomplete menu to just autocomplete the name of a user, and have a different component or page to edit and delete users.

To use the Lens of Material Honesty, ask yourself these questions:

  • Is the design being honest to the user?
  • Are there any elements that behave, look, or function as another element?
  • Are there any components that combine distinct behaviors into a single component? Does doing so make the component materially dishonest?

Lens Of Readability

Have you ever picked up a book only to get a few paragraphs or pages in and want to give up because the text was too hard to read? Hard to read content is mentally taxing and tiring.

Sentence length, paragraph length, and complexity of language all contribute to how readable the text is. Complex language can pose problems for users, especially those with cognitive disabilities or who aren’t fluent in the language.

Along with using plain and simple language, you should ensure each paragraph focuses on a single idea. A paragraph with a single idea is easier to remember and digest. The same is true of a sentence with fewer words.

Another contributor to the readability of content is line length. The ideal line length is often quoted as being between 45 and 75 characters. A line that is too long causes users to lose focus and makes it harder to move to the next line correctly, while a line that is too short causes users to jump too often, causing eye fatigue.
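
On the web, one simple way to keep line length in that range is to cap the measure of text blocks. A minimal sketch; the 70ch value is just one reasonable choice within the quoted range:

<style>
  article p {
    max-width: 70ch; /* roughly 70 characters per line at the current font size */
  }
</style>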

“The subconscious mind is energized when jumping to the next line. At the beginning of every new line the reader is focused, but this focus gradually wears off over the duration of the line”

— Typographie: A Manual of Design

You should also break up the content with headings, lists, or images to give the reader mental breaks and support different learning styles. Use headings to logically group and summarize the information. Headings, links, controls, and labels should be clear and descriptive to enhance the user’s ability to comprehend.

To use the Lens of Readability, ask yourself these questions:

  • Is the language plain and simple?
  • Does each paragraph focus on a single idea?
  • Are there any long paragraphs or long blocks of unbroken text?
  • Are all headings, links, controls, and labels clear and descriptive?

Lens Of Structure

As mentioned in the Lens of Layout, the structural layout is what is used by screen readers and users using a keyboard. While the Lens of Layout focused on the visual layout, the Lens of Structure focuses on the structural layout, or the underlying HTML and semantics of the design.

As a designer, you may not write the structural layout of your designs. This shouldn’t stop you from thinking about how your design will ultimately be structured though. Otherwise, your design may result in an inaccessible experience for a screen reader.

Take for example a design for a single elimination tournament bracket.


Eight person tournament bracket featuring George, Fred, Linus, Lucy, Jack, Jill, Fred, and Ginger. Ginger ultimately wins against George.


Large preview

How would you know if this design was accessible to a user using a screen reader? Without understanding structure and semantics, you may not. As it stands, the design would probably result in an inaccessible experience for a user using a screen reader.

To understand why that is, we first must understand that a screen reader reads a page and its content in sequential order. This means that every name in the first column of the tournament would be read, followed by all the names in the second column, then third, then the last.

“George, Fred, Linus, Lucy, Jack, Jill, Fred, Ginger, George, Lucy, Jack, Ginger, George, Ginger, Ginger.”

If all you had was a list of seemingly random names, how would you interpret the results of the tournament? Could you say who won the tournament? Or who won game 6?

With nothing more to work with, a user using a screen reader would probably be a bit confused about the results. To be able to understand the visual design, we must provide the user with more information in the structural design.

This means that as a designer you need to know how a screen reader interacts with the HTML elements on a page so you know how to enhance their experience.

  • Landmark Elements (header, nav, main, and footer)
    Allow a screen reader to jump to important sections in the design.
  • Headings (h1–h6)
    Allow a screen reader to scan the page and get a high-level overview. Screen readers can also jump to any heading.
  • Lists (ul and ol)
    Group related items together, and allow a screen reader to easily jump from one item to another.
  • Buttons
    Trigger interactions on the current page.
  • Links
    Navigate or retrieve information.
  • Form labels
    Tell screen readers what each form input is.

Knowing this, how might we provide more meaning to a user using a screen reader?

To start, we could group each column of the tournament into rounds and use headings to label each round. This way, a screen reader would understand when a new round takes place.

Next, we could help the user understand which players are playing against each other each game. We can again use headings to label each game, allowing them to find any game they might be interested in.

By just adding headings, the content would read as follows:

“Round 1, Game 1, George, Fred, Game 2, Linus, Lucy, Game 3, Jack, Jill, Game 4, Fred, Ginger, Round 2, Game 5, George, Lucy, Game 6, Jack, Ginger, Round 3, Game 7, George, Ginger, Winner, Ginger.”

This is already a lot more understandable than before.

The information still doesn’t answer who won a game though. To know that, you’d have to understand which game a winner plays next to see who won the previous game. For example, you’d have to know that the winner of game four plays in game six to know who advanced from game four.

We can further enhance the experience by informing the user who won each game so they don’t have to go hunting for it. Putting the text “(winner)” after the person who won the game would suffice.

We should also further group the games and rounds together using lists. Lists provide the structural semantics of the design, essentially informing the user of the connected nodes from the visual design.
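
Putting the rounds, games, headings, and winner text together, a rough structural sketch of the bracket could look like this (only the first two games and the final are shown):

<h2>Round 1</h2>
<ol>
  <li>
    <h3>Game 1</h3>
    <ol>
      <li>George (winner)</li>
      <li>Fred</li>
    </ol>
  </li>
  <li>
    <h3>Game 2</h3>
    <ol>
      <li>Linus</li>
      <li>Lucy (winner)</li>
    </ol>
  </li>
  <!-- Games 3 and 4, then Round 2, follow the same pattern -->
</ol>

<h2>Round 3</h2>
<ol>
  <li>
    <h3>Game 7</h3>
    <ol>
      <li>George</li>
      <li>Ginger (winner)</li>
    </ol>
  </li>
</ol>

<h2>Winner</h2>
<p>Ginger</p>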

If we translate this back into a visual design, the result could look as follows:


The tournament bracket


The tournament with descriptive headings and winner information (shown here with grey background). (Large preview)

Since the headings and winner text are redundant in the visual design, you could visually hide them so that the end result looks just like the first design.

“If the end result is visually the same as where we started, why did we go through all this?” you may ask.

The reason is that you should always annotate your design with all the necessary structural design requirements needed for a better screen reader experience. This way, the person who implements the design knows to add them. If you had just handed the first design to the implementer, it would more than likely end up inaccessible.

To use the Lens of Structure, ask yourself these questions:

  • Can I outline a rough HTML structure of my design?
  • How can I structure the design to better help a screen reader understand the content or find the content they want?
  • How can I help the person who will implement the design understand the intended structure?

Lens Of Time

Periodically in a design, you may need to limit the amount of time a user can spend on a task. Sometimes it may be for security reasons, such as a session timeout. Other times it could be due to a non-functional requirement, such as a time-constrained test.

Whatever the reason, you should understand that some users may need more time in order to finish the task. Some users might need more time to understand the content, others might not be able to perform the task quickly, and a lot of the time they could simply have been interrupted.

“The designer should assume that people will be interrupted during their activities”

— The Design of Everyday Things

Users who need more time to perform an action should be able to adjust or remove a time limit when possible. For example, with a session timeout you could alert the user when their session is about to expire and allow them to extend it.
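
As a hypothetical sketch, such a warning could surface as a dialog that screen readers announce and that offers a way to extend the limit; the actual timing and focus-management logic is elided here:

<div role="alertdialog" aria-labelledby="timeout-title" aria-describedby="timeout-desc">
  <h2 id="timeout-title">Your session is about to expire</h2>
  <p id="timeout-desc">You will be signed out in two minutes.</p>
  <button type="button">Extend my session</button>
  <button type="button">Sign out now</button>
</div>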

To use the Lens of Time, ask yourself this question:

  • Is it possible to provide controls to adjust or remove time limits?

Bringing It All Together

So now that you’ve learned about the different lenses of accessibility through which you can view your design, what do you do with them?

The lenses can be used at any point in the design process, even after the design has been shipped to your users. Just start with a few of them at hand, and one at a time carefully analyze the design through a lens.

Ask yourself the questions and see if anything should be adjusted to better meet the needs of a user. As you slowly make changes, bring in other lenses and repeat the process.

By looking through your design one lens at a time, you’ll be able to refine the experience to better meet users’ needs. As you are more inclusive to the needs of your users, you will create a more accessible design for all your users.

Using lenses and insightful questions to examine principles of accessibility was heavily influenced by Jesse Schell and his book “The Art of Game Design: A Book of Lenses.”

Smashing Editorial
(il, ra, yk)


Taken from – 

Designing For Accessibility And Inclusion

UX In Contact Forms: Essentials To Turn Leads Into Conversions

Do you like filling out forms? I thought not. It’s not what we want from a service. All the user wants is to buy a ticket, book a hotel room, make a purchase and so on. Filling in a form is a necessary evil they have to deal with. Does this describe you? So, what actually affects a person’s attitude to submitting a form?
It might be time-consuming, and complicated forms are often hard to understand (or you just don’t feel like filling them in).

See the original post:

UX In Contact Forms: Essentials To Turn Leads Into Conversions