Tag Archives: accessibility

Building A Simple AI Chatbot With Web Speech API And Node.js

Using voice commands has become pretty ubiquitous nowadays, as more mobile phone users use voice assistants such as Siri and Cortana, and as devices such as Amazon Echo and Google Home have been invading our living rooms.

These systems are built with speech recognition software that allows their users to issue voice commands. Now, our web browsers are becoming familiar with the Web Speech API, which allows users to integrate voice data into web apps.

Designing A Dementia-Friendly Website


Some well-established web design basics: minimize the number of choices that someone has to make; create self-explanatory navigation tools; help people get to what they’re looking for as quickly as possible. Sounds simple enough? Now consider this…

An ever-growing number of web users around the world are living with dementia. They have widely varying levels of computer literacy and may be experiencing some of the following issues: memory loss, confusion, issues with vision and perception, difficulties sequencing and processing information, reduced problem-solving abilities, or problems with language.

Notes On Client-Rendered Accessibility

As creators of the web, we bring innovative, well-designed interfaces to life. We find satisfaction in improving our craft with each design or line of code. But this push to elevate our skills can be self-serving: Does a new CSS framework or JavaScript abstraction pattern serve our users or us as developers?

If a framework encourages best practices in development while also improving our workflow, it might serve both our users’ needs and ours as developers. If it encourages best practices in accessibility alongside other areas, like performance, then it has potential to improve the state of the web.

Despite our pursuit to do a better job every day, sometimes we forget about accessibility, the practice of designing and developing in a way that’s inclusive of people with disabilities. We have the power to improve lives through technology — we should use our passion for the craft to build a more accessible web.

These days, we build a lot of client-rendered web applications, also known as single-page apps, JavaScript MVCs and MV-whatever. AngularJS, React, Ember, Backbone.js, Spine: You may have used or seen one of these JavaScript frameworks in a recent project. Common user experience-related characteristics include asynchronous postbacks, animated page transitions, and dynamic UI filtering. With frameworks like these, creating a poor user experience for people with disabilities is, sadly, pretty easy. Fortunately, we can employ best practices to make things better.

In this article, we will explore techniques for building accessible client-rendered web applications, making our jobs as web creators even more worthwhile.

MV-whatever. (View animated Gif2)

Semantics

Front-end JavaScript frameworks make it easy for us to create and consume custom HTML tags like <pizza-button>, which you’ll see in an example later on. React, AngularJS and Ember enable us to attach behavior to made-up tags with no default semantics, using JavaScript and CSS. We can even use Web Components3 now, a set of new standards holding both the promise of extensibility and a challenge to us as developers. With this much flexibility, it’s critical for users of assistive technologies such as screen readers that we use semantics to communicate what’s happening without relying on a visual experience.

Consider a common form control4: A checkbox opting you out of marketing email is pretty significant to the user experience. If it isn’t announced as “Subscribe checked check box” in a screen reader, you might have no idea you’d need to uncheck it to opt out of the subscription. In client-side web apps, it’s possible to construct a form model from user input and post JSON to a server regardless of how we mark it up — possibly even without a <form> tag. With this freedom, knowing how to create accessible forms is important.

To keep our friends with screen readers from opting in to unwanted email, we should:

  • use native inputs to easily announce their role (purpose) and state (checked or unchecked);
  • provide an accessible name using a <label> with for and id attribute pairing, an aria-label on the input, or aria-labelledby pointing to another element’s id.

Native Checkbox With Label

<form>
  <label for="subscribe">
    Subscribe
  </label>
  <input type="checkbox" id="subscribe" checked>
</form>

If native inputs can’t be used (with good reason), create custom checkboxes with role=checkbox, aria-checked, aria-disabled and aria-required, and wire up keyboard events. See the W3C’s “Using WAI-ARIA in HTML5.”

Custom Checkbox With ARIA

<form>
  <some-checkbox role="checkbox" tabindex="0" aria-labelledby="subscribe" aria-checked="true">
  </some-checkbox>
  <some-label id="subscribe">Subscribe</some-label>
</form>
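
The ARIA attributes above communicate the role and state, but they don’t make the element operable. As noted, the keyboard events still have to be wired up by hand. Here is a minimal, illustrative sketch of that wiring for the custom checkbox (assuming the markup above; a native input needs none of this):

// Illustrative sketch: keyboard and click wiring for <some-checkbox>.
// A native <input type="checkbox"> provides all of this for free.
var checkbox = document.querySelector('some-checkbox');

function toggle() {
  var checked = checkbox.getAttribute('aria-checked') === 'true';
  checkbox.setAttribute('aria-checked', String(!checked));
}

checkbox.addEventListener('click', toggle);

checkbox.addEventListener('keydown', function (event) {
  // Native checkboxes toggle on the space key; mirror that behavior
  if (event.keyCode === 32) {
    event.preventDefault(); // keep the page from scrolling
    toggle();
  }
});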

Form inputs are just one example of the use of semantic HTML6 and ARIA attributes to communicate the purpose of something — other important considerations include headings and page structure, buttons, anchors, lists and more. ARIA7, or Accessible Rich Internet Applications, exists to fill in gaps where accessibility support for HTML falls short (in theory, it can also be used for XML or SVG). As you can see from the checkbox example, ARIA requirements quickly pile up when you start writing custom elements. Native inputs, buttons and other semantic elements provide keyboard and accessibility support for free. The moment you create a custom element and bolt ARIA attributes onto it, you become responsible for managing the role and state of that element.

Although ARIA is great and capable of many things, understanding and using it is a lot of work. It also doesn’t have the broadest support. Take Dragon NaturallySpeaking8 — this assistive technology, which people use all the time to make their life easier, is just starting to gain ARIA support. Were I a browser implementer, I’d focus on native element support first, too — so it makes sense that ARIA might be added later. For this reason, use native elements, and you won’t often need to use ARIA roles or states (aria-checked, aria-disabled, aria-required, etc.). If you must create custom controls, read up on ARIA to learn the expected keyboard behavior9 and how to use attributes correctly.

Tip: Use Chrome’s Accessibility Developer Tools10 to audit your code for errors, and you’ll get the bonus “Accessibility Properties” inspector.

AngularJS material in Chrome with accessibility inspector open. (View large version12)

Web Components and Accessibility

An important topic in a discussion on accessibility and semantics is Web Components, a set of new standards landing in browsers that enable us to natively create reusable HTML widgets. Because Web Components are still so new, the syntax is majorly in flux. In December 2014, Mozilla said it wouldn’t support HTML imports13, a seemingly obvious way to distribute new components; so, for now that technology is natively available in Chrome and Opera14 only. Additionally, up for debate is the syntax for extending native elements (see the discussion about is="" syntax15), along with how rigid the shadow DOM boundary should be. Despite these changes, here are some tips for writing semantic Web Components:

  • Small components are more reusable and easier to manage for any necessary semantics.
  • Use native elements within Web Components to gain behavior for free.
  • Element IDs within the shadow DOM do not have the same scope as the host document.
  • The same non-Web Component accessibility guidelines apply.
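
To make the second tip concrete, here is a minimal sketch of a component that wraps a native <button>, so that focus, role and keyboard activation come along for free. It uses the current customElements.define() syntax, which postdates the v0-era APIs discussed above, and the element and event names are only illustrative:

// Sketch: a custom element that leans on a native <button> internally.
class PizzaButton extends HTMLElement {
  connectedCallback() {
    // The native <button> inside supplies the role, focusability and
    // Enter/space activation without any extra ARIA or key handling.
    this.innerHTML = '<button type="button">Order pizza</button>';
    this.querySelector('button').addEventListener('click', () => {
      this.dispatchEvent(new CustomEvent('order'));
    });
  }
}

customElements.define('pizza-button', PizzaButton);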

For more information on Web Components and accessibility, have a look at these articles:

  • “Polymer and Web Component Accessibility16”
  • “Web Components punch list17,” The Paciello Group
  • “Accessible Web Components18,” Polymer Project

Interactivity

Native elements such as buttons and inputs come prepackaged with events and properties that work easily with keyboards and assistive technologies. Leveraging these features means less work for us. However, given how easy JavaScript frameworks and CSS make it to create custom elements, such as <pizza-button>, we might have to do more work to deliver pizza from the keyboard if we choose to mark it up as a new element. For keyboard support, custom HTML tags need:

  • tabindex, preferably 0 so that you don’t have to manage the entire page’s tab order (WebAIM discusses this19);
  • a keyboard event such as keypress or keydown to trigger callback functions.
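
If <pizza-button> really is a bare custom tag with no native button inside, both requirements above have to be met by hand. A rough sketch, assuming hypothetical markup like <pizza-button tabindex="0" role="button">:

var pizzaButton = document.querySelector('pizza-button');

function orderPizza() {
  // Run the same code a mouse click triggers
  pizzaButton.dispatchEvent(new CustomEvent('order'));
}

pizzaButton.addEventListener('click', orderPizza);

pizzaButton.addEventListener('keydown', function (event) {
  // Native buttons activate on both Enter (13) and space (32); mirror that
  if (event.keyCode === 13 || event.keyCode === 32) {
    event.preventDefault();
    orderPizza();
  }
});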

Focus Management

Closely related to interactivity but serving a slightly different purpose is focus management. The term “client-rendered” refers partly to a single-page browsing experience where routing is handled with JavaScript and there is no server-side page refresh. Portions of views could update the URL and replace part or all of the DOM, including where the user’s keyboard is currently focused. When this happens, focus is easily lost, creating a pretty unusable experience for people who rely on a keyboard or screen reader.

Imagine sorting a list with your keyboard’s arrow keys. If the sorting action rebuilds the DOM, then the element that you’re using will be rerendered, losing focus in the process. Unless focus is deliberately sent back to the element that was in use, you’d lose your place and have to tab all the way down to the list from the top of the page again. You might just leave the website at that point. Was it an app you needed to use for work or to find an apartment? That could be a problem.

In client-rendered frameworks, we are responsible for ensuring that focus is not lost when rerendering the DOM. The easy way to test this is to use your keyboard. If you’re focused on an item and it gets rerendered, do you bang your keyboard against the desk and start over at the top of the page or gracefully continue on your way? Here is one focus-management technique from Distiller20 using Spine, where focus is sent back into relevant content after rendering:

class App.FocusManager
  constructor: ->
    $('body').on 'focusin', (e) =>
      @oldFocus = e.target

    App.bind 'rendered', (e) =>
      return unless @oldFocus

      if @oldFocus.getAttribute('data-focus-id')
        @_focusById()
      else
        @_focusByNodeEquality()

  _focusById: ->
    focusId = @oldFocus.getAttribute('data-focus-id')
    newFocus = document.querySelector("##{focusId}")
    App.focus(newFocus) if newFocus

  _focusByNodeEquality: ->
    allNodes = $('body *:visible').get()
    for node in allNodes
      if App.equalNodes(node, @oldFocus)
        App.focus(node)
        break

In this helper class, JavaScript (implemented in CoffeeScript) binds a focusin listener to document.body that checks anytime an element is focused, using event delegation21, and it stores a reference to that focused element. The helper class also subscribes to a Spine rendered event, tapping into client-side rendering so that it can gracefully handle focus. If an element was focused before the rendering happened, it can focus an element in one of two ways. If the old node is identical to a new one somewhere in the DOM, then focus is automatically sent to it. If the node isn’t identical but has a data-focus-id attribute on it, then it looks up that id’s value and sends focus to it instead. This second method is useful for when elements aren’t identical anymore because their text has changed (for example, “item 1 of 5” becoming labeled off screen as “item 2 of 5”).

Each JavaScript MV-whatever framework will require a slightly different approach to focus management. Unfortunately, most of them won’t handle focus for you, because it’s hard for a framework to know what should be focused upon rerendering. By testing rendering transitions with your keyboard and making sure focus is not dropped, you’ll be empowered to add support to your application. If this sounds daunting, inquire in your framework’s support community about how focus management is typically handled (see React’s GitHub repo22 for an example). There are people who can help!

Cat “helping”. (View animated Gif24)

Notifying The User

There is a debate about whether client-side frameworks are actually good for users25, and plenty of people have an opinion26 on them. Clearly, most client-rendered app frameworks could improve the user experience by providing easy asynchronous UI filtering, form validation and live content updates. To make these dynamic updates more inclusive, developers should also update users of assistive technologies when something is happening away from their keyboard focus.

Imagine a scenario: You’re typing in an autocomplete widget and a list pops up, filtering options as you type. Pressing the down arrow key cycles through the available options, one by one. One technique to announce these selections would be to append messages to an ARIA live region27, a mechanism that screen readers can use to subscribe to changes in the DOM. As long as the live region exists when the element is rendered, any text appended to it with JavaScript will be announced (meaning you can’t bind aria-live and add the first message at the same time). This is essentially how Angular Material28’s autocomplete handles dynamic screen-reader updates:

<md-autocomplete md-selected-item="ctrl.selectedItem" aria-disabled="false">
  <md-autocomplete-wrap role="listbox">
    <input type="text" aria-label="{{ariaLabel}}" aria-owns="ul_001">
  </md-autocomplete-wrap>
  <ul role="presentation" id="ul_001">
    <li ng-repeat="(index, item) in $mdAutocompleteCtrl.matches" role="option" tabindex="0"></li>
  </ul>
  <aria-status class="visually-hidden" role="alert">
    <p ng-repeat="message in messages">{{message}}</p>
  </aria-status>
</md-autocomplete>

In the simplified code above (the full directive29 and related controller30 source are on GitHub), when a user types in the md-autocomplete text input, list items for results are added to a neighboring unordered list. Another neighboring element, aria-status, gets its aria-live functionality from the alert role. When results appear, a message is appended to aria-status announcing the number of items, “There is one match” or “There are four matches,” depending on the number of options. When a user arrows through the list, that item’s text is also appended to aria-status, announcing the currently highlighted item without the user having to move focus from the input. By curating the list of messages sent to an ARIA live region, we can implement an inclusive design that goes far beyond the visual. Similar regions can be used to validate forms.
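
Outside of Angular, the same pattern needs only a few lines. Here is a minimal, framework-free sketch (the #aria-status id is illustrative), remembering the caveat above that the region must already exist in the DOM before any message is appended:

// Assumes this is rendered with the page, before any announcements:
// <div id="aria-status" class="visually-hidden" role="status" aria-live="polite"></div>

function announce(message) {
  var status = document.getElementById('aria-status');
  var item = document.createElement('p');
  item.textContent = message;
  status.appendChild(item); // appended text is spoken by screen readers
}

// For example, after filtering autocomplete results:
announce('There are 4 matches');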

For more information on accessible client-side validation, read Marco Zehe’s “Easy ARIA Tip #3: aria-invalid and Role alert31” or Deque’s post on accessible forms32.

Conclusion

So far, we’ve talked about accessibility with screen readers and keyboards. Also consider readability: This includes color contrast, readable fonts and obvious interactions. In client-rendered applications, all of the typical web accessibility principles33 apply, in addition to the specific ones outlined above. The resources listed below will help you incorporate accessibility in your current or next project.

It is up to us as developers and designers to ensure that everyone can use our web applications. By knowing what makes an accessible user experience, we can serve a lot more people, and possibly even make their lives better. We need to remember that client-rendered frameworks aren’t always the right tool for the job. There are plenty of legitimate use cases for them, hence their popularity. There are definitely drawbacks to rendering everything on the client34. However, even as solutions for seamless server- and client-side rendering improve over time, these same accessibility principles of focus management, semantics and alerting the user will remain true, and they will enable more people to use your apps. Isn’t it cool that we can use our craft to help people through technology?

Resources

  • “Color Contrast Tips And Tools For Accessibility35,” Smashing Magazine
  • “Web Accessibility for Designers36,” WebAIM
  • Accessibility Developer Tools37, a Chrome extension
  • “Using WAI-ARIA in HTML38,” W3C
  • “How I Audit a Website for Accessibility39,” Substantial
  • “Using ngAria40,” AngularJS blog
  • “Angular Protractor Accessibility Plugin41,” Marcy Sutton

Thanks to Heydon Pickering for reviewing this article.

(hp, al, ml)

Footnotes

  1. https://www.smashingmagazine.com/wp-content/uploads/2015/05/whatever.gif
  2. https://www.smashingmagazine.com/wp-content/uploads/2015/05/whatever.gif
  3. http://www.smashingmagazine.com/2014/03/04/introduction-to-custom-elements/
  4. http://webaim.org/techniques/forms/controls
  5. http://www.w3.org/TR/aria-in-html/
  6. http://webaim.org/techniques/semanticstructure/
  7. http://www.w3.org/TR/wai-aria/
  8. http://assistivetechnology.about.com/od/SpeechRecognition/p/Dragon-Naturallyspeaking-As-Assistive-Technology.htm
  9. http://www.w3.org/WAI/PF/aria-practices/#keyboard
  10. https://chrome.google.com/webstore/detail/accessibility-developer-t/fpkknkljclfencbdbgkenhalefipecmb
  11. https://www.smashingmagazine.com/wp-content/uploads/2015/04/02-RaWOSKs-opt.png
  12. https://www.smashingmagazine.com/wp-content/uploads/2015/04/02-RaWOSKs-opt.png
  13. https://hacks.mozilla.org/2014/12/mozilla-and-web-components/
  14. http://caniuse.com/#feat=imports
  15. https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0361.html
  16. http://unobfuscated.blogspot.com/2015/03/polymer-and-web-component-accessibility.html
  17. http://www.paciellogroup.com/blog/2014/09/web-components-punch-list/
  18. https://www.polymer-project.org/0.5/articles/accessible-web-components.html
  19. http://webaim.org/techniques/keyboard/tabindex
  20. http://drinkdistiller.com
  21. http://learn.jquery.com/events/event-delegation/
  22. https://github.com/facebook/react/issues/1791#issuecomment-82987932
  23. https://www.smashingmagazine.com/wp-content/uploads/2015/05/cat-helping.gif
  24. https://www.smashingmagazine.com/wp-content/uploads/2015/05/cat-helping.gif
  25. http://tantek.com/2015/069/t1/js-dr-javascript-required-dead
  26. https://adactio.com/journal/8245
  27. https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/ARIA_Live_Regions
  28. https://material.angularjs.org/
  29. https://github.com/angular/material/blob/master/src/components/autocomplete/js/autocompleteDirective.js#L43
  30. https://github.com/angular/material/blob/master/src/components/autocomplete/js/autocompleteController.js
  31. https://www.marcozehe.de/2008/07/16/easy-aria-tip-3-aria-invalid-and-role-alert/
  32. http://www.deque.com/blog/accessible-client-side-form-validation-html5-wai-aria/
  33. http://webaim.org/intro/
  34. http://alistapart.com/article/let-links-be-links
  35. http://www.smashingmagazine.com/2014/10/22/color-contrast-tips-and-tools-for-accessibility/
  36. http://webaim.org/resources/designers/
  37. https://chrome.google.com/webstore/detail/accessibility-developer-t/fpkknkljclfencbdbgkenhalefipecmb
  38. http://www.w3.org/TR/aria-in-html/
  39. http://substantial.com/blog/2014/07/22/how-i-audit-a-website-for-accessibility/
  40. http://angularjs.blogspot.com/2014/11/using-ngaria.html
  41. http://marcysutton.com/angular-protractor-accessibility-plugin/

“It’s Alive!”: Apps That Feed Back Accessibly

It’s one thing to create a web application and quite another to create an accessible web application. That’s why Heydon Pickering1, both author and editor at Smashing Magazine, wrote an eBook Apps For All: Coding Accessible Web Applications2, outlining the roadmap for the accessible applications we should all be making.

The following is an extract from the chapter “It’s Alive!” from Heydon’s book, which explores how to use ARIA live regions. JavaScript applications are driven by events, and the user should be informed of what important events are happening in the interface. Live regions help us provide accessible messaging systems, keeping users informed of events in a way that is compatible with assistive technologies.

Getting The Message

Picture the scene: it’s a day like any other and you’re at your desk, enclosed in a semicircular bank of monitors that make up your extended desktop, intently cranking out enterprise-level CSS for MegaDigiSpaceHub Ltd. You are one of many talented front-end developers who share this floor in your plush London office.

You don’t know it, but a fire has broken out on the floor below you due to a “mobile strategist” spontaneously combusting. Since no expense was spared on furnishing the office with adorable postmodern ornaments, no budget remained for installing a fire alarm system. It is up to the floor manager in question to travel throughout the office, warning individual departments in person.

He does this by walking silently into each room, holding a business card aloft with the word “fire” written on it in 12pt Arial for a total of three seconds, then leaving. You and the other developers — ensconced behind your monitors — have no idea he even visited the room.

Three monitors for coding

What I cover in my eBook is, for the most part, about making using your websites and applications accessible. That is, we’re concerned with everyone being able to do things with them easily. However, it is important to acknowledge that when something is done (or simply happens), something else will probably happen as a result: there are actions and reactions.

“When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction to that of the first body.”

– Newton’s third law of motion (Newton’s laws of motion, Wikipedia3)

Providing feedback to users, to confirm the course they’ve taken, address the result of a calculation they’ve made or to insert helpful commentary of all sorts, is an important part of application design. The problem which needs to be addressed is that interrupting a user visually, by making a message appear on screen, is a silent occurrence. It is also one which — in the case of dialogs — often involves the activation of an element that originates from a completely remote part of the document, many DOM nodes away from the user’s location of focus.

To address these issues and to ensure users (unlike the poor developers in the introductory story) get the message, ARIA provides live regions4. As their name suggests, live regions are elements whose contents may change in the course of the application’s use. They are living things, so don’t always stand still. By adorning them with the appropriate ARIA attributes, these regions will interrupt the user to announce their changes as they happen.

In the following example, we will look at how to alert users to changes which they didn’t ask for, but — like the building being on fire — really ought to know about anyway.

Alert!

Perhaps the only thing worse than a fire that could happen to the office of a web development company would be losing connectivity to the web. Certainly, if I was working using an online application, I’d like to know the application will no longer behave in the way I expect and perhaps store my data properly. This is why Google Mail inserts a warning whenever you go offline. As noted in Marco Zehe’s 2008 blog post5, Google was an early adopter of ARIA live regions.

Yellow box reads “Unable to reach Gmail. Please check your internet connection.”

We are going to create a script which tests whether the user is online or off and uses ARIA to warn screen reader users of the change in this status so they know whether it’s worth staying at their desk or giving up and going for a beer.

The Setup

For live regions, ARIA provides a number of values for both the role and aria-live attributes. This can be confusing because there is some crossover between the two and some screen readers only support either the role or aria-live alternatives. It’s OK, there are ways around this.

At the most basic level, there are two common types of message:

  1. “This is pretty important but I’m going to wait and tell you when you’re done doing whatever it is you’re doing.”
  2. “Drop everything! You need to know this now or we’re all in big trouble. AAAAAAAAAAGHH!”

Mapped to the respective role and aria-live attributes, these common types are written as follows:

  1. “This is pretty important but I’m going to wait and tell you when you’re done doing whatever it is you’re doing.” (aria-live="polite" or role="status")
  2. “Drop everything! You need to know this now or we’re all in big trouble. AAAAAAAAAAGHH.” (aria-live="assertive" or role="alert")

When marking up our own live region, we’re going to maximize compatibility by putting both of the equivalent attributes and values in place. This is because, unfortunately, some user agents do not support one or other of the equivalent attributes. More detailed information on maximizing compatibility6 of live regions is available from Mozilla.

Since losing internet connectivity is a major disaster, we’re going to use the more aggressive form.

<div id="message" role="alert" aria-live="assertive" class="online">
    <p>You are online.</p>
</div>

The code above doesn’t alert in any way by itself: the contents of the live region would have to change dynamically for that to take place. The script below runs a check every three seconds to see whether it can load test_resource.html. If it fails to load it, or if it has failed previously but subsequently succeeds, it updates the live region’s class value and changes the wording of the paragraph. If you go offline unexpectedly, it will display <p>There’s no internets. Go to the pub!</p>.

The change will cause the contents of that #message live region to be announced, abruptly interrupting whatever else is currently being read on the page.

// Function to run when going offline
var offline = function() {
  if (!$('#message').hasClass('offline')) {
    $('#message') // the element with [role="alert"] and [aria-live="assertive"]
      .attr('class', 'offline')
      .text('There\'s no internets. Go to the pub!');
  }
};

// Function to run when back online
var online = function() {
  if (!$('#message').hasClass('online')) {
    $('#message') // the element with [role="alert"] and [aria-live="assertive"]
      .attr('class', 'online')
      .text('You are online.');
  }
};

// Test by trying to poll a file
function testConnection(url) {
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onload = function() { online(); };
  xmlhttp.onerror = function() { offline(); };
  xmlhttp.open("GET", url, true);
  xmlhttp.send();
}

// Loop the test every three seconds for "test_resource.html"
function start() {
  var rand = Math.floor(Math.random() * 90000) + 10000;
  testConnection('test_resource.html?fresh=' + rand);
  setTimeout(start, 3000);
}

// Start the first test
start();

Alert reads “Alert: there’s no internets. Go to the pub!”

There are more comprehensive ways to test to see if your application is online or not, including a dedicated script called offline.js7, but this little one is included for context. Note that some screen readers will prefix the announcement with “Alert!”, so you probably don’t want to include “Alert!” in the actual text as well, unless it’s really, really important information.

There is a demo of this example8 available.

test.css

We would like to maximize compatibility of live regions across browsers and assistive technologies. We can add a rule in our test.css to make sure equivalent attributes are all present like so:

[role="status"]:not([aria-live="polite"]), 
[role="alert"]:not([aria-live="assertive"]) 
	content: 'Warning: For better support, you should include
a politeness setting for your live region role using the
aria-live attribute'; [aria-live="polite"]:not([role="status"]), [aria-live="assertive"]:not([role="alert"]) content: 'Warning: For better support, you should
include a corresponding role for your aria-live
politeness setting';

I Want The Whole Story

“Taken out of context, I must seem so strange.”

– Fire Door by Ani DiFranco

By default, when the contents of a live region alter, only the nodes (HTML elements, to you and me) which have actually changed are announced. This is helpful behavior in most situations because you don’t want a huge amount of content reread to you just because a tiny part of it is different. In fact, if it’s all read out at once, how would you tell which part had changed? It would be like the memory tray game where you have to memorize the contents of a tray to recall which things were removed.

Tray full of bits of HTML

In some cases, however, a bit of context is desirable for clarification. This is where the aria-atomic attribute comes in. With no aria-atomic set, or with an aria-atomic value of false, only the elements which have actually changed will be notified to the user. When aria-atomic is set to true, all of the contents of the element with aria-atomic set on it will be read.

The term atomic is a little confusing. To be true means to treat the contents of this element as one, indivisible thing (an atom), not to smash the element into little pieces (atoms). Whether or not you think atomic is a good piece of terminology, the expected behavior is what counts and it is the first of the two behaviors which is defined.

One atom compared to lots of atoms

Gez Lemon offers a great example of aria-atomic9. In his example, we imagine an embedded music player which tells users what the currently playing track is, whenever it changes.

<div aria-live="polite" role="status" aria-atomic="true">
  <h3>Currently playing:</h3>
  <p>Jake Bugg — Lightning Bolt</p>
</div>

Even though only the name of the artist and song within the paragraph will change, because aria-atomic is set to true the whole region will be read out each time: “Currently playing: Jake Bugg — Lightning Bolt”. The “Currently playing” prefix is important for context.

Note that the politeness setting of the live region is polite not assertive as in the previous example. If the user is busy reading something else or typing, the notification will wait until they have stopped. It isn’t important enough to interrupt the user, not least because it’s their playlist: they might recognize all the songs anyway.

Box showing a graphic equalizer which reads “Currently playing: Jake Bugg — Lightning Bolt”

The aria-atomic attribute doesn’t have to be used on the same element that defines the live region, as in Lemon’s example. In fact, you could use aria-atomic on separate child elements within the same region. According to the specification:

“When the content of a live region changes, user agents SHOULD examine the changed element and traverse the ancestors to find the first element with aria-atomic set, and apply the appropriate behavior.”

Supported States and Properties10

This means we could also include another block within our live region to tell users which track is coming up next.

<div aria-live="polite" role="status">

   <div aria-atomic="true">
     <h3>Currently playing:</h3>
     <p>Jake Bugg — Lightning Bolt</p>
   </div>

   <div aria-atomic="true">
     <h3>Next in queue:</h3>
     <p>Napalm Death — You Suffer</p>
   </div>

</div>

Now, when Jake Bugg’s Lightning Bolt is nearing an end, we update the <p> within the next in queue block to warn users that Napalm Death are ready to take the mic: “Next in queue: Napalm Death — You Suffer”. As Napalm Death begin to play, the currently playing block also updates with their credentials and at the next available juncture the user is reminded that the noise they are being subjected to is indeed Napalm Death.

aria-busy

I was a bit mischievous using Napalm Death’s You Suffer as an example track because, at 1.316 seconds long, the world’s shortest recorded song would have ended before the live region could finish telling you it had started! If every track was that short, the application would go haywire.

In cases where lots of complex changes to a live region must take place before the result would be understandable to the user, you can include the aria-busy attribute11. You simply set this to true while the region is busy updating and back to false when it’s done. It’s effectively the equivalent of a loading spinner used when loading assets in JavaScript applications.

Typical loading spinner labelled aria-busy="true"

Usually you set aria-busy="true" before the first element (or addition) in the live region is loaded or altered, and false when the last expected element has been dealt with. In the case of our music player example, we’d probably want to set a timeout of ten seconds or so, making sure only music tracks longer than the announcement of those tracks get announced.

$('#music-info').attr('aria-busy', 'true');

// Update the song artist & title here, then...

setTimeout(function() {
  $('#music-info').attr('aria-busy', 'false');
}, 10000);

Buy The eBook

That concludes your extract from “It’s Alive!”, a chapter which goes on to explore the intricacies of designing accessible web-based dialogs. But that’s not all. There’s plenty more about creating accessible experiences in the book, from basic button control design to ARIA tab interfaces and beyond. Reviews for the eBook and purchasing options are available here12. The inimitable Bruce Lawson has written a lovely post13 about it, too.

Footnotes

  1. https://twitter.com/heydonworks
  2. https://shop.smashingmagazine.com/apps-for-all-coding-accessible-web-applications.html
  3. http://en.wikipedia.org/wiki/Newton%27s_laws_of_motion
  4. https://developer.mozilla.org/en-US/docs/Accessibility/ARIA/ARIA_Live_Regions
  5. http://www.marcozehe.de/2008/08/04/aria-in-gmail-1-alerts/
  6. https://developer.mozilla.org/en-US/docs/Accessibility/ARIA/ARIA_Live_Regions
  7. http://github.hubspot.com/offline/docs/welcome/
  8. http://heydonworks.com/practical_aria_examples/#offline-alert
  9. http://juicystudio.com/article/wai-aria_live-regions_updated.php
  10. http://www.w3.org/TR/wai-aria/states_and_properties#aria-atomic
  11. http://www.w3.org/TR/wai-aria/states_and_properties#aria-busy
  12. https://shop.smashingmagazine.com/apps-for-all-coding-accessible-web-applications.html
  13. http://www.brucelawson.co.uk/2014/apps-for-all-coding-accessible-web-applications-book-review/

Accessibility APIs: A Key To Web Accessibility

Web accessibility is about people. Successful web accessibility is about anticipating the different needs of all sorts of people, understanding your fellow web users and the different ways they consume information, empathizing with them and their sense of what is convenient and what frustratingly unnecessary barriers you could help them to avoid.

Armed with this understanding, accessibility becomes a cold, hard technical challenge. A firm grasp of the technology is paramount to making informed decisions about accessible design.

How do assistive technologies present a web application to make it accessible for their users? Where do they get the information they need? One of the keys is a technology known as the accessibility API (or accessibility application programming interface, to use its full formal title).

Reading The Screen

To understand the role of an accessibility API in making Web applications accessible, it helps to know a bit about how assistive technologies provide access to applications and how that has evolved over time.

A World of Text

With the text-based DOS operating system, the characters on the screen and the cursor position were held in a screen buffer in the computer’s memory. Assistive technologies could obtain this information by reading directly from the screen buffer or by intercepting signals being sent to a monitor. The information could then be manipulated — for example, magnified or converted into an alternative format such as synthetic speech.

Getting Graphic

The arrival of graphical interfaces such as OS/2, Mac OS and Windows meant that key information about what was on the screen could no longer be simply read from a buffer. Everything was now drawn on screen as a picture, including pictures of text. So, assistive technologies on those platforms had to find a new way to obtain information from the interface.

They dealt with this by intercepting the drawing calls sent to the graphics engine and using that information to create an alternate off-screen version of the interface. As applications made drawing calls through the graphics engine to draw text, carets, text highlights, drop-down windows and so on, information about the appearance of objects on the screen could be captured and stored in a database called an off-screen model. That model could be read by screen readers or used by screen magnifiers to zoom in on the user’s current point of focus within the interface. Rich Schwerdtfeger’s seminal 1991 article in Byte, “Making the GUI Talk1,” describes the then-emerging paradigm in detail.

Off-Screen Models

Recognizing the objects in this off-screen model was done through heuristic analysis. For example, the operating system might issue instructions to draw a rectangle on screen, with a border and some shapes inside it that represent text. A human might look at that object (in the context of other information on screen) and correctly deduce it is a button. The heuristics required for an assistive technology to make the same deduction are actually very complex, which causes some problems.

To inform a user about an object, an assistive technology would try to determine what the object is by looking for identifying information. For example, in a Windows application, the screen reader might present the Window Class name of an object. The assistive technology would also try to obtain information about the state of an object by the way it is drawn — for example, tracking highlighting might help deduce when an object has been selected. This works when an object’s role or state can easily be determined, but in many cases the relevant information is unclear, ambiguous or not available programmatically.

This reverse engineering of information is both fallible and restrictive. An assistive technology could implement support for a new feature only once it had been introduced into the operating system or application. An object might not convey useful information; in any case, it took time to identify the feature, develop the heuristics needed to support it and then ship a new version of the screen reader. This created a delay between the introduction of new features and assistive technologies’ ability to support them.

The off-screen model needs to shadow the graphics engine, but the engines don’t make this easy. The off-screen model has to independently calculate things like white-space management and alignment coordination, and errors would almost inevitably mount up. These errors could result in anomalies in the information conveyed to assistive technology users or in garbage buildup and memory leaks that lead to crashes.

Accessibility APIs

From the late 1990s, operating system accessibility APIs were introduced as a more reliable way to pass information to assistive technologies. Instead of applying complex heuristics to determine what an on-screen object might be, assistive technologies could query the accessibility API for specific information about each object. Authors could now provide the necessary information about an application in a form that they knew assistive technology would understand.

An accessibility API represents objects in a user interface, exposing information about each object within the application. Typically, there are several pieces of information for an object, including:

  • its role (for example, it might be a button, an application window or an image);
  • a name that identifies it within the interface (if there is a visible label like text on a button, this will typically be its name, but it could be encoded directly in the object);
  • its state or current condition (for example, a checkbox might currently be selected, partially selected or not selected).

The first platform accessibility API, Microsoft Active Accessibility (MSAA), was made available in a 1997 update to Windows 95. MSAA provided information about the role and state of objects and some of their properties. But it gave no access to things like text formatting, and the relationships between objects in the interface were difficult or impossible to determine.

In 1998, IBM and Sun Microsystems built a cross-platform accessibility API for Java. Java Swing 1.0 gave access to rich text information, relationships, tables, hyperlinks and more. The Java Jive screen reader, built on this platform, was the first time a screen reader’s information about the components of a user interface included role, state and associated properties, as well as rich text formatting details.

Notably, Java Jive was written by three developers in roughly five months; developing a screen reader through an off-screen model typically took several years.

Accessibility APIs Go Mainstream

In 2001 the Assistive Technology Service Provider Interface (AT-SPI) for Linux was released, based on the work done on Java, and in 2002 Apple included the NSAccessibility protocol with Mac OS X (10.2 Jaguar).

Meanwhile on Windows, the situation was getting complicated. Microsoft shipped the User Interface Automation (UIA) API as part of Windows 7, while IBM released IAccessible2 as an open standard for Windows and Linux, again evolved from the work done on Java.

Accessibility APIs existed for mobile platforms before touchscreen smartphones became dominant, but in 2009 Apple added the UI Accessibility API to iOS 3, and Android 1.6 (Donut) shipped with the Accessibility Framework.

By the beginning of 2015, Chrome OS stood out as the most mainstream platform lacking a standard accessibility API. But Google was beta testing its Automation API, intended to fill that gap in the platform.

Modern Accessibility APIs

In modern accessibility APIs, user interfaces are represented as a hierarchical tree. For example, an application window would contain several objects, the first of which might be a menu bar. The menu bar would contain a number of menus, each of which contains a number of menu items, and so on. The accessibility API describes an object’s relationship to other objects to provide context. For example, a radio button would probably be one “sibling” within a group.

Other features such as information about text formatting, applicable headers for content sections or table cells and things such as event notifications have all become commonplace in modern accessibility APIs.

Assistive technologies now make standard method calls to the operating system to get information about the objects on the screen. This is far more reliable, and far more efficient, than intercepting low-level operating system messages and trying to deconstruct them into something meaningful.

From The Web To The Accessibility API

In browsers, the platform accessibility API is used both to make information about the browser itself available to assistive technologies and to expose information about the currently rendered content.

Browsers typically support one or more of the available accessibility APIs for the platform they’re running on. For example, on Windows, Firefox, Chrome, Opera and Yandex support MSAA/IAccessible and IAccessible2, while Internet Explorer supports MSAA/IAccessible and UIAExpress. Safari and Chrome support NSAccessibility on OS X and UIAccessibility on iOS.

The browser uses the HTML DOM, along with further information derived from CSS, to generate an accessibility tree hierarchy of the content it is displaying, and it passes that information to the platform accessibility API. Information such as the role, name and state of each object in the content, as well as how it relates to other objects in the content, can then be queried by assistive technologies.

Let’s see how this works with some HTML:

<p><img src="mc.png" alt="My cat" longdesc="meeow.html">Rocks!</p>

We have an image, rendered as part of a paragraph. A browser exposes several pieces of information about the image to the accessibility API:

  1. It has a role of “image” (or “graphic” — details vary between platforms). This is implicitly determined from the fact that it is an HTML img element.
  2. Its name is “My cat”. For images, the name is typically derived from the alt attribute.
  3. A description is available on request, at the URL meeow.html (at the same “base” as the image).
  4. The parent is a paragraph element, with a role of “text.”
  5. The image has a “sibling” in the same container, the text node “Rocks!”
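
In rough pseudo-data form, the exposed information might look something like the following. This is a simplified illustration only; real platform APIs use platform-specific interfaces and naming, not JSON:

// Illustrative sketch of what the browser exposes for the paragraph above.
var accessibleTree = {
  role: 'text',                     // the <p> element
  children: [
    {
      role: 'image',                // from the <img> element
      name: 'My cat',               // from the alt attribute
      description: 'meeow.html'     // from longdesc, fetched on request
    },
    { role: 'text', name: 'Rocks!' } // the text node sibling
  ]
};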

An assistive technology would query the accessibility API for this information, which it would present so the user can interact with it. For example, a screen reader might announce, “Graphic: My cat. Description available.”

(Does a cat picture need a full description? Perhaps not, but try explaining that to people who really want to tell you just how amazing and talented their feline friends actually are — or those of their readers who want to know all about what this cat looks like! Meanwhile, the philistines among us can ignore the extra information.)

Roles

Most HTML elements have what are called “roles,” which are a way of describing elements. If you are familiar with WAI-ARIA, you will be aware of the role attribute, which sets a role explicitly. Most elements already have implicit roles, however, which go along with the element type. For example:

  • <ul> and <ol> have “list” as implicit role,
  • <a> has “link” or “hyperlink” as implicit role,
  • <body> has “document” as implicit role.

These role mappings are being standardized and documented in the W3C’s “HTML Accessibility API Mappings2” specification.

Names

While roles are typically derived from the type of HTML element, the name (sometimes referred to as the “accessible name”) of an object often comes from one of several different sources. In the case of a form field, the name is usually taken from the label associated with the field:

<input type="radio" id="tequila" name="drinks" checked>
<label for="tequila">Reposado</label>

In this example, a button has the “radio button” role. Its accessible name will be “Reposado,” the text content of the label element. So, when a speech-recognition tool is instructed to “Click Radio button Reposado,” it can target the correct object within the interface.

The checked attribute indicates the state of the button, so that a screen reader can announce “Radio button Reposado Checked” or allow a user to navigate directly between the checked options in order to rapidly review a form that contains multiple sets of radio buttons.

Authors have an important role to play, providing the key information that assistive technologies need. If authors don’t do the “right thing,” assistive technologies must look in other places to try to get an accessible name — if there is no label, then a title or some text content might be near the radio button, or its relationship to other elements might help the user through context.

It is important to note that authors should not rely on an assistive technology’s ability to do this, because it is generally unreliable. It is a “repair” strategy that gives assistive technology users some chance of using a poorly authored page or website, such as the following:

<p>How good is reposado?<br>
<!--BAD CODE EXAMPLE: DON'T DO THIS-->
<input type="radio" id="fantastic" name="reposado" checked >
<label for="reposado">Fantastic</label><br>
<input type="radio" id="notBad" name="tequila"><br>
<input type="radio" id="meh" name="tequila" title="meh"> Meh

Faced with this case, a screen reader might provide information such as “second of three options,” based on information that the browser provides to the accessibility API about the form. Little else can be determined reliably from the code, though.

Nothing in the code associates the question with the set of radio buttons, and nothing informs the browser of what the accessible name for the first two buttons should be. The for and id attributes of the <label> and <input> for the first button do not share a common value, and nothing associates the nearby text content with the second button. The browser could use the title of the third button as an accessible name, but it duplicates the nearby text and unnecessarily bloats the code.

A well-authored version of this would use the fieldset element to group the radio buttons and use a legend element to associate the question with the group. Each of the buttons would also have a properly associated label.

<fieldset><legend>How good is reposado?</legend>
<!-- THIS IS A BETTER WAY TO CODE THE EXAMPLE -->
<input type="radio" id="fantastic" name="reposado" checked>
<label for="fantastic">Fantastic</label><br>
<input type="radio" id="notBad" name="reposado">
<label for="notBad">Not bad</label><br>
<input type="radio" id="meh" name="reposado">
<label for="meh">Meh</label><br>
</fieldset>

Making this information available through the accessibility API is more efficient and less prone to error than relying on assistive technologies to create an off-screen model or guess at the information they need.

Conclusion

Today’s technologies — operating systems, browsers and assistive technologies — work together to extract accessibility information from a web interface and appropriately present it to the user. If appropriate content semantics are not available, then assistive technologies will use old and unreliable techniques to make the interface usable.

The value of accessibility APIs is in allowing the operating system, browser and assistive technology to efficiently and reliably give users the information they need. It is now easy to make an interface developed with well-written HTML, CSS and JavaScript very accessible and usable for assistive technology users. A big part of accessibility is, therefore, an easily met responsibility of web developers: Know your job, use your tools well, and many pieces will fall into place as if by magic.

With thanks to Rich Schwerdtfeger, Steve Faulkner and Dominic Mazzoni.

(hp, al, ml)

Footnotes

  1. http://www.paciellogroup.com/blog/2015/01/making-the-gui-talk-1991-by-rich-schwerdtfeger/
  2. http://rawgit.com/w3c/aria/master/html-aam/html-aam.html

Accessibility Originates With UX: A BBC iPlayer Case Study

Not long after I started working at the BBC, I fielded a complaint from a screen reader user who was having trouble finding a favorite show via the BBC iPlayer’s home page1. The website had recently undergone an independent accessibility audit which indicated that, other than the odd minor issue here and there, it was reasonably accessible.

I called the customer to establish what exactly the problem was, and together we navigated the home page using a screen reader. It was at that point I realized that, while all of the traditional ingredients of an accessible page were in place — headings, WAI ARIA Landmarks2, text alternatives and so on — it wasn’t very usable for a screen reader user.

iPlayer’s old home page. (View large version4)

The first issue was that the subnavigation was made up of only two links: “TV” and “Radio,” with links to other key areas such as “Categories,” “Channels” and “A to Z” buried further down the content order of the page, making them harder for the user to find.

iPlayer’s old home page showing “Categories,” “Channels” and “A to Z” far down the content order. (View large version6)

The second issue was how verbose the page was to the screen reader user. Instead of hearing a link to a program once, the program would be announced twice because the thumbnail image and the heading for the program were presented as two separate links. This made the page longer to listen to and was confusing because links to the same destination were worded differently.

iPlayer’s old home page showing duplicate links. (View large version8)

Finally, keyboard access on the page was illogical. In the “Categories” area, for example, a single click on a category would reveal four items in a panel next to it. To access the full list of items in that category, you had to click again on the same link to be taken to a listing page. This was a major hurdle for the user and the place where the customer I was talking to gave up using the application altogether.

iPlayer’s old home page showing the “Categories” links highlighted. (View large version10)

It was clear that, while the website had been built with accessibility in mind, it hadn’t been designed with accessibility in mind, and this is where the issues originated.

The Challenge

At the BBC, a number of internal standards and guidelines are in place that teams are required to follow when delivering accessible websites and mobile applications.

There is also a strong culture of accessibility; the BBC is a publicly funded organization14, and accessibility is considered central to its remit and is a stronger driver than any legal requirement. So, how did this happen?

Part of the issue is that standards and guidelines tend to focus more on code than design, more on output than outcome, more on compliance than experience. As such, technically compliant pages could be built that are not the most usable for disabled users.

It may not seem immediately obvious, but visual design can have a massive impact on users who cannot see the page. I often find that mobile applications and websites that are problematic to make accessible are the ones where the visual design, by dictating structure, does not allow it.

This does not mean that standards and guidelines are redundant — far from it. But what we have found at the BBC is that standards need to sit within, and inform, an accessibility framework that runs through product management, user experience, development and quality assurance. As such, accessibility originates with UX. Most of the thinking and requirements should be considered up front so that poor accessibility isn’t designed in.

While redesigning the BBC iPlayer website, renewed focus was given to inclusive design, which, while adhering to the BBC’s standards and guidelines, is driven by four principles (more on that below). We then distilled our standards and guidelines to create a focused list of requirements for the UX to follow. We also started to train designers to annotate their own designs for accessibility.

UX Principles

Our four main principles are the following:

  • Give users choice.
  • Put users in control.
  • Design with familiarity in mind.
  • Prioritize features that add value.

Give Users Choice

Never assume that just because users can access content one way, that is the only way they want to access it. Because BBC iPlayer has “audio described” and “sign language” formats, there was never any doubt that both of these should have their own dedicated listing pages, accessed via the “Categories” dropdown link. (Note that all on-demand content is subtitled, which is why there is no “Subtitled” category. Subtitles can be switched on in the media player.)

The “Categories” dropdown with “Audio Described” and “Signed” sections. (View large version16)

User research and feedback indicated, however, that although people want dedicated categories, they also want to be able to search for and browse content in the same way that any other users would and to select their preferred format from there. I have stayed in touch over the years with the gentleman who complained about the old iPlayer page, and he’s said himself, “Don’t send us into disability silos!”

This means that from the outset the designs need to signpost “Audio Description” and “Signed” content via search results, A to Z, category and other listing pages. It is important not to make assumptions about or stereotype users with disabilities — for instance, a person with a severe vision impairment might not always use audio description; news, sports, music programs and live events often aren’t supported by audio description because commentators already provide enriched commentary.

Alternative formats shown in listing pages17
List pages such as search, shown here, indicate what formats programs are available in. (View large version18)

On-demand pages also list alternative formats, allowing users to choose what they want. Looking ahead, the option to choose your format could also be included in the Standard Media Player19 — the BBC media player used for on-demand and live streaming video across all BBC products, including iPlayer.

Playback pages showing high definition and audio described formats20
Screenshot of the playback page showing HD and AD formats. (View large version21)

Put Users in Control

Never taking control away from the user is essential. A key aspect of this in iPlayer, which is responsive, is not suppressing pinch zoom. Time and again in user testing, we have observed users zooming content, even on responsive websites, where text might be intentionally larger.

The ability to pinch zoom was suppressed on many websites because of an iOS bug, rectified in iOS 6, that caused poor resizing when the orientation was changed from portrait to landscape. Now that this has been fixed, there is no reason to continue suppressing zoom.
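In practice, pinch zoom is usually suppressed through the viewport meta tag. A minimal sketch of the difference (the attribute values are typical examples, not taken from iPlayer’s source):

<!-- Suppresses pinch zoom; avoid this -->
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">

<!-- Leaves zoom under the user's control -->
<meta name="viewport" content="width=device-width, initial-scale=1">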

Another aspect of control is autoplay. While iPlayer currently has autoplay for live content, this can be a problem because the sound of the video can make it difficult for a screen reader user to hear their reader’s output. However, we do know of screen reader users who request autoplay because it means they don’t have to navigate to the player, find the play button and activate play. The answer is to look at ways to give users control over playback by opting in or out of autoplay, such as by using a popup and saving preferences with cookies.
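A minimal sketch of that opt-in pattern (the cookie name and the default of not autoplaying are assumptions, not iPlayer’s actual implementation):

// Read the stored autoplay preference; assume off when none is saved
function shouldAutoplay() {
  var match = document.cookie.match(/(?:^|;\s*)autoplay=(on|off)/);
  return match ? match[1] === 'on' : false;
}

// Persist the user's choice for a year
function saveAutoplayPreference(enabled) {
  document.cookie = 'autoplay=' + (enabled ? 'on' : 'off') +
    '; max-age=31536000; path=/';
}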

Design With Familiarity in Mind

There needs to be a balance between the new and the familiar. Users understand how to interact with pages and apps that use familiar design patterns. This is especially important in native apps for iOS and Android, where standard UI components come with accessibility built in.

Equally important is the language used across the BBC’s native iPlayer apps and responsive website. Where the platform allows, consistent labels for headings, links and buttons — not just visually, but also via alternatives for screen reader users — ensure that the experience is familiar and recognizably “BBC iPlayer,” regardless of the platform.

Tied into this, the new designs reinforce a logical heading structure within the code, which in turn supports navigation for screen reader users. Key to this is ensuring that the pattern used for the heading structure is repeated across pages, so that users do not find main headings in different places depending on what page they are on. While structure is typically viewed as a responsibility of developers, it needs to be decided before designs are signed off in order to prevent poor structure getting coded in — more on that later.

Prioritize Features That Add Value

Accessibility at the BBC is not just about meeting code, content and design requirements, but also about incorporating helpful features that add value for all users, including disabled users. A large proportion of the feedback we get from disabled users pertains to usability issues that anyone could experience on some level but that affect disabled users far more severely. When we incorporate features to help users with specific disabilities, everyone gains access to a richer and easier experience.

One obstacle that comes up time and again is finding a favorite show. I’ve spoken with many screen reader users who say they save shortcuts to their favorite shows on their desktop but, due to changing URLs, often lose content. A simple way to address this that benefits all users is to ensure that there is a mechanism for saving favorites on the website. Adding in options to sort favorites and list them the way you want further improves this. It may sound unrelated to accessibility, but it was the single most requested feature received from disabled users. Simply accessing the favorites page to watch the latest episode of something, rather than having to search the website, makes all the difference.

Sorting favourites using A to Z and recent options22
The “Favourites” page, with options to sort by “A to Z” and “Recent”. (View large version23)

Finding ways to allow people to get to the content they want more quickly has also influenced what is available within the media player itself. Once an episode has finished playing, exiting the media player and navigating back to the website to find the next episode is a massive overhead for some users. Adding a “More” button to the player itself — showing the next episode or programs similar to the current one — cuts down on the amount of effort it takes users to find new content.

The Standard Media Player plug in for related content24
The “You may also like” plugin shows related content and next episodes within the Standard Media Player. (View large version25)

One key feature that has added value to BBC iPlayer’s native iOS and Android apps, as well as the website (when viewed in Chrome), is support for Google Chromecast26. Being able to control what content you view on TV without having to use a remote or complex TV user interface is invaluable. Using one’s device of choice, whether it be iOS or Android, is much easier for a disabled user than using a remote control and a potentially inaccessible TV interface.

Chromecast on BBC iPlayer27
BBC iPlayer and Chromecast. (View large version28)

Guidelines

The principles above exist to create a mindset that helps product owners and UX practitioners alike when shaping and designing inclusive products. In addition to the four principles, a set of guidelines is used to design more accessible interfaces. The following are a subset taken from the “BBC Mobile Accessibility Standards and Guidelines29”:

  1. Color contrast
    Ensure that text and backgrounds exceed the WCAG Double A 4.5:1 contrast minimum.
  2. Color and meaning
    Information conveyed with color must also be identifiable from context or markup.
  3. Content order
    Content order must be logical.
  4. Structure
    When supported by the platform, pages must provide a logical and hierarchical heading structure.
  5. Containers and landmarks
    When supported by the platform, page containers or landmarks should be used to describe page structure.
  6. Duplicate links
    Controls, objects and grouped interface elements must be represented as a single component.
  7. Touch target size
    Targets must be large enough to touch accurately (44 pixels).
  8. Spacing
    An inactive space must surround all active elements (unless they are large blocks exceeding 44 pixels).
  9. Zoom
    Where zoom is supported by the platform, it must not be suppressed.
  10. Actionable elements
    Links and other actionable elements must be clearly distinguishable.

The New iPlayer

Keeping in mind this backdrop of principles and guidelines, along with the renewed focus on adding value and features that enhance the experience for disabled users, here are a few of the changes introduced in the BBC’s new iPlayer:

The new BBC iPlayer homepage30
The BBC’s new iPlayer home page has better content order, search tools, structure and keyboard access. (View large version31)

At launch, the iPlayer’s navigation housed the BBC’s channels, a “TV Guide,” “Favourites” and “Categories.” These all sit at the start of the page, high up in the content order. While they are visually easy to see, they are also easily discoverable by screen reader users via a hidden heading and labeled navigation landmark, sketched here (the class name used to hide the heading is illustrative):

<div role="navigation">
  <h2 class="visually-hidden">iPlayer navigation</h2>
  <!-- navigation links -->
</div>

Where previously the “Categories” were unusable for the screen reader user I spoke with, they are now prominent in the page and fully keyboard navigable. Since launch, the addition of more channels has meant that the channel links have been rehoused in their own dropdown menu.

Search tools have also been added, enabling users to carry out predictive search, browse A to Z or view their most recently watched program. This is all keyboard accessible, it makes use of headings, and it has landmarks where appropriate.

The home page carousel is also fully keyboard accessible. Each program in the stream is presented as one link, with the reading order of text starting with the primary information first: channel attribution, program name, episode information, abstract and program duration.
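A rough markup sketch of that pattern, with placeholder program details rather than the actual iPlayer code:

<a href="#">
  <span>BBC One</span>                             <!-- channel attribution -->
  <span>Programme Name</span>                      <!-- program name -->
  <span>Series 1: Episode 2</span>                 <!-- episode information -->
  <span>A one-line abstract of the episode.</span> <!-- abstract -->
  <span>29 mins</span>                             <!-- program duration -->
</a>

Because everything sits inside a single link, a screen reader announces the primary information first, and keyboard users tab once per program rather than through five separate stops.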

Work has also been carried out to improve visible focus and bring both the iPlayer website and the Standard Media Player in line with the BBC header and footer. The pink underline used for the hover and focus states in the main BBC navigation is now used within the Standard Media Player to indicate when a button is selected — for example, when the subtitles are switched on. This replaces the use of color only to indicate a selected state, which was indistinguishable from the hover and focus states.

BBC navigation hover and focus states32
The hover and focus pink underline used in the BBC header for iPlayer. (View large version33)
Hover and focus states used for the subtitle button on the Standard Media Player34
Active and inactive hover and focus states on the subtitle button in the Standard Media Player. (View large version35)

You can read more about what steps were taken to make iPlayer web-accessible36 and to make the Standard Media Player accessible37, including creation of an accessible media player in Flash38, on the BBC’s Internet Blog.

Annotated UX

All of the thinking around inclusive design that comes from product owners, UX practitioners and designers needs to be captured and communicated to developers and engineers. At the BBC, we are moving to a model where designs need to be annotated for accessibility. This includes:

  • headings,
  • containers,
  • content order,
  • color contrast,
  • alternatives to color and meaning,
  • visible focus,
  • keyboard and input interactions.
Annotated UX for the iPlayer homepage showing headings, lists, labels and content order39
An example of an annotated UX showing headings and labels. (View large version40)

The design above, showing an early version of the BBC One home page in iPlayer, outlines where the <h1> to <h6> headings should be. The UX practitioner doesn’t need an in-depth knowledge of code, but rather an understanding of the hierarchy of data within a page. As such, an equally acceptable approach would be to indicate the “main heading,” “secondary heading,” “third-level heading” and so on. Developers can then take this and translate it into semantic markup.
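As a rough sketch (the headings are illustrative, not the actual iPlayer markup), those annotations translate directly into heading tags:

<!-- “Main heading” -->
<h1>BBC One</h1>

<!-- “Secondary heading” -->
<h2>Featured</h2>

<!-- “Third-level heading” -->
<h3>Latest Episode</h3>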

Equally, indicating the logical order of content helps developers to code content in the right sequence (i.e. source order) — something that is essential to a screen reader or sighted keyboard user’s comprehension of the page.

Annotating the UX in this way is key to identifying designs that don’t allow for a logical page structure, content order or behavior. It is the first step to generating a style guide that documents focus states, colors and so on. Further down the line, these requirements can also be used to generate user acceptance criteria and automated quality assurance tests.

Even if you’re working in an agile way, where designs are iterative and not delivered in a complete form, annotation still works. As long as the basic framework of the page is well defined, the visual design can evolve from that.

Summary

It’s very easy to get bogged down by accessible output and to forget that, ultimately, accessibility is about people. As such, keep the following in mind, whether you are working in product, UX, development or quality assurance:

  • Design with choice in mind.
  • Always give users control over the page.
  • Prioritize features that add value for disabled users.
  • Design with familiarity in mind.
  • Integrate accessibility into annotated UX and style guides.
  • Make no assumptions. Test ideas and concepts.

Fostering these key principles across the entire team will go a long way to ensuring that products are inclusive and usable for disabled people. Listening to users and actively including their feedback, along with adhering to organizational standards and guidelines, are essential.

(hp, il, al, ml)

Footnotes

  1. http://www.bbc.co.uk/iplayer
  2. http://www.w3.org/TR/wai-aria/roles#landmark_roles
  3. http://www.smashingmagazine.com/wp-content/uploads/2015/02/101-iPlayerHomePage-opt.png
  4. http://www.smashingmagazine.com/wp-content/uploads/2015/02/101-iPlayerHomePage-opt.png
  5. http://www.smashingmagazine.com/wp-content/uploads/2015/02/102-iPlayerHomePage-opt.png
  6. http://www.smashingmagazine.com/wp-content/uploads/2015/02/102-iPlayerHomePage-opt.png
  7. http://www.smashingmagazine.com/wp-content/uploads/2015/02/103-iPlayerHomepage-opt.png
  8. http://www.smashingmagazine.com/wp-content/uploads/2015/02/103-iPlayerHomepage-opt.png
  9. http://www.smashingmagazine.com/wp-content/uploads/2015/02/104-iPlayerHomepage-opt.png
  10. http://www.smashingmagazine.com/wp-content/uploads/2015/02/104-iPlayerHomepage-opt.png
  11. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/
  12. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/screenreader.shtml
  13. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/mobile_access.shtml
  14. http://www.bbc.co.uk/corporate2/insidethebbc/whoweare
  15. http://www.smashingmagazine.com/wp-content/uploads/2015/02/105-iPLayerHomePage-categories-opt.png
  16. http://www.smashingmagazine.com/wp-content/uploads/2015/02/105-iPLayerHomePage-categories-opt.png
  17. http://www.smashingmagazine.com/wp-content/uploads/2015/02/106-iPlayerListings-opt.png
  18. http://www.smashingmagazine.com/wp-content/uploads/2015/02/106-iPlayerListings-opt.png
  19. http://www.bbc.co.uk/blogs/internet/posts/Standard-Media-Player
  20. http://www.smashingmagazine.com/wp-content/uploads/2015/02/107-iPlayerMediaPlayer-opt.png
  21. http://www.smashingmagazine.com/wp-content/uploads/2015/02/107-iPlayerMediaPlayer-opt.png
  22. http://www.smashingmagazine.com/wp-content/uploads/2015/02/108-iPlayerFavourites-opt.png
  23. http://www.smashingmagazine.com/wp-content/uploads/2015/02/108-iPlayerFavourites-opt.png
  24. http://www.smashingmagazine.com/wp-content/uploads/2015/02/109-iPlayerMediaPlayerPlugin-opt.png
  25. http://www.smashingmagazine.com/wp-content/uploads/2015/02/109-iPlayerMediaPlayerPlugin-opt.png
  26. http://www.bbc.co.uk/blogs/internet/posts/Accessibility-on-BBC-iPlayer-on-Chromecast
  27. http://www.smashingmagazine.com/wp-content/uploads/2015/02/110-iPlayerChromecast-opt.jpg
  28. http://www.smashingmagazine.com/wp-content/uploads/2015/02/110-iPlayerChromecast-opt.jpg
  29. http://www.bbc.co.uk/guidelines/futuremedia/accessibility/mobile
  30. http://www.smashingmagazine.com/wp-content/uploads/2015/02/111-iPLayerHomepage-opt.png
  31. http://www.smashingmagazine.com/wp-content/uploads/2015/02/111-iPLayerHomepage-opt.png
  32. http://www.smashingmagazine.com/wp-content/uploads/2015/02/112-iPlayerNavigationFocusState-opt.png
  33. http://www.smashingmagazine.com/wp-content/uploads/2015/02/112-iPlayerNavigationFocusState-opt.png
  34. http://www.smashingmagazine.com/wp-content/uploads/2015/02/113-iPlayerHoverStates-opt.png
  35. http://www.smashingmagazine.com/wp-content/uploads/2015/02/113-iPlayerHoverStates-opt.png
  36. http://www.bbc.co.uk/blogs/internet/posts/Making-the-new-iPlayer-accessible-for-all-users
  37. http://www.bbc.co.uk/blogs/internet/posts/Standard-Media-Player-accessibility
  38. http://www.bbc.co.uk/blogs/internet/posts/Creating-an-accessible-media-player-in-Flash
  39. http://www.smashingmagazine.com/wp-content/uploads/2015/02/114-iPlayerCarousel-opt.png
  40. http://www.smashingmagazine.com/wp-content/uploads/2015/02/114-iPlayerCarousel-opt.png

The post Accessibility Originates With UX: A BBC iPlayer Case Study appeared first on Smashing Magazine.

Read this article:  

Accessibility Originates With UX: A BBC iPlayer Case Study


Enhancing User Experience With The Web Speech API

It’s an exciting time for web APIs, and one to watch out for is the Web Speech API. It enables websites and web apps not only to speak to you, but to listen, too. It’s still early days, but this functionality is set to open a whole array of use cases. I’d say that’s pretty awesome.

In this article, we’ll look at the technology and its proposed usage, as well as some great examples of how it can be used to enhance the user experience.

Image credit: Sebastian Schöld2

Disclaimer: This technology is pretty cutting-edge, and the specification is currently with the W3C as an “unofficial editor’s draft” (as of 6 June 2014). The likelihood that usage will differ slightly from the code snippets in this article is high. Checking the specification3 and testing thoroughly before releasing code are always wise.

Speech Synthesis

The API comes in two parts. To start, let’s look at the speech synthesis part, the bit that speaks to you. If your website has some textual content — whether body copy, form inputs, alt tags, etc. — you could run some lovely functions and the device would speak the words to the user.

Let’s look at some of the code needed to make this happen. First, you would create a new instance of the SpeechSynthesisUtterance interface. Then, you would specify the text to be spoken. Then, you would add this instance to a queue, which tells the browser what to speak and when.

Below I have wrapped all of this in a function for us to call, named speak, with the text we want spoken as a parameter.

function speak(textToSpeak) {
   // Create a new instance of SpeechSynthesisUtterance
   var newUtterance = new SpeechSynthesisUtterance();

   // Set the text
   newUtterance.text = textToSpeak;

   // Add this text to the utterance queue
   window.speechSynthesis.speak(newUtterance);
}

All we need to do now is call this function and pass in some words to be spoken:

speak('Welcome to Smashing Magazine');

More functionality is available, too. You can pause, resume and cancel the utterance queue via window.speechSynthesis, as well as set the language, rate and voice for each utterance. Starting, finishing, pausing or resuming an utterance fires an event that you can hook into, as does a change in the available voices. Plenty to play around with!
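For instance, a short sketch of configuring and observing an utterance (the property values chosen here are arbitrary):

var utterance = new SpeechSynthesisUtterance('Welcome back');

// Language, rate (0.1–10) and pitch (0–2) are set per utterance
utterance.lang = 'en-GB';
utterance.rate = 1.2;
utterance.pitch = 0.8;

// Hook into queue events
utterance.onstart = function () { console.log('Started speaking'); };
utterance.onend = function () { console.log('Finished speaking'); };

window.speechSynthesis.speak(utterance);

// Pause and resume the whole queue
window.speechSynthesis.pause();
window.speechSynthesis.resume();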

At the moment, speech synthesis is supported only in Chrome and Safari (both on desktop and mobile devices). Also, the voices available to you via the API largely depend on the operating system. Google has its own set of default voices for Chrome, available on Mac OS X, Windows and Ubuntu. However, on OS X the operating system’s voices are also available to Chrome, so the set matches what you get in Safari on OS X. You can easily see which voices are available in the Developer Tools console:

window.speechSynthesis.getVoices();

Tip: If you’re on OS X, check out the voice “Zarvox.”
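One caveat: in Chrome, getVoices() may return an empty array until the voice list has loaded, so it is safer to also listen for the voiceschanged event:

// Voices load asynchronously in Chrome
window.speechSynthesis.onvoiceschanged = function () {
  window.speechSynthesis.getVoices().forEach(function (voice) {
    console.log(voice.name + ' (' + voice.lang + ')');
  });
};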

Speech Recognition

The other part of the Web Speech API is speech recognition, which enables the user to speak into the device’s microphone and have their speech recognized by the website or web app.

Let’s run through some code. This time, we’ll create a new instance of the SpeechRecognition interface. Because this part is supported only in Chrome, we’ll have to include the webkit prefix.

var newRecognition = new webkitSpeechRecognition();

SpeechRecognition comes with quite a few attributes. One that we are likely to change is continuous, whose default state of false means that the browser will stop listening after a break in speech. If you want your website or web app to keep listening, then set the attribute to true:

newRecognition.continuous = true;

To start and stop speech recognition, call the start() and stop() methods:

// start recognition
newRecognition.start();

// stop recognition
newRecognition.stop();

Again, we can hook into plenty of events, such as soundstart, speechstart, result and error. I have prepared a demo4 that shows how to access the words detected, from the result event method. The code goes on to match the words spoken against some simple navigation, activating the appropriate link if detected.
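A minimal version of that pattern (with the navigation matching left out) looks like this:

newRecognition.onresult = function (event) {
  // Results accumulate while recognition runs; read the newest one
  var lastResult = event.results[event.results.length - 1];
  var transcript = lastResult[0].transcript.trim();
  console.log('Heard: ' + transcript);
};

newRecognition.onerror = function (event) {
  console.log('Recognition error: ' + event.error);
};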

Uses

Dictation

At the moment, the most common use of the Speech API is as a dictation or reading mechanism. That is, the user speaks into the mic and the device translates the speech into text (as demoed by Chrome’s development team5), or the user passes in text to be read out by the device.

Having a device speak out some information definitely has its advantages. Imagine your mirror telling you what the weather will be like first thing in the morning.

Plenty of car manufacturers have installed text-to-speech capabilities over the last couple of years. Imagine, in the not-too-distant future, your browser’s reading list being read out to you as you drive.

Voice Control

Dictation could easily be turned into voice control, as we saw with the recognition demo above, which could be modified to allow for navigation around a website. Add it to web-enabled TVs and we might just be living in the 2015 of Back to the Future 2.

I’m fortunate to work with some very talented colleagues, one of whom created a tennis scoring app. I was delighted to find that he could control the app with his voice, speaking the score out loud as he was playing a game.

Translation

Translation would look very different when done in real time. Someone could converse in one language, and another person’s device would speak out what is being said in their own language. Hook that up to a Bluetooth earpiece, and eat your heart out, Arthur Dent6. We’re getting a little closer to each person having their own Babel fish7.

Limitations

Offline capability needs more consideration. As it stands, Chrome sends the recorded audio to its servers and pings back the result. Thus, an Internet connection is needed for it to work — not ideal.

Conclusion

Nevertheless, it is still exciting, and the technology is opening up. I look forward to the day when looking for the remote is a thing of the past, and I can just tell the TV to stream the latest Sin City movie.

Would we actually use the web for this? Why not? It’s already universal. You can take the web and its speech wherever you go.

I have met some resistance when talking about this API. People either can’t see a need for it with the web, or they would feel uncomfortable talking to their device — both valid views. However, I hope I have inspired you to at least give it a go and think about it the next time you are building something. Start welcoming speech: It might be just what you’re listening for.

(ml, al, il)

Footnotes

  1. http://slides.com/schold/web-speech-api#/
  2. http://slides.com/schold/web-speech-api#/
  3. https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi.html
  4. http://codepen.io/Rumyra/pen/bCphe
  5. https://www.google.com/intl/en/chrome/demos/speech.html
  6. http://en.wikipedia.org/wiki/Arthur_Dent
  7. http://en.wikipedia.org/wiki/List_of_races_and_species_in_The_Hitchhiker%27s_Guide_to_the_Galaxy#Babel_fish

The post Enhancing User Experience With The Web Speech API appeared first on Smashing Magazine.

View original – 

Enhancing User Experience With The Web Speech API


Design Accessibly, See Differently: Color Contrast Tips And Tools

When you browse your favorite website or check the latest version of your product on your device of choice, take a moment to look at it differently. Step back from the screen. Close your eyes slightly so that your vision is a bit clouded by your eyelashes.

  • Can you still see and use the website?
  • Are you able to read the labels, fields, buttons, navigation and small footer text?
  • Can you imagine how someone who sees differently would read and use it?

In this article, I’ll share one aspect of design accessibility: making sure that the look and feel (the visual design of the content) are sufficiently inclusive of differently sighted users.

Web page viewed with NoCoffee low-vision simulation1
Web page viewed with NoCoffee low-vision simulation. (View large version2)

I am a design consultant on PayPal’s accessibility team. I assess how our product designs measure up to the Web Content Accessibility Guidelines (WCAG) 2.0, and I review our company’s design patterns and best practices.

I created our “Designers’ Accessibility Checklist,” and I will cover one of the most impactful guidelines on the checklist in this article: making sure that there is sufficient color contrast for all content. I’ll share the strategies, tips and tools that I use to help our teams deliver designs that most people can see and use without having to customize the experiences.

Our goal is to make sure that all visual designs meet the minimum color-contrast ratio for normal and large text on a background, as described in the WCAG 2.0, Level AA, “Contrast (Minimum): Understanding Success Criterion 1.4.3.”

Who benefits from designs that have sufficient contrast? Quoting from the WCAG’s page:

The 4.5:1 ratio is used in this provision to account for the loss in contrast that results from moderately low visual acuity, congenital or acquired color deficiencies, or the loss of contrast sensitivity that typically accompanies aging.

As an accessibility consultant, I’m often asked how many people with disabilities use our products. Website analytics do not reveal this information. Let’s estimate how many people could benefit from designs with sufficient color contrast by reviewing the statistics:

  • 15% of the world’s population have some form of disability4, which includes conditions that affect seeing, hearing, motor abilities and cognitive abilities.
  • About 4% of the population have low vision, whereas 0.6% are blind.
  • 7 to 12% of men have some form of color-vision deficiency (color blindness), and less than 1% of women do.
  • Low-vision conditions increase with age, and half of people over the age of 50 have some degree of low-vision condition.
  • Worldwide, the fastest-growing population is 60 years of age and older5.
  • Over the age of 40, almost everyone will find that they need reading glasses or bifocals to clearly see small objects or text, a condition caused by the natural aging process, called presbyopia6.

Let’s estimate that 10% of the world population would benefit from designs that are easier to see. Multiply that by the number of customers or potential customers who use your website or application. For example, out of 2 million online customers, 200,000 would benefit.

Some age-related low-vision conditions7 include the following:

  • Macular degeneration
    Up to 50% of people are affected by age-related vision loss.
  • Diabetic retinopathy
    In people with diabetes, leaking blood vessels in the eyes can cloud vision and cause blind spots.
  • Cataracts
    Cataracts cloud the lens of the eye and decrease visual acuity.
  • Retinitis pigmentosa
    This inherited condition gradually causes a loss of vision.

All of these conditions reduce sensitivity to contrast, and in some cases reduce the ability to distinguish colors.

Color-vision deficiencies, also called color blindness, are mostly inherited but can also be caused by side effects of medication and by age-related low-vision conditions.

Here are the types of color-vision deficiencies8:

  • Deuteranopia
    This is the most common and entails a reduced sensitivity to green light.
  • Protanopia
    This is a reduced sensitivity to red light.
  • Tritanopia
    This is a reduced sensitivity to blue light, but not very common.
  • Achromatopsia
    People with this condition cannot see color at all, but it is not very common.

Reds and greens or colors that contain red or green can be difficult to distinguish for people with deuteranopia or protanopia.

Experience Seeing Differently

Creating a checklist and asking your designers to use it is easy, but in practice how do you make sure everyone follows the guidelines? We’ve found it important for designers not only to intellectually understand the why, but to experience for themselves what it is like to see differently. I’ve used a couple of strategies: immersing designers in interactive experiences through our Accessibility Showcase, and showing what designs look like using software simulations.

In mid-2013, we opened our PayPal Accessibility Showcase9 (video). Employees get a chance to experience first-hand what it is like for people with disabilities to use our products by interacting with web pages using goggles and/or assistive technology. We require that everyone who develops products participates in a tour. The user scenarios for designing with sufficient color contrast include wearing goggles that simulate conditions of low or partial vision and color deficiencies. Visitors try out these experiences on a PC, Mac or tablet. For mobile experiences, visitors wear the goggles and use their own mobile devices.

Fun fact: One wall in the showcase was painted with magnetic paint. The wall contains posters, messages and concepts that we want people to remember. At the end of the tour, I demonstrate vision simulators on our tablet. I view the message wall with the simulators to emphasize the importance of sufficient color contrast.

Showcase visitors wear goggles that simulate low-vision and color-blindness conditions
Showcase visitors wear goggles that simulate low-vision and color-blindness conditions.
Some of the goggles used in the Accessibility Showcase
Some of the goggles used in the Accessibility Showcase.

Software Simulators

Mobile Apps

Free mobile apps are available for iOS and Android devices:

  • Chromatic Vision Simulator
    Kazunori Asada’s app simulates three forms of color deficiencies: protanope (protanopia), deuteranope (deuteranopia) and tritanope (tritanopia). You can view and then save simulations using the camera feature, which takes a screenshot in the app. (Available for iOS10 and Android11.)
  • VisionSim
    The Braille Institute’s app simulates a variety of low-vision conditions and provides a list of causes and symptoms for each condition. You can view and then save simulations using the camera feature, which takes a screenshot in the app. (Available for iOS12 and Android13.)

Chromatic Vision Simulator

The following photos show orange and green buttons viewed through the Chromatic Vision Simulator:

Seen through Chromatic Vision Simulator, the green and orange buttons show normal (C), protanope (P), deuteranope (D) and tritanope (T).14
Seen through Chromatic Vision Simulator, the green and orange buttons show normal (C), protanope (P), deuteranope (D) and tritanope (T). (View large version15)

This example highlights the importance of another design accessibility guideline: Do not use color alone to convey meaning. If these buttons were online icons representing a system’s status (such as up or down), some people would have difficulty understanding it because there is no visible text and the shapes are the same. In this scenario, include visible text (i.e. text labels), as shown in the following example:

The green and orange buttons are viewed in Photoshop with deuteranopia soft proof and normal (text labels added).16
The green and orange buttons are viewed in Photoshop with deuteranopia soft proof and normal (text labels added). (View large version17)
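In markup terms, a minimal sketch of the fix (the class names are hypothetical) pairs the color-coded shape with visible text:

<!-- Color and shape alone: ambiguous for many users -->
<span class="status-icon status-up"></span>

<!-- Color plus a visible text label -->
<span class="status-icon status-up">Up</span>
<span class="status-icon status-down">Down</span>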

Mobile Device Simulations

Checking for sufficient color contrast becomes even more important on mobile devices. Viewing mobile applications through VisionSim or Chromatic Vision Simulator is easy if you have two mobile phones. View the mobile app that you want to test on the second phone running the simulator.

If you only have one mobile device, you can do the following:

  1. Take screenshots of the mobile app on the device using the built-in camera.
  2. Save the screenshots to a laptop or desktop.
  3. Open and view the screenshots on the laptop, and use the simulators on the mobile device to view and save the simulations.

How’s the Weather in Cupertino?

The following example highlights the challenges of using a photograph as a background while making essential information easy to see. Notice that the large text and bold text are easier to see than the small text and small icons.

The Weather mobile app, viewed with Chromatic Vision Simulator, shows normal, deuteranope, protanope and tritanope simulations.18
The Weather mobile app, viewed with Chromatic Vision Simulator, shows normal, deuteranope, protanope and tritanope simulations. (View large version19)

Low-Vision Simulations

Using the VisionSim app, you can simulate macular degeneration, diabetic retinopathy, retinitis pigmentosa and cataracts.

The Weather mobile app is being viewed with the supported condition simulations.20
The Weather mobile app is being viewed with the supported condition simulations. (View large version21)

Adobe Photoshop

PayPal’s teams use Adobe Photoshop to design the look and feel of our user experiences. To date, a color-contrast ratio checker or tester is not built into Photoshop. But designers can use a couple of helpful features in Photoshop to check their designs for sufficient color contrast:

  • Convert designs to grayscale by selecting “Image” → “Mode” → “Grayscale.”
  • Simulate color-blindness conditions by selecting “View” → “Proof Setup” → “Color Blindness” and choosing the protanopia or deuteranopia type. Adobe provides soft-proofs for color blindness22.

Examples

If you’re designing with gradient backgrounds, verify that the color-contrast ratio passes for the text color and background color on both the lightest and darkest part of the gradient covered by the content or text.

In the following example of buttons, the first button has white text on a background with an orange gradient, which does not meet the minimum color-contrast ratio. A couple of suggested improvements are shown:

  • add a drop-shadow color that passes (center button),
  • change the text to a color that passes (third button).

Checking in Photoshop with the grayscale and deuteranopia proof, the modified versions with the drop shadow and dark text are easier to read than the white text.

If you design in sizes larger than actual production sizes, make sure to check how the design will appear in the actual web page or mobile device.

Button with gradients: normal view; view in grayscale; and as a proof, deuteranopia.23
Button with gradients: normal view; view in grayscale; and as a proof, deuteranopia. (View large version24)

In the following example of a form, the body text and link text pass the minimum color-contrast ratio for both the white and the gray background. I advise teams to always check the color contrast of text and links against all background colors that are part of the experience.

Even though the “Sign Up” link passes, if we view the experience in grayscale or with proof deuteranopia, distinguishing that “Sign Up” is a link might be difficult. To improve the affordance of “Sign Up” as a link, underline the link or link the entire phrase, “New to PayPal? Sign Up.”

Form example: normal view; in Photoshop, a view in grayscale; and as a proof, deuteranopia.25
Form example: normal view; in Photoshop, a view in grayscale; and as a proof, deuteranopia. (View large version26)
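A short markup sketch of those two options (the URL and class name are placeholders):

<!-- Underline the link so that it does not rely on color alone -->
<p>New to PayPal? <a class="underlined" href="/signup">Sign Up</a></p>

<!-- Or link the entire phrase -->
<p><a href="/signup">New to PayPal? Sign Up</a></p>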

Because red and green can be more difficult to distinguish for people with conditions such as deuteranopia and protanopia, should we avoid using them? Not necessarily. In the following example, a red minus sign (“-”) indicates purchasing or making a payment. Money received or refunded is indicated by a green plus sign (“+”). Viewing the design with the deuteranopia proof, the colors are not easy to distinguish, but the shapes are legible and unique. Next to the date, the description describes the type of payment. Both shape and content provide context for the information.

Also shown in this example, the rows for purchases and refunds alternate between white and light-gray backgrounds. If the same color text is used for both backgrounds, verify that all of the text colors pass for both white and gray backgrounds.

Normal view and as a proof, deuteranopia: Check the text against the alternating background colors.27
Normal view and as a proof, deuteranopia: Check the text against the alternating background colors. (View large version28)

In some applications, form fields and/or buttons may be disabled until information has been entered by the user. Our design guidance does not require disabled elements to pass, in accordance with the WCAG 2.0’s “Contrast (Minimum): Understanding Success Criterion 1.4.3”:

Incidental: Text or images of text that are part of an inactive user interface component,… have no contrast requirement.

In the following example of a mobile app’s form, the button is disabled until a phone number and PIN have been entered. The text labels for the fields are a very light gray over a white background, which does not pass the minimum color-contrast ratio.

If the customer interprets that form elements with low contrast are disabled, would they assume that the entire form is disabled?

Mobile app form showing disabled fields and button (left) and then enabled (right).30
Mobile app form showing disabled fields and button (left) and then enabled (right). (View large version31)

The same mobile app form is shown in a size closer to what I see on my phone in the following example. At a minimum, the text color needs to be changed or darkened to pass the minimum color-contrast ratio for normal body text and to improve readability.

To help distinguish between labels in fields and user-entered information, try to explore alternative visual treatments of form fields. Consider reversing foreground and background colors or using different font styles for labels and for user-entered information.

Mobile app form example: normal, grayscale and proof deuteranopia.32
Mobile app form example: normal, grayscale and proof deuteranopia. (View large version33)

NoCoffee Vision Simulator for Chrome

NoCoffee Vision Simulator34 can be used to simulate color-vision deficiencies and low-vision conditions on any pages that are viewable in the Chrome browser. Using the “Color Deficiency” setting “achromatopsia,” you can view web pages in grayscale.

The following example shows the same photograph (featuring a call to action) viewed with some of the simulations available in NoCoffee. The message and call to action are separated from the background image by a practically opaque black container. This improves readability of the message and call to action. The blue of the headline, tested against solid black, passes the color-contrast check for large text. Note that the link “Mobile” is not as easy to see because the blue does not pass the color-contrast standard for small body text. Possible improvements could be to change the link color to white and underline it, and/or make the entire phrase “Read more about Mobile” a link.

Simulating achromatopsia (no color), deuteranopia, protanopia using NoCoffee.35
Simulating achromatopsia (no color), deuteranopia, protanopia using NoCoffee. (View large version36)
Simulating low visual acuity, diabetic retinopathy, macular degeneration and low visual acuity plus retinitis pigmentosa, using NoCoffee.37
Simulating low visual acuity, diabetic retinopathy, macular degeneration and low visual acuity plus retinitis pigmentosa, using NoCoffee. (View large version38)

Using Simulators

Simulators are useful tools to visualize how a design might be viewed by people who are aging, have low-vision conditions or have color-vision deficiencies.

For design reviews, I use the simulators to mock up a design in grayscale, and I might use color-blindness filters to show designers possible problems with color contrast. Some of the questions I ask are:

  • Is anything difficult to read?
  • Is the call to action easy to find and read?
  • Are links distinguishable from other content?

After learning how to use simulators to build empathy and to see their designs differently, I ask designers to use tools to check color contrast to verify that all of their designs meet the minimum color-contrast ratio of the WCAG 2.0 AA. The checklist includes a couple of tools they can use to test their designs.

Color-Contrast Ratio Checkers

The tools we cite in the designers’ checklist are these:

  • the WebAIM Color Contrast Checker,
  • the TPG Colour Contrast Analyser.

There are many tools to check color contrast, including ones that check live products. I’ve kept the list short to make it easy for designers to know what to use and to allow for consistent test results.

Our goal is to meet the WCAG 2.0 AA color-contrast ratio, which is 4.5 to 1 for normal text and 3 to 1 for large text.
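For reference, the ratio is derived from the relative luminance of the two colors, as defined by the WCAG. A small JavaScript sketch for spot-checking values:

// Relative luminance per WCAG 2.0; rgb is an [R, G, B] array with 0–255 channels
function relativeLuminance(rgb) {
  var channels = rgb.map(function (channel) {
    var c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05)
function contrastRatio(foreground, background) {
  var l1 = relativeLuminance(foreground);
  var l2 = relativeLuminance(background);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// Light gray (#999999) on white fails AA for normal text, as we'll see below:
console.log(contrastRatio([153, 153, 153], [255, 255, 255]).toFixed(2)); // ≈ 2.85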

What are the minimum sizes for normal text and large text? The WCAG’s “Contrast (Minimum): Understanding Success Criterion 1.4.3” provides recommendations on size ratios but no rule for a minimum size for body text. As noted in the WCAG’s guidance, thin decorative fonts might need to be larger and/or bold.

Testing Color-Contrast Ratio

You should test:

  • early in the design process;
  • when creating a visual design specification for any product or service (this documents all of the color codes and the look and feel of the user experience);
  • all new designs that are not part of an existing visual design guideline.

Test Hexadecimal Color Codes for Web Designs

Let’s use the WebAIM Color Contrast Checker39 to test sample body-text colors on a white background (#FFFFFF):

  • dark-gray text (#333333).
  • medium-gray text (#666666).
  • light-gray text (#999999).

We want to make sure that body and normal text passes the WCAG 2.0 AA. Note that light gray (#999999) does not pass on a white background (#FFFFFF).

Test dark-gray, medium-gray and light-gray using the WebAIM Color Contrast Checker.43
Test dark-gray, medium-gray and light-gray using the WebAIM Color Contrast Checker. (View large version44)

In the tool, you can modify the light gray (#999999) to find a color that does pass the AA. Select the “Darken” option to slightly change the color until it passes. By clicking the color field, you will have more options, and you can change color and luminosity, as shown in the second part of this example.

Modify colors to pass45
In the WebAIM Color Contrast Checker, modify the light gray using the “Darken” option, or use the color palette to find a color that passes. (View large version46)

Tabular information may be designed with alternating white and gray backgrounds to improve readability. Let’s test medium-gray text (#666666) and light-gray text (#757575) on a gray background (#E6E6E6).

Note that on the same background, the medium gray passes, but the lighter gray passes only for large text. In this case, use medium gray for body text on white or gray backgrounds, and use the lighter gray only for large text, such as headings, on white and gray backgrounds.

Test light-gray and medium-gray text on a gray background.47
Test light-gray and medium-gray text on a gray background. (View large version48)

Test RGB Color Codes

For mobile applications, designers might use RGB color codes to specify visual designs for engineering. You can use the TPG Colour Contrast Checker49. You will need to install either the PC or Mac version and run it side by side with Photoshop.

Let’s use the Colour Contrast Checker to test medium-gray text (102 102 102 in RGB and #666666 in hexadecimal) and light-gray text (#757575 in hexadecimal) on a gray background (230 230 230 in RGB and #E6E6E6 in hexadecimal).

  1. Open the Colour Contrast Checker application.
  2. Select “Options” → “Displayed Color Values” → “RGB.”
  3. Under “Algorithm,” select “Luminosity.”
  4. Enter the foreground and background colors in RGB: 102 102 102 for foreground and 230 230 230 for background. Mouse click or tab past the fields to view the results. Note that this combination passes for both text and large text (AA).
  5. Select “Show details” to view the hexadecimal color values and information about both AA and AAA requirements.
Colour Contrast Analyser, and color wheel to modify colors50
Colour Contrast Analyser, and color wheel to modify colors. (View large version51)

In our example, light-gray text (117 117 117 in RGB) on a gray background (230 230 230 in RGB) does not meet the minimum AA contrast ratio for body text. To modify the colors, view the color wheels by clicking in the “Color” select box to modify the foreground or background. Or you can select “Options” → “Show Color Sliders,” as shown in the example.

Colour Contrast Analyser, with RGB codes. Show color sliders to modify any color that does not meet minimum AA guidelines.
Colour Contrast Analyser, with RGB codes. Show color sliders to modify any color that does not meet minimum AA guidelines.

In most cases, minor adjustments to colors will meet the minimum contrast ratio, and comparisons before and after will show how better contrast enables most people to see and read more easily.

Best Practices

Test for color-contrast ratio, and document the styles and color codes used for all design elements. Create a visual design specification that includes the following:

  • typography for all textual elements, including headings, text links, body text and formatted text;
  • icons and glyphs and text equivalents;
  • form elements, buttons, validation and system error messaging;
  • background color and container styles (making sure text on these backgrounds all pass);
  • the visual treatments for disabled links, form elements and buttons (which do not need to pass a minimum color-contrast ratio).

Documenting visual guidelines for developers brings several benefits:

  • Developers don’t have to guess what the designers want.
  • Designs can be verified against the visual design specification during quality testing cycles, by engineers and designers.
  • A reference point that meets design accessibility guidelines for color contrast can be shared and leveraged by other teams.

Summary

If you are a designer, try out the simulators and tools on your next design project. Take time to see differently. One of the stellar designers who reviewed my checklist told me a story about using Photoshop’s color-blindness proofs. On his own, he used the proofs to refine the colors used in a design for his company’s product. When the redesigned product was released, his CEO thanked him because it was the first time he was able to see the design. The CEO shared that he was color-blind. In many cases, you may be unaware that your colleague, leader or customers have moderate low-vision or color-vision deficiencies. If meeting the minimum color-contrast ratio for a particular design element is difficult, take the challenge of thinking beyond color. Can you innovate so that most people can pick up and use your application without having to customize it?

If you are responsible for encouraging teams to build more accessible web or mobile experiences, be prepared to use multiple strategies:

  • Use immersive experiences to engage design teams and gain empathy for people who see differently.
  • Show designers how their designs might look using simulators.
  • Test designs that have low contrast, and show how slight modifications to colors can make a difference.
  • Encourage designers to test, and document visual specifications early and often.
  • Incorporate accessible design practices into reusable patterns and templates both in the code and the design.

Priorities and deadlines make it challenging for teams to deliver on all requests from multiple stakeholders. Be patient and persistent, and continue to engage with teams to find strategies to deliver user experiences that are easier to see and use by more people out of the box.

References

Low-Vision Goggles and Resources

(hp, al, il, ml)

Footnotes

  1. http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevis-large.png
  2. http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevis-large.png
  3. http://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html
  4. http://www.who.int/mediacentre/factsheets/fs352/en/
  5. http://www.un.org/esa/population/publications/worldageing19502050/
  6. http://www.mayoclinic.org/diseases-conditions/presbyopia/basics/causes/con-20032261
  7. https://www.nei.nih.gov/healthyeyes/aging_eye.asp
  8. http://webaim.org/articles/visual/colorblind
  9. https://www.youtube.com/watch?feature=player_embedded&v=7MyHZofcNnk
  10. https://itunes.apple.com/us/app/chromatic-vision-simulator/id389310222?mt=8
  11. https://play.google.com/store/apps/details?id=asada0.android.cvsimulator&hl=en
  12. https://itunes.apple.com/us/app/visionsim-by-braille-institute/id525114829?mt=8
  13. https://play.google.com/store/apps/details?id=com.BrailleIns.VisionSim&hl=en
  14. http://www.smashingmagazine.com/wp-content/uploads/2014/10/CVSbuttonsOG-large.jpg
  15. http://www.smashingmagazine.com/wp-content/uploads/2014/10/CVSbuttonsOG-large.jpg
  16. http://www.smashingmagazine.com/wp-content/uploads/2014/10/textonbuttons-large.png
  17. http://www.smashingmagazine.com/wp-content/uploads/2014/10/textonbuttons-large.png
  18. http://www.smashingmagazine.com/wp-content/uploads/2014/10/weatherCVS-large.png
  19. http://www.smashingmagazine.com/wp-content/uploads/2014/10/weatherCVS-large.png
  20. http://www.smashingmagazine.com/wp-content/uploads/2014/10/weathervisionsim-large.png
  21. http://www.smashingmagazine.com/wp-content/uploads/2014/10/weathervisionsim-large.png
  22. http://help.adobe.com/en_US/creativesuite/cs/using/WS3F71DA01-0962-4b2e-B7FD-C956F8659BB3.html#WS473A333A-7F61-4aba-8F67-5553208E349C
  23. http://www.smashingmagazine.com/wp-content/uploads/2014/10/buttongradients-large.png
  24. http://www.smashingmagazine.com/wp-content/uploads/2014/10/buttongradients-large.png
  25. http://www.smashingmagazine.com/wp-content/uploads/2014/10/logindev-large.png
  26. http://www.smashingmagazine.com/wp-content/uploads/2014/10/logindev-large.png
  27. http://www.smashingmagazine.com/wp-content/uploads/2014/10/rowsandicons-large.png
  28. http://www.smashingmagazine.com/wp-content/uploads/2014/10/rowsandicons-large.png
  29. http://www.w3.org/TR/2014/NOTE-UNDERSTANDING-WCAG20-20140311/visual-audio-contrast-contrast.html
  30. http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields-large.png
  31. http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields-large.png
  32. http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields_bwcc-large.png
  33. http://www.smashingmagazine.com/wp-content/uploads/2014/10/mobiledisabledfields_bwcc-large.png
  34. https://chrome.google.com/webstore/search/NoCoffee%20Vision%20Simulator?hl=en&gl=US
  35. http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeecolorsim-large.png
  36. http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeecolorsim-large.png
  37. http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevisionsims-large.png
  38. http://www.smashingmagazine.com/wp-content/uploads/2014/10/nocoffeevisionsims-large.png
  39. http://webaim.org/resources/contrastchecker
  40. http://paciellogroup.com/resources/contrastAnalyser
  41. http://www.w3.org/TR/2014/NOTE-UNDERSTANDING-WCAG20-20140311/visual-audio-contrast-contrast.html
  42. http://webaim.org/resources/contrastchecker
  43. http://www.smashingmagazine.com/wp-content/uploads/2014/10/colorcontrastgrays-large.png
  44. http://www.smashingmagazine.com/wp-content/uploads/2014/10/colorcontrastgrays-large.png
  45. http://www.smashingmagazine.com/wp-content/uploads/2014/10/modifylightgray-large.png
  46. http://www.smashingmagazine.com/wp-content/uploads/2014/10/modifylightgray-large.png
  47. http://www.smashingmagazine.com/wp-content/uploads/2014/10/gray_graybackground-large.png
  48. http://www.smashingmagazine.com/wp-content/uploads/2014/10/gray_graybackground-large.png
  49. http://paciellogroup.com/resources/contrastAnalyser
  50. http://www.smashingmagazine.com/wp-content/uploads/2014/10/ccanalysercolorwheel-large.png
  51. http://www.smashingmagazine.com/wp-content/uploads/2014/10/ccanalysercolorwheel-large.png
  52. http://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html
  53. http://www.w3.org/TR/2014/NOTE-UNDERSTANDING-WCAG20-20140311/visual-audio-contrast-contrast.html
  54. https://www.paypal-engineering.com/2014/03/13/get-a-sneak-peek-into-paypal-accessibility-showcase/
  55. http://www.adobe.com/accessibility/products/photoshop.html
  56. http://help.adobe.com/en_US/creativesuite/cs/using/WS3F71DA01-0962-4b2e-B7FD-C956F8659BB3.html#WS473A333A-7F61-4aba-8F67-5553208E349C
  57. http://webaim.org
  58. http://webaim.org/resources/contrastchecker/
  59. http://wave.webaim.org
  60. http://webaim.org/articles/visual/colorblind
  61. http://www.paciellogroup.com/resources/contrastAnalyser/
  62. https://itunes.apple.com/us/app/chromatic-vision-simulator/id389310222?mt=8
  63. https://play.google.com/store/apps/details?id=asada0.android.cvsimulator&hl=en
  64. https://itunes.apple.com/us/app/visionsim-by-braille-institute/id525114829?mt=8
  65. https://play.google.com/store/apps/details?id=com.BrailleIns.VisionSim&hl=en
  66. https://chrome.google.com/webstore/search/NoCoffee%20Vision%20Simulator?hl=en&gl=US
  67. http://accessgarage.wordpress.com/2013/02/09/458/
  68. https://www.nei.nih.gov/healthyeyes/aging_eye.asp
  69. http://www.who.int/mediacentre/factsheets/fs352/en/
  70. http://www.mayoclinic.org/diseases-conditions/presbyopia/basics/causes/con-20032261
  71. http://www.un.org/esa/population/publications/worldageing19502050/
  72. http://www.lowvisionsimulationkit.com
  73. http://www.lowvisionsimulators.com/find-the-right-low-vision-simulator

The post Design Accessibly, See Differently: Color Contrast Tips And Tools appeared first on Smashing Magazine.

Taken from – 

Design Accessibly, See Differently: Color Contrast Tips And Tools